A lot of people assume the origin of wiretapping laws has to do with protecting citizens from the police. This isn't the case, at least not originally. Wiretapping laws were about protecting citizens from each other. Let me explain:
Once upon a time, it was too expensive to run a phone line to each house. AT&T had the brilliant idea of running one phone line and having many people on a street share the line (the rich, of course, enjoyed private communications). So if there were seven houses, and you were in house number 5, you would only answer calls that rang 5 times. The problem is that since all the houses on the block shared one phone line, anyone nosy enough to butt in could eavesdrop just by taking their phone off the hook. The original wiretapping laws were set up to protect citizens from this, not the government.
As a side note: before the North American Numbering Plan, phone numbers were names followed by numbers. An example might be Oxford7, or OX7 on the number pad, which would ring the 7th house in the Oxford area (Oxford is a placeholder and doesn't refer to Oxford proper).
Thanks for this. I wasn't aware of this bit of history.
But I don't care. We need to be protected from the government, and we need new wiretapping laws to that end.
Also, people don't realize the global negative consequences of a government funding research grants for the analysis and monitoring of its own citizenry.
For more on this subject, check this out: http://thepbxblog.com/2013/05/08/how-wiretapping-in-the-us-h... (DISCLAIMER: I wrote this blog this morning).
at least at the federal level, everything i'd read places the catalyst for modern wiretapping laws as the appeal of seattle bootlegger Roy Olmstead's conviction, which rested on evidence gathered by a warrantless wiretap by "rogue federal law enforcement officers." his appeal reached the supreme court in 1928, and the conviction was upheld 5-4 on 18th century trespass doctrine (no physical trespass, no search). in 1934, congress passed the Communications Act of 1934, which made wiretapping a federal criminal offense and rendered any evidence obtained from it inadmissible. it wasn't until the Omnibus Crime Control and Safe Streets Act of 1968 that the constitutionality of wiretapping for investigative purposes was articulated.
this gives a better overview/more detail compared to wikipedia:
At common law, “eavesdroppers, or such as listen under walls or windows, or the eaves of a house, to hearken after discourse, and thereupon to frame slanderous and mischievous tales, are a common nuisance and presentable at the court-leet; or are indictable at the sessions, and punishable by fine and finding of sureties for [their] good behavior.” Very early eavesdropping law.
but it wasn't prosecuted very often and faded from the common parlance:
“Eavesdropping is indictable at the common law, not only in England but in our states. It is seldom brought to the attention of the courts, and our books contain too few decisions upon it to enable an author to define it with confidence.... It never occupied much space in the law, and it has nearly faded from the legal horizon.”
The first wiretapping laws were enacted by Congress during World War I:
40 Stat. 1017-18 (1918) (“whoever during the period of governmental operation of the telephone and telegraph systems of the United States ... shall, without authority and without the knowledge and consent of the other users thereof, except as may be necessary for operation of the service, tap any telegraph or telephone line ... or whoever being employed in any such telephone or telegraph service shall divulge the contents of any such telephone or telegraph message to any person not duly authorized or entitled to receive the same, shall be fined not exceeding $1,000 or imprisoned for not more than one year or both”); 56 Cong. Rec. 10761-765 (1918).
And you're right that the constitutionality of wiretaps wasn't settled until 1968, but this doesn't discuss neighbors spying on neighbors.
I think we found the same document: http://www.fas.org/sgp/crs/intel/98-326.pdf
I'll ping my buddy to see if he can shed a little more light, but this is related to wiretaps, which is slightly different from eavesdropping.
4 BLACKSTONE, COMMENTARIES ON THE LAWS OF ENGLAND, 169 (1769).
1 BISHOP, COMMENTARIES ON THE CRIMINAL LAW, 670 (1882).
"It's funny -- one of the ideas I had batted around with my publisher for a second book was a history of wiretapping. :-) But, I never got around to doing any research and don't know the definitive answers to your questions offhand.
In the thread you linked to below, mrexroad is correct about the Communications Act of 1934 (and section 605) being the first wiretapping federal law, at least that I'm aware of. And Olmstead is indeed a seminal case in the field, but whether it was the first, I don't know. The American Bar Association article he linked to seems pretty definitive.
I had not ever heard the theory that wiretap/eavesdropping laws originated due to party lines. Not saying it isn't true, just that I've never heard that."
So that's that. I'll have to blame some of the old engineers I talked to at AT&T for my miseducation, although they may have been referring to local (non-federal) laws.
This has been fun :).
How would the person at house 6 ever get a call if house 5 always picked up the phone before it had a chance to ring 6 times? :)
The actual way this worked was the ringing cadence was different for each subscriber: http://en.wikipedia.org/wiki/Party_line_(telephony)
Maybe the NSA has it cracked, maybe not. But the IRS sure doesn't, nor do the FDA, DEA, FCC, FBI, Google, Apple, Facebook, Microsoft for that matter.
Here's how it works: I run PGP/GPG (GPG is the open-source/free implementation; both speak the same OpenPGP protocol) locally, and the first time I use it, it creates two files, a "public key" and a "private key". I post/share my "public key" online (and you assume it's really me sharing it with you - this is important).
gpg --armor --output Desktop/mqudsi.asc --export "Mahmoud Al-Qudsi"
Anyone can now use this key to send me a text message encrypted only for me. They just need this file and their own PGP/GPG installation; they tell it to encrypt message X for the user with public key Y.
The result of that command (gpg --armor --encrypt --recipient 'Mahmoud Al-Qudsi <firstname.lastname@example.org>' toencrypt.txt) will be encrypted text that no one but the holder of the private key (me) will be able to decode. That private key should never be shared.
With the --armor flag, the result is plain ASCII text. You can send it via email, text message, snail mail, whatever. There are apps that automate the encryption procedure as part of the sending process.
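To make the round trip concrete, here's a sketch using a throwaway name, address, and an empty passphrase (demo only - never do that for a real key). Flags assume GnuPG 2.1 or later:

```shell
# Use a scratch keyring so this doesn't touch your real one.
export GNUPGHOME="$(mktemp -d)"

# 1. Generate a keypair (empty passphrase, demo only).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo User <demo@example.com>" default default never

# 2. Export the public half to share with correspondents.
gpg --armor --export demo@example.com > demo.asc

# 3. Anyone holding demo.asc can encrypt a message to it.
echo 'hello' > msg.txt
gpg --batch --armor --encrypt --recipient demo@example.com \
    --trust-model always msg.txt        # writes msg.txt.asc

# 4. Only the private-key holder can decrypt msg.txt.asc.
gpg --batch --pinentry-mode loopback --passphrase '' --decrypt msg.txt.asc
```

Step 4 prints the original message; the --trust-model always flag just skips the "this key isn't certified" prompt for the demo.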
Sending an encrypted or signed message to someone who doesn't have your keys throws all kinds of scary messages in most email clients. Maybe not a problem for us™, but a huge problem for most people who find they can't send messages to their friends without remembering to do step X for people A, B, and D, but not C or Q. And even if you explained this to them, will they remember a month down the line when their friend calls them because they think their computer got hacked? (true story. nobody reads error messages, and encryption-related ones are among the most cryptic and scary looking.)
Then try to convince them to keep using encryption/signatures for you, when it breaks for other people and makes sending an email more complicated, and they have to keep track of who C and Q are. Then try to convince them that, even though they have nothing to hide, encryption is still useful.
Then do this all over again when they get a new computer and forget to install PGP and have already lost their entire keychain.
The crypto is here, and yes, it has never been more accessible. The software using it is still garbage.
Seriously. The way PGP should work is: Whenever you send an email, it puts a header in the outgoing message with your public key. The header is not normally shown to the user and is ignored as unknown by non-compliant email clients. Then, whenever you send a message to anyone you've ever received an email from, you already have their public key, so your email client automatically encrypts the message before sending it, and their client decrypts any message it receives encrypted.
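A toy sketch of that opportunistic behavior (the cache path and function name are made up, and a real client would shell out to gpg instead of echoing):

```shell
# keys/<address> caches the public key last seen in that sender's headers.
mkdir -p keys
echo "FAKE-DEMO-KEY" > keys/alice@example.com

# send_mail <recipient> <file>: encrypt when we have a cached key,
# fall back to cleartext when we don't.
send_mail() {
  if [ -f "keys/$1" ]; then
    echo "would encrypt $2 to the key in keys/$1"
  else
    echo "no cached key for $1; sending $2 in the clear"
  fi
}

send_mail alice@example.com msg.txt
send_mail bob@example.com msg.txt
```

The point is that the decision is automatic: the user never chooses between "encrypted" and "not encrypted", the client just upgrades the channel whenever it can.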
This is obviously ignoring a whole bunch of problems. What happens if your public key changes? What happens if the attacker sends an email from your address to the user? (Presumably in both cases the recipient will get a message complaining that the key has changed, and DKIM and the fact that your email server authenticates you will help with the second.) But here's the thing: Those problems don't happen normally. The average user doesn't encounter them in the first six months of using the software. They just install a client that supports the protocol and automatically get encryption for messages exchanged with anyone else using a supported client, without having to do anything special.
I think this is one of those "the perfect is the enemy of the good" scenarios. The people who want encrypted email want it to be secure against the NSA coordinating with AT&T and your email provider. Which would be great, if it didn't make the UX so terrible that no one uses it and causes everyone to default to no encryption. Do the above and you still have good security if you verify public key fingerprints manually, but it makes the process of encrypting your email as simple as installing the software, and if you don't verify keys then you're still safer against a large variety of attacks than the primary alternative of not using any encryption at all.
Without that, it's not just useless, it's detrimental to the system, as there could be lots of bogus keys accepted by people (imagine a virus that automatically generates and adds a PGP key to mail clients before sending to everyone in the address book, just to make it more likely to pass spam filters). Bogus keys in the web of trust would be a big problem.
In fact, if PGP/GPG were more popular, I imagine there would be the accompanying glut of horrible passwords used (or duplication from easily gleaned passwords), and pretty soon some virus would start automatically signing things it shouldn't on infected systems, and then the web of trust that the system relies on for third party verification wouldn't be so trustworthy.
It's not that I forgot that part, it's that that's the hard part. That's the reason PGP is hard to use: They try to make sure you do it securely. And you can't have some third party do that part for you without trusting them, and the whole idea is not to have to trust any third parties. What public servers are you going to use here? Does each email user have to run their own server? Unless you have a single central server, how do you know which server corresponds to which user?
Automating web of trust could be interesting though. Imagine you get an email from a new user that you've never received any email from before. There is some new P2P network where if you have someone's public key, you ask that user whether they know the new user's public key, and they send back a signed response (either "this is the key I have" or "I don't have a key", signed either way with the known user's private key so you can verify it). Then if all your friends who have the new user's key agree on what it is, it's probably right. If nobody has it, you get encouraged to verify it manually (i.e. in person). And if they don't all agree you get the nasty warnings about something fishy going on.
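The consensus rule itself is tiny. A toy version (the report format is invented - in reality each line would be a signed fingerprint from one trusted contact):

```shell
# Each line is the fingerprint one trusted contact reports for the new sender.
reports='AB12CD34
AB12CD34
AB12CD34'

# Count how many distinct answers came back.
distinct=$(printf '%s\n' "$reports" | sort -u | wc -l)

if [ "$distinct" -eq 1 ]; then
  echo "consensus"   # everyone agrees: probably the right key
else
  echo "mismatch"    # disagreement: show the scary warning
fi
```

All the hard parts (finding the contacts, verifying their signatures, deciding how many answers are enough) live outside this check, but the end-user-visible logic really is this simple.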
>imagine a virus that automatically generates and adds a PGP key to mail clients before sending to everyone in the address book, just to make it more likely to pass spam filters
That seems like a low-effectiveness method of sending spam, given that the public key is uniquely identifying and tied to a sender address, so once the spam filter realizes everyone is marking all those messages as spam it can just blacklist everything sent using that key. Also, how is it different from existing PGP other than that more people would be using it? If you've infected a machine with a virus you can do whatever you want to it. You could just write the spam directly to the user's inbox, or send it out from their own address and sign it with their actual key. Compromised machine = you're screwed.
>the accompanying glut of horrible passwords used (or duplication from easily gleaned passwords)
This isn't even necessary for a virus. The problem with viruses is that they can stay resident until you type your password and then it doesn't matter how hard the password was.
I'm certainly not going to argue with this, it's the basic gist of my original reply. :)
> Automating web of trust could be interesting though. Imagine you get an email from a new user that you've never received any email from before. There is some new P2P network where if you have someone's public key, you ask that user whether they know the new user's public key, and they send back a signed response (either "this is the key I have" or "I don't have a key", signed either way with the known user's private key so you can verify it). Then if all your friends who have the new user's key agree on what it is, it's probably right. If nobody has it, you get encouraged to verify it manually (i.e. in person). And if they don't all agree you get the nasty warnings about something fishy going on.
Exactly. This is similar to what I was envisioning when I was talking about confidence levels. Having different levels such as "I have personally verified (signed)" and "I know of and reasonably trust this key based on people I trust" and making that public in some manner would allow a slew of interesting techniques to verifying public keys to different assurance levels.
Come to think of it, it sounds like what we need is for a social network to adopt this. Google+ with its real-name requirements might make a good fit, but maybe real names aren't what we care about; maybe we just care about email. Alternatively, some alterations to Diaspora might work out well (I know little about it other than that it's a roll-your-own social network that I think can work as a node of a larger network).
> That seems like a low-effectiveness method of sending spam, given that the public key is uniquely identifying and tied to a sender address, so once the spam filter realizes everyone is marking all those messages as spam it can just blacklist everything sent using that key. Also, how is it different from existing PGP other than that more people would be using it? If you've infected a machine with a virus you can do whatever you want to it. You could just write the spam directly to the user's inbox, or send it out from their own address and sign it with their actual key. Compromised machine = you're screwed.
I'm imagining a virus that generates one on the infected system for the address the mail client is configured for. That could be a LOT of new keys.
The problem is that the thousands or millions of bogus keys that start being sent from addresses that previously didn't have ANY key associated with them (or did, but not through that machine) clog the web of trust if they make it on there. If they are automatically added to mail client/PGP systems on the recipients' end, that's a lot of bogus keys in users' mail clients (even if it's just the 10% that arrive before spam filters react). If clients end up syncing their known keys to some central repo at some point, that's a LOT of bad data. I can imagine a case where someone generates a legitimate key and gets it personally signed by a few people, only to find that it's "verified" by hundreds of people on some public servers.
As for low-effectiveness, if it evades more filters by just a few percent, at the scales spam is sent that's a BIG deal.
> This isn't even necessary for a virus. The problem with viruses is that they can stay resident until you type your password and then it doesn't matter how hard the password was.
True. I imagine the really fast-spreading and pervasive viruses need to be quicker than that, but I have nothing other than a hunch to base that on.
That said, I'm still somewhat worried about MITM in the scenarios you describe above - should probably expose "you haven't verified this sender, it could be spoofed" somewhere, but somewhere icon/color-y rather than scary-error-message-y. Details, though...
Can't reply, but anyway, DNSSEC isn't really needed. That's just icing on the cake. What matters is a DNS record that specifies either the keyserver or the actual public keys themselves. A DNS TXT record or a custom PGP record will do just fine.
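As a sketch of the TXT-record idea (the record name and fp= format here are invented, not any standard), a client could fetch the record for the sender's domain and compare the published fingerprint against the key it received in email:

```shell
# Hypothetical zone-file line publishing a key fingerprint in DNS.
record='mail._pgp.example.com. IN TXT "fp=AB12CD34EF56"'

# A client would fetch this (e.g. dig +short TXT mail._pgp.example.com),
# extract the fingerprint, and compare it to the sender's key.
fp=$(printf '%s' "$record" | sed -n 's/.*fp=\([A-F0-9]*\).*/\1/p')
echo "$fp"
```

With DNSSEC on top, spoofing the record gets much harder - hence "icing on the cake".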
However, there's a huge training burden that needs to be overcome. People don't understand what it is, how it works, how to manage keys, what they can do securely or not, etc.
People complain about the tools, which I agree are insufficient right now for most people's general use—even with e.g. the very nice https://gpgtools.org —but I'm not convinced that's the primary hurdle.
Awareness of the technology is very low. I think it should be taught in public schools, and not as a specialized course but as part of general "life skills". I would love it if someone with a huge, visible platform like Google or Apple would push PGP integration into their tools for general use. Like, why isn't my PGP pubkey a normal field in my G+ profile? Or Facebook?
There are obvious answers there, of course, beyond the obscurity of the technology: if people start PGP-encrypting everything, those content platforms would lose value.
It would be like visiting Google's house to have a private conversation with a friend that Google couldn't hear. You can do it, but what do they get out of it anymore?
I'm not sure what the solutions are here, but I hope someone solves them.
Dunno if anyone has made one, but till then we have no hope and Bob Hope.
"OpenPGP encryption for Webmail"
Nothing that requires your average user to open a command prompt seems likely to be used outside of the most security conscious of circles.
HTTPS is less secure in practice (due to just being transport security and lots of intermediaries to attack), but still decent, and fairly widely used.
SSH is still the only cryptographic system which was so well implemented (in all ways, not just the cryptography) that it ended up taking over the entire market, displacing the non-secure options.
People should be building the next ssh, not the next PGP.
Yeah. I'd like something that simple and secure (where users can use the same private key across sites) for web browsers. X509 client authentication, as far as I can tell, doesn't cut the mustard. Among other problems, it requires trusted certificate authorities, which causes... problems.
It's really not an open crypto research problem; it's design and software engineering and entrepreneurship/marketing.
I killed it because in all that time, I received no encrypted messages. The only value I received was that it verified the signatures on a few mailing list posts.
I love the idea of ubiquitous public key encryption, but I think it just has a few too many moving pieces to get traction in its current incarnation.
I'm not saying anything about how good or bad that is for American society, but I can absolutely see an interpretation of the 4th amendment where information you share with a third party doesn't get protected as if it were private. There may be other laws that protect your emails and tweets and whatnot, but the 4th amendment specifically may not be the best place to look for that protection.
Regarding phones, those also have specific laws attempting to protect wireless (cellular) communications from eavesdropping (with the exception of legal wiretaps, which cellular providers are required to be able to support).
Likewise for pagers, those have specific laws making it illegal to intercept the communications sent to pagers.
For normal phones, a specific law had to be passed to require warrants for interception of communication (the "Wiretap Statute" from 1968, later extended by the Electronic Communications Privacy Act of 1986).
So the point to all of this is that those privacy "rights" that you talk about are not 4th Amendment rights at all, they are protections granted by specific legal action on the part of Congress. Had that legal action not been taken then you'd be right back in this same "interesting 4th Amendment question" that we're talking about for this.
The bottom line is that if you're willing to give info to a 3rd party unencrypted you need to assume the government can be given access to the same information unless there are specific laws forbidding it.
The statute fixed that privacy issue and then put requirements on when wiretaps could be used by the government. Before this law and the Supreme Court decision the government was very... lax regarding warrants for wiretaps.
On the other hand, it's not a privacy violation for the DHL guy to voluntarily turn over material to the government and the government to use that as evidence against you. Think of the stereotypical "guy at the photo lab" who notices that someone dropped off child porn to be developed, you (sadly) used to see that in the news yearly, and no one thought anything of that being turned over to the government.
In some cases third-party services have privacy protections added by law. E.g. the cellular communications interception thing, it used to be very easy indeed to intercept a cell phone call just by sticking up an antenna. In 1994 the law was changed to make it illegal to intercept those calls in most scenarios, and require search warrants even for that "public transmission" of communications (but the law also required cell phone companies to make it possible to wiretap, the "CALEA" provisions).
Given how strictly regulated banks are I would be very surprised if there are not similar laws providing some semblance of legal protection to the contents of safe deposit boxes, but I'm not sure and don't have time to Google it.
Also: the government could always ask your friend at the other end about what you said, not just Facebook.
The intention of the founders re: the 4th amendment was not to protect communications between people, it was to protect people from personal searches. Read the text: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures."
Persons, houses, papers, and effects? What do all those things have in common? They are personal to you. You'd feel violated if someone else entered your house or rifled through your effects. That same reasoning doesn't apply to something like a Facebook message that any of a number of people at Facebook could read. Maybe you assume that they wouldn't do that, but they certainly can.
Maybe we need a law to keep the government from getting Facebook messages without a warrant. But that's a separate thing from the Constitutional issue. Personally, I don't even think we need such a law. If you're okay with Facebook or Google looking at your communications, you should be okay with the government doing so.
If you could help me follow the idea further: In my mind, a communication is mine until I send it, at which point it is my property in the trust of another, until delivered to whom I'm communicating with, at which point it becomes their property.
I know legal reasoning doesn't always follow what I may consider to be common sense- can you shine any light on what I'm missing?
(Prefer not to focus on the DHL/UPS etc side of things, because I understand their right to snoop is a part of their TOS.)
Think of this hypothetical. Say Google reads your Gmail and tells people you ordered green shirts. Do you have a lawsuit against Google for disclosing your private information? I doubt it. If the information is sensitive for a different reason (e.g. it's about your herpes diagnosis), you might have a suit, but then you'd also have a suit against a friend to whom you told that information in confidence. I.e. it's a suit based on the nature of the sensitive information, not the disclosure by itself.
The idea of this "information trustee" doesn't really exist in the law. Google is treated no differently than a guy on the street you tell to relay a message to your friend. And as for the 4th amendment, as I said, wiretapping aside it's more of a "personal search" protection than a "communication protection." This is a good summary of the relevant precedent: http://www.cs.duke.edu/~chase/cps49s/carnivore-history.html
| Legally, once you hand your communication over
| to Google, it's their property.
If I store files on Dropbox, I am arguably paying for them to store the files for me, not giving them to Dropbox to do with what they like. It should not be treated any differently than a rented storage space in the physical world (regardless of implementation details like de-duplication).
It's like right now you have the idea, but haven't written the code.
But as it stands things are even more lopsided towards Google. Go check their terms of service, I'd bet they have a half dozen different ways of indemnifying themselves from liability in the event they don't deliver that email.
| If you're okay with Facebook or Google looking at
| your communications, you should be okay with the
| government doing so.
- Government employees wield much more power than Facebook employees.
- If it was common knowledge that all Google/Facebook employees had unfettered access to all communications going through their systems, then people would treat the system differently. As it stands, employees should only be looking at specific communications to deal with work functions (investigate complaints from users, look at a message that caused a back-end system to blow up, etc).
But you're not hiding anything. You're transmitting it in clear text through Google/Facebook's services. You're not worried about the right to hide your information, you're worried about the right to hide it against the government specifically. That's the key distinction. I do have things to hide--I don't share those thing with either Google or the government, and those are the things I want protected. If I choose to share something with Google, I'm okay with the government having access to it too.
> Government employees wield much more power than Facebook employees.
In my mind, the government has a lot more power to screw me over, but private industry has a lot more incentive. I'm much more worried about credit ratings agencies, insurance companies, potential employers, etc, having access to my information than I am about the NSA looking at my information.
> As it stands, employees should only be looking at specific communications to deal with work functions
That's the same for government employees. Maybe the difference between us is that you implicitly trust the employees of Facebook not to do stuff like this, and don't trust government employees, while I feel the opposite way.
| But you're not hiding anything. You're
| transmitting it in clear text through
| Google/Facebook's services. You're not worried
| about the right to hide your information, you're
| worried about the right to hide it against the
| government specifically.
| If I choose to share something with Google, I'm
| okay with the government having access to it too.
| Maybe the difference between us is that you
| implicitly trust the employees of Facebook not
| to do stuff like this, and don't trust government
| employees, while I feel the opposite way.
When was the last time that an FBI agent was fired for overstepping their authority? On the other hand, I feel more trust that Facebook/Google would fire an employee that overstepped their authority.
And they are always testing the bounds of how they can apply the law. See the case of the mother who drove her daughter's classmate to suicide, where they tried to charge her with 'hacking' because she violated the Terms of Service of MySpace.
How do you figure that?
> To be fair, there are bad actors in all places, but the FBI/US Attorney General has more incentive to pull people into court, and send them to jail than Facebook employees do.
The difference is that only a small fraction of people do things that causes the federal government to become interested in putting them in jail. Most people do things that arise the interest of private industry. See, e.g., the credit ratings agency mafia. Those companies would have a field day if they had access to Facebook's information. See also, all the hiring managers that would have to have access to peoples' social networking information in order to blacklist them for jobs. Or insurance companies looking for any reason to drop people from healthcare plans, etc.
How much you worry about something is generally proportional to the product of how likely that something is to happen and how bad that thing would be if it did happen. It's exceedingly unlikely that you'll be at the receiving end of a federal prosecution, even if that would be a really bad thing. But having trouble with your credit, having trouble getting a job, being dropped from your health insurance--all of these things are much more likely, and they can be pretty bad in and of themselves. Yeah, sure, the government can bankrupt me defending a prosecution, but then again so can my HMO dropping me from my health plan if I get sick.
As I said, I imagine this is a matter of outlook. I don't see myself as a revolutionary who might get railroaded by the government for fighting for a just cause. I do see myself as a guy with a wife and a kid looking to buy a house some day, worrying about health care costs for my aging parents, etc. Given that, the ways private industry can screw me over are a lot more real to me.
I'm pretty sure that "reasonable expectation of privacy" is found nowhere in the
Employment agreements are also subject to contract law concepts such as consideration, which would come into play when someone stops and asks "what's in it for me?" A single job as described at the beginning of the relationship through the employment agreement is probably not going to be seen as valuable enough for someone to have legitimately signed away his future productivity.
I'm not sure what you mean by "you can't sign your rights away". IP is of course transferable.
Not without a signature on a document itemizing the IP being transferred.
Same goes for FB, Google and other providers. They have the ability to read your mail and occasionally do, just like the phone company.
Bonus example: voice mail. It's just like email, since it's a message sitting on the service provider's computer. Can the Feds just listen to those messages?
It's when we are being data mined by the government that there's a problem. I think we all don't mind being data mined for advertising (within reason), but when we are being watched to be 'kept in line', that's where there's a problem. At that point it's more like 'Minority Report', where you are being scrutinized before you even do anything.
Excerpt from the 6th Circuit's Warshak opinion: "Since the advent of e-mail, the telephone call and the letter have waned in importance, and an explosion of Internet-based communication has taken place. People are now able to send sensitive and intimate information, instantaneously, to friends, family, and colleagues half a world away... By obtaining access to someone's e-mail, government agents gain the ability to peer deeply into his activities."
As for the telephone conversations, read this thread a bit more... the government didn't need permission to wiretap until a specific law was passed in 1934 that required it.
Sure, Facebook/Google have more access than a bank has to a safe deposit box, but the likelihood that anyone at Google or Facebook has actual knowledge of the information the government is looking for is slim to none.
Maybe a better analogy would be a warehouse. Just because things are stored in the warehouse, and employees have access, doesn't mean that the employees have rifled through them.
The law should conform to what's best for society, but you need to understand where the law falls short of that before you can understand what to change.
The whole reason the OP said the DOJ policy "makes sense" is that there's no actual 4th Amendment protection in this case. And he's right, so don't shoot the messenger; he's telling you what needs to be fixed in order to restore privacy controls closer to what we envision as ideal.
And that is important because a government that can affect things around the world does not run on "expectation", it runs on the law. You can complain until you're blue in the face about how the government is violating your expectation of privacy.... or you can fix the law, as was done in 1934, 1968, 1986, and many more times in between.
What do "we the people" want? Do we want the government to seek out and exploit every possible loophole it can find and to make up excuses to eavesdrop on us?
Or do we want to make sure that gentlemen don't read each others letters?
I would very much prefer the latter. State the law as: "If you are a public servant, don't listen to people's conversations (don't even think about or try to listen) unless you have a very specific reason to do otherwise, and then only if you must."
"They", the government, are the people with the guns and the authority to lock you up. Once we get used to the idea that "they are listening" whenever they want, we will start to self-censor what we say, and very shortly thereafter what we do and think. You don't want to be the one sticking out; you don't want to catch their attention.
And that is when we will have killed creativity, stopped the future development of society, and made the entrepreneurs walk in line at the same pace as everyone else.
The quest for privacy is not about protecting the fact that I like to watch videos of midgets dressed up in leather underwear, or about protecting my friend who is ordering weed online.
It is about the future of our society.
You can't rely on policy. It moves too slowly and there are always loopholes.
It's a shame there are no mainstream, accessible solutions to encrypt email while using services like Gmail, although they exist for IM.
But the real issue, as you indicate, is stored data. Full Gmail encryption would presumably be done in conjunction with the client via the browser. Because the server wouldn't have access to the plaintext, search becomes tricky and contextual ads problematic. One solution would be for Google to make fully encrypted Gmail a paid service.
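The search problem mentioned above has known workarounds. One is a "blind index": the client keys a hash of each keyword with a secret the server never sees, and the server matches opaque tokens without learning the words. A minimal sketch in Python, with hypothetical names throughout (the actual message encryption, which would use a proper AEAD cipher, is omitted here):

```python
import hmac
import hashlib

# Hypothetical client-side index key; in practice it would be derived
# from the user's passphrase and never leave the client.
INDEX_KEY = b"user-secret-index-key"

def blind_token(word: str) -> str:
    """Deterministic keyed hash of a keyword; the server only ever sees this."""
    return hmac.new(INDEX_KEY, word.lower().encode(), hashlib.sha256).hexdigest()

# What the server stores per message: an opaque ciphertext (encryption
# itself omitted in this sketch) plus blind tokens for each keyword.
server_store = []

def client_upload(msg_id: str, body: str) -> None:
    # The client tokenizes and hashes keywords before upload.
    tokens = {blind_token(w) for w in body.split()}
    server_store.append({"id": msg_id, "tokens": tokens})

def server_search(token: str) -> list[str]:
    # The server matches tokens without learning the underlying keyword.
    return [m["id"] for m in server_store if token in m["tokens"]]

client_upload("msg1", "meeting notes for the launch")
client_upload("msg2", "lunch on friday")

# The client translates its query into a token before asking the server.
print(server_search(blind_token("launch")))  # ['msg1']
```

Even this leaks which messages share keywords, which is part of why encrypted search remains genuinely tricky, and why contextual ads become problematic.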
Google's happy to charge when the market demand exists for a paid service (GAE and GAB come to mind). It's true that the market demand may be insufficient here to make it worthwhile, but on the other hand privacy-sensitive companies and agencies might be more willing to switch to paid Gmail if their email were seen as more private than it is today.
Don't you think technology has advanced a tiny bit since then? I'm thinking of Ajax, CSS, XHTML, XMLHttpRequest, DOM, JSON, HTML5, Chrome, Firefox, etc. The tools available today (and the platforms, and the machines) are far more capable.
Your second point is more interesting. Hushmail was ahead of its time in ~2000 when it turned to client-side encryption via Java. But nobody really loves Java in the browser. So about five years later the company began to offer a webmail service with server-side encryption. Well, duh. That's vulnerable to a court order allowing the interception of someone's passphrase the next time they log on. Which is what happened.
So you're right to say that Hushmail was vulnerable. But that was because of their server-side decision, and is unrelated to the vulnerability of properly implemented client-side encryption.
The tools to be sure of the executable you're running really only exist for desktop software right now.
Are you really okay with the police reading your email at any time for any reason?
I would understand your attitude if email was already protected and encryption could only serve to hide from search warrants and wiretap orders, but that's not the case. The government can and does read our email whenever it pleases. How are you okay with this?
I'm not a criminal, but I don't want the government reading my email.
Or Google could offer ad-free, encrypted Gmail that costs money.
There are plenty of business solutions to this problem and we will see many of them tried.
EDIT: Ah no, no Chrome plugin! I may have to go back to Firefox. I've been using it lately for development and it's come a long way in the last 2-3 years.
EDIT II: I found a very interesting open-source OpenPGP Chrome plugin built by a research team: http://gpg4browsers.recurity.com/ (it turns out the project was merged into the mymail-crypt project here: http://www.openpgpjs.org/). But I'll be sure to check back with Penango, since they have said that a Chrome plugin is coming mid-2013.
that's pretty scary.
So, I have to ask again: why the hell is anyone who wants privacy from the authorities, who we know don't have our best interests at heart, still using electronic communications? It never was, isn't, and never will be secure from them. If nothing else, almost every "terrorist" trial features some electronic communications evidence. Doesn't that tell us enough?
Look, just ask yourself why organisations like MI6 still use old-school dead drops in Moscow.
Tor? It's got government fingerprints all over it. I trust it about as far as I can throw it.
A warrant is simply permission from a judge to obtain or enter certain private property, right? Wouldn't the DOJ be able to find a judge who will grant a warrant? I don't know anything about the judicial process, so I'm interested in learning how this all works.
Definitely not. It has always been a public communications channel. I don't think anyone ever had any privacy expectations regarding IRC.
I am pretty sure it's not. One day there was a news story on Drudge Report: http://www.drudgereportarchives.com/data/2005/05/10/20050510... ("feds investigate huge computer attack; worldwide hunt for 'stakkato'")
As a joke, my roommate at university joined an IRC channel with the nickname 'Stakkato' and said:
[01:59.16] * Stakkato (email@example.com) has joined #C++
[01:59.17] * ChanServ sets mode: +o Stakkato
[01:59.21] <Stakkato> look i made drudgereport headlines!
[01:59.26] <Stakkato> http://www.drudgereport.com
[02:01.09] * Stakkato (firstname.lastname@example.org) Quit (Quit: )
I suppose there is a chance that an informant reported the joke to the FBI, but given the specifics of the situation, I think it is likely that the text conversation above was caught in a general FBI dragnet of some kind (IRC server, ISP, etc.) and logged for eventual investigation. It did not seem to be a serious line of investigation by the FBI, more of a "follow all leads" situation. Someone had run a 'grep' for 'stakkato' and my friend's IP address showed up.
That was the day it became clear to me that everything in plaintext transiting the Internet is probably available to the FBI. At the time it was shocking; even though the conversation happened over a public network, it was surprising to me that the conversation was actually logged and later found. I hesitate to share this story, but I hope it illustrates in stark relief the probable capabilities of incentivized investigators. Keep in mind this was 2005; investigative capabilities have surely grown since then.
If you self-host IRC and use SSL you'll (probably) at least have the luxury of knowing when someone initiates a proceeding to acquire logs, although there's still the risk of someone going to your hosting provider or datacenter and compromising your server physically.
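For self-hosting, one common approach is to keep the ircd listening only on localhost and put a TLS wrapper such as stunnel in front of it. A minimal illustrative config (the ports are conventional, but the paths and service name here are hypothetical):

```ini
; /etc/stunnel/stunnel.conf (fragment)
; TLS on the standard IRC-over-SSL port, forwarding to a plaintext
; ircd that is bound to 127.0.0.1 only.
[ircs]
accept  = 6697
connect = 127.0.0.1:6667
cert    = /etc/stunnel/irc.pem
key     = /etc/stunnel/irc.key
```

This only protects the client-to-server hop; as noted, it does nothing against physical compromise of the box itself.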
A hosted IRC service seems like a bad idea overall; in addition to the Justice Department's ability to compel them to watch you (just like with Facebook Chat), there's no way for you to know if they're compromised by another third party.
I understand, but it's a lot more convenient for my less tech-savvy friends than a bouncer, I would think.
Off the top of my head, I can't think of a lever to get enough people encrypting at once so as to overcome that disadvantage.
On the cons side, it takes a while to connect to the Tor network, but once connected it is quite fast.
I am not saying the government should be allowed to violate our rights, but it is much safer to assume that anything on GTalk/Facebook/Gmail/etc. is within reach of the government. Do yourself a favor: use end-to-end encryption.