Hacker News
The network is hostile (cryptographyengineering.com)
276 points by pmh on Aug 16, 2015 | 93 comments

> hostile to the core values of Western democracies.

It seems the governments have a very different idea of what those values are than the people. Until those ideas are aligned, governments are out to get the people. There is no point in any of this, because ultimately, no matter what technical solutions you can come up with, force and law always trump those.

Perhaps at some point you could make the argument that we didn't explicitly know what the government does, and that's why it was getting away with it. That's no longer the case. We know exactly what the government does, we don't think that it's right, and yet we can do nothing to stop it. So either we need to overhaul government, or accept the status quo and quit bitching about it or trying to create technical solutions to fix social problems.

If the government can mandate networks spy on computers, it can mandate manufacturers spy on users. As they are already doing this, fixing the network solves nothing. As for foreign adversaries spying on users, well if you are not in the US avoiding that is impossible as most of your computing experience is under regulatory capture by the US government.

> Because ultimately, no matter what technical solutions you can come up with, force and law always trump those.

That's a very important point that I think many who champion technical solutions conveniently set aside. Remember that in the US in the 90s--in other words, not that long ago--exporting PGP was treated as an illegal arms export, and Phil Zimmermann was criminally investigated for developing and distributing it.

Today PGP and GPG are some of the most powerful encryption tools we have available to us--tools that Snowden says we can still trust. If someone makes a better tool, or if these tools start to bother governments again, then governments could just declare them illegal. Then you're faced with the question of: "Do I encrypt my mail and risk a SWAT team blowing up my front door and shooting my dog, then spending hundreds of thousands of dollars in a protracted legal battle against the full might of the US government, or do I just forget about it?"

The tech genie is out of the bottle, and the answer isn't more tech--it's getting people, and thus governments, to understand and champion these tools just like we do.

> The tech genie is out of the bottle, and the answer isn't more tech--it's getting people, and thus governments, to understand and champion these tools just like we do.

No, the answer includes better tech. OpenSSH and LibreSSL are doing good work in removing insecure ciphers. HTTP/2 looks like it should do well, because no client implementation accepts plaintext. I’m hoping that protocols like OTR and Darkmail become more widespread.

PGP is basically unusable. You can use it to dump secrets on journalists, after you have trained the journalist. You can’t use it to lead a quiet, private life. I tried. It’s not integrated with anything, and nobody else uses it. It’s fatally flawed.

Being riddled with fatal flaws is an aspect PGP shares with the lucrative Certificate Authority system, with S/MIME, with the DNS security system, with IPsec, with cell phone encryption.

Worse, actual solutions are orthogonal to big business interests. Decentralized trust? Google has no need for this; Google has its own intermediate CA (which, through the totally broken CA model, means Google can issue valid TLS and S/MIME certificates for any domain in the world). Federated content sharing? Facebook has no need for this; Facebook needs all your data to keep it safe for you, and also to analyze and monetize it.

What we need is to follow the article’s plea: Truly understand that the network is hostile, and design our systems to make security the easy default. And drop insecure protocols already, wherever there is a viable alternative.

Amen; I'm still flabbergasted by programmers not acknowledging that it's entirely possible to create a programming language which embeds our concept of privacy in cryptographic reality.

All you would need for this is to model each user's interaction with the system as a statement (where a statement can be either code or data), sign and encrypt these statements by default, and encrypt each statement's decryption key with the recipients' public keys as needed.
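As a toy illustration of that flow (not a real design): sign each statement, encrypt it under a fresh content key, and wrap that key once per recipient. This stdlib-only Python sketch uses stand-ins I made up--an HMAC in place of a real signature, a SHA-256 counter keystream in place of a real cipher, and a shared symmetric key per recipient in place of public-key encryption. A real system would use something like Ed25519 signatures, an AEAD cipher, and X25519 key wrapping.

```python
import os
import hmac
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR against SHA-256 in counter mode.
    # Illustrative only; same call encrypts and decrypts.
    out = bytearray()
    for block in range(0, len(data), 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block:block + 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

def seal_statement(statement: bytes, author_key: bytes,
                   recipient_keys: dict) -> dict:
    # 1. Sign the statement (HMAC stands in for a real signature).
    signature = hmac.new(author_key, statement, hashlib.sha256).digest()
    # 2. Encrypt signature + statement under a fresh content key.
    content_key = os.urandom(32)
    ciphertext = keystream_xor(content_key, signature + statement)
    # 3. Wrap the content key once per recipient. A shared symmetric
    #    key stands in for each recipient's public key here.
    wrapped = {name: keystream_xor(rk, content_key)
               for name, rk in recipient_keys.items()}
    return {"ciphertext": ciphertext, "wrapped_keys": wrapped}

def open_statement(envelope: dict, name: str, recipient_key: bytes,
                   author_key: bytes) -> bytes:
    # Unwrap the content key, decrypt, then verify the signature.
    content_key = keystream_xor(recipient_key, envelope["wrapped_keys"][name])
    plain = keystream_xor(content_key, envelope["ciphertext"])
    signature, statement = plain[:32], plain[32:]
    assert hmac.compare_digest(
        signature, hmac.new(author_key, statement, hashlib.sha256).digest())
    return statement
```

The point is only that the per-recipient key wrapping is a well-understood pattern (it is how OpenPGP itself addresses multiple recipients), so a language runtime could apply it to every statement by default.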

This might sound esoteric but we could already do this today just by taking something like Unison [1] and adding this encryption layer.

As geeks we should really own up to the fact that it's perfectly feasible to do this... and get working on it.

P.S. Fun fact: at its core, such a language would pretty much be what John McCarthy described as 'the language for the year 2015.' [2]

[1] http://unisonweb.org [2] http://lambda-the-ultimate.org/node/3180

> It’s not integrated with anything...

It's integrated with Thunderbird via Enigmail. It works at least as well as MS Outlook's integrated message signing. If your reply to that is something like "Everyone uses webmail.", I say "I've been using IMAP to access Gmail since five minutes after the feature was added to Gmail.".

Noted. And, yeah, that sucks... a lot. However, my habit of always composing plain-text messages and exclusively using attached signatures seems to have inadvertently shielded me from those issues. :) It has been more than a year since that article; I wonder if those bugs have been cleaned up.

This[0] seems to indicate that the only email client that has real problems with attached signatures is Outlook Express. I can't bring myself to care much about Outlook Express users. (Is OE even installed on Windows 7 and later?)

[0] https://www.phildev.net/pgp/pgp_clear_vs_mime.html

That issue has been in the Thunderbird/Enigmail Rube Goldberg setup forever, and publicly discussed since at least 2006. Always, the response is, Don’t use Enigmail with HTML email. At this point, it would be bigger news if they manage to fix it, so I expect to be informed if it happens.

The bigger problems are that it’s not integrated and it’s not effective for shielding metadata. With difficulty and being mindful of the sharp edges, you can use OpenPGP on desktop. You need a clunky third-party email client to get it on mobile. Best case scenario, you send lots of extra data that gets ignored. Worst case, all those attached signatures get downloaded as files, causing confusion and anger. And there’s no interoperability with S/MIME, which actually is supported by most proprietary email clients.

OpenPGP does not protect metadata. It piggybacks on SMTP, which publishes the sender, recipient, and subject line in the clear, along with the identity and timing of each server in the transport path from when you send the email to when it lands in your recipient’s mailbox. PGP was originally conceived as the “envelope” to protect your message contents, but in this era of unlimited surveillance we need to do better.

If we're concerned about metadata and timing analysis, we could use something like Mixmaster, along with TLS-only connections between email servers and between the server and its clients. This would essentially be Tor for SMTP, with the recipient's email server as the "exit node".
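A sketch of just the layering idea behind that (with a toy XOR cipher standing in for real public-key crypto, and none of Mixmaster's batching, padding, or reordering): the sender wraps the message once per hop, and each remailer peels one layer, learning only the identity of the next hop.

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # XOR against a SHA-256 counter keystream; same call
    # encrypts and decrypts. Illustrative only.
    out = bytearray()
    for i, b in enumerate(data):
        pad = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.append(b ^ pad[i % 32])
    return bytes(out)

def build_onion(message: bytes, path: list, keys: dict) -> bytes:
    # path: hop names in forwarding order; keys: name -> hop key.
    # Hop names must not contain "|", which we use as a separator.
    packet = b"DELIVER|" + message
    # Wrap from the exit inward, so layer i tells hop i where to forward.
    for i in reversed(range(len(path))):
        nxt = path[i + 1].encode() if i + 1 < len(path) else b"-"
        packet = toy_cipher(keys[path[i]], nxt + b"|" + packet)
    return packet

def peel(packet: bytes, name: str, keys: dict):
    # Each hop removes one layer: decrypt, read the next-hop name,
    # and pass the still-encrypted remainder along.
    nxt, rest = toy_cipher(keys[name], packet).split(b"|", 1)
    return nxt.decode(), rest  # "-" marks the exit hop
```

Each hop sees only its predecessor and successor; only the exit sees the delivery instruction, which mirrors how the recipient's email server would act as the "exit node".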

"Why aren't we doing this now?" Probably for the same reasons that we're going back to the 1990s world of Instant Messaging walled gardens; techies and cypherpunks are winning some battles but not most of them, and non-technical users don't understand what they give up when they choose a centralized Web and ISPs that make them incapable of acting as a peer on The Internet. [0]

To the rest of your comment:

To the best of my knowledge, I've never had Enigmail or Thunderbird mangle my PGP signatures. I participate in a few technical mailing lists; the folks there absolutely would tell me if my signatures were getting fucked up. For mail that doesn't go to these lists, I can double-check the message copied to my sent mail folder. :)

Installation of Enigmail on Windows, Linux and (I presume) Mac is trivial. Based on reports, the sharp edges that you speak of do appear to be there; send only plaintext email and attach (rather than inline) your signature, and you will, in my experience, avoid all of them.

The world of mobile software is largely a cesspool. Maybe I'm ignorant, but it seems like the only folks doing good privacy-protecting mobile software are Open Whisper Systems, the Tor Project (by way of Orbot), and Whatsapp. (The Whatsapp folks are in this list because of the work that Open Whisper Systems did to integrate TextSecure's near-zero-effort crypto into Whatsapp's software.)

[0] Yes, I totally understand that almost no one in the US has a real choice in ISP. Even here in Silicon Valley, I find myself prevented at every turn from giving my money to local, independent ISPs. There's a little comfort in the fact that Comcast allocates and routes up to a /60, and performs very, very little inbound filtering.

> Being riddled with fatal flaws is an aspect PGP shares with the lucrative Certificate Authority system, with S/MIME, with the DNS security system, with IPsec, with cell phone encryption.

Which fatal flaws are present in "IPsec"?

No. The tech genie is only half out, and it must continue to get out--preferably before the legal solution comes out. We have crypto, but it's not even remotely ubiquitous. Consider yourself: how many of your peers could you send an OpenPGP-encrypted email to today?

The thing to worry about is that once some lawmaking passes and journalists happily announce that the legal battle is won and the spies won't spy anymore (ha!), most people will be satisfied and continue to ignore any technical issues. That is, they'll stay easy targets until the next scandal.

Both technical and legal solutions are important. We just can't ditch one and say that it's not important anymore - neither works well without the other.

Right. In Vinge's True Names, all network traffic is monitored by governments. So it's necessary to use cover traffic and steganography, in addition to encryption. That's where Tor seems headed, with ever better pluggable transports for bridges, and maybe even ephemeral website-like bridges from AWS etc.

But eventually, it may be necessary to get serious about steganography. With pervasive streaming HD video, covert channels providing even 0.1% of throughput would be workable. The harder challenge would be plausible deniability. If the steganography fails and that SWAT team arrives, you want them to discover that you've been compromised by covert-channel botnets. Just like every other user and server on the Internet.
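Back-of-the-envelope numbers for that 0.1% figure, under an assumed 8 Mbit/s HD stream (the bitrate is my assumption, not from the comment above):

```python
# Rough capacity of a covert channel hiding in 0.1% of a video stream's bits.
stream_bps = 8_000_000      # assumed bitrate of an HD video stream (8 Mbit/s)
covert_fraction = 0.001     # 0.1% of throughput given over to hidden data

covert_bps = stream_bps * covert_fraction   # bits per second of hidden data
bytes_per_hour = covert_bps / 8 * 3600      # hidden bytes moved per hour

print(f"{covert_bps:.0f} bit/s, {bytes_per_hour / 1e6:.1f} MB per hour")
# prints: 8000 bit/s, 3.6 MB per hour
```

A few megabytes an hour is plenty for email and messaging, which is why even a tiny covert fraction would be workable.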

Yes, this works in fiction novels; in real life it usually doesn't. I could list the reasons why, but show me a case where it actually worked and we can talk further. One big hurdle is that complex systems have too many flaws, including flaws in your technology, and you only have to mess up once for them to get whatever they are after. Perfection does not exist in reality. Using any sort of excessive technology makes you stand out as a target in need of inspection. All complexity has costs associated with it; setting all of this up is time-consuming and expensive. Etc., etc.

Many people use VPN services for privacy, to get around geo-blocking, hide torrenting and so on. Where I live, at least 25% of Internet users do so. Tor use is less common. And so I access Tor through VPN services. And don't obviously stand out more than 25% of my neighbors.

But yes, it's complicated. I've been writing how-to guides for years, and people's eyes do tend to glaze. Still, I'm sure that it could be automated.

Smartphones are the huge problem. They're intentionally designed for tracking and surveillance. And they are the primary channel for Internet access in many places. So it does seem pretty hopeless :(

>show me a case where it actually worked and we can talk further.

You mean like how people can anonymously buy drugs on the internet? It's working fine. It's not perfect, but it doesn't have to be.

There's no "anonymously" there. Maybe buyers use Bitcoins, and maybe they've mixed them well enough to hide the money trail. But for the most part, they provide meatspace mail addresses. To sellers who truly are anonymous. Who may be LEA. Who may ship illegal drugs at homeopathic dilution, and SWAT right after delivery.

And yet, despite all these worst-case scenarios you've outlined, it works great most of the time (from what I hear).

It didn't work out so well for Curtis Green.[0]

> Starting in April 2012, a DEA undercover agent in Maryland (the UC), began communicating with Ulbricht about selling illegal drugs on Silk Road. That agent was one of several assigned to the Baltimore Silk Road Task Force. The UC claimed to be a smuggler who specialized in moving substantial quantities of illegal drugs. In December 2012, Ulbricht set out to find a drug dealer on Silk Road who could purchase large quantities of drugs from the UC and directed his administrators, including Green, to assist. Green assisted the UC to establish contact with a buyer, who was an established seller of drugs on Silk Road (the Vendor). The UC and the Vendor negotiated a deal for one kilogram of cocaine for approximately $27,000 in Bitcoin, a digital currency that has no association with a national government, is difficult to track, and easy to move online.

> Without the knowledge of either Ulbricht or the UC, Green agreed to act as a middle-man for the Vendor and take delivery of drugs. As a result, the Vendor provided Green’s address to the UC as the place to which the cocaine was to be delivered. On January 17, 2013, an undercover U.S. Postal Inspector delivered the cocaine [very highly cut] to Green at his residence. Shortly after Green accepted delivery of the cocaine, federal agents with the HSI, DEA, U.S. Postal Inspectors and the U.S. Secret Service executed a search warrant at Green’s residence and recovered the kilogram of cocaine. U.S. Secret Service agents also conducted a forensic examination of Green’s computers and digital media seized during the search.

[0] http://www.justice.gov/usao-md/pr/administrator-silk-road-we...

Yes, there are specific instances where it does not work. What is your point? The same goes for buying drugs on the street. It seems to work quite well most of the time.

Governments are concerned with entrepreneurs engaged through the net, because governments are made up of former dealers/slangers who went "legit", like the Bushes and others of that ilk.

It doesn't have to work in the sense of stopping an invasion of privacy. Name your governmental (or even non-governmental) organization: it will always be able to read any individual. The question is one of effort. In an optimal world, government would have some set of checks on who it monitors: there would be judges, warrants and probable cause. As things stand, that's seeming less and less the case. In a slightly less good world, the effort to monitor would be great enough that governments would be forced to pick and choose just who they're monitoring. Ideally that'd be picking bad people.

If you can prove that it worked, it didn't actually work. Unless someone boasts about it later, all anyone will ever see of a successful play for plausible deniability is the denial. As far as you can tell, their machine really was a victim of a botnet.

PGP/GPG are tools we have because the technical solution trumped attempts at force and law. Governments can do nothing to stop it. They have no control over mathematical realities. Force and law have no bearing on mathematics. A technical solution will always trump a violent or political solution.

Government force and law have no bearing on plant biochemistry, either. But they can punish those in possession of illegal plants. They can even punish those with illegal substances in their bodies. Just sayin' ;)

They'll have to outlaw crypto first. And it's hard to put this genie back in the bottle, given that it's somewhat out already.

Outlawing it without making virtually everyone guilty (some of it, like TLS, is extremely common) is a hard task on its own, and any attempt at this would surely be fiercely countered.

Some politicians and government officials are already saying companies like Apple and Google are aiding terrorists. They don't necessarily have to outlaw it. They can just put such pressure on the providers of the biggest services, in order for them to use weaker encryption.

Governments can use force to make you reveal your private keys. It doesn't have to be legal. NSA's mass surveillance isn't legal but they are still doing it, which proves that the government is above the law.

If it is 'impossible' for your key to be revealed...technical trumps force/law/politics once again.

How would it be impossible for your key to be revealed?

Here's what I'm trying to say: there's absolutely nothing that prevents the government from forcing you to grant them access to your private (i.e. encrypted) content. If they decide that you're a threat to national security, they can do whatever they please to gain access to anything you have secured.

Before you say "but it's technologically impossible!" let me remind you that in any system, humans are the weakest link. How long do you think you can last under torture? The CIA has hundreds of "secret sites" overseas. What makes you think that the Department of Homeland Security doesn't have any equivalents here in the US, that they can imprison you in and torture you until you reveal everything you know?

It's really naive to think that technology can solve all problems. Think outside the box (or the computer, in this case) and really try to grasp what the most powerful nation in the history of the world is capable of. They can force the private jets of foreign officials to land in allied airports and search them thoroughly. If you think your fileserver is safe simply because you have its contents encrypted, you have a lot to learn.

Well, if you used a one-time pad and burned the pad, it's essentially impossible to reveal the key. It does not matter how much someone chooses to torture you.
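For the curious, the mechanism is just XOR with a truly random pad. A minimal stdlib sketch; the security argument only holds if the pad is truly random, as long as the message, never reused, and actually destroyed:

```python
import os

def otp_encrypt(message: bytes) -> tuple:
    # Pad: truly random, exactly as long as the message, used exactly once.
    pad = os.urandom(len(message))
    ciphertext = bytes(m ^ p for m, p in zip(message, pad))
    return ciphertext, pad  # burn the pad after the recipient decrypts

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ p for c, p in zip(ciphertext, pad))
```

Once the pad is gone, every plaintext of the same length is an equally valid decryption of the ciphertext, which is exactly why no amount of coercion can recover the message.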

While I agree that technical solutions aren't enough, the point isn't only to achieve security. It is to raise the stakes to the point that the government has to do things that general public doesn't support to keep their capabilities. Right now it's mostly a walkover in favor of the NSA.

> Do I encrypt my mail and risk a SWAT team blowing up my front door and shooting my dog

That's a best case. Worst case is they shoot you.

It's a constant refrain in techie circles that the people are against mass surveillance but are powerless to stop it.

I don't believe this to be true. I think techies are looking at opinions within their own circles and making the incorrect assumption that the general population has similar opinions.

From what I can see, we have mass surveillance because it is what the people want. We have the TSA because it is what the people want. We have drone strikes because it is what the people want.

People are terrified of terrorism and want the government to take strong measures to fight it. This attitude may well be partly caused by the government, which benefits from this fear, but it is no less real for it.

Our governments aren't the best but they are far from being lost causes when it comes to being responsive to what the people want. The problem is that what the people want is often bad. Efforts to make the government better represent the wishes of the population will fail to make any difference with stuff like this, because that goal is already achieved in this case.

If we want this stuff to stop, we must first convince most of the population that it should. Without that, nothing else will be effective. And with it, change will come easily.

It's a good, reasonable, reasoned response. Unfortunately, it's mostly bollocks.


> A majority of Americans (54%) disapprove of the U.S. government’s collection of telephone and internet data as part of anti-terrorism efforts.

> More broadly, most Americans don’t see a need to sacrifice civil liberties to be safe from terrorism: In spring 2014, 74% said they should not give up privacy and freedom for the sake of safety, while just 22% said the opposite.

> Most say it is important to control who can get their information (93%), as well as what information about them is collected (90%).

> Fully 87% are aware of the federal surveillance programs.

> In most countries we surveyed in 2014, majorities opposed U.S. government monitoring of emails and phone calls of foreign leaders or their citizens.

I think one reason there is a disconnect between the survey results and reality is that the answers are the answers one gets in an imagined ideal framework. Kind of like asking people if they would rather eat healthy or known unhealthy food. And most people will answer healthy --because that is their idealized answer but in reality it's too much work for many people so they eat unhealthy food fully understanding the consequences.

I think the same applies to the survey. Ideally people don't want intrusive government surveillance but at the same time it's not important enough nor egregious enough for them to mobilize over it, so they let it be --as long as it's not "too much".

> Kind of like asking people if they would rather eat healthy or known unhealthy food. And most people will answer healthy --because that is their idealized answer but in reality it's too much work for many people so they eat unhealthy food fully understanding the consequences.

I think you've hit on a good point, and following your analogy, most of us are just too damned lazy or apathetic to get worked up about it (bad food or bad government). It's easier and cheaper to buy and consume unhealthy food, just as it's easier and (mentally) cheaper to just pretend everything is fine with the world the way it is. Sure, we're going to get fat and die of a heart attack at 50, and sure, we're going to steadily lose our privacy and the ability to exercise our rights; but in the here and now, we're living in blissful ignorance and it feels just good enough to not bother changing things. In the back of our mind we know there's a better path, but taking that path requires effort and sacrifice, so we take the path of least resistance.

Do they want these activities to stop, though, or do they just want them to be done better in some way?

Questions about privacy are tricky, because what exactly do people mean? You and I probably think that "privacy" means the government doesn't collect our info. But I've seen a lot of people defend the NSA on the basis that their strict controls mean their surveillance isn't a privacy breach in the first place.

I also don't think the question about approval of the NSA's activities is all that useful here. When people disapprove, do they disapprove because they think it shouldn't be done at all, or because they think the NSA isn't doing it properly in some way?

I think there are a lot of people who believe that broad surveillance to combat terrorism is a good thing, that if done properly it would not harm privacy or civil liberties, and that the NSA's current approach is in some way less than perfect. (Note that I don't agree with any of this.)

Now, I realize I have no actual data to back this up. And while I don't trust polls to be accurate (they can be reasonably accurate about the question they're asking, but the exact wording of the question is hugely important and trying to extrapolate beyond precisely what is said is tough), I have to admit that this poll does suggest I'm wrong. But I don't think it disproves my idea.

In addition to what others have said, I think there's some lag time involved. If you had taken the survey in 2002, I bet you would have found much stronger support for surveillance. Majority opinion has drifted away from that, but not strongly enough yet to stop the government momentum in that direction.

The people can eventually get what they want. But the government's default behavior switched in 2001, and the people don't want it strongly enough - yet - to override that.

> From what I can see, we have mass surveillance because it is what the people want. We have the TSA because it is what the people want. We have drone strikes because it is what the people want.

From what I can see, most people just don’t understand and don’t care. Just like I don’t understand and don’t care about my local football team, most people don’t understand and don’t care about security.

Worse, they have the relentless propaganda machine of government and access journalism to mislead them. Ask an ordinary American to apply common sense, and I think the majority will say: of course I don’t want the government looking at my dick pics[1], of course I don’t want to be processed like cattle before I enter the airport, of course I don’t want to execute people without trial with lots of collateral damage. But hide all of that in a veneer of: the experts say we need this, the government says it’s fine, and look at this cute puppy and the weather is beautiful and who is the latest celebrity and our sports team is doing… and I can understand why most Americans don’t do anything about mass surveillance.


>We know exactly what the government does, we don't think that it's right, and yet we can do nothing to stop it.

We can do nothing to stop it because we are not in the majority. That is how democracy works. If you believe we should be able to stop it anyway, then you do not believe in democracy.

Educate the public. Tell people why they should care. Get them to vote on it. Because right now most people are apathetic or positive on surveillance as a way of protecting them.

Majority rule is ochlocracy - a spoiled form of democracy - and that's another, much more general problem. I'm not sure whether this is the actual case or whether we have something worse. But I'm sure this (majority rule) is certainly not how modern democracy is supposed to work.

I don't think democracy is in any way relevant. The UK just had an election where surveillance and GCHQ wasn't even allowed to be debated at all. No politician is even willing to acknowledge this stuff exists, let alone build an entire credible political party around it. Party democracy is a very low resolution way to hold people accountable. Ideally we'd have something better.

> We know exactly what the government does, we don't think that it's right, and yet we can do nothing to stop it

That's true to some extent (meaning that if everyone was fully against it and loud about it, I'm not so sure "We the People" could change that much).

However, not that many people know exactly what the government is doing. It's mostly the readers of tech media and sites like Reddit and HN who do. Most others may have heard that "the NSA spies on everything", but deep down they either aren't very sure it's true, despite all the reports, or they don't actually understand what it is doing.

John Oliver showed us that in one of his shows. When he told people on the street about it, they were like "Oh, okay. But they are doing it to catch terrorists, right?" - and when he told them that those programs actually include taking their nude photos too, most of them were then like "Wait - they are doing WHAT?!"

That's proof that most "normal" people aren't aware of the implications of the NSA's spying programs - which are now also the FBI's programs, and the DEA's programs, and even the police's programs. Most people just aren't very well aware of all of that, or of what it truly means.

And the government, along with the friendly media, keeps spinning it to misinform the public, too. For instance, when Bush passed the Patriot Act, he said in his speech that "we've passed these powers so we can read the terrorists' emails" (paraphrasing). Of course, most people heard that and thought "Hmm, so they need to read the terrorists' emails - okay, I don't see why that's a bad thing. Sounds like a very good thing, actually".

Except, what Bush really meant was "We've passed these powers so we can read all of your emails and then determine, based on everything you've ever written, whether you are a terrorist".

If he had phrased it like that (which is actually its true meaning), the outcome might have been different for the Patriot Act.

So bottom line is "we" (as in most people) don't actually know or understand what the government is truly doing, and in fact most people still need to be much more educated about this issue.

The more techies understand this, the better :) Any adequate response to the continuing big-brotherification of the internet must be threefold:

* technological (strong crypto everywhere, decentralization of data collection and retention)

* UX (technology must be usable for a common person without security and CS background)

* political (strict laws protecting privacy and related human rights AND general consensus that these rights are important)

We technologists naturally tend to focus on the first point but getting it right is not nearly enough.

> It seems the governments have a very different idea of what those values are than the people.

I'm not convinced that's even it. There appears to be a large segment of the US population that believes Snowden is a traitor, and that the NSA were just doing their jobs to "keep us safer". Franklin's famous quote about those who trade essential liberty for security deserving neither comes to mind... but it seems that many people do believe that their right to privacy ends when there's the threat of terrorism involved.

I hope that attitude is changeable, but if we want to have even a chance of reining in our governments, we need to get these fearful government apologists to understand why their thinking is in error.

I somewhat agree, but I think that solving the technological problems pushes the government into taking actions that are more and more outside the realm of what generally-ambivalent people care about. People are generally less okay with the government opening their mail and inserting spy devices into their computers than they are with the government sniffing their network traffic, for example.

You were on to something and then stopped...

"no matter what technical solutions you can come up with, force and law always trump those."

^this applies much deeper and further than you might have realized.

Yes I like to win arguments rather than ...

Same could be said for the people you're complaining about ...

As the world turns ...

> and yet we can do nothing to stop it

You live in a democratic country, right? So you can. But you won't with that sentiment. If everybody thinks like you then you indeed are helpless.

> You live in a democratic country

If we are talking about the USA, then there is data[1] which indicates that no, it's not a democratic country.

[1]: http://scholar.princeton.edu/sites/default/files/mgilens/fil...

If someone expresses the sentiment that they feel powerless to stand up to those in authority, then they probably don't live in a democratic country.

> If someone expresses the sentiment that they feel powerless to stand up to those in authority, then they probably don't live in a democratic country.

Nonsense: in a democratic country, the minority typically are powerless to stop the majority. That's kinda the definition of a democracy.

Now, some republics provide some protection for the rights of minorities, but even those can be removed with sufficiently strong majorities.

You can stand up to authority all you want, it doesn't mean you'll achieve anything though. The whole point of democracy is to ensure that a single person with an idea can't just go and change things for everyone on his own.

On the contrary, technical solutions trump force and law. Force and law only defeat political and military solutions.

In the US, and perhaps other Western democracies, governments are the people and people are the government, not separate from each other. If a government is doing something, it's because the people value it, at least enough to not change it.

Edit: Downmods? Really? Refute my claims rather than attempt to bury them.

Your comment isn't inflammatory, nor do you seem to just be talking out your ass, so have an imaginary Internet point. I've upvoted worse. However...

> If a government is doing something, it's because the people value it, at least enough to not change it.

I'm tempted to tell you that's bullshit, and others might feel strongly enough about it that they're clicking the down arrow. The NSA, for example, doesn't care which way I voted. So we have to sit around "knowing" they do stuff, but with no evidence. Finally, evidence comes out à la Snowden. Wait ten more years while we finally get around to passing laws to rein in the offending agencies/people. Add it all up and thirty years have gone by. Meanwhile, using the NSA example, they've found other ways to sneak around or outright flout laws, lather, rinse, repeat.

It might look like no one cares, (rather than actually value it, as you say) but even if they do care the change doesn't come overnight.

In the US, the people (OP's words, not mine) actually do control the government. A total takeover can be accomplished in six years or less. A shutdown can be accomplished in two or less.

Yes, the US government has done and probably continues to do very bad stuff. NSA wiretaps are probably among the more innocuous things it has done.

IF the US people actually did want to shut it down and change the behavior of government, they could. I don't think most do (or at least, don't regularly think about it [but may agree if prompted to consider]), know how they could, or know what actual behavior/policy they would replace things with.

Even if things change after 30 years (say, much changed during the Clinton era), it lasts a decade and then the course corrects back to the trajectory it was on before. The people who affected the system are by then irrelevant or soon dead. The new generation has forgotten the past.

I'm not sure about your first sentence, but I'm kinda with you. Seems to me, at the very least, the majority of people aren't that bothered by what our governments are up to. I think it somewhat trite for people who are against these things to simply snap to a position that says these governments are essentially omnipotent and rule regardless of our concerns. If the mass public were outraged enough, it would stop. In a democracy we cannot divorce ourselves from what our governments are doing in our name. It's a cheap, lazy cop-out.

Well, the perhaps frightening scenario is that you arrive at a point where everyone assumes that someone else is 'taking care of it', when it fact, that person may be the only one even aware of it.

I'm not sure we're quite in that situation yet, but certainly there are severe inequities in how similar situations are treated (which is supposedly not allowed).

You would expect to see exactly that in a degraded democracy.

This blog post isn't served over HTTPS, either:

    Secure Connection Failed

    The connection to blog.cryptographyengineering.com was interrupted while the page was loading.

    The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
    Please contact the website owners to inform them of this problem.

> Anyone who has taken a network security class knows that the first rule of Internet security is that there is no Internet security.

True, but not a useful observation because we're stuck with the core of what we have. I think it's more worrying atm that nobody can be bothered to even deploy what we do have: TLS, OCSP stapling, HSTS, HPKP, DNSSEC. This stuff isn't difficult to deploy at the individual level. Especially for this crowd. You can make a difference.

> We don't encrypt nearly enough

Ironic from a security-conscious cryptographer and blogger who isn't protecting his readers or himself with TLS. OK, Matt isn't using WordPress, but many do, and I wonder how many of them ever log in to moderate, or edit a post, over networks they don't entirely trust? WordPress has a built-in file editor and stores its config file in the docroot by default, for crying out loud... if someone gets your admin session cookie you're toast. They're one patch away from your password, and your commenters' passwords and email addresses, if they trust you with such, and can plant as much malware on your site as they please.

> It's the metadata, stupid

Yet Matt Green and Troy Hunt both use Blogger, effectively allowing their readers' interests and comments to be further pervasively catalogued by Google.

I'm not saying these minor hypocrisies are even 1 millionth as grievous as failing to prevent the NSA from wiretapping the UN, or even terribly important at all, but damnit... there are things we can all do instead of just pining for a privacy utopia that isn't going to come. If you want privacy to be the norm then protect everything in your power, and aggressively, NOW, every way you know how.

I think it's more worrying atm that nobody can be bothered to even deploy what we do have: TLS, OCSP stapling, HSTS, HPKP, DNSSEC. This stuff isn't difficult to deploy at the individual level.

That’s because this stuff is difficult.

TLS: The Let’s Encrypt[1] ceremonies seem to be going apace, perhaps to be finally launched a month from now, but in the meanwhile you get only 1 free certificate per year that actually works with most clients: The StartSSL product.[2]

Which means you can encrypt only 1 hostname. Have multiple domains? Too bad, you pay money. Have multiple hosts? Same thing. One of the things that made tech so accessible is that you didn’t need to pay to start playing, and TLS breaks that.

Also, want to support Android Gingerbread clients? Then you need an IP address per TLS certificate. No SNI for you. You do know we’re in an IPv4 address space crunch, right?

OCSP, HSTS, HPKP: Need a functioning TLS, first.

DNSSEC: Have you actually tried to implement DNSSEC? My personal domain is validated using DNSSEC. A whole lot of pain for dubious gain.

And these security technologies are not set and forget. Microsoft seems fond of getting TLS maintenance wrong, causing failures in cloud services[3] or the basic security model[4]. DNSSEC also is supposed to do regular key rotations. Which individual has time for all of that?

That’s if you even have access to enable security. A whole lot of content is now published in centralized silos: Twitter, Facebook, Google, Wordpress. No federation, no outside control: no need for individuals and organizations to care about privacy. You are totally free to set up or join a diaspora* pod,[5] but you will find yourself forever alone.

I think technology can be developed to make privacy easier, and I think insecure defaults and fallbacks should be eliminated, but I am convinced that it will not be easy.

[1]https://letsencrypt.org [2]https://www.startssl.com [3]http://blogs.msmvps.com/peterritchie/2013/03/01/azure-table-... [4]http://arstechnica.com/security/2015/03/microsoft-takes-4-ye... [5]https://diasporafoundation.org


Wosign are giving away gratis SHA256 certs valid for up to 3 years, and supporting up to 100 AltNames[0]. They're in all the major trust stores, and cross-signed by Startcom (StartSSL).

I believe free basic wildcard certs will come. Someone will eventually break formation.

> want to support Android Gingerbread clients?

No. Gingerbread is down to <5% of the Android user share and lots of apps already don't support it. XP is arguably more of a problem at 10-12%.

IPv4 exhaustion is still mostly a problem of allocation. Big vendors already have a glut of IPs. Even the budget VPS providers don't seem to care if you spin up a dozen VMs just for the IP space, which isn't much more expensive than the $2/mo many charge for additional IPs.

> Have you actually tried to implement DNSSEC?

Yes, I have. It's one easy command with PowerDNS.

[0] https://buy.wosign.com/free/

I didn't know about that cert deal, and I was looking just a couple days ago. It's not that easy. Anyway, I'll take a good look at it. Thanks!

Such a simple thought: "the network is hostile". Yet when you consider the implications of that statement across the board you see stop-gap after stop-gap to fill the void. And as Green points out - the state of the state is bleak when it comes to the surveillance state.

His closing point is very open ended. But think about how "network security" sells products in today's landscape: if, as Green suggests, these new systems fulfill the goal of not having to worry about the network because they are designed with an inherent zero-trust model, how does the landscape of "network security" change? If the data path is opaque to network protections (firewall, IPS, URL filtering, etc.), does the endpoint radically change? Do we all end up with a containerized laptop with a front-end NGFW/UTM/security blob that routes locally to my guest operating system of choice? And are the general functions broken out into secure segments so that I can work and play while minimizing the risk of a malicious actor exfiltrating corporate data while I browse the questionable reaches of the Internet?

Thought provoking, although - as Green states, I don't see many moving the ball quickly (yet?).

One piece of what you wrote highlights a big part of the problem. My apologies for lifting just four words out of your longer sentence, but the point is, I believe, critical:

> "network security" sells products

Because people desire fire-and-forget solutions. We in the field know that security is not a product, or even (pun intended) a product of products. It's an ongoing process, where a term like "arms race" fails to convey the full meaning or scope.

And above all, it's hard. If you want network and communications security at scale, you will need to solve how to deploy a fairly large set of keys. Secure key distribution depends on auditable trust. Bootstrapping trust relationships is HARD.

Let's take an easy example. The entire CA industry is built around the premise of seeking rent on trust. And then, to work around the problems that arose from having a CA system in place, we came up with solutions like certificate pinning. Which really boils down to imposing distrust upon the very system that exists because they sell trust.

While people want secure solutions, they really want to buy a product. Precisely they want something that "just works", and in effect make the complexities of security someone else's problem.

This should be completely clear to the people running mass-market internet communication and storage services. And yet none of them encrypt payloads.

Ephemeral keys and forward secrecy are a solved problem for real time and near-real-time communication. Why don't we have a Hangouts or Skype or Yahoo Messenger that are secure against the state-actor threat?

At some point we have to assume the companies providing these services have been persuaded to sell us all out.
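The ephemeral-key idea mentioned above can be sketched with a toy Diffie-Hellman exchange. This is a deliberately insecure illustration (a small fixed prime, no authentication); real systems use X25519 or similar, but the forward-secrecy property is the same: each session draws fresh secrets that are discarded afterwards.

```python
# Toy ephemeral Diffie-Hellman: forward secrecy comes from generating
# a brand-new key pair per session and throwing it away afterwards.
# INSECURE demo parameters, chosen only for illustration.
import secrets

P = 2**61 - 1  # a Mersenne prime, far too small for real use
G = 2          # toy generator

def new_session() -> int:
    """Run one session's key agreement and return the shared key."""
    a = secrets.randbelow(P - 2) + 1      # Alice's ephemeral secret
    b = secrets.randbelow(P - 2) + 1      # Bob's ephemeral secret
    A = pow(G, a, P)                      # public values that cross
    B = pow(G, b, P)                      # the hostile network
    k_alice = pow(B, a, P)                # both sides derive the
    k_bob = pow(A, b, P)                  # same shared key
    assert k_alice == k_bob
    return k_alice

# Two sessions share no long-term key material, so compromising one
# session's key reveals nothing about past or future sessions.
k1, k2 = new_session(), new_session()
print(k1 != k2)
```

Since only the public values `A` and `B` ever touch the network, a passive recording of the traffic is useless once the ephemeral secrets are erased.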

>> At some point we have to assume the companies providing these services have been persuaded to sell us all out.

Therein lies the problem.

Money controls everything. It's scary sometimes how fast people forget the moral obligation to their clients and sell them out as soon as someone drops a few million dollars in their laps.

It's not just about money.

After 9/11, AT&T etc were arguably motivated to cooperate with the NSA out of fear and patriotism. From Bamford, I get that the NSA and its precursors have done the same since the 1800s.

If these were paying customers, selling out would look much less attractive.

But we enjoy and expect certain services for free. In this case, as we all know, we are necessarily the product being sold.

I don't want to detract from the valid point that if you are not paying for a service, you are the product, not the customer. But I do pay for online services from Google and Amazon and others, and would happily pay for secure communication. None of them encrypt real time communication, and none encrypt my data at rest with my key. Any portal that has directories or a social graph is very well placed to implement a web-of-trust that would not rely on CAs. But there's nothing like that out there.

All networks are hostile, not just the internet or "external" network.

Google's BeyondCorp [1] initiative recognizes this and treats the internal network as untrusted.

Instead of trusting a privileged network or VPN, securely identify devices and users assuming untrusted networks.

[1] http://static.googleusercontent.com/media/research.google.co... [PDF]

I keep coming back to the idea that one should assume the computer is untrusted. Which leads to two unpopular suggestions: keyboards that encrypt typed characters to prevent spoofing by entities without physical access, and displays with a separate side-channel overlay system that bypasses the core CPU for certain security operations like entering PINs.

I've often thought about this, but came to the opposite conclusion: a laptop is the smallest thing you can trust. The keyboard sends data to the CPU. The CPU sends data to the screen. The CPU does encryption and talks to the network.

I feel like there's no point in trying to chop it down any smaller than that. All you're doing is shifting trust from "system A" to "system B", without really changing anything.

The keyboard has a CPU that can do encryption and signing, in addition to decoding for key presses. Ditto for touch screens.

The reason for putting the encryption in system B AKA the keypad and display controllers is, if you want end to end encryption, the keypad decoder and the display controller are the closest to the end points you can get. And they are single purpose devices that don't run arbitrary code nor accept arbitrary input.

Compare that with laptop CPUs and complex operating systems with their huge attack surfaces, running software written by completely untrustworthy corporations and individuals.
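The keypad-as-endpoint idea above can be sketched as keystroke authentication: the keyboard's own controller tags each keystroke with a key shared with the far endpoint, so a compromised host OS in the middle can relay input but not forge it. The key provisioning and message framing here are assumptions made up for illustration.

```python
# Sketch: a keyboard microcontroller authenticating keystrokes so the
# untrusted host OS between it and the endpoint cannot forge input.
# SHARED_KEY and the "seq:key" framing are hypothetical.
import hmac
import hashlib

SHARED_KEY = b"provisioned-at-manufacture"  # hypothetical pre-shared key

def keyboard_emit(seq: int, key: str) -> tuple:
    """Runs on the keyboard's controller; seq guards against replay."""
    msg = f"{seq}:{key}".encode()
    tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return msg, tag  # travels through the untrusted host

def endpoint_verify(msg: bytes, tag: str) -> bool:
    """Runs on the far endpoint; rejects altered or forged keystrokes."""
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg, tag = keyboard_emit(1, "a")
print(endpoint_verify(msg, tag))     # the genuine keystroke verifies
print(endpoint_verify(b"1:b", tag))  # a host-tampered keystroke does not
```

A real design would encrypt as well as authenticate, but even this much removes the host from the trusted path for input integrity.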

> And they are single purpose devices that don't run arbitrary code nor accept arbitrary input.

Be careful designing firmware upgrades, though. Also, I really don't see what kind of attack it prevents when you have a trusted keyboard and display but an otherwise compromised OS.

Absolutely. I take the view that if one needs very high security, you steer well clear of anything electronic and go back to the old ways. The idea of secure electronic communications is well and truly dead. If one's life depends on it, why even begin to trust IT? Over the last few years just about every aspect of IT has been exposed as a risk. I'm sorry, but it's game over. Personally, I'd use the hell out of IT creating a nice normal profile for the spies, then go completely off piste for anything I absolutely needed to be secure.

What I'd like to know is why people thought it was ever fully secure in the first place. It really never was.

The thing is, as Prof. Green points out, we've all always known this, but we've ignored it. If the protocol one uses isn't secure when used over Tor (because some Romanian exit node is able to snarf your password), then it's not secure enough to use across the Internet in general.

XPKI simply isn't enough: it's a worst-of-all-worlds solution in which there's not just one global trust root, there are hundreds.

Using the blockchain as a globally-verifiable data store is interesting, but comes with an incredible cost (and may still be vulnerable to manipulation).

Better, I think, would be to embrace the reality that human beings are citizens of states, and to leverage that: if the governments of the United States, Iran, Germany, Russia, Australia, Uzbekistan, Chad, Chile and Peru all agree to a statement, then it's very probably true. We could use that kind of unanimous (or supermajority) agreement as a trust root for identity, since it's extraordinarily unlikely that every state in the world would agree to the same lie.

Once we have a global trust root, it's easy enough to carve off namespaces within it. States could have authorised textual namespaces (e.g. '(global-root us)' for the United States: in a very real sense, '(global-root us foo)' is whatever the US government wants it to be).

With this scheme, anyone would still be free to have his own, additional, alternate roots; an assertion of '(global-root uk british-airways)' would not apply to '(billy-joe random-orgs ba)' unless the objects thus named shared the same key.
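The supermajority trust root described above boils down to a k-of-n agreement check. A minimal sketch, with signature verification elided and the state list, assertion strings, and 2/3 threshold all invented for illustration:

```python
# Sketch of a supermajority trust root: accept an assertion only if
# enough states vouch for the same statement. Signature checking is
# elided; `endorsements` maps state -> the statement it signed.

def trusted(assertion: str, endorsements: dict, states: list,
            threshold: float = 2 / 3) -> bool:
    agreeing = sum(1 for s in states if endorsements.get(s) == assertion)
    return agreeing >= threshold * len(states)

states = ["us", "iran", "germany", "russia", "australia",
          "uzbekistan", "chad", "chile", "peru"]

# All nine states endorse the same binding: supermajority reached.
honest = {s: "key K speaks for (global-root us foo)" for s in states}
print(trusted("key K speaks for (global-root us foo)", honest, states))

# A lie endorsed by only two states falls short of the threshold.
partial = {"us": "lie", "russia": "lie"}
print(trusted("lie", partial, states))
```

The interesting policy knob is the threshold: unanimity is maximally lie-resistant but fragile to a single holdout, while a supermajority trades a little of that resistance for availability.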

One glaringly obvious problem with this concept is that this very article requires some basic insight in security engineering, and even for people that are interested, it can be hard to digest.

How do we (blanket statement) try to address the overall level of understanding that people have around this topic and make them understand that the problem is real, serious and needs significant thought?

I've thought about this a fair bit. Think about your average non-super-technical co-worker. How do we get them to see the problem in a clear way, and how do we rally people around the problem? I don't know how to, but I try and fail and try again. It's a tough gig. I do have an enormous man crush on Matthew Green though.

Yet half of everyone on HN actually wants IPv6 so we can have less privacy...

IPv6 gives you no less privacy. An IPv6 /64 identifies you just as much as a regular /32 IPv4.

"In IPv6 when using address auto-configuration, the Interface Identifier (MAC address) of an interface port is used to make its public IP address unique, exposing the type of hardware used and providing a unique handle for a user's online activity." - wikipedia on ipv6. ipv4 is more fungible than your MAC address.

The current versions of Windows, Mac OS X, iOS and Android all use RFC 4941 Privacy Extensions by default to randomize the address used.

So that's where this nonsense is coming from... Of all the people that edited this page in the last ten years, no one checked if this was ever deployed...

I was basing this on an ipv6 book I read a while ago. Just used wikipedia as the reference.

Disagree, respectfully. With IPv4, it's extremely common, especially on consumer home routers, to be behind a NAT. IPv6 destroys your privacy because it's now very easy for Google, click networks, maybe the NSA, to track a device behind a firewall. There's a reason Google is pushing IPv6 so hard.

It would be nice to agree to disagree, but we really need IPv6 to displace IPv4.

Google is pushing IPv6 so hard because IPv4 just can’t expand enough to fulfill our needs. And not just Internet of Things, your light bulb needs to be online, that sort of silliness. IPv4 can’t expand enough to raise each community and each organization to a current first-world level of Internet usage.

Want to build a Chinese Facebook? Can’t get enough IPv4 address space for all the servers. Want to build Facebook in America, can’t get enough IPv4 address space for all the servers.[1] Vint Cerf doesn’t need to be employed by Google to say that IPv4 just wasn’t intended for the Internet of 2015. He made IPv4. He knows.[2]

Using your MAC address for your IPv6 host address is now obsolete. Privacy Extensions[3] are the current best practice. IPv6 address space is so astonishingly vast, that it’s possible though not common to have a separate IP address for each host you would want to contact during a single session. IPv6 is not inherently less private than IPv4.

[1]http://www.internetsociety.org/deploy360/blog/2014/06/facebo... [2]http://www.networkworld.com/article/2227543/software/why-ipv... [3]https://tools.ietf.org/html/rfc4941
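The Privacy Extensions behaviour described above works roughly like this sketch: keep the network's /64 prefix but pick a fresh random 64-bit interface ID, rotating it over time so the address no longer encodes the MAC. The prefix here is the reserved documentation prefix; the rotation schedule is elided.

```python
# Rough sketch of RFC 4941 privacy addresses: same /64 network prefix,
# but a random interface ID instead of a MAC-derived one. Real stacks
# also rotate the ID periodically, which is elided here.
import ipaddress
import secrets

def privacy_address(prefix: str) -> ipaddress.IPv6Address:
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64           # SLAAC operates on /64s
    iid = secrets.randbits(64)           # random, not MAC-derived
    return net[iid]                      # prefix + random interface ID

# Two draws from the same prefix: routable on the same network,
# but no stable handle linking them to one another or to the MAC.
a1 = privacy_address("2001:db8:1234:5678::/64")
a2 = privacy_address("2001:db8:1234:5678::/64")
print(a1 != a2)
```

The /64 prefix still identifies the network, of course, which is the grandparent's point: per-host privacy improves, but the subnet remains as identifying as a NATed IPv4 address.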

You are wrong and don't understand how IPv6 works. With privacy extensions, hosts frequently regenerate their interface IDs, making IPv6 addresses no better at uniquely identifying hosts than the TCP port number does with NATed IPv4 connections.

If IPv6 becomes widely deployed, it would make a more decentralized, peer-to-peer Internet possible, and make it easier for small companies to compete with large megacorps like Google. If Google were truly as selfish as you imply, they would oppose IPv6 - a future where IP addresses are difficult to obtain unless you have deep pockets only benefits companies like Google.

> There's a reason Google is pushing IPv6 so hard.

Any sources for this? Otherwise, one could simply assume that they do it because the IPv4 address space is about to be exhausted.

> hostile to the core values of Western democracies.

The US Gov, and the NSA, act hostile to the core values of Western democracies.

2 tru, all networks are hostile until you take over

then they're friendly, to a pt.

by that I mean witness the fitness of network evolution in a hostile environment
