It seems the governments have a very different idea of what those values are than the people. Until those ideas are aligned, governments are out to get the people. There is no point in any of this. Because ultimately, no matter what technical solutions you can come up with, force and law always trump those.
Perhaps at some point you could make the argument that we don't explicitly know what the government does and that's why it's doing it and getting away with it. That's no longer the case. We know exactly what the government does, we don't think that it's right, and yet we can do nothing to stop it. So either we need to overhaul government or accept the status quo and quit bitching about it or trying to create technical solutions to fix social problems.
If the government can mandate networks spy on computers, it can mandate manufacturers spy on users. As they are already doing this, fixing the network solves nothing. As for foreign adversaries spying on users, well if you are not in the US avoiding that is impossible as most of your computing experience is under regulatory capture by the US government.
That's a very important point that I think many who champion technical solutions conveniently set aside. Remember that in the US in the 90s--in other words, not that long ago--PGP was illegal to export and Phil Zimmermann was criminally investigated for developing and distributing it.
Today PGP and GPG are some of the most powerful encryption tools we have available to us--tools that Snowden says we can still trust. If someone makes a better tool, or if these tools start to bother governments again, then governments could just declare them illegal. Then you're faced with the question of: "Do I encrypt my mail and risk a SWAT team blowing up my front door and shooting my dog, then spending hundreds of thousands of dollars in a protracted legal battle against the full might of the US government, or do I just forget about it?"
The tech genie is out of the bottle, and the answer isn't more tech--it's getting people, and thus governments, to understand and champion these tools just like we do.
No, the answer includes better tech. OpenSSH and LibreSSL are doing good work in removing insecure ciphers and legacy code. HTTP/2 looks like it should do well, because no browser implementation accepts plaintext. I’m hoping that protocols like OTR and Darkmail become more widespread.
PGP is basically unusable. You can use it to dump secrets on journalists, after you have trained the journalist. You can’t use it to lead a quiet, private life. I tried. It’s not integrated with anything, and nobody else uses it. It’s fatally flawed.
Being riddled with fatal flaws is an aspect PGP shares with the lucrative Certificate Authority system, with S/MIME, with the DNS security system, with IPsec, with cell phone encryption.
Worse, actual solutions are orthogonal to big business interests. Decentralized trust? Google has no need for this; Google has its own intermediate CA (which, through the totally broken CA model, means Google can issue valid TLS and S/MIME certificates for any domain in the world). Federated content sharing? Facebook has no need for this; Facebook needs all your data to keep it safe for you, and also to analyze and monetize it.
What we need is to follow the article’s plea: Truly understand that the network is hostile, and design our systems to make security the easy default. And drop insecure protocols already, wherever there is a viable alternative.
All you would need for this is to model each user's interaction with the system as a statement, where a statement could be either code or data, and then sign and encrypt these statements by default and encrypt the statement's decryption key with recipients' public keys as needed.
This might sound esoteric, but we could already do this today just by taking something like Unison and adding this encryption layer.
As geeks we should really own up to the fact that it's perfectly feasible to do this... and get working on it.
P.S. Fun fact: at its core, such a language would pretty much be what John McCarthy described as 'the language for the year 2015.'
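A minimal sketch of that sign-and-encrypt-by-default model, with toy primitives standing in for the real thing (an SHA-256 counter-mode stream instead of a proper AEAD, and HMAC "signatures" plus shared-secret key wraps instead of public-key crypto; all names and keys are hypothetical):

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. A real system would
    # use an authenticated cipher such as AES-GCM or ChaCha20-Poly1305.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR against the keystream; the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def publish(statement: bytes, author_key: bytes, recipient_keys: dict) -> dict:
    # Every statement (code or data) is signed by its author and
    # encrypted under a fresh content key by default.
    content_key = secrets.token_bytes(32)
    return {
        "ciphertext": xor_crypt(statement, content_key).hex(),
        "signature": hmac.new(author_key, statement, hashlib.sha256).hexdigest(),
        # The content key is wrapped once per recipient. Here the wrap
        # is done under a shared secret; a real system would wrap it
        # with each recipient's public key.
        "wrapped_keys": {
            name: xor_crypt(content_key, k).hex()
            for name, k in recipient_keys.items()
        },
    }

def read(envelope: dict, name: str, recipient_key: bytes) -> bytes:
    content_key = xor_crypt(bytes.fromhex(envelope["wrapped_keys"][name]),
                            recipient_key)
    return xor_crypt(bytes.fromhex(envelope["ciphertext"]), content_key)

env = publish(b"attack at dawn", b"A" * 32, {"bob": b"B" * 32})
print(read(env, "bob", b"B" * 32))  # b'attack at dawn'
```

The point is that the per-statement envelope is structurally simple; all the hard problems live in key distribution and in making this invisible to the user.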
It's integrated with Thunderbird via Enigmail. It works at least as well as MS Outlook's integrated message signing. If your reply to that is something like "Everyone uses webmail.", I say "I've been using IMAP to access Gmail since five minutes after the feature was added to Gmail.".
This seems to indicate that the only email client that has real problems with attached signatures is Outlook Express. I can't bring myself to care much about Outlook Express users. (Is OE even installed on Windows 7 and later?)
The bigger problems are that it’s not integrated and it’s not effective for shielding metadata. With difficulty and being mindful of the sharp edges, you can use OpenPGP on desktop. You need a clunky third-party email client to get it on mobile. Best case scenario, you send lots of extra data that gets ignored. Worst case, all those attached signatures get downloaded as files, causing confusion and anger. And there’s no interoperability with S/MIME, which actually is supported by most proprietary email clients.
OpenPGP does not protect metadata. It piggybacks on SMTP, which publishes the sender, recipient, and subject line in the clear, along with the identity and timing of each server in the transport path from when you send the email to when it lands in your recipient’s mailbox. PGP was originally conceived as the “envelope” to protect your message contents, but in this era of unlimited surveillance we need to do better.
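To make the leak concrete, here is a sketch (addresses and subject line hypothetical) of what a passive observer on the SMTP path still sees even when the body is PGP ciphertext:

```python
from email.message import EmailMessage

# An OpenPGP-encrypted mail still exposes its headers; only the
# body below is ciphertext.
msg = EmailMessage()
msg["From"] = "alice@example.org"          # visible to every relay
msg["To"] = "bob@example.net"              # visible to every relay
msg["Subject"] = "Meet at the safehouse"   # visible to every relay!
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n"
    "...ciphertext...\n"
    "-----END PGP MESSAGE-----\n"
)

# Everything here travels in the clear; each relay also stamps its
# own Received: header, recording the transport path and timing.
exposed = dict(msg.items())
print(exposed["Subject"])  # the subject line is not protected by PGP
```

PGP encrypts the letter; SMTP still reads the envelope aloud to everyone who handles it.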
"Why aren't we doing this now?" Probably for the same reasons that we're going back to the 1990's world of Instant Messaging walled gardens; techies and cypherpunks are winning some battles but not most of them, and non-technical users don't understand what they give up when they choose a centralized Web and ISPs that make them incapable of acting as a peer on The Internet. 
To the rest of your comment:
To the best of my knowledge, I've never had Enigmail or Thunderbird mangle my PGP signatures. I participate in a few technical mailing lists; the folks there absolutely would tell me if my signatures were getting fucked up. For mail that doesn't go to these lists, I can double-check the message copied to my sent mail folder. :)
Installation of Enigmail on Windows, Linux and (I presume) Mac is trivial. Based on reports, the sharp edges you speak of do exist; send only plaintext email and attach (rather than inline) your signature, and you will, in my experience, avoid all of them.
The world of mobile software is largely a cesspool. Maybe I'm ignorant, but it seems like the only folks doing good privacy-protecting mobile software are Open Whisper Systems, the Tor Project (by way of Orbot), and Whatsapp. (The Whatsapp folks are in this list because of the work that Open Whisper Systems did to integrate TextSecure's near-zero-effort crypto into Whatsapp's software.)
 Yes, I totally understand that almost no one in the US has a real choice in ISP. Even here in Silicon Valley, I find myself prevented at every turn from giving my money to local, independent ISPs. There's a little comfort in the fact that Comcast allocates and routes up to a /60, and performs very, very little inbound filtering.
which fatal flaws are present in "IPsec"?
The important thing to worry about is that once some law passes, journalists will happily announce that the legal battle is won and the spies won't spy anymore (ha!). Most people will be happy about that, and continue to ignore any technical issues. That is, stay easy targets until the next scandal.
Both technical and legal solutions are important. We just can't ditch one and say that it's not important anymore - neither works well without the other.
But eventually, it may be necessary to get serious about steganography. With pervasive streaming HD video, covert channels providing even 0.1% of throughput would be workable. The harder challenge would be plausible deniability. If the steganography fails and that SWAT team arrives, you want them to discover that you've been compromised by covert-channel botnets. Just like every other user and server on the Internet.
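A back-of-envelope check on that 0.1% figure, assuming a nominal 8 Mbit/s HD stream (real bitrates vary):

```python
# What does a 0.1% covert channel inside a pervasive HD video
# stream actually buy you?
stream_bps = 8_000_000                     # nominal 8 Mbit/s HD video
covert_fraction = 0.001                    # 0.1% of throughput
covert_bps = stream_bps * covert_fraction  # 8,000 bit/s

bytes_per_hour = covert_bps / 8 * 3600     # 3.6 MB per hour of video

print(covert_bps)       # 8000.0
print(bytes_per_hour)   # 3600000.0 -- ample for text and keys
```

That is comfortably enough for email, chat, and key material, which is why "workable" is the right word; the plausible-deniability problem is the genuinely hard part.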
But yes, it's complicated. I've been writing how-to guides for years, and people's eyes do tend to glaze. Still, I'm sure that it could be automated.
Smartphones are the huge problem. They're intentionally designed for tracking and surveillance. And they are the primary channel for Internet access in many places. So it does seem pretty hopeless :(
You mean like how people can anonymously buy drugs on the internet? It's working fine. It's not perfect, but it doesn't have to be.
> Starting in April 2012, a DEA undercover agent in Maryland (the UC), began communicating with Ulbricht about selling illegal drugs on Silk Road. That agent was one of several assigned to the Baltimore Silk Road Task Force. The UC claimed to be a smuggler who specialized in moving substantial quantities of illegal drugs. In December 2012, Ulbricht set out to find a drug dealer on Silk Road who could purchase large quantities of drugs from the UC and directed his administrators, including Green, to assist. Green assisted the UC to establish contact with a buyer, who was an established seller of drugs on Silk Road (the Vendor). The UC and the Vendor negotiated a deal for one kilogram of cocaine for approximately $27,000 in Bitcoin, a digital currency that has no association with a national government, is difficult to track, and easy to move online.
> Without the knowledge of either Ulbricht or the UC, Green agreed to act as a middle-man for the Vendor and take delivery of drugs. As a result, the Vendor provided Green’s address to the UC as the place to which the cocaine was to be delivered. On January 17, 2013, an undercover U.S. Postal Inspector delivered the cocaine [very highly cut] to Green at his residence. Shortly after Green accepted delivery of the cocaine, federal agents with the HSI, DEA, U.S. Postal Inspectors and the U.S. Secret Service executed a search warrant at Green’s residence and recovered the kilogram of cocaine. U.S. Secret Service agents also conducted a forensic examination of Green’s computers and digital media seized during the search.
Outlawing encryption without making virtually everyone guilty (because some of it, like TLS, is extremely common) is a hard task on its own, and any attempt at it would surely be fiercely countered.
Here's what I'm trying to say: there's absolutely nothing that prevents the government from forcing you to grant them access to your private (i.e. encrypted) content. If they decide that you're a threat to national security, they can do whatever they please to gain access to anything you have secured.
Before you say "but it's technologically impossible!" let me remind you that in any system, humans are the weakest link. How long do you think you can last under torture? The CIA has hundreds of "secret sites" overseas. What makes you think that the Department of Homeland Security doesn't have any equivalents here in the US, that they can imprison you in and torture you until you reveal everything you know?
It's really naive to think that technology can solve all problems. Think outside the box (or the computer, in this case) and really try to grasp what the most powerful nation in the history of the world is capable of. They can force the private jets of foreign officials to land in allied airports and search them thoroughly. If you think your fileserver is safe simply because you have its contents encrypted, you have a lot to learn.
That's the best case. Worst case is they shoot you.
I don't believe this to be true. I think techies are looking at opinions within their own circles and making the incorrect assumption that the general population has similar opinions.
From what I can see, we have mass surveillance because it is what the people want. We have the TSA because it is what the people want. We have drone strikes because it is what the people want.
People are terrified of terrorism and want the government to take strong measures to fight it. This attitude may well be partly caused by the government, which benefits from this fear, but it is no less real for it.
Our governments aren't the best but they are far from being lost causes when it comes to being responsive to what the people want. The problem is that what the people want is often bad. Efforts to make the government better represent the wishes of the population will fail to make any difference with stuff like this, because that goal is already achieved in this case.
If we want this stuff to stop, we must first convince most of the population that it should. Without that, nothing else will be effective. And with it, change will come easily.
A majority of Americans (54%) disapprove of the U.S. government’s collection of telephone and internet data as part of anti-terrorism efforts,
More broadly, most Americans don’t see a need to sacrifice civil liberties to be safe from terrorism: In spring 2014, 74% said they should not give up privacy and freedom for the sake of safety, while just 22% said the opposite.
Most say it is important to control who can get their information (93%), as well as what information about them is collected (90%).
Fully 87% are aware of the federal surveillance programs;
In most countries we surveyed in 2014, majorities opposed U.S. government monitoring of emails and phone calls of foreign leaders or their citizens.
I think the same applies to the survey. Ideally people don't want intrusive government surveillance but at the same time it's not important enough nor egregious enough for them to mobilize over it, so they let it be --as long as it's not "too much".
I think you've hit on a good point, and following your analogy, most of us are just too damned lazy or apathetic to get worked up about it (bad food or bad government). It's easier and cheaper to buy and consume unhealthy food, just as it's easier and (mentally) cheaper to just pretend everything is fine with the world the way it is. Sure, we're going to get fat and die of a heart attack at 50, and sure, we're going to steadily lose our privacy and the ability to exercise our rights; but in the here and now, we're living in blissful ignorance and it feels just good enough to not bother changing things. In the back of our mind we know there's a better path, but taking that path requires effort and sacrifice, so we take the path of least resistance.
Questions about privacy are tricky, because what exactly do people mean? You and I probably think that "privacy" means the government doesn't collect our info. But I've seen a lot of people defend the NSA on the basis that their strict controls mean their surveillance isn't a privacy breach in the first place.
I also don't think the question about approval of the NSA's activities is all that useful here. When people disapprove, do they disapprove because they think it shouldn't be done at all, or because they think the NSA isn't doing it properly in some way?
I think there are a lot of people who believe that broad surveillance to combat terrorism is a good thing, that if done properly it would not harm privacy or civil liberties, and that the NSA's current approach is in some way less than perfect. (Note that I don't agree with any of this.)
Now, I realize I have no actual data to back this up. And while I don't trust polls to be accurate (they can be reasonably accurate about the question they're asking, but the exact wording of the question is hugely important and trying to extrapolate beyond precisely what is said is tough), I have to admit that this poll does suggest I'm wrong. But I don't think it disproves my idea.
The people can eventually get what they want. But the government default behavior switched in 2001, and the people don't want it strongly enough - yet - to over-ride that.
From what I can see, most people just don’t understand and don’t care. Just like I don’t understand and don’t care about my local football team, most people don’t understand and don’t care about security.
Worse, they have the relentless propaganda machine of government and access journalism to mislead them. Ask an ordinary American to apply common sense, and I think the majority will say: of course I don’t want the government looking at my dick pics, of course I don’t want to be processed like cattle before I enter the airport, of course I don’t want to execute people without trial with lots of collateral damage. But hide all of that in a veneer of: the experts say we need this, the government says it’s fine, and look at this cute puppy and the weather is beautiful and who is the latest celebrity and our sports team is doing… and I can understand why most Americans don’t do anything about mass surveillance.
We can do nothing to stop it because we are not in the majority. That is how democracy works. If you believe we should be able to stop it anyway, then you do not believe in democracy.
Educate the public. Tell people why they should care. Get them to vote on it. Because right now most people are apathetic or positive on surveillance as a way of protecting them.
That's true to some extent (meaning that if everyone was fully against it and loud about it, I'm not so sure "We the People" could change that much).
However, not that many people know exactly what the government is doing. It's mostly the readers of tech media and sites like Reddit and HN that do. Most others may have heard that "NSA spies on everything", but deep down either they aren't very sure it's true, despite all the reports, or they actually don't understand what they are doing.
John Oliver showed us that in one of his shows. When he told most people on the street about it, they were like "Oh, okay. But they are doing it to catch terrorists, right?" - and when he told them that the programs actually include taking their nude photos too, most of them were then like "Wait - they are doing WHAT?!"
That's proof most "normal" people aren't aware of the implications of NSA's spying programs - which are now also FBI's programs, and DEA's programs, and even the police's programs. Most people just aren't very well aware of all of that or of what it truly means.
And the government along with the friendly media keep spinning it to misinform the public, too. For instance, when Bush passed the Patriot Act, he said in his speech that "we've passed these powers so we can read the terrorists' emails" (paraphrasing). Of course, most people heard that and thought "Hmm, so they need to read the terrorists' emails - okay, I don't see why that's a bad thing. Sounds like a very good thing, actually".
Except, what Bush really meant was "We've passed these powers so we can read all of your emails and then determine, based on everything you've ever written, whether you are a terrorist".
If he had phrased it like that (which is actually its true meaning), the outcome for the Patriot Act may have been different.
So bottom line is "we" (as in most people) don't actually know or understand what the government is truly doing, and in fact most people still need to be much more educated about this issue.
* technological (strong crypto everywhere, decentralization of data collection and retention)
* UX (technology must be usable for a common person without security and CS background)
* political (strict laws protecting privacy and related human rights AND general consensus that these rights are important)
We technologists naturally tend to focus on the first point but getting it right is not nearly enough.
I'm not convinced that's even it. There appears to be a large segment of the US population that believes Snowden is a traitor, and that the NSA were just doing their jobs to "keep us safer". Franklin's famous quote about those who trade essential liberty for security deserving neither comes to mind... but it seems that many people do believe that their right to privacy ends when there's the threat of terrorism involved.
I hope that attitude is changeable, but if we want to have even a chance of reining in our governments, we need to get these fearful government apologists to understand why their thinking is in error.
"no matter what technical solutions you can come up with, force and law always trump those."
^this applies much deeper and further than you might have realized.
As the world turns ...
You live in a democratic country, right? So you can. But you won't with that sentiment. If everybody thinks like you then you indeed are helpless.
If we are talking about the USA, then there is data which indicates that no, it's not a democratic country.
Nonsense: in a democratic country, the minority typically are powerless to stop the majority. That's kinda the definition of a democracy.
Now, some republics provide some protection for the rights of minorities, but even those can be removed with sufficiently strong majorities.
In the US, and perhaps other Western democracies, governments are the people and people are the government, not separate from each other. If a government is doing something, it's because the people value it, at least enough to not change it.
Edit: Downmods? Really? Refute my claims rather than attempt to bury them.
> If a government is doing something, it's because the people value it, at least enough to not change it.
I'm tempted to tell you that's bullshit, and others might feel strongly enough about it that they're clicking the down arrow. The NSA, for example, doesn't care which way I voted. So we have to sit around "knowing" they do stuff, but with no evidence. Finally, evidence comes out a la Snowden. Then we wait ten more years while we finally get around to passing laws to rein in the offending agencies/people. Add it all up and thirty years have gone by. Meanwhile, using the NSA example, they've found other ways to sneak around or outright flout laws; lather, rinse, repeat.
It might look like no one cares (rather than people actually valuing it, as you say), but even if they do care, the change doesn't come overnight.
Yes, the US government has done and probably continues to do very bad stuff. NSA wiretaps are probably one of the least innocuous things it has done.
IF the US people actually did want to shut it down and change the behavior of government, they could. I don't think most do (or at least, don't regularly think about it [but may agree if prompted to consider]), know how they could, or know what actual behavior/policy they would replace things with.
I'm not sure we're quite in that situation yet, but certainly there are severe inequities in how similar situations are treated (which is supposedly not allowed).
You would expect to see exactly that in a degraded democracy.
Secure Connection Failed
The connection to blog.cryptographyengineering.com was interrupted while the page was loading.
The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
Please contact the website owners to inform them of this problem.
True, but not a useful observation because we're stuck with the core of what we have. I think it's more worrying atm that nobody can be bothered to even deploy what we do have: TLS, OCSP stapling, HSTS, HPKP, DNSSEC. This stuff isn't difficult to deploy at the individual level. Especially for this crowd. You can make a difference.
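Even checking whether a site has these in place is scriptable. A minimal header-audit sketch (example header values hypothetical; HSTS is RFC 6797, HPKP is RFC 7469):

```python
# Given a site's HTTPS response headers, report which of the
# easy-to-deploy hardening headers are missing.
HARDENING = {
    "Strict-Transport-Security": "HSTS: force HTTPS on future visits",
    "Public-Key-Pins": "HPKP: pin the server's key",
}

def audit(headers: dict) -> list:
    # Header names are case-insensitive, so normalize before checking.
    present = {k.lower() for k in headers}
    return [name for name in HARDENING if name.lower() not in present]

example = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
}
print(audit(example))  # ['Public-Key-Pins'] -- HPKP not deployed
```

Run something like this against your own domains before worrying about anyone else's.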
> We don't encrypt nearly enough
Ironic from a security-conscious cryptographer and blogger who isn't protecting his readers or himself with TLS. OK, Matt isn't using WordPress, but many do, and I wonder how many of them ever log in to moderate, or edit a post, over networks they don't entirely trust? WordPress has a built-in file editor and stores its config file in the docroot by default, for crying out loud... if someone gets your admin session cookie you're toast. They're one patch away from your password, and your commenters' passwords and email addresses, if they trust you with such, and can plant as much malware on your site as they please.
> It's the metadata, stupid
Yet Matt Green and Troy Hunt both use Blogger, effectively allowing their readers' interests and comments to be further pervasively catalogued by Google.
I'm not saying these minor hypocrisies are even 1 millionth as grievous as failing to prevent the NSA from wiretapping the UN, or even terribly important at all, but damnit... there are things we can all do instead of just pining for a privacy utopia that isn't going to come. If you want privacy to be the norm then protect everything in your power, and aggressively, NOW, everyway you know how.
That’s because this stuff is difficult.
TLS: The Let’s Encrypt ceremonies seem to be going apace, perhaps to be finally launched a month from now, but in the meanwhile you get only 1 free certificate per year that actually works with most clients: The StartSSL product.
Which means you can encrypt only 1 hostname. Have multiple domains? Too bad, you pay money. Have multiple hosts? Same thing. One of the things that made tech so accessible is that you didn’t need to pay to start playing, and TLS breaks that.
Also, want to support Android Gingerbread clients? Then you need an IP address per TLS certificate. No SNI for you. You do know we’re in an IPv4 address space crunch, right?
OCSP, HSTS, HPKP: Need a functioning TLS, first.
DNSSEC: Have you actually tried to implement DNSSEC? My personal domain is validated using DNSSEC. A whole lot of pain for dubious gain.
And these security technologies are not set and forget. Microsoft seems fond of getting TLS maintenance wrong, causing failures in cloud services or the basic security model. DNSSEC also is supposed to do regular key rotations. Which individual has time for all of that?
That’s if you even have access to enable security. A whole lot of content is now published in centralized silos: Twitter, Facebook, Google, Wordpress. No federation, no outside control: no need for individuals and organizations to care about privacy. You are totally free to set up or join a diaspora* pod, but you will find yourself forever alone.
I think technology can be developed to make privacy easier, and I think insecure defaults and fallbacks should be eliminated, but I am convinced that it will not be easy.
Wosign are giving away gratis SHA256 certs valid for up to 3 years, and supporting up to 100 AltNames. They're in all the major trust stores, and cross-signed by Startcom (StartSSL).
I believe free basic wildcard certs will come. Someone will eventually break formation.
> want to support Android Gingerbread clients?
No. Gingerbread is down to <5% of the Android user share and lots of apps already don't support it. XP is arguably more of a problem at 10-12%.
IPv4 exhaustion is still mostly a problem of allocation. Big vendors already have a glut of IPs. Even the budget VPS providers don't seem to care if you spin up a dozen VMs just for the IP space, which isn't much more expensive than the $2/mo many charge for additional IPs.
> Have you actually tried to implement DNSSEC?
Yes, I have. It's one easy command with PowerDNS.
His closing point is very open ended. But think about how "network security" sells products in today's landscape: if, as Green suggests, these new systems fulfill the goal of not having to worry about the network because they are designed with an inherent zero-trust model, how does the landscape of "network security" change? If the data path is opaque to network protections (firewall, IPS, URL filtering, etc.), does the endpoint radically change? Do we all end up with a containerized laptop with a front-end NGFW/UTM/security blob, locally routed into my guest operating system of choice? And are the general functions broken out into secure segments so that I can work and play while minimizing the risk of a malicious actor exfiltrating corporate data while I browse the questionable reaches of the Internet?
Thought provoking, although, as Green states, I don't see many moving the ball quickly (yet?).
> "network security" sells products
Because people desire fire-and-forget solutions. We in the field know that security is not a product, or even (pun intended) a product of products. It's an ongoing process, where a term like "arms race" fails to convey the full meaning or scope.
And above all, it's hard. If you want network and communications security at scale, you will need to solve how to deploy a fairly large set of keys. Secure key distribution depends on auditable trust. Bootstrapping trust relationships is HARD.
Let's take an easy example. The entire CA industry is built around the premise of seeking rent on trust. And then, to work around the problems that arose from having a CA system in place, we came up with solutions like certificate pinning. Which really boils down to imposing distrust upon the very system that exists because they sell trust.
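That "imposing distrust" fits in a few lines: an HPKP-style pin is just a hash of the key the client expects, checked regardless of which CA vouches for the chain. A sketch (the DER byte strings below are hypothetical stand-ins for real SubjectPublicKeyInfo encodings):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    # HPKP-style pin: base64(SHA-256(SubjectPublicKeyInfo)).
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# The client ships with (or remembers) the pins it expects.
PINNED = {spki_pin(b"example-server-public-key-der")}

def connection_allowed(presented_spki_der: bytes) -> bool:
    # Accept the connection only if the presented key matches a pin,
    # no matter how many CAs signed the chain that delivered it.
    return spki_pin(presented_spki_der) in PINNED

print(connection_allowed(b"example-server-public-key-der"))  # True
print(connection_allowed(b"mitm-proxy-public-key-der"))      # False
```

Which is exactly the point: the pin set encodes "I don't trust the trust-sellers", bolted on top of the system we pay them for.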
While people want secure solutions, they really want to buy a product. Precisely they want something that "just works", and in effect make the complexities of security someone else's problem.
Ephemeral keys and forward secrecy are a solved problem for real time and near-real-time communication. Why don't we have a Hangouts or Skype or Yahoo Messenger that are secure against the state-actor threat?
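Solved in the sense that the core machinery fits in a screenful. A toy ephemeral Diffie-Hellman sketch (deliberately tiny Mersenne-prime group, nowhere near real-world parameters; real protocols use X25519 or the RFC 3526 MODP groups):

```python
import secrets

# Each session generates fresh key pairs, so a key compromised later
# reveals nothing about past sessions: that is forward secrecy.
P = 2**127 - 1   # toy prime modulus (far too small for real use)
G = 3            # toy generator

def session_secret() -> int:
    a = secrets.randbelow(P - 2) + 1   # Alice's ephemeral private key
    b = secrets.randbelow(P - 2) + 1   # Bob's ephemeral private key
    A = pow(G, a, P)                   # public values exchanged in the clear
    B = pow(G, b, P)
    shared_alice = pow(B, a, P)
    shared_bob = pow(A, b, P)
    assert shared_alice == shared_bob  # both sides derive the same key
    return shared_alice                # then a, b are thrown away

# Fresh randomness every session: yesterday's transcript cannot be
# decrypted with today's keys, because yesterday's keys are gone.
s1, s2 = session_secret(), session_secret()
print(s1 != s2)  # True (with overwhelming probability)
```

The math has been deployed in TLS as ephemeral DH for years; the missing piece is vendors choosing to build products where they themselves cannot comply with a wiretap order.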
At some point we have to assume the companies providing these services have been persuaded to sell us all out.
Therein lies the problem.
Money controls everything. It's scary sometimes how fast people forget the moral obligation to their clients and sell them out as soon as someone drops a few million dollars in their laps.
After 9/11, AT&T etc were arguably motivated to cooperate with the NSA out of fear and patriotism. From Bamford, I get that the NSA and its precursors have done the same since the 1800s.
But we enjoy and expect certain services for free. In this case, as we all know, we are necessarily the product being sold.
Google's BeyondCorp initiative recognizes this and treats the internal network as untrusted.
Instead of trusting a privileged network or VPN, securely identify devices and users assuming untrusted networks.
 http://static.googleusercontent.com/media/research.google.co... [PDF]
I feel like there's no point in trying to chop it down any smaller than that. All you're doing is shifting trust from "system A" to "system B", without really changing anything.
The reason for putting the encryption in system B AKA the keypad and display controllers is, if you want end to end encryption, the keypad decoder and the display controller are the closest to the end points you can get. And they are single purpose devices that don't run arbitrary code nor accept arbitrary input.
Compare that with laptop CPUs and complex operating systems with their huge attack surfaces, running software written by completely untrustworthy corporations and individuals.
Be careful designing firmware upgrades, though. Also, I really don't see what kind of attack it prevents when you have a trusted keyboard and display but an otherwise compromised OS.
What I'd like to know is why people thought it was ever fully secure in the first place. It really never was.
X.509 PKI simply isn't enough: it's a worst-of-all-worlds solution in which there's not just one global trust root, there are hundreds.
Using the blockchain as a globally-verifiable data store is interesting, but comes with an incredible cost (and may still be vulnerable to manipulation).
Better, I think, would be to embrace the reality that human beings are citizens of states, and to leverage that: if the governments of the United States, Iran, Germany, Russia, Australia, Uzbekistan, Chad, Chile and Peru all agree to a statement, then it's very probably true. We could use that kind of unanimous (or supermajority) agreement as a trust root for identity, since it's extraordinarily unlikely that every state in the world would agree to the same lie.
Once we have a global trust root, it's easy enough to carve off namespaces within it. States could have authorised textual namespaces (e.g. '(global-root us)' for the United States: in a very real sense, '(global-root us foo)' is whatever the US government wants it to be).
With this scheme, anyone would still be free to have his own, additional, alternate roots; an assertion of '(global-root uk british-airways)' would not apply to '(billy-joe random-orgs ba)' unless the objects thus named shared the same key.
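A sketch of such a supermajority trust root, with toy HMAC attestations standing in for real state-held signing keys (all state names, keys, and the claim format are hypothetical):

```python
import hashlib
import hmac

# A claim about a name is accepted only if a supermajority of
# states attest to it. HMACs stand in for public-key signatures.
STATE_KEYS = {
    "us": b"k-us", "iran": b"k-iran", "germany": b"k-germany",
    "russia": b"k-russia", "chile": b"k-chile",
}
THRESHOLD = 4  # supermajority of the five states above

def attest(state: str, claim: bytes) -> str:
    return hmac.new(STATE_KEYS[state], claim, hashlib.sha256).hexdigest()

def accepted(claim: bytes, attestations: dict) -> bool:
    valid = sum(
        1 for state, sig in attestations.items()
        if state in STATE_KEYS
        and hmac.compare_digest(sig, attest(state, claim))
    )
    return valid >= THRESHOLD

claim = b"(global-root us foo) -> key:abcd1234"
sigs = {s: attest(s, claim) for s in ["us", "iran", "germany", "russia"]}
print(accepted(claim, sigs))        # True: 4 of 5 states agree

sigs_bad = dict(sigs, russia="forged")
print(accepted(claim, sigs_bad))    # False: only 3 valid attestations
```

No single state (or coerced CA) can forge a binding alone; an attacker has to corrupt a supermajority of mutually hostile governments.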
How do we (blanket statement) try to address the overall level of understanding that people have around this topic and make them understand that the problem is real, serious and needs significant thought?
I've thought about this a fair bit. Think about your average non-super-technical co-worker. How do we get them to see the problem in a clear way, and how do we rally people around the problem? I don't know how to, but I try and fail and try again. It's a tough gig. I do have an enormous man crush on Matthew Green though.
Google is pushing IPv6 so hard because IPv4 just can’t expand enough to fulfill our needs. And not just Internet-of-Things, your-light-bulb-needs-to-be-online silliness: IPv4 can’t expand enough to raise each community and each organization to a current first-world level of Internet usage.
Want to build a Chinese Facebook? You can’t get enough IPv4 address space for all the servers. Want to build a Facebook in America? Same problem. Vint Cerf doesn’t need to be employed by Google to say that IPv4 just wasn’t intended for the Internet of 2015. He made IPv4. He knows.
Using your MAC address for your IPv6 host address is now obsolete. Privacy Extensions are the current best practice. IPv6 address space is so astonishingly vast, that it’s possible though not common to have a separate IP address for each host you would want to contact during a single session. IPv6 is not inherently less private than IPv4.
If IPv6 becomes widely deployed, it would make a more decentralized, peer-to-peer Internet possible, and make it easier for small companies to compete with large megacorps like Google. If Google were truly as selfish as you imply, they would oppose IPv6 - a future where IP addresses are difficult to obtain unless you have deep pockets only benefits companies like Google.
Any sources for this? Otherwise, one could simply assume that they do it because the IPv4 address space is about to be exhausted.
The US Gov, and the NSA, act hostile to the core values of Western democracies.
then they're friendly, to a point.
By that I mean: witness the fitness of network evolution in a hostile environment.