That problem is entirely due to you choosing to run Google/Apple software on a network attached computer that you carry with you everywhere (even, apparently, to poo).
Mapping the source of radio signals is common and expected. Your Wifi router is basically a big bright lightbulb that flashes with a unique pattern, and can be seen through walls. You can't legitimately ask the rest of the world not to notice it, the same way you can't request that no one use your porch light as a landmark.
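To make that concrete, here's a minimal sketch of how trivially anyone nearby can log that "lightbulb" (assumptions: Linux, scapy installed, and a wireless interface already in monitor mode; the interface name "wlan0mon" is hypothetical):

    # Sketch: log the (BSSID, SSID) beacons every nearby router broadcasts
    # many times per second. Requires root and a monitor-mode interface.
    from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

    def show_beacon(pkt):
        if pkt.haslayer(Dot11Beacon):
            bssid = pkt[Dot11].addr2  # the router's globally unique MAC
            ssid = pkt[Dot11Elt].info.decode(errors="replace")  # first element is the SSID
            print(bssid, ssid)

    sniff(iface="wlan0mon", prn=show_beacon, store=False)

As far as I understand, recording those beacons alongside GPS coordinates is the heart of what the mapping fleets do, just at scale.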
Even if the security protocol were "append this number to the messages you send me", it still wouldn't be easy to break, since you have no successful transcript, no information about the implementation, and no way to cheaply try keys.
Oh how fickle you are. A company that supports an open source project out of pocket for two years now doesn't love open source because they have moved on to other things? Over time, technologies change, stabilize, and/or become obsolete; expecting a company to support one project forever is silly and boring.
How about: dear Engine Yard, thanks for 730 days of open source love, can't wait to see where you invest your time next.
I don't disagree, which is why I'm wondering if this is a case of not having enough money to continue supporting these projects, or a case of seeing the momentum or the need for support in open source changing to something new. So far, though, I've only seen Engine Yard drop support for projects. If, tomorrow, they announced that they were starting to support a handful of Go or node.js projects, then I think we'd all have a much different view on this news.
Being a professional developer is hard. That's why it's generally a high-paying position: you solve hard problems and are accountable for the results.
So stop making excuses. An organization of this scale should be taking serious precautions (against code errors, bad hardware, malicious attacks, network problems, compiler bugs, cosmic ray induced bit errors, etc.) when building a credit card processor.
If some truly extraordinary justification comes out, I may be sympathetic. But Occam's razor suggests this was avoidable (through good process, not "I should have seen that off-by-one error").
> We're getting better and better bandwidth. We're getting more and more computing power. A few milliseconds doesn't hurt anyone.
Yeah, but the speed of light isn't getting any faster, and little guys can't afford to have their servers everywhere. If HTTP 2.0 reduces the round trip count, then it will make a big difference in download times for small pages.
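A back-of-envelope sketch of why round trips, not bandwidth, dominate small page loads (every number below is an assumption for illustration):

    # Rough model: a small page is latency-bound, so load time is roughly
    # (number of round trips) x (round-trip time). All numbers are made up.
    rtt_ms = 100              # assumed mobile round-trip time
    setup = 1 + 1 + 2         # DNS + TCP handshake + TLS negotiation
    http1 = setup + 6         # six small resources fetched serially
    http2 = setup + 1         # the same six multiplexed over one connection
    print(f"HTTP/1.1: ~{http1 * rtt_ms} ms, multiplexed: ~{http2 * rtt_ms} ms")

Extra bandwidth does nothing for that gap; only fewer round trips (or servers closer to the user, which the little guys can't afford) can close it.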
Consider that until people start being told "no, you can't get more IP v4 addresses" or some services are unavailable via IPv4, the incentive to switch to IPv6 is quite low.
We have IPv6 connectivity from our colo provider in one location, and a tunnel to our office, but two years in we're still only testing it, for exactly this reason: there's never enough time, and it's not yet urgent enough.
I see the proposals of binary protocols, push content, etc. as symptoms of a deeper issue.
Contrary to the spirit of HTTP, those protocols have little to do with publishing content anymore. They are just more kludges, in a long history of kludges, to patch browsers into application platforms.
That mainly benefits the big web monopolists, who require the browser to be the ultimate application platform, where they can track to their hearts' content, display unsolicited advertising, and basically extort businesses into advertising on their channels to remain relevant on the web. Not quite the idea of an "information repository" that spawned the web in the first place.
Right. Because the alternative, bloated pages out the wazoo, makes so much more sense than smarter Apache and Nginx servers that do all this for you. Remember, most of the web will be connected by cell phones; we need better protocols for that.
Perhaps I can reply more clearly after some sleep :) What I meant is that it's to support an evolution of content from static print and "data" to dynamic, interactive and responsive content the likes of which we've never seen before on such a wide scale of platforms and devices. Write once, run anywhere truly exists for content that conforms to web standards, and HTML5 Components will push that boundary even farther. I'll admit, authoring tools and publishing techniques need to play catch up, but that's not a reason to stop progress on standards. That's only a reason to slow or change adoption patterns and usage. Build a better Wordpress, I say, one that's two way. It's not always for the big advertising players, you know...
Read. That's why I mentioned Apache and Nginx, since the Google SPDY team has focused on making sure that there are plugins for each, plus CloudFront support, so that it can be applied "to the little guys". And in turn, I would ask you: Have you not used Web Sockets? How about HTML5 Web Components? New JS functionality? As "hackers", I would expect more excitement than this -- since you're going to enable a whole new generation of low latency cell phone apps, OSes based on web browsers and apps for them (e.g. Win8.1, ChromeOS and Webkit) and do so in a very standards-based way, rather than ad-hoc distributing SPDY through your proprietary browser. Short of security flaws like SSL's header compression, what could go wrong? :)
Read again. I'm not saying the technologies enable advertising. I'm saying it's in their best interest to have people do all useful work inside a browser, with a technology stack they control, increasing ad impressions and enabling ubiquitous user-behavior tracking. This is the holy grail of advertising and would strengthen their monopoly even more.
For Google, a .0001% reduction in user overhead may translate to millions of dollars of resource savings. Google is willing to do crazy things like run their own modified Linux kernel that most companies wouldn't bother with.
It's funny, isn't it? We spread sites across more domains to speed things up, but now that slows us down (referring to your link).
I would highlight that HTTP 2 > SPDY as far as standards go, and the "beta" protocol of SPDY is already at 55% market share including IE 11: http://caniuse.com/spdy
For those who are interested in speed optimization for 2013 and beyond, plus why SPDY's faster-than-SSL descendants are crucial to the web's success, have a look at http://www.igvita.com/slides/2013/breaking-1s-mobile-barrier... -- I believe there's a video out there too somewhere. Edit: Video link is on first page of the slides.
What's interesting too is that this talk focuses as much on what's achievable without SPDY as what might be with, down the road. We really need faster SSL negotiation for that first time connection cost.
Google can afford to dedicate a bunch of engineers just to deal with all these complexities and inconveniences of binary protocols, but others can't. And this is far more important than some hypothetical tiny improvements in performance.
I don't understand why your criticism of DNSSEC focuses on TLS vs. DNSSEC.
DNS is a scalable global key-value store, and DNSSEC allows owners of namespaces to sign their own key pairs and delegate to sub-namespaces. If you can make the case that _that_ is not valuable, or how that is accomplished by other means, I am curious. But TLS vs. DNSSEC doesn't cut it.
Yes, DNSSEC/DANE can make TLS work better. It provides a straightforward way to pin an entire zone tree while allowing site owner modifications. Your example of Ghadaffi and Bit.ly is silly, because the status quo is that governments already have access to CA keys able to issue for any server. Restricting to a single zone can only improve that.
I don't know how you can claim NSEC3 is a grotesque hack without noting that it is equally grotesque to pretend that your DNS records are private. If I wanted to collect the BoA zone, I would set up recursive nameservers on the coffee shops' wifi within a quarter mile of their headquarters and grep the logs. Incidentally, this would work even with DNSCurve deployed.
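A variation on the same idea, sketched in a few lines (scapy, root privileges, and an interface on the open wifi are assumed): rather than running the resolver yourself, just sniff the queries.

    # Sketch: passively log every DNS lookup visible on an open network.
    # Neither DNSSEC nor DNSCurve hides query names from this vantage point.
    from scapy.all import sniff, DNSQR

    def log_query(pkt):
        if pkt.haslayer(DNSQR):
            name = pkt[DNSQR].qname.decode(errors="replace")
            if name.endswith("bankofamerica.com."):
                print(name)

    sniff(filter="udp port 53", prn=log_query, store=False)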
Anyone who would advocate for DANE in 2013 is looking at a situation where users assume that governments have compromised the PKI that drives the most important encryption on the Internet, and saying to themselves, "let's bake that problem into the network architecturally; let's make it so that the NSA doesn't even need to compromise a CA, because they'll own the global root of all CAs".
Regarding NSEC3: most people reading this thread don't know what it is, so I'll explain it really quickly and let them decide, because it is so obviously a stupid hack that I don't think I need to argue against it too much:
Just a couple years ago --- more than a decade after work on DNSSEC was started --- somebody realized that Bank of America would not in fact be OK with a DNS design where every single one of their hostnames, for both public and internal systems, was public. But that's a problem, because DNSSEC wants to authenticate negative responses; if there's no JABBERWOCKY.BANKOFAMERICA.COM, DNSSEC wants that cryptographically proven. And the design for doing that breaks if there are nonpublic names, because DNSSEC chains the names together as a way to authenticate denial.
So here's what they came up with: domain names are hashed (crappily) as if for a 1997 Unix password file, and the authenticated denial messages refer to hash ranges instead of literal hostnames. Meaning you can only discover all of Bank of America's hostnames if you are as technologically sophisticated as a circa-1997 password cracker.
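For the curious, here's a sketch of the RFC 5155 hash and the dictionary attack it invites. The salt, iteration count, and target hashes all come straight out of the zone's public NSEC3 records; the salt value and wordlist below are made up for illustration.

    import base64
    import hashlib

    def nsec3_hash(name: str, salt_hex: str, iterations: int) -> str:
        # RFC 5155: SHA-1 over the wire-format name, then re-hashed
        # with the salt `iterations` more times.
        wire = b"".join(
            bytes([len(label)]) + label.lower().encode()
            for label in name.rstrip(".").split(".")
        ) + b"\x00"
        salt = bytes.fromhex(salt_hex)
        digest = hashlib.sha1(wire + salt).digest()
        for _ in range(iterations):
            digest = hashlib.sha1(digest + salt).digest()
        # NSEC3 records carry the digest in base32hex
        std = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
        hexa = "0123456789ABCDEFGHIJKLMNOPQRSTUV"
        return base64.b32encode(digest).decode().translate(str.maketrans(std, hexa))

    # The 1997-style attack: hash candidate names until one matches a hash
    # the zone itself published.
    for word in ["jabberwocky", "vpn", "intranet", "payroll"]:
        print(word, nsec3_hash(word + ".bankofamerica.com", "aabbccdd", 10))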
I don't know why you're talking about the NSA. And if we are, do you really think they can't already ask Verisign for an arbitrary cert? And won't the same tools that modern network programs use to protect against these attacks (certificate pinning, convergence, etc.) work equally well when applied to DNSSEC KSKs?
NSEC3 is good enough. I gave you a trivial way of acquiring jabberwocky.bankofamerica.com, even with DNSCurve deployed. If BOA wants network accessible services on a network accessible namespace to be private, they should make a zone cut at internal.bankofamerica.com and restrict access to the delegated NS (which can be the same machine). The easiest way to do this is to run a VPN, which they already do.
I am asserting that having a global key-value store, where namespace owners can sign their own entries and make delegations, is a valuable system to have in place. That is what DNSSEC is. Unless you can argue against that, you are simply beating on a straw man.
Why spend the money adopting DNSSEC if it's at best a marginal setback to Internet security?
The "trivial way of acquiring jabberwocky.bankofamerica.com" relies on somehow being in the same coffee shop as an employee who accesses the site using public DNS. Whereas DNSSEC just goes right ahead and publishes the information.
As for "making zone cuts" --- they haven't. Very few networks have. DNSSEC advocates just like to pretend that everyone has either architected their DNS zones the way they would, or that they'll all relabel all their hosts to fit that way.
I don't know why I should care about a "global key value store where namespace owners can sign their own entries and make delegations". We can have lots of those. Why use a crappy one bolted onto DNS?
It's not a setback at all. You can still use the existing CA system. In fact, you can just not set the secure bit and ignore its existence.
> As for "making zone cuts" --- they haven't. Very few networks have. DNSSEC advocates just like to pretend that everyone has either architected their DNS zones the way they would, or that they'll all relabel all their hosts to fit that way.
Very few networks have ridiculous PHB requirements for public servers, defined on public namespaces, that are somehow slightly more difficult to find than normal (and once the cat / jabberwocky is out of the bag and published to a mailing list somewhere, that obscurity gives no advantage whatsoever).
Those that do have reasonable options for satisfying said PHBs, first with NSEC3 and then with zone cuts and private networks (which actually does solve the problem, instead of just pretending to solve it).
> I don't know why I should care about a "global key value store where namespace owners can sign their own entries and make delegations". We can have lots of those. Why use a crappy one bolted onto DNS?
What alternatives? To my knowledge, there is no credible alternative system to DNS. Why put up with a DNS system that is not end to end verified when you don't have to?
Yes, but that's a hack, and not nearly as well-studied as HMAC or SHA-3. If you decide to do it this way, at least be sure to make it very obvious that "upgrading" to SHA-512 completely breaks your cryptosystem.
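For anyone following along, a sketch of the two constructions side by side (key and message are placeholders):

    import hashlib
    import hmac

    key, msg = b"secret-key", b"amount=100&to=alice"

    # Prefix MAC: fine for a sponge like SHA-3, but with a Merkle-Damgard
    # hash (SHA-1/SHA-256/SHA-512) anyone who sees this tag can forge a
    # valid tag for msg + padding + suffix, without knowing the key.
    naive_tag = hashlib.sha256(key + msg).hexdigest()

    # HMAC: the nested construction blocks length extension with any hash.
    safe_tag = hmac.new(key, msg, hashlib.sha256).hexdigest()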
Thanks for pointing me at those papers! I'll admit it's been studied more than I thought.
(I was actually more worried about cryptanalytic attacks - the construction itself is obviously secure when instantiated with a random oracle - but I don't really see how that would work either. I maintain that it's ugly, but HMAC isn't the nicest construction in the world either, I suppose...)
I suppose there is some CYA value in HMAC, since there is less attacker control over what goes into the outer hash. If it turns out that prefix-MAC (with chopMD) is vulnerable to some kind of cryptanalysis while HMAC is not, however, the hash should be considered broken, and not just in the academic sense.
You're completely right, but it's not like people have transitioned off of SHA-1 (Wikipedia: "As of 2012, the most efficient attack against SHA-1 is considered to be the one by Marc Stevens with an estimated cost of $2.77M to break a single hash value by renting CPU power from cloud servers"; note that renting cloud server time is not a sensible way to approach this problem). Or even MD5.
We can all agree that running PGP in the browser is suboptimal for a whole host of reasons.
Here is why it might actually be a good thing:
1. Exposing a webmail PGP interface makes it easy for less technical people to communicate with people who may not trust their provider. For example, you may believe that gmail isn't going to do anything bad, and therefore have no interest in setting up Thunderbird+Enigmail. However, you may want to send email to a person who is forced to use an untrustworthy provider (and therefore uses a 'correct' PGP setup like Enigmail).
2. Exposing a webmail PGP interface allows people to play with PGP and become familiar with it before investing the time in a more robust software configuration.
Nothing is anything. We're all just subatomic particles destined for heat death.
openpgpjs, the topic of the article, is a program that implements the OpenPGP standard, and the cryptographic algorithms it depends on.
node-gpg, the topic of this comment, does not implement any part of the OpenPGP standard, nor any cryptographic algorithms. It executes a local C program, GnuPG. If you don't have that C program, it does nothing.
Suggesting that they are the same thing is silly. The distinction is massive.
Otherwise, I could claim I just implemented a world-class PGP implementation! To try it out, just run: perl -e "exec 'gpg'"