I think this demonstrates two major problems with the SSL certificates we have today:
1. Nobody checks which root certificates are currently trusted on your machine(s).
2. Our software vendors can push new Root Certificates in automated updates without anyone knowing about it.
More content on this rant: https://ma.ttias.be/the-broken-state-of-trust-in-root-certif...
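To make point 1 concrete: most people have never once looked at what their machine trusts. A minimal sketch of how you might actually inspect the platform trust store (here via Python's stdlib ssl module; the set of roots returned varies by OS and distro, which is itself part of the problem):

```python
# List the root certificates your platform's default trust store hands
# to Python. The count and contents differ wildly between systems.
import ssl

ctx = ssl.create_default_context()  # loads the system's default CA certs
roots = ctx.get_ca_certs()          # list of dicts describing each loaded root
print(f"{len(roots)} trusted root certificates loaded")
for cert in roots[:5]:              # peek at the first few subjects
    print(cert.get("subject"))
```

Running this on two different machines and diffing the output is an eye-opener.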
1. Nobody checks the source code of the software installed on their machines (or that the binaries were actually built from the purported source code).
2. Our software vendors can push new software in automated updates without anyone knowing about it.
Suppose that the CA system were replaced with a PGP-style web-of-trust or SSH-style trust-on-first-use system. How would you know that your software vendor has supplied you with a TLS implementation that's correctly and faithfully implemented? Just as they can push out new root CAs, they can push out a new TLS implementation that introduces a bug or backdoor that bypasses all trust checks.
So why single out root certificates?
Because you can only solve one problem at a time.
There are real people behind all levels of these projects, and people are vulnerable to corruption, coercion, etc.
Now, I still think it makes sense to single out the opaqueness of CA trust. The raison d'être for the whole CA system is trust, more specifically delegating trust. It provides nothing else.
Which is why it's so ridiculous that by default the browsers will trust the Chinese and German governments to vouch for Swiss banks, and American companies to vouch for Wikileaks -- even if the Swiss banks or Wikileaks don't trust those entities.
I can give you a pastebin with the links or a link to the zip containing the subset I've downloaded so far. Whatever you prefer. Eventually I'll have one link to an actual web site haha.
site:schneier.com "nick p"
The first limits results to a domain. Using quotes limits to results containing that phrase. A -name removes results containing the name. You can use multiple quoted and minus terms. Those are the main two tricks I use to get good results.
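Putting those operators together, the kinds of queries meant here look something like this (the search terms are just illustrative):

```text
site:schneier.com "nick p"                      only schneier.com, must contain the phrase
site:schneier.com "nick p" "capability" -java   multiple quoted terms, minus an unwanted one
```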
The problem with most people's complaints and solutions about certificate authorities is that pretty much every proposed solution, when applied at scale, becomes functionally equivalent to the web of trust system we currently have. At one point you have to trust _somebody_.
Question: Why does the software need to be updated?
Vendor: "Bug fixes."
Question: Can I look at the source code?
I agree with agwa that this phenomenon of "automatic updates" goes well beyond root certificates and browser authors such as Microsoft. It is pervasive and seems to be growing.
Perhaps a related line of thought, I find it interesting that we are seeing some again pushing for enabling browsers to have more control over the user's computer. Simply put, as I see it, a user visits a webpage and someone else gets to run their code on the user's computer.
Why stop there? Why not have email attachments that open and execute automatically?
Originally there was the "Java applet" idea in the early days of the www. Then there was Adobe Flash. These days the idea has several different working names. Obviously neither Java nor Flash is the language of choice. And the browser authors have something to try to put users' minds at ease: "sandboxing". But I see no difference in the issues this raises for users.
Is there any "sandboxing" for the browser itself? The browser and its authors are inherently trusted?
In the same way that root certificates installed on users' computers are "pre-approved"? Who approved them? Are users involved in that approval process?
If the certificate/code in question comes from the browser authors then it must be both necessary for and desired by each and every user?
The problem with automated updates from my perspective is that there are very few software authors who will not try to give me more than I actually need; if I am not careful I end up with the "kitchen sink". Without user intervention, software is like a gas: it will expand to fill space.
As for the CA system, I am my own CA root. The openssl binary is the antithesis of the so-called UNIX philosophy. How many things does it do? Perfect for an example program to include for "testing". This goes to agwa's comment. I use this software but I know it is a quality control disaster. Keep those updates coming.
In any event, the only certificates I "trust" are ones which I did not obtain over an untrustworthy network, i.e., the internet. That number is of course zero.
I will sign a certificate to satisfy a browser, but that does not mean I "trust" any part of the process. In practice, outside of organizational use, I see the whole CA scheme as a joke. (Not that its implementation has impeded the success of e-commerce.)
As a user, I play along with SSL/TLS and certificates only to get today's software to work. That's it. A nuisance more than anything else.
If I install the trollololz image messaging app in my ~/bin -- I've given the author access to all of my personal files -- because they're all readable by the user under which I'd run trollololz.
If I go to trollollz.com, it doesn't automatically have access to my gmail/outlook.com mail, google spreadsheets or dropbox files.
Of course this is a usability problem, so now users move data to data-services like dropbox, and give apps/websites access to that data...
We might have an opportunity here to give users a UX that works for more sensible compartmentalization of data/working ACLs/capabilities -- but time is running out, as convenience chases security out of people's lives.
(actually, I'm not sure if Mozilla even can push certs without an explicit update)
Once you start playing with gag orders, secret courts and whatnot, all kinds of fun stuff become possible.
Is there a legal mechanism, authority, or track record for such a thing?
If you're talking about Dual_EC_DRBG, that was a non-trivial, poorly-kept secret that failed on launch. An alleged $10 million secret deal, plus development of the algorithm doesn't sound trivial to me.
The problem is, as a layman, I cannot know. I wouldn't have thought that something like FISA court orders was possible, where you get a secret order from a semi-secret court and you are not even allowed to talk about it.
Who knows, maybe there is a secret FOOBAR law that says agents can force any certificate agency to sign random certificates for them. Maybe some weird agency you never heard of forced every major manufacturer to include hardware backdoors, and to lie about it.
A few years ago I wouldn't have thought that was possible. But my trust that the legal system is democratic and transparent has been thoroughly undermined.
Now, if you run a business and some people in suits come and order you to install a backdoor, and threaten you, and tell you you can't talk about the incident to anybody besides your lawyer, you can't do anything about it - and you better hope that that lawyer is good, since otherwise you have no way of telling whether that order is legitimate or not. Those people might as well be criminals, and you have almost no way to find out. Back in the pre-9/11 world, if you didn't recognize the IDs of the, say, FCK agency, you would have phoned around a bit and then told them to f'ck off after hearing their outlandish demands. Because there is no way something like that would happen in our democratic country. You can't assume that anymore nowadays.
I'm not asking you what episode of Blacklist you enjoyed the most, I'm asking you about real life. The lavabit company was served a real warrant made by a real judge, served by a real officer of the court. NSLs are served by real FBI agents with real badges. I'm not debating the anti-liberty essence of an NSL, but the "men in black" fear is completely unfounded by the domain of things that are in the public eye.
The strawman of the "FCK agency" agents ordering people with the threat of jail time to put backdoors in their software isn't backed up by any credible fact. We can suppose and assume all day, but you shouldn't take it for granted that everyone agrees or should agree with you.
*the alleged RSA backdoor was reported by Reuters to be a $10M bribe, doesn't sound coercive to me. Not to mention the NSA has no arrest powers outside its facilities, but sure.
I think that Snowden's email provider was not even allowed to talk about things with their lawyers at some point in time...
I think that even in the current state of things there's not much standing for the NSA to force that to happen.
If you don't trust your trust provider, don't use their software?
I'm also fine with them signing for outlook.com, microsoft.com etc.
I'm not fine with them signing for Wikileaks -- but I also am not really worried about that. I'm worried that some fly-by-night CA will lose their keys, get hacked, etc. So I don't want any more than a minimum of CAs on my system, and I'd like to approve them on a domain-by-domain basis.
Even with good UX, that'd be way more hassle than most people want -- I know that. But it would've been nice to have a sane option for it.
And also some special control over updates/upgrades to the CA-cert store.
In short, I trust Microsoft to write software, I don't trust them to delegate trust, because they're trapped in the CA racket.
The CA system protects nothing against compromise of the underlying application, operating system, or device drivers.
If MS forces me to trust, say, 800 CA certs, and all of them can MITM Wikileaks, the likelihood that one of them could be penetrated by a hacker or a state actor is much higher.
Sure, it's not "absolute security", but nothing is.
There are different concepts that one trusts, or doesn't trust:
The OS, drivers, BIOS, hardware. I generally trust those. It may be naive, but I do. However, even if I'm right in trusting those, that doesn't matter if I can't trust all the CAs. Not just because of malicious actions by the CAs, but because of incompetence on their part.
It's bordering on crazy to assume that Microsoft has backdoored all of Windows at the behest of the shadow intelligence monster ruling our world from behind the curtain.
While some have found many of the NSA revelations shocking, there have really been only a handful of new things: the audacity, and a surprising (but small) violation of the NSA's mission (to keep the USA safe).
The former comes down to crudely monitoring elected officials from allied countries, the latter to the possibility that the NSA might have intentionally weakened crypto (and by doing so hurt the US, the US military, and US companies). Many veterans of the cryptowars of the 90s were surprised by that.
Now, just because a spy agency is good at its job, doesn't mean that no-one else is. I have little love for MS, Intel, AMD (actually I do love AMD. Who doesn't love an underdog? ;-) -- but I very much doubt they are complicit in some kind of grand Clipper chip scheme. They all want to sell to the US, Chinese, and Russian militaries, for one -- you can't do that if the equipment/software is useless.
Now, Siemens (and perhaps MS) might have helped the US with the operation against Iran. That's nice and patriotic, and probably paid well (if not in money, with contacts, further government contracts etc). That doesn't mean Siemens intentionally sabotaged the development of the stuff they sold Iran -- it just means Siemens is as incompetent and rushed as the rest of the tech industry. It's not quite the same as being malicious. Well, not the same as doing "malicious engineering/product development".
Did the US Navy sabotage Tor? Unlikely. If they did, it was masterful subterfuge. If the "great conspiracy" can't get its act together in sabotaging the armoured Humvees they've left to an actual enemy [1,2] -- do we really think they manage to organize around the long game of deploying secure, hidden backdoors in Windows, years ahead of their use?
No, I don't think Windows has hidden backdoors. It might have more security holes than Swiss cheese, and I generally run Debian anyway.
I'm open to being completely wrong though. Show me that a typical mainboard+ram+cpu combination is open to a hidden, intentional (even if masqueraded as accidental) backdoor, and I stand corrected.
In the meantime, let's just try to make stuff that works halfway the way we intend it to, and that includes the whole system. In this particular case, it means that we throw out CAs that we have no reason to trust -- that reason being a) we don't need them, they don't currently certify anything we need to trust, b) they're incompetent, or c) they're hostile (e.g. a foreign government front, or government-owned) -- and we reduce our attack surface.
Add a meaningful capability system ("NORID can only sign .no domains"), pinning, and some other stuff to reduce the scope of CA power (right now every CA is critical, because if someone has any one key, they can MITM everything; that's just nuts).
Basically, I think the NSA is mostly a bunch of useless muppets, and I don't see why we should keep making their job easy. Especially as I'm in Norway, and so, while technically in an allied state, we've seen that that means fuck all. I'm not part of the scandal in the US, I'm not a US citizen. It's part of the above-board, initial brief of the NSA that they're right to try and steal all my data. And that hasn't changed.
False. See slide 73 and forward in http://www.slideshare.net/zanelackey/attackdriven-defense
EDIT: I googled the first two. "GDCA TrustAUTH R5 ROOT" and "S-Trust Universal Root CA" are both new certificates (~November 2014). The latter is in Firefox already, and is a new SHA-256 root certificate to eventually replace a SHA-1 certificate for an existing CA.
Maybe the system could be changed to one where domains can only be signed by the DNS registrar, or something.
It's pretty crazy that any CA can issue a certificate for any domain. And paying for more validation doesn't help you at all, since it won't keep the "bad guys" from getting an illegitimate one from a cheaper/lazier CA.
The problem with the root of trust problem is that you have to trust someone. Who is your preference? The government? The registries? ICANN? The hardware vendor? Those are all suspect too, and most have been historically compromised.
The only thing I can see that helps is "community" auditing -- things like Chrome's pinning and ... the action that discovered these new certs. So, as much as it sucks, we're sort of doing the right thing already.
(disclaimer: I work on this project)
No, silly: the domain owner! Something that's already possible with DANE: the domain owner gets to decide, as it should be, by publishing information about valid certification into the DNS.
Of ALL options this one is the least crappy IMO.
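For reference, DANE's TLSA records (RFC 6698) are exactly this mechanism: the domain owner publishes, in DNS, which certificate or public key is valid for a service. A record for HTTPS on a hypothetical host might look like this (the hash value is a placeholder, not a real digest):

```text
; usage=3 (DANE-EE: match the server's own cert directly, bypassing CAs)
; selector=1 (match the SubjectPublicKeyInfo), matching-type=1 (SHA-256)
_443._tcp.www.example.com. IN TLSA 3 1 1 <hex SHA-256 of the server's public key>
```

Note the record itself is only as trustworthy as the DNS answer carrying it, which is why DANE presumes DNSSEC validation.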
And if you require some level of authentication you are back to the same problem.
If you are instead saying that you should avoid having a URL in the certificate itself, validate it as normal and then use the DNS record to match the certificate to the URL I guess that could work. However you still have the above problem where anyone who has a valid certificate at all can impersonate any website by injecting a DNS record.
Since we already are at the point where the security of the system hinges on the registrar, why not at least limit the damage potential by only accepting certificates issued by the registrar in question? Instead of letting any CA under the sun ALSO issue these?
PS: If certificate issuance was a required and expected task for all registrars, it could even lead to $0 certificates, which would surely boost HTTPS usage. It would even provide an incentive to pick a thorough registrar if you are concerned about security and validation, since no other entity could issue illegitimate parallel certs behind your back. Brings some meaning back to being a high quality authority (even charging more for more trust) instead of the race-to-the-bottom we have today. Sloppy registrar-CAs would lose business because their trustworthiness actually means something for once. (You wouldn't register your serious domain with a registrar known to have poor validation checks in place for issuing certificates, for example)
You seem to have gotten things backwards. If the DNS is hijacked, SSL is the only thing protecting a visitor from being lured into the hijacked site.
Connecting these two things into one makes SSL effectively worthless.
The way it is now, hijacking DNS (between the domain's NS servers and any CA) would allow an adversary to request and obtain an illegitimate certificate by way of domain validation. If the registrar is the only valid issuer of certificates, this loophole is closed, since there is nothing for the CA to verify - the registrar already knows who the customer is and can offer certificate signing without an insecure DNS-based domain validation.
For end-users with a hijacked DNS, the registrar-issued SSL certificate would still protect the transmission, because the DNS-spoofing adversary would not be able to present a valid certificate signed by the registrar. In fact, in today's CA environment, if the DNS hijacker is playing ball with a rogue CA, they could also spoof the SSL. With my suggestion, they couldn't, unless the rogue CA is actually the registrar, because the browser would see that the SSL certificate was not signed by the appropriate registrar.
You could create a DNS-style CA setup like this:
* Browsers would ship with ONE root CA, the public key of the "." root zone operator
* Each TLD ("com", "org" etc) has a CA signed by the root zone operator, valid only for signing TLD registrar CAs. The root zone CA could publish a signed list of TLD certificates daily/weekly/monthly (they shouldn't change too often and the number of entries would be relatively low). Anyone could compare notes and see if these change. There aren't any hidden intermediary CAs. Browsers could sync this list daily/weekly/monthly, the deltas should be minimal. ISPs could even provide mirror services, because the whole list is signed by the root zone operator anyways.
* Each registrar under a TLD has a CA certificate signed by the TLD operator, valid for only signing secondary domains under the given TLD. (i.e. customer domains). Just like the RootZone->TLD signed list of CAs, the TLD CA operator could publish a signed list of registrar certificates daily/weekly/monthly (again, they shouldn't change too often and the number of entries would be relatively low). Again, anyone can compare notes and see if things change. No hidden intermediary registrars. Browsers could yet again sync this daily/weekly/monthly and ISPs can mirror the list since it's signed.
* Registrants of customer domains ("example.com", "example.org", etc.) can request a domain-specific CA at their registrar, and only at their registrar. This domain-specific CA is valid only for signing leaf domains, i.e. "www.example.com", "mail.example.com". With the domain-specific CA in hand, the customer can create their own leaf domain certificates "at home". The registrar should offer non-EV domain-CA certificates for free (there is no work to be done to validate domain ownership as the customer already has an account there), or charge a fee for EV certificates.
* There needs to be a way to determine who is the valid registrar for a given domain. This could be for example a TLS-based machine-readable WHOIS service address operated by each TLD. The TLS-based WHOIS service would negotiate TLS with the same TLD-CA-certificate as specified in the root list. Thus, a browser can connect to the TLD's WHOIS service and validate that a given domain is under a given registrar, and that the leaf SSL certificate is signed by the domain CA certificate, which is signed by the registrar CA certificate, which is in the signed-by-the-TLD list of registrars, which is in the signed-by-the-root-zone list of TLDs. This stuff could also be cached at the ISP level because everything is chain signed.
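The key property of the scheme above is that every signer's authority is scoped to a subtree of the namespace. A toy sketch of just that delegation rule (no real crypto here; all names and scopes are hypothetical):

```python
# Toy model of scoped delegation: each CA may only sign names that fall
# inside its own scope. A chain is a list of (issuer_scope, subject_name)
# pairs from root to leaf.

def in_scope(name, scope):
    """'example.com' is inside scope 'com'; '.' (the root) covers everything."""
    if scope == ".":
        return True
    return name == scope or name.endswith("." + scope)

def chain_scopes_ok(chain):
    """Every link's subject must fall inside its issuer's scope."""
    return all(in_scope(subject, scope) for scope, subject in chain)

# Root signs "com"; the "com" CA signs a registrar CA for "example.com";
# the domain CA signs the leaf "www.example.com".
good = [(".", "com"), ("com", "example.com"), ("example.com", "www.example.com")]
# A CA scoped to "com" trying to vouch for a name under "org" must fail.
bad = [(".", "com"), ("com", "example.org")]

print(chain_scopes_ok(good))  # True
print(chain_scopes_ok(bad))   # False
```

Real X.509 actually has a (rarely used) analogue of this in name constraints; the proposal would make such scoping mandatory rather than optional.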
Maybe this is how DNSSEC works (I haven't looked into the details), but I think this would be a pretty neat way to limit the damage done by rogue CAs.
Not exactly, but it's very similar. DNSSEC has no concept of registrars, and each zone is entirely signed by the same entity.
You'd trust that single entity to inform the registrar anyway, so decentralizing it gains you nothing.
I would have to think about it but I'm sure there is SOME way to verify someone (either in person or digitally). This process would only have to happen once to verify the CA cert for their domain. After that - if their CA cert is compromised it's not a big deal because it only affects one domain.
Oh - and any hoster that allows its employees to reset virtual machine passwords or make account modifications via email/support ticket should NOT be allowed to participate in this scheme.
Also, DNSSEC still has some severe issues in practice. I'd be glad if we could get DANE widely deployed for mail servers.
Edit: Oh, and if you're not using a validating resolver yourself, you're also trusting that your ISP is using one and not manipulating the responses.
I really don't understand why people keep repeating that complaint. Of course, if you don't check the keys you don't get any security.
How is that a problem with the algorithm? And how is that a problem in practice? If you want some real amount of security you check the keys, whether they are SSL certificates, DNSSEC signatures, or whatever other encryption system people put in place.
That's why DANE for mail servers is such an attractive target. They're usually run by people who know what they're doing, and it helps bring a lot of infrastructure into place.
The fact that current software is hard to configure is just a symptom that it's badly designed. The only inherently hard thing in DNSSEC is distributing your domain data (not really harder than setting up your server for TLS), and normal people do not do that.
I think the trust model is broken for the same reason it can be said three can keep a secret, if two of them are dead.
How can humans create a software or hardware system that absolves us of the issue of trust, when people are, and have always been, up to no good?
The courts are full of people who claim other people have acted outside of good judgement.
I don't think it's possible to have trust without its opposite.
I'd like to be proven wrong.
Of course I've since learned how awfully inertial the banking industry is and that the security of financial transactions is a lot of hand-waving and wishful thinking. On top of that, there's outright corruption, with banks profiting from theft and always cooperating with governments.
So the answer is, you shouldn't have to trust anyone. The security infrastructure must be built assuming hostilities are everywhere. Any trust that is given should be limited to just what is necessary to accomplish the task at hand.
The industry does not encourage users to think and learn. The industry wants its software to do the thinking for them. It doesn't want users to think about trust at all. I think that is the scariest part of where things are heading.
It isn't about being passive; it's about having lives and jobs and other things to do with their time. A heart surgeon may be a genius but he doesn't have time to keep up with the latest research and teach himself PKI, certificates, crypto, and permission models across all the platforms he uses (and keep up to date on those too!)
Users will not waste their time thinking about trust. The history of our industry is ample evidence that it would be a losing battle. Not to mention a single mistake can be ruinous, requiring absolute perfection 100% of the time (from users and programmers).
One can hope that, eventually, formally mathematically verified open source systems will help solve the issue, at least on the programming side.
Edit: since this was downvoted, I will elaborate. The halting problem only applies to arbitrary programs on Turing-complete systems. It is possible to create useful systems that are not Turing complete. As a baseline example, it is easy to tell that the following Ruby program halts:
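(The program itself didn't survive in this thread; any straight-line Ruby with only bounded loops makes the point. Something like this, for instance, which is my reconstruction rather than the original snippet:)

```ruby
# A trivially-halting program: no unbounded loops, no recursion,
# so termination is decidable by simple inspection.
sum = 0
(1..10).each { |i| sum += i }
puts sum  # prints 55
```

A language restricted to such constructs is not Turing complete, so the halting problem poses no obstacle to verifying programs written in it.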
That's like saying they are bad at juggling with one hand while riding a unicycle. Passwords are the bad thing, and users have been subjected to them for quite long enough.
Does society try to keep these people illiterate? Do we give up, accuse the millions of illiterate people of not caring, and find ways to hide important things from them? Do we simply let them stay ignorant, making sure we hide any books when they are around, "because they don't care"? While I'm sure there have been people who have done these things, we - as a modern society - generally find illiteracy to be a serious problem; serious enough that we spend significant amounts of money on education programs and other attempts to fix it.
Technology has created a new literacy. I'm not talking about programming or suggesting that people need to know HTML or anything. However, if you want to function in modern society, you need to understand some of the evolving language that society uses. This new literacy includes things like understanding that strings of the form "email@example.com" are probably email addresses, and that something that starts with "http://" or ends in ".com" or ".net" is probably referring to a website. The fact that the educational system has yet to catch up to the changes in technology is not a reason to force decisions on people, nor is it a reason to accuse them of not caring; instead, we should be interpreting the problems people have with security as a sign that a lot more education is needed, and we need to find ways to help people learn so they can protect themselves.
Unfortunately, there are a lot of people in the tech community that are confusing "frustration at not understanding obtuse technology" with "not caring about security". Many people do care, but lack the slightest idea about how to even approach important technical decisions. Certificates, keys, signing, and the like are not what matters, and are minutiae that should be left to those with proper technical knowledge. What matters in security is where the trust is being placed, and that is something a lot of people are skilled at.
Obviously, change isn't going to happen overnight. My suggestion is that we need a way to let people at least start down that path of managing trust. A good first step would be simply exposing the process so people can learn that a trust decision is even necessary.
"Functionally illiterate" is defined as those adults with a reading level below the basic requirements necessary to "perform simple and everyday literacy activities". Data is from https://nces.ed.gov/naal/kf_demographics.asp
A PKI system like we have now is one of them, but we need a system where that is not the only available form of trust. Yes, it will be necessary to educate people about this topic (and never discussing it with the user will only keep them ignorant). Right now we have a system where users are forced to trust various CAs, and if someone decides they want to trust something else, that is difficult or impossible.
The problem of WHO to trust for a particular website (that is, every website) is a separate problem which can be solved over time. My point is that we don't even have the tools to let people even begin to solve that problem.
Yes. They should pay a service to keep track of who they should trust. They wouldn't be opened up to any more risk by this than with the status quo, MS + Apple + Google would end up giving it away for "free" for those who don't have a problem with that or don't know better, and the option to get more technical about issues of trust would always be available to them.
There is some addon (if I remember correctly) for Firefox that binds a specific key to a specific domain and raises a warning if the certificate changes.
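The core of what such an addon does is trust-on-first-use pinning, which is simple enough to sketch (names here are hypothetical; a real addon would persist the store on disk rather than in a dict):

```python
# Pin-on-first-use: remember a certificate's SHA-256 fingerprint the
# first time we see a host, and flag any later change.
import hashlib

def check_pin(store, host, cert_der):
    fp = hashlib.sha256(cert_der).hexdigest()
    if host not in store:
        store[host] = fp        # first use: pin it
        return True
    return store[host] == fp    # False means the cert changed -> warn

store = {}
print(check_pin(store, "example.com", b"cert-bytes-v1"))  # True (first use)
print(check_pin(store, "example.com", b"cert-bytes-v1"))  # True (unchanged)
print(check_pin(store, "example.com", b"cert-bytes-v2"))  # False (changed!)
```

The hard part isn't the check; it's deciding what to tell the user when it fires, since legitimate certificate rotation looks identical to an attack.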
If you want a system that can hinder a government, a large organisation, or a corporation doing that, then look elsewhere.
Do you remember to wipe your hard drive when you get your computer back from the repair shop? They could very well have installed a new root CA without you noticing....
Wiping your HDD might not be enough:
Theoretically you could hide malware in your GPU, USB, FireWire, webcam, NIC, baseband, Bluetooth, where ever.
No matter who you are or how smart you think you are, if your adversary is determined and has the required resources, they are going to get you.
Just trying to further emphasize your point, not be obnoxious. The truth is, there's almost no possible way to not expose yourself. Anything made by humans can be abused by other humans for personal gain.
Partly because they do care about these things and they are sending patches upstream as well, so that as many applications in Debian as possible can eventually be built reproducibly.
I hadn't seen it before, but even better they added this piece of text on that page:
"we care about free software in general, so if you are an upstream developer or working on another distribution, we'd love to hear from you! Just now we've started to programatically test coreboot and OpenWrt - and there are plans to test Fedora, FreeBSD and NetBSD too."
This has definitely significantly improved the security of Windows, though not given users any kind of protection against malfeasance by Microsoft (including in the sense of giving people malware or simply less-secure new versions of stuff through Windows Update).
For example, Windows Update is used to blacklist CA certs (as in the case of DigiNotar). The step from that to adding CA certs over Windows Update is really small.
I don't mean the simple ability exposed in most browsers to add/remove certs. That still assumes one set of trust that is used globally, which is completely incorrect.
Maybe I don't trust $COUNTRY to handle their root certificate for most uses. Currently we handle that case by removing the cert completely. Trust, however, is not a simple boolean value, and maybe I do trust that certificate for $COUNTRY's official government pages. I should be able to specify that I trust some certificate for some domain (or other, non-domain based use!), but not for others.
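What non-boolean, per-domain trust could look like in code, as a sketch (the CA names and domain scopes below are made up for illustration):

```python
# Per-domain CA trust instead of the all-or-nothing model: each CA is
# trusted either for specific domain subtrees, or for everything.
ANY = "*"  # sentinel meaning "trusted for every domain"

TRUST_POLICY = {
    "ExampleCountry Gov Root": ["gov.example"],  # only official gov pages
    "Widely Audited Root CA": ANY,
}

def ca_trusted_for(ca_name, hostname):
    scope = TRUST_POLICY.get(ca_name)
    if scope is None:
        return False  # CA not in the policy at all
    if scope == ANY:
        return True
    return any(hostname == d or hostname.endswith("." + d) for d in scope)

print(ca_trusted_for("ExampleCountry Gov Root", "portal.gov.example"))  # True
print(ca_trusted_for("ExampleCountry Gov Root", "bank.example"))        # False
print(ca_trusted_for("Widely Audited Root CA", "bank.example"))         # True
```

No mainstream browser exposes anything like this today; removing the cert entirely remains the only lever.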
As another example, consider a local Web of Trust. Whenever Web of Trust is brought up, people complain about the difficulty of key exchange. Well yes, that's a difficult problem, but there is no reason that it has to be solved for all use-cases before anybody starts using it. Maybe a circle of (usually physically) local friends want to have secure communications. They can share a key in person easily, and so it should be easy to give access to a private forum by simply sharing a key/cert on a USB disk.
We can currently approximate those cases, but it is not well supported, and is certainly not something that most users would be expected to be able to do. We can fix some of that with a better UI, but I'm suggesting a far more fundamental change, because actually solving problems like key sharing will not be easy, and I suspect they will only be solved once we have infrastructure in place. HTTP was successful because it did not require that everybody implements the full, fairly complex specification. Instead, we had a fluid, extensible protocol that allowed anybody to extend it, and that allowed for the development of a wide variety of software.
The problem with traditional PKI (at least as implemented) is that it assumes that we can assign an absolute trust value to anything. In reality, trust is relative, and may in fact have multiple values at the same time. Until software is designed around those realities, it will always be inflexible and insecure for any use case where the needed trust assumptions do not reflect the assumptions made by the authors of the software.
Unfortunately, I'm an old-style UNIX nerd who is fine with using GPG, and I'm not sure what the UI for a dynamic-trust system would even look like. sigh
The truly paranoid would probably prefer a printout on paper; I've actually done this before with HTTPS self-signing.
So, I could have my own CA, or GNU/Debian could have their own CA -- and I/they could sign a capabilities document allowing NORID to sign off on .no domains, but not on any other TLDs. Or I could trust Microsoft and Google with their own domains (a whitelist), and nothing else.
Such lists could be distributed in a similar way to certs and revocation lists -- and might take us from a 100% broken system to an 80% working system -- which is probably as good as automated(ish) trust is going to get anyway.
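A rough sketch of how checking such a capabilities document might work (the document format, CA names, and root names here are all hypothetical): only delegations signed by a root I already trust count, and a CA with no delegation can sign nothing.

```python
# Roots I have chosen to trust directly (hypothetical names).
MY_ROOTS = {"MyRoot", "DebianRoot"}

# Hypothetical capability documents: each one, signed by some root,
# delegates signing rights to another CA for specific suffixes only.
CAPABILITY_DOCS = [
    {"signed_by": "MyRoot", "ca": "NORID", "may_sign": [".no"]},
    {"signed_by": "MyRoot", "ca": "GoogleCA",
     "may_sign": [".google.com", ".youtube.com"]},   # whitelist
]

def ca_may_sign(ca: str, hostname: str) -> bool:
    """A CA may only sign what a capability doc from a trusted root allows."""
    for doc in CAPABILITY_DOCS:
        if doc["signed_by"] in MY_ROOTS and doc["ca"] == ca:
            return any(hostname.endswith(sfx) for sfx in doc["may_sign"])
    return False  # no capability document found: deny by default

print(ca_may_sign("NORID", "stortinget.no"))  # delegated for .no
print(ca_may_sign("NORID", "example.com"))    # outside the delegation
```

Incidentally, X.509 already has a name-constraints extension that expresses something like this, but it constrains what a CA puts in the certs *it issues*; the proposal here is the reverse -- letting the *relying party* attach the constraints.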
[edit: I don't think delegating trust is completely wrong. It's insane to think that 10 billion people will acquire the knowledge required to make good trust decisions for software etc. Most people can't even do that for normal stuff like money.
Anyone remember NTBugtraq? I loved it. I still can't read official MS security bulletins. I usually only see that there's a patch, rated at some comically low importance, and usually have no idea whether I really need it or not. And I consider myself rather technical; while Windows isn't my primary platform, it's not completely alien either.
The summaries on NTBugtraq were gold. "This is a remote root in a default install of IIS." OK, I don't run IIS; note to self, make sure this patch is installed if I ever do. "This is a local login bypass in GINA." OK, let's install that, prioritizing public-facing machines. Etc.
That is why I'd like the ability to trust third parties to help manage trust. I don't think MS is maliciously complicating their security bulletins; I just think they're trapped in an obscure dialect of Corporatese that I don't speak, and really have no desire to learn.
Maybe I should order a few copies of "On Writing Well" and ship them to Seattle, attn: MS security team.]
is better than what I remember. Note that the page doesn't mention certificates, and the new update doesn't appear to be mentioned on:
For example, say you want to take someone out to a nice restaurant. You might ask friends and colleagues, but some might have different tastes and some might just have an incentive for you to go to a particular place. Once you have a few options, you can Google those choices to see if they match what you are looking for.
So for something as simple as a dinner, we already do several checks to validate the place and the comments people have made.
The issue comes when the web is involved, because we expect instant reaction and response. We also hate having to answer repeated questions, as it interrupts our thought process. So we need a web of trust, but also the option of sharing your personal web of trust. For example: I want to go to newcorp.com, but I get a warning that its certificate is not allowed by my chosen web of trust. So I query to see if anyone in my local web of trust has allowed this page. It finds that Bob has, but I have given Bob a low trust setting, so it won't let me through automatically; instead it will tell me that Bob's web of trust allows it. Do you trust Bob's choices enough to allow access?
This would at least allow people to make trust choices that they understand. As humans we thrive in groups, as long as they are not too big, and the Internet is a huge group of people. It's a lot easier to decide whether I trust Bob from the office than, for example, Google, ICANN or even Mozilla.
That way, non-technical people can see the web of trust of their tech-savvy friends and make a choice based on what those friends have set.
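The lookup described above can be sketched in a few lines (the peer names, trust levels, and site lists are invented for illustration): a high-trust peer's allowance grants access, a low-trust peer's allowance only prompts the user.

```python
# Hypothetical local web of trust: my ratings of peers, and what
# each peer has chosen to allow in their own web of trust.
MY_TRUST = {"Alice": "high", "Bob": "low"}
PEER_ALLOWS = {"Alice": {"forum.example"}, "Bob": {"newcorp.com"}}

def check(site: str) -> str:
    for peer, level in MY_TRUST.items():
        if site in PEER_ALLOWS.get(peer, set()):
            if level == "high":
                return "allow"   # a highly trusted peer vouches for it
            # a low-trust peer vouches: surface it, but ask the user
            return f"ask: {peer} allows it, but you rated {peer} low"
    return "deny"                # nobody in my web of trust allows it

print(check("forum.example"))    # Alice (high trust) allows it
print(check("newcorp.com"))      # only Bob (low trust) allows it
print(check("unknown.example"))  # nobody allows it
```

A real system would of course need signed, revocable statements rather than plain dictionaries, but the decision logic is this simple at its core.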
It's just a thought, but I think we need to bring trust and security back to our local, controllable environment. The Internet will keep integrating with our everyday lives in ways that are hard to even imagine now. So we need to figure out security and trust now, not once my jacket is talking to the street lamp, telling it to dim the light because I had eye surgery yesterday and shouldn't be around strong lights. How the hell do we add security to all the kinds of functions that are coming?
Sorry for the long rant, but it's something that worries me about the future.
Essentially he's proposing side-loading a new application under a custom URL scheme, so that browsers will launch a helper app to handle web applications with the following URL format:
web: publickey @ ipaddress / capability
He's planning to build the helper app on a sandboxed Node.js and Qt application, which just uses a TCP session to communicate with the server.
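For illustration, here's a minimal parser for that URL format as described above (the key, address, and capability values are dummies; the spaces in the format line above are treated as optional):

```python
import re

def parse_web_url(url: str) -> dict:
    """Split a 'web:<publickey>@<ipaddress>/<capability>' URL into parts."""
    m = re.fullmatch(r"web:\s*([^@\s]+)\s*@\s*([^/\s]+)\s*/\s*(\S+)", url)
    if not m:
        raise ValueError("not a web: URL")
    pubkey, address, capability = m.groups()
    return {"pubkey": pubkey, "address": address, "capability": capability}

u = parse_web_url("web:ab12cd@203.0.113.7/chat")
print(u["pubkey"], u["address"], u["capability"])
```

The interesting property is that the expected public key travels *in the URL itself*, so the helper app can authenticate the server end-to-end without consulting any CA.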
Mozilla are considering doing this also.
This is not just browser support -- it's TLS library support.
I think the author might be a bit behind the news on Tunisia, though.
They should know better.
3.1.2 Need for Names to Be Meaningful
The Issuing CA shall ensure that the subject name listed in all certificates has a reasonable association with the
authenticated information of the subscriber.
The hypocrites can't even manage that with their own 'RXC' CA name.
Dunno what MS's process is like, but with Firefox there's a nice application process that's very open, and you can see the applicants' claims.
However, there's nothing to be seen in Bugzilla about Cisco regarding their new CA. I wonder if it is MS-only?
These days Chrome/Chromium uses the system store AFAIK, so you can at least get surprising breakage in your management tool that isn't due to TLS, but rather some kind of IE compatibility hack... ;-)
As far as I know, the US has almost non-existent privacy laws when it comes to what corporations are allowed to do (or demand to do) to their employees through contracts, with respect to traffic on company equipment.
Forcefully and silently intercepting traffic on employee networks would AFAIK be illegal in most of Europe.
Maybe I should, but I am not going to individually double check every root certificate. I don't think I have the means to do so either.
> A flaw in this system is that any compromised root certificate can in turn subvert the entire identity model. If I steal the Crap Authority's private key and your browser trusts their certificate, I can forge valid certificates for any website. In fact, I could execute this on a large scale, performing a man-in-the-middle (MITM) attack against every website that every user on my network visits. Indeed, this happens.
> HPKP is a draft IETF standard that implements a public key pinning mechanism via HTTP header, instructing browsers to require a whitelisted certificate for all subsequent connections to that website. This can greatly reduce the surface area for an MITM attack: Down from any root certificate to requiring a specific root, intermediate certificate, or even your exact public key.
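To make the header mechanism concrete, here's a small sketch that parses a Public-Key-Pins header in the RFC 7469 style into its directives (the pin values below are dummy base64 strings, not real hashes):

```python
def parse_hpkp(header: str):
    """Split an HPKP header into pin-sha256 values and other directives."""
    pins, directives = [], {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            name, _, value = part.partition("=")
            value = value.strip('"')
            if name == "pin-sha256":
                pins.append(value)       # collect each pinned key hash
            else:
                directives[name] = value # e.g. max-age
        else:
            directives[part] = True      # valueless directive
    return pins, directives

hdr = ('pin-sha256="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="; '
       'pin-sha256="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB="; '
       'max-age=5184000; includeSubDomains')
pins, opts = parse_hpkp(hdr)
print(len(pins), opts["max-age"], opts.get("includeSubDomains"))
```

Each pin is a base64 SHA-256 hash of a certificate's SubjectPublicKeyInfo; the browser remembers them for max-age seconds and rejects any chain that doesn't contain at least one pinned key.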
related previous articles:
Firefox 32 Supports Public Key Pinning (188 points by jonchang 304 days ago | 100 comments): https://news.ycombinator.com/item?id=8230690
About Public Key Pinning (72 points by tptacek 43 days ago | 5 comments): https://news.ycombinator.com/item?id=9548602
Public Key Pinning Extension for HTTP (70 points by hepha1979 242 days ago | 28 comments): https://news.ycombinator.com/item?id=8520812
We want an open yet secure web, ideally even anonymous. With the current setup, that is not easily possible. Let's Encrypt might help, but even then, there is still someone you need to beg for a signed cert.
Maybe we need to think ahead.