This is a problem we (GitHub) are facing in a big way right now. Google Charts doesn't offer https alternatives, so almost all our users get a big "this site is going to steal all your private information" (mixed content warning). We chose to roll out SSL first, then deal with the hard problem of mixed content warnings (building ridiculous image proxies) later.
I think a lot of developers underestimate how big an impact this warning has on users, especially on browsers like IE that throw up a dialog on every page that has this warning. Developers understand that it's not that big a deal, but to a user it looks like the site is full of viruses and malware and is going to steal all your bank account information.
This also broke Bingo Card Creator something fierce when I rolled out SSL support. It was the reason I hadn't had it previously, and I knew it was going to be a problem going in, and I tested for it, and I still managed to hose two pages which were critical to my business for most of a week.
Figure on a 40~50% drop in conversion from a non-technical audience on IE if they get one of those popups, by the way. It is the worst possible place to be: not enough to trigger an automated "Oh cripes!" from the website, but big enough to murder business results.
During the last Velocity conference, one of the last sessions on the last day was a talk from Google guys about how to make SSL faster, because they had recently turned SSL on for all gmail accounts.
I asked how they deal with the unlocked icon and warning dialogs for mixed protocol content on the page and the response was that people are so used to the popups and the lock being unlocked, that they (Google) don't consider it to be a problem. The response was really short and curt and I felt it was kind of a cop-out.
Well, as I recall, several of the questioners at that session were verging on the point of heckling, so many of the responses were short.
But the answer is that permitting mixed content was probably a mistake in the first place, but it's one that we have to live with. The ease of mixing content means that many sites get it wrong (including Google sites, to our shame) and the lack of ubiquitous SSL (again, including some Google sites) imposes that on others.
So, I suppose that "we don't consider it a problem" is roughly correct regarding warning dialogs: the answer is not to mix content. The problem is that it's clearly too difficult to do that. (The inability of networks to cache public resources over HTTPS is also an issue, and possibly one which we'll address.)
Lack of SSL on the Charts API is a new one to me, but I'll look into it now that I know it's a problem.
As for the rest of the problem: fixing stuff is hard. Miraculous answers invariably tend to be so only in the eyes of the conceiver. We'll keep plugging away.
That's good to hear. But considering that only now is SSL considered to be "important", because of FireSheep, it would have been nice to have a major player like Google seriously consider/suggest/lead the dialog on solutions here, even if some of them are "hard" or unworkable. It's nice to have options, or know what the options are. Or even to say "there is no solution, create systems that don't mix protocols".
I mean, when I went back and summarized my experience at Velocity to the rest of my team, the way this question was glossed over led to some audible guffaws. We've all been dealing for years with users who don't know how to handle the UX of this problem.
Agree 100%. I wasted most of a day (in increments over several weeks) worrying about whether I had screwed something up at my end, tweaking my gmail settings, analyzing TCP traffic and so on. A little bit of information from Google's end would have saved me hours of needless security anxiety.
Lack of complaint != contentment. I am pretty annoyed to hear of this indifference to users' peace of mind.
Somebody should tell the Chrome team that. A recent version of Chrome changed the mixed content warning indicator from a relatively innocuous "padlock with a cross" to an alarmist "skull and crossbones". We got a lot of complaints about that (due to not yet having built the "ridiculous image proxies" kneath complains about above).
It seems like they may have thought better of this change, since my current version of Chrome (6.0.472.63) seems to have gone back to the padlock-and-cross.
I suppose; unfortunately, the talk was in the context of making the same kinds of changes to your, or any random, site to make it more feasible to use SSL.
Also unfortunately, when there is mixed-protocol content, especially with email, you're not asserting trust of the page origin but of the additional assets loaded. Google has no control over the content referenced in emails. Encouraging people to ignore the warnings doesn't make anyone safer, especially when people aren't informed enough to care.
One of the suggestions was to use shorter key lengths to make SSL less expensive to process; this wasn't considered a welcome suggestion by many of the more security-conscious and vocal folks in the room.
The warning isn't spurious, by the way. A man in the middle could inject evil JS into urchin.js (or whatever the equivalent is now) just as easily as he could inject it into your site's JS; the page is not secure.
That being said, the second part of your argument is completely wrong. You can just as easily inject evil JS using an https server and never trigger the mixed content warnings.
The warning serves to indicate to users that some assets (think important-financial-graph.jpg) aren't being served with the same encryption as the rest of the page. But then again, browsers like Safari have no problem with this. Other browsers like Firefox (correctly) cache these assets on disk if Cache-Control: public is set, thereby storing the assets unencrypted.
The warning may not be spurious, but it doesn't actually tell you whether the page is secure.
The problem is that you asserted it is "just as easy".
It certainly might be possible for the attacker to compromise a specific server that you have chosen to trust - but that's a much higher barrier to an attacker than performing MITM on an open Wifi connection which doesn't require them to compromise any server.
The entire concept of the SSL icon is so that a user can trust a third party (web developer) that they don't know. If it's up to you (the web developer) — it's all up in the air again. And the icon/warnings are pointless. Which is where I've been trying to go with this…
Any browser which doesn't warn about that in some way is essentially broken. (Yes, I see you cited Safari as one, but it must be the only one as far as I know. It does remove the padlock, but that seems pretty inadequate...)
EDIT: I do take your point in that I think IE is the only browser that actually blocks the content. The others warn about it but still load it, by which time, of course, the damage is done.
Our theory is that an SSL site including non-SSL content is no better or worse, in terms of security, than a completely non-SSL site.
What is the purpose of warning more prominently in the scenario described than in the scenario where the user goes to a non-SSL site in the first place, or is redirected from an SSL login form to a non-SSL page?
Saying you can hack an analytics company's servers is cheating. I can just say I can hack GitHub's servers. Or obtain a root SSL cert. Or crack SSL.
If you don't trust a company and their competency at security you probably shouldn't be using their service for anything sensitive. You can't assume that your users aren't on hostile networks vulnerable to MITM attacks, etc.
Absolutely, but the protection SSL helps with is it actually forces the attacker to compromise hot-new-metrics whereas without SSL you can just skip the first part of step 2 and just do "send malicious js" through a MITM without ever having to go compromise any of the services involved.
It is the webapp developer who ultimately decides whether or not there is mixed content, not the browser. If you don't mix content in your webapp, an attacker who controls the network shouldn't be able to change your content (not even to inject references to new untrusted HTTPS or plain HTTP servers), or that of trusted service providers. The browser needs to implement SSL securely, but even users with a browser with no mixed-content warning benefit from there being no mixed-content.
The mixed content warning helps to warn the developer of the site of the problem, and let users of browsers that support it know that they are not fully protected.
Ask tptacek how hard it is to pull off SSL man-in-the-middle attacks in the wild.
Hint: People sell out-of-the-box solutions to the problem.
It's trivial to get certs that browsers won't choke on. You have to do more than check for the cert not being "invalid"; you have to actually examine it carefully, knowing which cert sellers are trustworthy and which are not. Your SSL lock icon is useless.
In writing a plugin that rewrites URLs as https (http://github.com/nikcub/fidelio) I found that this worked in a lot of places. Facebook does not explicitly support ssl everywhere, but you can rewrite the requests to https servers and it works.
Exactly. Browser makers (including Mozilla/Firefox to a large degree) are responsible for the fact that HTTPS hasn't become the standard protocol as it should have been years ago. It's not only the unproductive mixed content warning but also the insistence of all browsers to only accept expensively bought certificates and throw a very scary and hard to overcome error dialog if a site uses any other kind of cert. While that isn't a problem for big(gish) commercial sites like GitHub, it presents an insurmountable hurdle for private sites and small-time projects for no good reason. For most sites I don't need "secure" origin verification as badly as encryption. The lack of a verifiable server address shouldn't mean that I should be bullied to not use an encrypted connection with it. But even if the verdict is that you absolutely can't have one without the other, browser makers should AT LEAST include trusted root certs of authorities who offer free SSL certificates, too.
While your frustration is understandable, I think you're speaking from the perspective of a tech-savvy person and not the average user. If browsers began accepting all free / self-signed certificates, it would be only a matter of time before something like "Firesheep FX" came along and permitted random strangers to MITM anybody's SSL session. Some of us can notice when that happens, but most people won't have a clue unless the browser presented them with a big scary red warning.
However, I agree with you that we need some good free CAs. The difference between free and $10/year is bigger than most of us think it is. Fortunately, there are registrars such as Gandi which will give you free certificates with every domain.
> If browsers began accepting all free / self-signed certificates [...]
Right now, browsers are accepting any unencrypted old HTTP connection without any warning, while non-verified securely encrypted connections are actively prevented. Tech people can circumvent the block, but normal users cannot. Nor do they have any reason to because the warning they are being shown sounds like the end of the world, while any unsecured connection looks perfectly fine to them. This is something that could be done right now to make everybody more secure, at no cost, but it threatens the business model of companies like Verisign.
Nobody is suggesting that browser makers should display the much-sought-after "lock of absolute protection" icon on any random SSL connection, I'd be fine if they reserve that for paid-for-certs. I'm merely suggesting they show free (or even self-signed) certs the same courtesy as basic HTTP, the most permissive protocol of all time, instead of actively preventing users from using encryption.
I agree with you about the threat of "Firesheep FX" and believe Wifi connections should probably all use WPA2, even at coffee shops where internet access is free. The threat of MITM is real, but the attack can be made more difficult using a number of schemes, and it even includes free certs that offer way more protection than any unencrypted link ever could. Yet, we are currently encouraging unencrypted connections while actively blocking encrypted ones.
If HTTPS could have the same UI mechanisms as, say, an SSH connection I'm convinced the online world would be a much safer place.
OK, I see what you mean. If you're suggesting that websites protected with untrusted certificates should be treated as if they were plain HTTP sites, then I agree with you. Chrome crosses out the "https" part of the URL if the page contains insecure elements. Something similar might be the right way to treat untrusted certificates.
Very common misconception, but it's still a problem. Any client with the network password can capture the initial key negotiation, and then decrypt the client's subsequent traffic. You can enter the network password in Wireshark: http://wiki.wireshark.org/HowToDecrypt802.11 .
Their root CA was generated in 2006. In theory, any browser shipped before 2006 will not support it unless it was added (through, for example, Windows Update). IE7+ is supported; I haven't tested (and don't care to test) IE6.
Thanks for the link, I didn't know them. I just tried it. I generated a certificate for a site of mine, uploaded it, changed the config and the cert was pulled by Firefox. However sadly, the authority of StartSSL was NOT recognized by Firefox. This is what it said in the egregious warning dialog:
*-------.com uses an invalid security certificate.
The certificate is not trusted because no issuer chain was provided.
(Error code: sec_error_unknown_issuer)*
StartSSL does not work for me. Unless I did something wrong, which happens from time to time. I verified that the StartSSL cert I installed was downloaded by FF, it just doesn't recognize StartCom as a trusted cert authority (apparently). Can anyone confirm this?
Edit2: You guys were right, thanks! I did paste the intermediate certificate into the wrong file, my bad! It works!
I don't really understand why these Certificate Authorities exist and need to charge money to sign a digital certificate. Couldn't some sort of user-based distributed network be used for authentication? I mean, we trust compilers and code patches (which do see many eyes) that we then load onto our PCs. Why can't we trust some similar, user-based mechanism of authority?
Yup, I agree with you. This is a pretty big problem with a lot of other google services as well.
The Google Maps API, for example, will not work behind https. Google has publicly said that this is because they want their maps free and open, not behind some page where the user needs to be logged in. This creates a huge problem for any site that uses Google Maps. They do offer a solution, though: for $10,000 a year they will let you use the Maps API behind https.
Other than Google Analytics I can't think of a single other widget/embed/analytics app that has supported SSL out of the box. It's a real shame, but on the other hand I'd bet good money that the web will be 99% SSL within the next 24 months.
Give every user a monotonically incrementing value that's initialized at the start of the session using HTTPS. For every request, the client will provide the next value in the expected sequence. Listeners won't have the secret key that was exchanged during the HTTPS authentication, and can't issue requests on the legitimate client's behalf.
Forcing the requests to be serial sucks, but if you only do it for privileged actions (as opposed to public page GETs) it should be manageable.
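As a rough illustration, that counter scheme might look something like this in Python (the in-memory store, the function names, and the HMAC-over-the-counter construction are all hypothetical; a real implementation would also need per-session locking and persistence):

```python
import hashlib
import hmac

# Hypothetical in-memory session store: session_id -> {"key": ..., "counter": ...}.
# The secret key and initial counter are exchanged over HTTPS at login.
sessions = {}

def start_session(session_id, secret_key):
    sessions[session_id] = {"key": secret_key, "counter": 0}

def expected_token(session_id):
    """Token the client must present on its next privileged request."""
    s = sessions[session_id]
    msg = str(s["counter"] + 1).encode()
    return hmac.new(s["key"], msg, hashlib.sha256).hexdigest()

def verify_request(session_id, token):
    """Accept the request only if the token matches the next counter value."""
    if not hmac.compare_digest(token, expected_token(session_id)):
        return False
    sessions[session_id]["counter"] += 1  # burn the value; a replay now fails
    return True
```

A passive sniffer who captures one request can't replay it, because the counter has already advanced. An active MITM can still tamper with responses, which is a separate problem.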
Still vulnerable to MITM attacks, since an attacker could intercept a legitimate response for safe.js but then send the user a completely different file.
You need to sign the entire file. Now you're incredibly close to having SSL.
The IE8 warning is the most confusing sentence I've ever seen. Even I as a veteran of 13 years of web programming have to read that thing 3 times to know which button to press to make it load the damned stuff.
To be accurate, this is not the reason many sites choose not to go with SSL for everything. The real reason is that most sites don't need to be SSL for everything.
I run a travel blogging site, where 99% of all pageviews are from random people off the internet reading people's trip reports and looking at photos. Encrypting all that traffic would do nothing except bog down the site for everybody.
Every once in a great while (in terms of total traffic), somebody will log in and post something. That tiny moment could benefit from SSL, since chances are it's happening from a public internet cafe or wifi hotspot. That's the only time a user is actually vulnerable to this sort of attack, so that's when they need to be protected.
But when you look at the internet as a whole, the fraction of traffic that needs protecting looks pretty much the same. When you're showing me pictures of cats with funny captions, please don't encrypt them before sending them to me just because you read something about security on Hacker News.
The thing that Firesheep brought to people's attention is that the login is not the only thing that needs to be SSL protected. The cookies you get after signing in are often sent in the clear, and that cookie is just as good as your login for gaining access.
In a lot of systems, you can change the password without knowing the old one as long as you're logged in. Others, you can change the email address, only confirm on the new email address, and then get a password reset.
So even if you really cover all of your bases and require confirmation at every step, the least an attacker can do is access your data and generally impersonate you until you log out of that session (which no one does) or the session times out (which it won't, because they're still logged in as you).
Honestly, how many sites are aware that they are vulnerable?
It seems like you assume that because the security-oriented 0.5% of the web knows about it, the rest of the web should, too.
For most people, just making sure that their site runs at all is quite enough for them to handle, and keeping current on the latest vulnerabilities is way down on the list.
Additionally, fixing a site takes time. How long has Firesheep been out? A week? Two? You should realize that for many sites, even those staffed by very competent tech people, a month is the minimum amount of time for immediate action.
I agree that most of the web is probably ignorant at best of most security vulnerabilities. But, keep in mind that firesheep is not exploiting a new vulnerability, but an old one that has been known about since probably early 2000. Firesheep is new in that it is automating the work of other programs (which were admittedly a little less user friendly).
How many sites (that any of us are legitimately worried about) employ webmasters, developers, system admins or others who DON'T know why SSL/HTTPS is important? You can't honestly be giving Facebook, Twitter, etc. a pass on understanding very basic concepts (sniffing, HTTP cookies)?
Firesheep has been around for 2+ weeks now, but come on, we've all known this has been possible for forever. I'm 20, and I knew how to do this (and did) /years/ ago. I think Firesheep is just what everyone needed.
There are really good reasons why this is taking a long time and it is NOT lack of knowing that this problem exists.
That having been said, my laptop is now running a LiveCD of x2go's LTSP client and my desktop computer is running the x2go server. Very near-native performance and total security. (I trust my desktop as an endpoint.)
There has got to be a sensible way around this. It seems overkill to require every pageview to be over HTTPS, even for otherwise public sites. For example, should these public discussion pages be over HTTPS on hacker news?
On my site I am planning the following: operate the login page over HTTPS and issue two cookies. One is HTTPS-only and the other is for all pages. The public (non-HTTPS) cookie is only used for identification (e.g. welcome messages and personalisation). However, all requests that change the database in any way are handled over HTTPS, and we check to make sure the user has the secret HTTPS cookie as well. Often forms submit to an HTTPS backend and then redirect back to the public page over HTTP. Also, all account information pages (sensitive pages) will be over HTTPS.
This way, the worst that can happen via cookie sniffing is that someone can see pages as though they were someone else. In your case, this is not much of a risk.
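For illustration, the two-cookie split I'm describing might be sketched like this (the cookie names and the framework-free shape are invented for the example, not actual code from my site):

```python
import secrets

def issue_cookies():
    """Called from the HTTPS login handler: one readable cookie for
    personalisation, one Secure cookie that never travels over plain HTTP."""
    public_id = secrets.token_hex(16)     # sent on every request, HTTP included
    secret_token = secrets.token_hex(16)  # browser sends this over HTTPS only
    return (
        {"name": "uid", "value": public_id, "secure": False},
        {"name": "auth", "value": secret_token, "secure": True, "httponly": True},
    )

def allow_write(request_is_https, cookies, known_secret):
    """Database-changing requests must arrive over HTTPS with the secret cookie."""
    return request_is_https and cookies.get("auth") == known_secret
```

A sniffer on an open network only ever sees the public `uid` cookie, so the worst they can do is read pages as you, not write anything.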
This is just dangerous. Example: if news.ycombinator.com implemented this dual-cookie method, a man in the middle could intercept the page I'm looking at now, where I'm entering this comment in a textarea. They could modify the underlying form to post to the same page as the update form on the profile page, and set a hidden email field. Then when I hit the "reply" button, even though I'm posting to an HTTPS page, I'm not posting to the one I think I am, because the page containing the form itself wasn't protected by HTTPS.
I hope I explained that well enough. Mixed content is hard to do right. Forcing every page over SSL prevents anyone making any modifications to any page, and is just inherently safer.
Wait, if you've got the capability to intercept and rewrite arbitrary http forms, couldn't you just rewrite the homepage of Google.com the same way? The "action" attribute on the form gets changed to https -mybank- dot com slash profile, hidden fields are inserted, etc, and when the user clicks "Search" he's actually updating aforementioned profile page.
If this is possible, then the dual cookie method would seem to make sense.
Well, I would say it's not "just as dangerous", as a man-in-the-middle attack is harder to set up.
Good point though. Maybe this could be solved by including a unique access code with the form: a hashed value of the user's id and the URL being submitted to (with salting to make it unguessable). Simply check this value upon submission to make sure it matches the URL seen by the controller. That would prevent anyone from rewriting a form to submit to a new endpoint.
Is all this extra effort really worth the alternative of just using HTTPS for everything?
Have you really examined the extra cost of 100% https compared with the scheme you've outlined? Sounds like this idea would require a decent amount of effort to identify where to use https, to ensure that each privileged request is using https, etc.
I can see that for some cases it is advantageous to stick to regular http for unimportant requests and use https for the important stuff, but I have a strong feeling that this is only applicable for the minority of use cases and websites.
You're thinking about the right solution, but you're overdoing it, and in the end you're unnecessarily complicating a very simple thing.
The simplest (and IMHO the best) solution is to serve everything over HTTP for unauthenticated users and everything over HTTPS for authenticated users. This requires the authentication cookie to be marked "Secure", plus a regular cookie "authenticated=true" used to redirect an authenticated user to the HTTPS version of the site if he/she goes to the HTTP site.
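The redirect logic is simple enough to sketch in a few lines (the cookie name and domain are placeholders):

```python
def handle_request(is_https, cookies, path):
    """Plain-HTTP hits from an authenticated user get bounced to HTTPS.
    The real auth cookie is marked Secure, so it never leaks over HTTP;
    the readable 'authenticated' marker only signals that a redirect is needed."""
    if not is_https and cookies.get("authenticated") == "true":
        return ("redirect", "https://example.com" + path)
    return ("serve", path)
```

Anonymous traffic stays cheap and cacheable over HTTP, while the session cookie that Firesheep would steal is never sent in the clear.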
Browsers should use two kinds of notifications: "encryption on" (green or red) and "certificate is present" (green or red).
Websites that do banking or handle sensitive information should be green/green (SSL-on with verified cert), while ordinary websites could be green/red (SSL-on with self-signed cert).
That is the solution to protect websites from the current iteration of FireSheep. It doesn't fix the underlying problem though. If a version of FireSheep comes out that can do MITM we might have bigger problems.
I'm not sure I'm parsing your post correctly, but as I understand it you're talking about third party websites accepting responsibility to protect you over an insecure network connection. If that's the case, then I think you're mistaken.
Certainly SSL is not required on every page, and MITM tools have been around for some time (including fairly friendly ones like Cain - http://www.oxid.it/). At the end of the day companies such as Facebook, Twitter et al have a moral (and in some cases legal) obligation to protect the information assets you uploaded to their systems from compromise. Likewise it is not unreasonable that you take certain steps to protect yourself.
The current version of FireSheep is a real known threat. We don't know what might be in future versions. For protecting against Session ID theft, SSL and the secure flag on cookies are the way to go. Certainly for data that doesn't need to be secure (such as static publicly available graphics), there's no need to use SSL for the majority of use cases.
Rather than using SSL on every page and expecting the web sites to do the heavy lifting, consider not using insecure bearer networks, or some sort of means of securing insecure Internet links such as a VPN or SSH tunnel.
I don't think websites should protect users who don't have AV or firewalls, or who share user accounts, no. I also don't think websites should protect users who cross the road without looking both ways. None of those things are relevant to protecting the communications between the website and the user.
If the user's machine is compromised, that's the user's problem. If the user's machine isn't compromised, yet the website can't be accessed safely over an inherently untrustable network like the Internet, then the website has some flaws that it needs to deal with. SSL is a start. DNSSEC is becoming important too, and I will be using it on grepular.com when Verisign signs "com" at the beginning of next year.
It seems unlikely most users on the Internet could even parse your suggestion, much less execute it. That doesn't make them stupid, just not experts in the subject. Instead they rely on experts, like you, to make the right choices for them in these esoteric matters. While there are any number of ways you might solve this problem, HTTPS is the best available option since clients and servers already support it. As ericflo is pointing out, a small number of sites deploying HTTPS would go a long way to achieving the goal of a more secure Internet for all.
I totally agree. In those cases I usually point people in the direction of TOR and the like. The original poster just asked if HTTPS was the only way to get around Firesheep, and the answer of course is no, it isn't.
I still think HTTPS is the best way forward (I never said anything to dispute that), but right now if you want speed (TOR is slow!) and still want to be safe, an SSH tunnel is the way to go.
Besides, anyone reading Hacker News is probably comfortable with creating a SSH tunnel, or learning how to do so :)
Getting a VPN account from some service like witopia.net is not rocket science. People just need to be educated that they need a VPN account if they want to use a publicly shared network without compromising their privacy.
(Of course, as boyter pointed out, a VPN connection doesn't protect you on the general internet, but it does protect you where you're most exposed.)
While sites wait for services such as AdSense to support SSL, adding a second Secure cookie and requiring it on sensitive pages and for destructive actions can help reduce risk to users. Depending on the site, it may be OK to skip showing ads on a few authenticated pages. Wordpress implemented this in 2008: http://ryan.boren.me/2008/07/14/ssl-and-cookies-in-wordpress...
This won't protect against active attackers, but is definitely a step forward and will make a full transition easier in the future, when possible.
We spent about a week trying this on GitHub. It works pretty well as long as you have no ajax requests. We were basically left with these options:
1) Lose the ajax (and spend a significant time redoing bits of the site)
2) Scary iframe hacks.
3) SSL Everywhere.
I feel like we made the best choice (I certainly don't mind removing any chance we'll have adsense any time soon :). It cleaned up a lot of logic based around determining which pages were important enough to require SSL (admin functions, private repos, etc).
It's brought on some other issues though. Safari and Chrome don't seem to properly cache HTTPS assets to disk, for one. This is an old problem: http://37signals.com/svn/posts/1431-mixed-content-warning-ho... . I'm not too worried about increased bandwidth bills on our end, I'm worried about having a slower site experience. We're also seeing users complain about having to log in every day. Are browsers not keeping secure cookies around either?
You may like it but it really reduces the security. A man in the middle can make it look like you don't speak Tcpcrypt by manipulating the first few packets of a connection. It's the same issue as mixing HTTP/HTTPS - the HTTP parts leave you vulnerable. If encryption is not mandatory then it might as well not be there.
Which requires active attacks, i.e. MITM. And stopping MITM is essentially impossible; the best you can do is use our current CA setup, which assumes the original download of the certificate you got wasn't hijacked.
In preventing passive listeners, like Firesheep, this would work 100% effectively anywhere it can work at all. The only way it reduces security is in making people who don't understand MITM attacks feel they're safer than they are - at absolute worst it's like you don't have it installed.
I hadn't thought about that. However, at the bottom of tcpcrypt.org, it said "You're not using tcpcrypt =(" to me, so perhaps this could be used to show users a warning that they are not using Tcpcrypt (one less obtrusive and less intimidating than what browsers show when they detect the lack of SSL for some content on the page).
I think he's arguing that since both twitter and facebook (and other unnamed sites) do not use adsense, but are still vulnerable to firesheep, there must be another reason why developers don't update the security for their website.
Unfortunately, Facebook's XMPP service doesn't utilise SSL either. They hash the password, but everything else is trivially decodable on the wire. Plus there was the hole in Facebook chat which exposed your conversations to your friends earlier this year. Possibly the worst IM system in existence.
Most content sites have login systems that people use to customize their experience, post comments or upload content, etc. Millions and millions and millions of people are logged into content sites and are vulnerable to this attack.
Also, I disagree on the premise that adsense is mostly used on content sites. It's used on all kinds of websites.
Well, sometimes for strange values of "vulnerable".
Some of my sites use Apache::Session over non secured http connections, which makes them technically "vulnerable".