I think a lot of developers underestimate how big an impact this warning has on users, especially on browsers like IE that throw up a dialog on every page that has it. Developers understand that it's not that big of a deal, but to a user it looks like the site is full of viruses and malware and is going to steal all your bank account information.
This also broke Bingo Card Creator something fierce when I rolled out SSL support. It was the reason I hadn't had it previously, and I knew it was going to be a problem going in, and I tested for it, and I still managed to hose two pages which were critical to my business for most of a week.
Figure on a 40-50% drop in conversion from a non-technical audience on IE if they get one of those popups, by the way. It is the worst possible place to be: not enough to trigger an automated "Oh cripes!" from the website, but big enough to murder business results.
I asked how they deal with the unlocked icon and warning dialogs for mixed protocol content on the page and the response was that people are so used to the popups and the lock being unlocked, that they (Google) don't consider it to be a problem. The response was really short and curt and I felt it was kind of a cop-out.
The answer is that permitting mixed content was probably a mistake in the first place, but it's one that we have to live with. The ease of mixing content means that many sites get it wrong (including Google sites, to our shame) and the lack of ubiquitous SSL (again, including some Google sites) imposes that on others.
So, I suppose "we don't consider it a problem" is roughly correct regarding warning dialogs: the answer is not to mix content. The problem is that it's clearly too difficult to do that. (The inability of networks to cache public resources over HTTPS is also an issue, and possibly one which we'll address.)
Lack of SSL on the Charts API is a new one, but I'll look into it now that I know that it's a problem.
As for the rest of the problem: fixing stuff is hard. Miraculous answers invariably tend to be so only in the eyes of the conceiver. We'll keep plugging away.
I mean, when I went back and summarized my experience at Velocity to the rest of my team, the fact that this question was glossed over the way it was led to some audible guffaws. Because we've all spent years dealing with users who don't know how to handle the UX of this problem.
Lack of complaint != contentment. I am pretty annoyed to hear of this indifference to users' peace of mind.
It seems like they may have thought better of this change, since my current version of Chrome (6.0.472.63) seems to have gone back to the padlock-and-cross.
Also unfortunately, when there is mixed protocol content, especially with email, you're not asserting trust in the page origin but in the additional assets loaded, and Google has no control over the content referenced in emails. Encouraging people to ignore the warnings doesn't make anyone safer when people aren't informed enough to know whether to care.
One of the suggestions was to use shorter key lengths to make SSL less expensive to process; this wasn't considered a welcome suggestion by many of the more security-conscious and vocal folks in the room.
That being said, the second part of your argument is completely wrong. You can just as easily inject evil JS using an https server and never get the mixed content warnings.
The warning serves to indicate to users that some assets (think important-financial-graph.jpg) aren't being served over the same encrypted channel as the rest of the page. But then again, browsers like Safari have no problem with this. Other browsers like Firefox (correctly) cache these assets on disk if Cache-Control: public is set, leaving the asset stored unencrypted.
The warning may not be spurious, but it doesn't actually tell you whether the page is secure or not.
Only if the user ignores the "invalid certificate" warning.
1. You include js from hot-new-metrics-startup on your https page
2. hot-new-metrics-startup gets hacked. Sends over malicious js
3. Your page is no longer secure. https certificate remains.
We can argue semantics, but I guess I'm more concerned about the end result than semantics.
It certainly might be possible for the attacker to compromise a specific server that you have chosen to trust - but that's a much higher barrier to an attacker than performing MITM on an open Wifi connection which doesn't require them to compromise any server.
1. You include http://google.com/trusted.js on a https page
2. Someone goes to a cafe, opens up your website with Safari while someone is performing a MiTM attack on that file.
3. No warnings, your user is compromised.
EDIT: I do take your point in that I think IE is the only browser that actually blocks the content. The others warn about it but still load it, by which time, of course, the damage is done.
What is the purpose of warning more prominently in the scenario described than in the scenario where the user goes to a non-SSL site in the first place, or is redirected from an SSL login form to a non-SSL page?
If you don't trust a company and their competency at security you probably shouldn't be using their service for anything sensitive. You can't assume that your users aren't on hostile networks vulnerable to MITM attacks, etc.
The mixed content warning helps to warn the developer of the site of the problem, and let users of browsers that support it know that they are not fully protected.
Hint: People sell out-of-the-box solutions to the problem.
It's trivial to get certs that browsers won't choke on. You have to do more than check that the cert isn't "invalid"; you have to actually examine it carefully, knowing which cert sellers are trustworthy and which are not. Your SSL lock icon is useless.
However, I agree with you that we need some good free CAs. The difference between free and $10/year is bigger than most of us think it is. Fortunately, there are registrars such as Gandi which will give you free certificates with every domain.
Right now, browsers are accepting any unencrypted old HTTP connection without any warning, while non-verified securely encrypted connections are actively prevented. Tech people can circumvent the block, but normal users cannot. Nor do they have any reason to because the warning they are being shown sounds like the end of the world, while any unsecured connection looks perfectly fine to them. This is something that could be done right now to make everybody more secure, at no cost, but it threatens the business model of companies like Verisign.
Nobody is suggesting that browser makers should display the much-sought-after "lock of absolute protection" icon on any random SSL connection, I'd be fine if they reserve that for paid-for-certs. I'm merely suggesting they show free (or even self-signed) certs the same courtesy as basic HTTP, the most permissive protocol of all time, instead of actively preventing users from using encryption.
I agree with you about the threat of "Firesheep FX" and believe Wifi connections should probably all use WPA2, even at coffee shops where internet access is free. The threat of MITM is real, but the attack can be made more difficult using a number of schemes, and those include free certs, which offer way more protection than any unencrypted link ever could. Yet we are currently encouraging unencrypted connections while actively blocking encrypted ones.
If HTTPS could have the same UI mechanisms as, say, an SSH connection I'm convinced the online world would be a much safer place.
If you just write the password on the wall, it defeats the purpose: everyone who logs in is on the same network again, just like a public network.
Every device negotiates its own keys with the access point.
> I agree with you that we need some good free CAs
https://www.startssl.com/ Supported by just about every browser. Entirely free. A fellow Hacker News user linked to it in a similar thread. I was impressed :)
*-------.com uses an invalid security certificate.
The certificate is not trusted because no issuer chain was provided.
(Error code: sec_error_unknown_issuer)*
Edit2: You guys were right, thanks! I did paste the intermediate certificate into the wrong file, my bad! It works!
ERROR: cannot verify [site]'s certificate, issued by `/C=IL/O=StartCom Ltd./OU=Secure Digital Certificate Signing/CN=StartCom Class 1 Primary Intermediate Server CA':
Self-signed certificate encountered.
The verification is the same, there is no good reason it shouldn't be the same price.
The google maps api for example will not work behind https. Google has publicly said that this is because they want their maps free and open, not behind some page where the user needs to be logged in. This creates a huge problem for any site that uses google maps. They do offer a solution though: for $10,000 a year they will let you use the map api behind https.
Other than Google Analytics I can't think of a single other widget/embed/analytics app that has supported SSL out of the box. It's a real shame, but on the other hand I'd bet good money that the web will be 99% SSL within the next 24 months.
It's a damn shame because it was a really cool integration.
Forcing the requests to be serial sucks, but if you only do it for privileged actions (as opposed to public page GETs) it should be manageable.
The worst is that the default selected choice in the modal box is to not load anything.
I run a travel blogging site, where 99% of all pageviews are from random people off the internet reading people's trip reports and looking at photos. Encrypting all that traffic would do nothing except bog down the site for everybody.
Every once in a great while (in terms of total traffic), somebody will log in and post something. That tiny moment could benefit from SSL, since chances are it's happening from a public internet cafe or wifi hotspot. That's the only time a user is actually vulnerable to this sort of attack, so that's when they need to be protected.
But when you look at the internet as a whole, the traffic fraction that needs protecting looks pretty much the same. When you're showing me pictures of cats with funny captions, please don't encrypt them before sending them to me just because you read something about security on HackerNews.
So even if you really cover all of your bases and require confirmation at every step, at the very least they can still access your data and generally impersonate you until you log out of that session (which no one does) or the session times out (which it won't, because you're still logged in as them).
It's pretty much the same.
It seems like you assume that because the security-oriented 0.5% of the web knows about it, the rest of the web should, too.
For most people, just making sure that their site runs at all is quite enough for them to handle, and keeping current on the latest vulnerabilities is way down on the list.
Additionally, fixing a site takes time. How long has Firesheep been out? A week? Two? You should realize that for many sites, even those staffed by very competent tech people, a month is the minimum amount of time for immediate action.
Firesheep has been around for 2+ weeks now, but come on, we've all known this has been possible for forever. I'm 20, and I knew how to do this (and did) /years/ ago. I think Firesheep is just what everyone needed.
There are really good reasons why this is taking a long time and it is NOT lack of knowing that this problem exists.
That having been said, my laptop is now running a LiveCD of x2go's LTSP client and my desktop computer is running the x2go server. Very near-native performance and total security. (I trust my desktop as an endpoint.)
So, to the HN community: is this whole "SSL is cheap" thing a false meme, or does someone have actual instructions on how to deploy and implement scalable SSL?
On my site I am planning the following: operate the login page over HTTPS and issue two cookies. One is HTTPS-only and the other is for all pages. The public (non-HTTPS) cookie is only used for identification (e.g. welcome messages and personalisation). However, all requests that change the database in any way are handled over HTTPS, and we check to make sure the user has the secret HTTPS cookie as well. Often forms submit to an HTTPS backend and then redirect back to the public page over HTTP. Also, all account information pages (sensitive pages) will be over HTTPS.
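Sketched in Python, the two Set-Cookie headers for that scheme would look something like this. The cookie names (`session`, `display_name`) and the helper function are illustrative, not from any real site; the point is the Secure/HttpOnly flags on the secret cookie.

```python
def login_cookies(session_id: str, display_name: str) -> list:
    """Build the two Set-Cookie headers for the dual-cookie scheme.

    The secret cookie carries the Secure flag, so the browser never sends
    it over plain HTTP and a sniffer on an open network never sees it.
    The public cookie is sent everywhere but is only good for
    personalisation, never for state-changing requests.
    """
    return [
        # Secret cookie: HTTPS-only, and hidden from page scripts.
        "session=%s; Secure; HttpOnly; Path=/" % session_id,
        # Public cookie: sent on plain-HTTP pages too, identification only.
        "display_name=%s; Path=/" % display_name,
    ]
```

The server then treats the public cookie as untrusted hint data, and requires the secret cookie (which can only arrive over HTTPS) for anything that writes to the database.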
This way, the worst that can happen via cookie sniffing is that someone can see pages as though they were someone else. In your case, this is not much of a risk.
I hope I explained that well enough. Mixed content is hard to do right. Forcing every page over SSL prevents anyone making any modifications to any page, and is just inherently safer.
If this is possible, then the dual cookie method would seem to make sense.
Good point though. Maybe this could be solved by including a unique access code with the form that is a hashed value of the user's id and the URL you are submitting to (with salting to make it unguessable). Simply check this value upon submission to make sure it matches the URL seen by the controller. That would prevent anyone rewriting a form to submit to a new endpoint.
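A minimal sketch of that token in Python, using an HMAC rather than a bare salted hash (the secret key and function names here are hypothetical, but the idea matches the comment: bind the token to both the user and the submit URL):

```python
import hashlib
import hmac

SECRET_KEY = b"keep-this-server-side"  # hypothetical server-side secret ("salt")

def form_token(user_id: str, action_url: str) -> str:
    # Bind the token to both the user and the endpoint the form submits to.
    message = ("%s|%s" % (user_id, action_url)).encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def token_matches(token: str, user_id: str, action_url: str) -> bool:
    # Recompute and compare in constant time when the form comes back.
    return hmac.compare_digest(token, form_token(user_id, action_url))
```

An attacker who rewrites the form to point at a different endpoint invalidates the token, since the URL is part of the MAC input.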
Also, I didn't say "just as dangerous", I said "just dangerous"
However, with proper CSRF protection your man-in-the-middle argument is not the case, is it?
Have you really examined the extra cost of 100% https compared with the scheme you've outlined? Sounds like this idea would require a decent amount of effort to identify where to use https, to ensure that each privileged request is using https, etc.
I can see that for some cases it is advantageous to stick to regular http for unimportant requests and use https for the important stuff, but I have a strong feeling that this is only applicable for the minority of use cases and websites.
The simplest (and IMHO the best) solution is to have everything served over HTTP for unauthenticated users and everything served over HTTPS for authenticated users. (This requires an authentication cookie marked as "secure", plus a regular "authenticated=true" cookie that redirects an authenticated user to the HTTPS version of the site if he/she goes to the HTTP site.)
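The redirect check in that scheme reduces to a couple of lines. A sketch in Python (the `authenticated` cookie name comes from the comment above; the helper function is hypothetical):

```python
def needs_https_redirect(request_is_https: bool, cookies: dict) -> bool:
    """Decide whether to bounce this request to the HTTPS version.

    The secure auth cookie is never sent on plain-HTTP requests, so the
    non-secure "authenticated=true" marker is the only hint the server
    gets that this visitor has a session and belongs on HTTPS.
    """
    return (not request_is_https) and cookies.get("authenticated") == "true"
```

Anonymous visitors carry neither cookie, so they stay on cheap plain HTTP; anyone who has logged in gets pushed onto HTTPS for every page.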
The solution to the problem is SSL on every page.
Certainly SSL is not required on every page, and MITM tools have been around for some time (including fairly friendly ones like Cain - http://www.oxid.it/). At the end of the day companies such as Facebook, Twitter et al have a moral (and in some cases legal) obligation to protect the information assets you uploaded to their systems from compromise. Likewise it is not unreasonable that you take certain steps to protect yourself.
The current version of FireSheep is a real known threat. We don't know what might be in future versions. For protecting against Session ID theft, SSL and the secure flag on cookies are the way to go. Certainly for data that doesn't need to be secure (such as static publicly available graphics), there's no need to use SSL for the majority of use cases.
Rather than using SSL on every page and expecting the web sites to do the heavy lifting, consider not using insecure bearer networks, or some sort of means of securing insecure Internet links such as a VPN or SSH tunnel.
If the user's machine is compromised, that's the user's problem. If the user's machine isn't compromised, yet the website can't be accessed safely over an inherently untrustable network like the Internet, then the website has some flaws that it needs to deal with. SSL is a start. DNSSEC is becoming important too and I will be using it on grepular.com when Verisign signs "com" at the beginning of next year.
It won't protect you from man in the middle attacks on the general internet, or fix the underlying issue with most websites, but it will stop firesheep.
I still think HTTPS is the best way forward (I never said anything to dispute that), but right now if you want speed (tor is slow!) and still want to be safe, an SSH tunnel is the way to go.
Besides, anyone reading Hacker News is probably comfortable with creating a SSH tunnel, or learning how to do so :)
(Of course, as boyter pointed out, a VPN connection doesn't protect you on the general internet, but it does protect you where you're most exposed.)
This won't protect against active attackers, but is definitely a step forward and will make a full transition easier in the future, when possible.
1) Lose the ajax (and spend a significant time redoing bits of the site)
2) Scary iframe hacks.
3) SSL Everywhere.
I feel like we made the best choice (I certainly don't mind removing any chance we'll have adsense any time soon :). It cleaned up a lot of logic based around determining which pages were important enough to require SSL (admin functions, private repos, etc).
It's brought on some other issues though. Safari and Chrome don't seem to properly cache HTTPS assets to disk, for one. This is an old problem: http://37signals.com/svn/posts/1431-mixed-content-warning-ho... . I'm not too worried about increased bandwidth bills on our end, I'm worried about having a slower site experience. We're also seeing users complain about having to log in every day. Are browsers not keeping secure cookies around either?
I like that, as opposed to requiring users to have to install some plugin before they can even talk to the server.
In preventing passive listeners, like Firesheep, this would work 100% effectively anywhere it can work at all. The only way it reduces security is in making people who don't understand MITM attacks feel they're safer than they are - at absolute worst it's like you don't have it installed.
It would cost you more bandwidth, but then there's no annoying warning message about mixed content.
1. Go to https://www.facebook.com/
2. Log in.
3. Immediately you get redirected back to http://www.facebook.com. WTF?!
4. Click logout.
5. Go back to https://www.facebook.com/
6. This time you get redirected to https://ssl.facebook.com/ and you're STILL LOGGED IN.
Actually now that I try the same thing with the non-SSL version of the site I have the same problem. WTF is going on? The only way I'm able to log out is by deleting the facebook.com cookies.
I'm on Safari 5.
Also, I disagree with the premise that adsense is mostly used on content sites. It's used on all kinds of websites.
Some of my sites use Apache::Session over non secured http connections, which makes them technically "vulnerable".