Firefox 23 will block non-SSL content on SSL pages by default (developer.mozilla.org)
341 points by simonster on Apr 8, 2013 | 90 comments



In other words, start using the `//url.to/something-here/` shortcut and the world will be a better place.

EDIT: Use `//` instead of `http://` or `https://` and the browser will use whatever protocol the page itself was fetched with.

EDIT 2: When you use the `//` shortcut, double check that the website you are linking to supports HTTPS; some still don't, and they don't redirect properly...
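For anyone who hasn't used it before: it just means writing asset references like these (hostnames here are made up, just to illustrate):

  <script src="//ajax.example.com/jquery.min.js"></script>
  <link rel="stylesheet" href="//static.example.com/site.css">

The browser resolves them against the scheme of the current page, so an https:// page pulls everything over https:// and an http:// page over http://.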


No, don't, unless you want bad things to happen when a whole bunch of corner cases start sprouting up. I did this for a while and ultimately gave up after seeing the error rate spike on my server. There are lots of people out there using stuff which just does not get the protocol-independent // scheme. I went back to doing either the whole thing (http[s]://host/path) or a relative thing (/path) but nothing in-between.

http://rachelbythebay.com/w/2012/09/15/rel/


I disagree with your conclusion. The fewer sites this works with, the quicker developers will fix it. Supporting the wrong way is not the right thing to do.


Your two viewpoints are reconcilable, I think.

From the standpoint of the individual developer, you handle edge cases like this so you don't break your clients' experience.

From the standpoint of the ecosystem, broken tools are bad, and they cause individual developers to have to handle more edge cases.

If one individual developer doesn't handle these edge cases, their clients will just think they are bad. If all of the individual developers decided not to handle these edge cases in a coordinated fashion, sure, they could trigger a change. But I don't think that's how it happens in the real world.

To look back on the days of IE6/IE7, most developers didn't just stop supporting these browsers and hope that their clients would stop using them; they supported the browsers until larger forces caused their clients to shift to newer browsers.

Getting back to my original point, I think it's possible for each of you to have the viewpoints that you have, but also for rachelbythebay to say, "Sure, if I could coordinate with all other devs to stop handling broken edge cases at the same time, I would do that," and for you to say, "Sure, I can see how you would want to keep your job and handle edge cases until they are fixed upstream."


I half-agree with rachelbythebay's position, but in my experience users will definitely say "hey, this site doesn't work with your service", and then the developer investigates and fixes it. It would also probably be worth it to just email the devs of the service; that would get it fixed a significant proportion of the time.


> The fewer sites this works with, the quicker developers will fix it.

That turned out really well in the past.


Yeah, because online services and desktop software are equally hard to upgrade.


We are running one of the largest sites in the Netherlands with protocol-independent css/js/img: http://www.marktplaats.nl/ Haven't heard any real complaints from users about broken things. Probably anecdotal, but it saves quite a bit of complexity.


And in your spare time you are one of the best Dutch cartoonists around?


What kind of "stuff"?


IE7 and IE8 will download stylesheets that use protocol-relative URLs twice.

http://stackoverflow.com/questions/4831741/can-i-change-all-...


But only for uncached resources - see Eric Lawrence's comment:

“Internal to Trident, the download queue has “de-duplication” logic to help ensure that we don’t download a single resource multiple times in parallel.

Until recently, that logic had a bug for certain resources (like CSS) wherein the schema-less URI would not be matched to the schema-specified URI and hence you’d end up with two parallel requests.”

http://www.stevesouders.com/blog/2010/02/10/5a-missing-schem...

You could use a resource loader if you really cared but with IE8 under 10% and dropping I'd recommend keeping your site clean and maintainable – anyone using IE8 at this point is used to the web being slow and ugly so something like this will be the least of their worries.


IE certainly has the strangest of bugs.


You really care about that? Well, relative URLs (and maybe a `<base>` tag) might be a workaround. But seriously, I wouldn't worry about these Microsoft bugs.


If this is a concern, you've made some very poor decisions.


What about large assets, like big splash images? Those can be quite a bandwidth hog.

EDIT: from the post, IE8 only downloads stylesheets twice. Nevermind.


Like what? Supporting IE 7 & 8? Using relative URLs?


like worrying about twice as many hits to stylesheets (which are typically static content)


It's cached by URL, so I would assume (but haven't tested) that a different protocol would be cached separately.


From a security perspective it certainly should be.


Why not just use https:// ?

That way, if you load the file using file:// it will still work. Any downsides?


You will often need to dynamically generate all URLs depending on the current state of the page. A common example is an ecommerce site where normal pages are HTTP, but things like the shopping cart and checkout are HTTPS. Your template then needs to know to load resources as HTTPS when on a secure page. It's not the end of the world, but it's certainly annoying.
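Concretely, you end up with something like this sprinkled through your view helpers (an Express-style sketch with a made-up hostname, just to illustrate the annoyance):

  // Pick the asset scheme based on how the current page was requested.
  // req.secure is how Express-style frameworks expose "was this HTTPS?".
  function assetUrl(req, path) {
    var scheme = req.secure ? 'https' : 'http';
    return scheme + '://static.example.com' + path;
  }

With `//` URLs the browser does this for you and the helper goes away.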


And are there any considerations nowadays to not just run the whole site under HTTPS?


Many CDNs charge a lot for HTTPS (10x for all bandwidth, including non-secure). If you are fronting one of your sites with one of these, it is not economically viable to switch over.


Hint: local testing/loading of HTML files might then require a local server, since file:// would be used for resources too, and that is rather problematic.


You'll have to explain why local servers are problematic. More like file:// is.


Well, if you double-click an HTML file on your computer, it will open in your browser using file://. If you use the short `//example.com/foo.bar` form, it is quite problematic.

`//fonts.googleapis.com/css` becomes `file://fonts.googleapis.com/css` → doesn't work

Additionally, JavaScript files might not load or work due to browsers' (security) limitations. Not sure about the exact details here.


Definitely sounds like file:// is problematic, then.


Using a local server (instead of file:// access) would be a solution to that problem, not another problem.

That's not great...

A lot of folks get started editing HTML by downloading an existing page, editing it in some small way, and viewing the resulting page to see if it worked.

This would break that approach.


I agree, and I think not just for novices editing HTML pages. Being able to load HTML pages locally, or send them in a zip file, is sometimes important.

When I built Giraffe[1], a front-end for Graphite, one of my aims was that people could launch the dashboard from their desktop, then add dashboards to it by editing one file locally. Most of these people will use a server one day, but forcing them to launch a local server before they even start really decreases the ability to play with it instantly and try it out... the code in Giraffe's index.html needs to work both on the server and locally.

I've already experienced strange behaviour loading JSON/JSONP from a file:// based URL, and I know it's an edge case, but it's still a useful use case in my opinion.

[1] http://kenhub.github.io/giraffe


Firefox saves all assets to a _files directory and then fixes the URLs to point there.

If you really, really want to use //:

  busybox httpd -p 8080


or python -m SimpleHTTPServer 8080


Hell, even newer PHP comes with a built-in local dev server.
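If I remember the syntax right, it's just something like:

  php -S localhost:8080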


But it would suck to have to launch a server just to view an HTML file locally... (actually, the problem has been here for some time already, thanks to JS requiring protocols to match when fetching data).


It's not file:// that's problematic, it's that you're using protocol-relative URLs without an actual server for them to be relative to.


Guy, you haven't got this right.


A shame many linkification scripts won't support this in editors, though: //news.ycombinator.com/item?id=5514784.

Does anyone have any experience writing regex for this?
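My rough, untested attempt would be to make the scheme part of an ordinary URL matcher optional, something like this JavaScript sketch:

  // Match http://, https:// and scheme-relative // URLs.
  var urlPattern = /(?:https?:)?\/\/[\w.-]+(?:\/[^\s<>"']*)?/g;
  'see //news.ycombinator.com/item?id=5514784'.match(urlPattern);
  // -> ["//news.ycombinator.com/item?id=5514784"]

The hard part is deciding when a bare // in running text is actually a URL and not, say, a comment delimiter or a stray path.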


Just always use the https version then. There is nothing wrong with embedding https resources in an http page. `//` is just shorter and more flexible (e.g. you could switch off SSL if you want).


Good thing Tim Berners-Lee put two slashes in there.


Not only are even smart people sometimes wrong about stuff, they're wrong about when they're wrong about stuff.


If there were zero or one slashes there, we could use urls starting with a colon for the same purpose.


I agree that using protocol-relative URLs is the way to go, but there is one particular situation to watch out for: if the user saves the page to disk and opens it again, then // will become file:// and all the relative links will be broken.

So on something like an "invoice" page that the user is likely to want to save, you may not want to do that (or you could use a piece of JavaScript to rewrite the links dynamically).
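Something along these lines (an untested sketch) is what I have in mind for the JavaScript approach:

  // If the page is being viewed from disk, scheme-relative links would
  // resolve to file:// and break, so force them onto https: instead.
  if (window.location.protocol === 'file:') {
    var nodes = document.querySelectorAll('a[href^="//"], link[href^="//"]');
    for (var i = 0; i < nodes.length; i++) {
      nodes[i].setAttribute('href', 'https:' + nodes[i].getAttribute('href'));
    }
  }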


Egad! Yet another edge case to remember.


  some still don't...
Most still don't!


Annoyingly, there's an obnoxious popup on IE6 when doing this:

http://paulirish.com/2010/the-protocol-relative-url/

Not using protocol-relative URLs causes a great amount of pain. Unfortunately, when you're building content for third-party pages, you need more graceful degradation than focus-stealing dialogues.


Unless you are supporting China, IE6 usage is below 1% in virtually all countries: https://news.ycombinator.com/item?id=5511627


That article says the problem is with the way SSL is set up on the host in the example, not with scheme relative URIs.


No, protocol-relative URLs work fine in IE6. (The article you linked to doesn't say they don't.)


>and they don't redirect properly

You can't redirect SSL > non-SSL without a browser warning though, right? Unless you get a cert, at which point you may as well put it to use.


To anyone now panicking about user-generated content and non-SSL images, and thinking "what I need is some kind of SSL proxy for user-generated images"...

Node based SSL proxy: https://github.com/atmos/camo

And I whipped one up in PHP for an old PHP site that I worked on, if anyone wants to see that. I shoved it behind Nginx so that I also get a file cache for the most requested files.

For my project I purchased an extra SSL domain name ( https://sslcache.se ), as I had some concerns about serving user-generated content on my primary domain. Those concerns are valid, as github.com recently acknowledged by moving their UGC pages to github.io.
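For anyone wondering what these proxies boil down to: roughly something like the following Node sketch (stripped of everything you'd want in practice, such as signed URLs, caching, size limits and timeouts):

  var http = require('http');
  var parse = require('url').parse;

  // Expects requests like /?url=http://example.com/image.png and
  // streams the upstream response back out over this server's origin.
  http.createServer(function (req, res) {
    var target = parse(req.url, true).query.url;
    if (!target || target.indexOf('http://') !== 0) {
      res.writeHead(400);
      return res.end('bad request');
    }
    http.get(target, function (upstream) {
      res.writeHead(upstream.statusCode, {
        'Content-Type': upstream.headers['content-type'] || 'application/octet-stream'
      });
      upstream.pipe(res);
    }).on('error', function () {
      res.writeHead(502);
      res.end();
    });
  }).listen(8081);

Terminate SSL in front of it (with Nginx, as above) and the browser only ever sees an https:// URL.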


As far as I can tell, this change doesn't apply to images, and probably shouldn't apply to user-generated content (i.e., you shouldn't be letting users write/embed arbitrary CSS, JS, plugins, fonts, frames, etc...)


I had read the link, and whilst it doesn't mention images as being included in the change, it also doesn't mention them as being excluded, and it does imply that all mixed content is blocked. Given those conditions, my assumption was that it included images.

And there are many scenarios in which you do want to allow user-generated content to include JS; off the top of my head, Google Maps does so to allow user maps to be extensible. The issue is how such content is managed safely, and enabling SSL and putting the content on another domain is a good thing. Google does the right thing and serves such content over SSL and via an iframe on a totally different domain ( http://whois.domaintools.com/googleusercontent.com ).


FWIW I too wrote a camo clone, but in Go[1]. It was a decently sized project for learning a new language. At $dayjob we have a Python version too (considering replacing it with the Go version at some point)...

[1]: https://github.com/cactus/go-camo


That's cool, as the new thing I'm working on is all in Go I'll probably use this and contribute back to it.


According to the link, the block only applies to certain types of resources, notably excluding images (presumably because malicious images cannot really take over the page).


Isn't allowing your users to hotlink images arbitrarily instead of rehosting usually a mistake anyway?


If by hotlinking you mean that terrible act of theft of bandwidth, then yes! Down with that✝.

If by hotlinking you mean inline images that are an essential part of hypertext documents, then no! It's a great thing to support.

But the basic thing is that by not hosting, and by being just a proxy, we haven't expressed any ownership or liability over the content that passes through the SSL proxy.

And as a side benefit, we don't have to build out storage for this.

✝ for those who like to externalise their responsibility to determine whether their servers serve a request by just stomping around claiming people 'steal' bandwidth.


Does Firefox 23 have support for TLSv1.2?

Please fix NSS and support TLSv1.2

As of now, IE and Opera are the only browsers I'm aware of that support TLSv1.2.

The BEAST attack is a vulnerability against SSL 3.0/TLSv1.0.

With the more widespread use of HTTPS, which isn't a bad thing, it would help if all browsers supported the latest security recommendations.

https://blogs.akamai.com/2012/05/what-you-need-to-know-about...


Firefox 23 does not have TLS v1.2, but Nightly will very soon. During March there was very active development in NSS for TLS v1.2, and AFAIK they are checking it in bit by bit now.


There's no point implementing TLS1.2 until browsers stop silently downgrading through TLS versions under attacker control.

And there's not much prospect of that happening: it seems we're happy to trade the security of 100% of sites for compatibility with the less than 1% that are broken.


From what I've read, it looks like the server advertises all the TLS versions it supports, the client picks the highest version, but then an attacker sends an RST and the client goes to the next version in the list. Is that accurate? (The only other downgrade attack I saw was on False Start, which has since been disabled in Chrome.)

Could (or should) they support an option in the browser to require only the highest possible version of a protocol? Or is there some other fix required to mitigate the attack?


This doesn't mention images specifically -- anyone have insight into how that will work? I'm assuming they're not "active" content.

Even Google's image search displays insecure images, so I'd hope they get a pass.


From the article:

That means insecure scripts, stylesheets, plug-in contents, inline frames, Web fonts and WebSockets are blocked on secure pages, and a notification is displayed instead.

That seems to me like a complete list and does not include images.


It looks like images get loaded. However, CSS doesn't, which breaks a lot of sites (e.g. https://www.nytimes.com/).

This is in the latest Firefox Nightly build, and available as a pref in older Firefox versions, so you can play with it too.


Just visited the secure NYT link in Chrome 26 and none of the CSS and only a few of the images loaded there either. The console reports loads of [blocked] insecure content.


So that explains why some sites were looking unstyled. Github had the same problem some days ago.


Again, if you embed non-SSL content into an SSL website, you are obviously doing something wrong.

CSS attribute selectors (regex-like matching) can be used to guess page content. Also, CSS can load SVG, which may include JavaScript.


It's odd they have a secure version of nytimes.com at all. They have separate subdomains like myaccount.nytimes.com that are secure, and the links back to the homepage are all explicitly http.


Note that both Chrome and IE have been blocking this for some time now. IE did it first.


IE made the first move in favor of security; kudos to them.

It seems like Chrome really forced the ecosystem to move towards auto-updates and sandboxing. Each of those has transition impacts for developers and publishers.

Mixed content though, I've got to imagine that's a hard area for Google to lead on, since its transition challenges primarily affect ad integration.

This follows on the heels of the "disable third party cookies by default" row. I'm wondering if a) Google's business interests will prevent them from being a first mover on security and privacy in browser development, and b) if other browsers will start exploring these issues just to force Chrome to make hard choices.


Yeah, Chrome has been doing this for a while. It has actually been pretty annoying that Firefox wasn't doing so.

My main dev environment is Firefox and I hate it when things work in FF but not Chrome.


I wonder if this will help push Google to start serving AdSense over SSL?


Chrome has been doing this for a fair while now, and not very well in my opinion either: the option to load the blocked insecure content is hidden away in a tiny silver shield at the right of the URL bar.


The visibility of the shield was designed to be proportional to the number of people who care/should care about it.


I believe this has been default on IE (and maybe even Chrome?) for quite some time.


It's fun looking at unintended tack-on effects of decisions like this.

For example, requiring SSL for all assets served on SSL pages is going to make the profits of CloudFlare, and other CDN providers with the same business model, spike sharply. You have to have a paid plan ($20/mo to start) to get SSL CDN support, which basically means CloudFlare's free plan is now useless to anyone who enforces HSTS.


I would expect most companies serving pages over SSL to already be serving assets over SSL to avoid the mixed content warnings that most browsers currently give when loading non-SSL assets on an SSL page.


A lot of companies make a profit by selling SSL as a pro feature. Just look at all those services where SSL login is only available at a certain price level. (And using SSL only for the login form is a joke, too.)

It's a shame that they're making money from such basic consumer security, especially since SSL is neither expensive nor particularly demanding on performance.


If you care about security enough to use HTTPS, you need to serve assets over SSL anyway. Note that this does not affect all assets – only active content – which means that you can still serve images, audio, video, etc. over HTTP if you're comfortable with the risk of interception, spoofing, etc.


Browsers blocking insecure content has been a challenge for us. Users can add embeds into their pages and, unfortunately, no two embeds are alike. There are way too many services that don't offer secure versions of their embeds, and on top of that, several implement secure vs. insecure embeds differently.


Can you click a button in the notice to override the error and display the content anyways?

If not, this is going to be a colossal pain in the ass.


Yes, the current implementation works similarly to click-to-play plugins. A shield icon shows up in the address bar, along with a notice (in more user-friendly terms) that the page has http:// content. Examples of the click-to-play UI are at https://blog.mozilla.org/security/2012/10/11/click-to-play-p...


Yep, I use FF Nightly and Pocket's bookmarklet, noticed that the bookmarklet broke in this new version, and notified Pocket Support.


Would Google Reader get an exception, or is this Firefox version being released after that shuts down?


Firefox 23 will be released on August 6 [1], while Reader closes on July 1 [2].

[1] https://en.wikipedia.org/wiki/History_of_Firefox#Version_23

[2] https://en.wikipedia.org/wiki/Google_Reader


It's been a while since I made a Facebook app, but I'm pretty sure that last time I checked you could make an app on a non-SSL domain and it would be iframed into the secure page, which will break in FF23, so some apps may not work anymore. Just saying.


If you access the app using https-ed Facebook, i.e. via https://apps.facebook.com/appname, it would use https:// for the iframe as well. So, no problems there (at least no new ones that did not already exist).


Do the affected pages currently show a warning in Firefox 22?


Firefox 22 and earlier do not show a user-visible warning, but they do log bright red "[Mixed Content]" warnings in Firefox's Developer Console.

Firefox 23 displays a grey shield icon in the address bar for mixed content.


I was considering upgrading my site to HTTPS but now I changed my mind. The benefits are no big deal anyway.



