EDIT: Use `//` instead of `http://` or `https://` and the resource will be requested over whatever protocol the page itself was loaded with.
EDIT 2: When you use the `//` shortcut, double-check that the site you are linking to supports HTTPS; some still don't, and they don't redirect properly...
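To make the behaviour concrete, here's a tiny sketch (the host is a placeholder, purely illustrative):

```js
// A protocol-relative URL inherits the scheme of the page it appears on:
// on an https:// page this loads over HTTPS, on an http:// page over HTTP.
var img = document.createElement('img');
img.src = '//example.com/logo.png'; // hypothetical host
document.body.appendChild(img);
console.log(img.src); // e.g. "https://example.com/logo.png" on an https page
```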
From the standpoint of the individual developer, you handle edge cases like this so you don't break your clients' experience.
From the standpoint of the ecosystem, broken tools are bad, and they cause individual developers to have to handle more edge cases.
If one individual developer doesn't handle these edge cases, their clients will just think they are bad. If all of the individual developers decided not to handle these edge cases in a coordinated fashion, sure, they could trigger a change. But I don't think that's how it happens in the real world.
To look back on the days of IE6/IE7, most developers didn't just stop supporting these browsers and hope that their clients would stop using them; they supported the browsers until larger forces caused their clients to shift to newer browsers.
Getting back to my original point, I think it's possible for each of you to have the viewpoints that you have, but also for rachelbythebay to say, "Sure, if I could coordinate with all other devs to stop handling broken edge cases at the same time, I would do that," and for you to say, "Sure, I can see how you would want to keep your job and handle edge cases until they are fixed upstream."
That turned out really well in the past.
“Internal to Trident, the download queue has “de-duplication” logic to help ensure that we don’t download a single resource multiple times in parallel.
Until recently, that logic had a bug for certain resources (like CSS) wherein the schema-less URI would not be matched to the schema-specified URI and hence you’d end up with two parallel requests.”
You could use a resource loader if you really cared, but with IE8 under 10% and dropping I'd recommend keeping your site clean and maintainable – anyone using IE8 at this point is used to the web being slow and ugly, so something like this will be the least of their worries.
EDIT: from the post, IE8 only downloads stylesheets twice. Never mind.
That way, if you load the file using file:// it will still work. Any downsides?
`//fonts.googleapis.com/css` becomes `file://fonts.googleapis.com/css` → doesn't work
That's not great...
A lot of folks get started editing HTML by downloading an existing page, editing it in some small way, and viewing the resulting page to see if it worked.
This would break that approach.
When I built Giraffe, a front-end for Graphite, one of my aims was that people could launch the dashboard from their desktop and then add dashboards to it by editing one file locally. Most of these people will use a server one day, but forcing them to launch a local server before they even start really reduces their ability to play with it instantly and try it out... the code in Giraffe's index.html needs to work both on a server and locally.
I've already experienced strange behaviour loading JSON/JSONP from a file:// based URL, and I know it's an edge case, but it's still a useful use case in my opinion.
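One workaround I've seen (a sketch only – the font URL is just the example from this thread, and falling back to https: is an assumption) is to detect file:// at runtime and switch to an explicit scheme:

```js
// If the page was opened from disk, '//' would resolve to file://...,
// so fall back to an explicit scheme. Using https: as the fallback is an
// assumption; http: would also work if the host doesn't support TLS.
var scheme = window.location.protocol === 'file:' ? 'https:' : '';
var link = document.createElement('link');
link.rel = 'stylesheet';
link.href = scheme + '//fonts.googleapis.com/css?family=Open+Sans';
document.getElementsByTagName('head')[0].appendChild(link);
```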
If you really, really want to use //:
`busybox httpd -p 8080`
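If you have Python around instead, `python -m SimpleHTTPServer 8080` (Python 2) or `python3 -m http.server 8080` (Python 3) serves the current directory just as well.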
Does anyone have any experience writing regex for this?
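Assuming "this" means rewriting hard-coded http(s):// references to protocol-relative ones, here's a naive sketch (the function name is made up, and the regex only handles quoted src/href attributes – anything fancier really wants a proper parser):

```js
// Hypothetical helper: rewrite absolute http(s) URLs in src/href attributes
// to protocol-relative ones. A blunt regex, fine for simple markup only.
function makeProtocolRelative(html) {
  return html.replace(/\b(src|href)=(["'])https?:\/\//gi, '$1=$2//');
}

makeProtocolRelative('<img src="http://example.com/a.png">');
// → '<img src="//example.com/a.png">'
```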
some still don't...
Not using protocol-relative URLs causes a great amount of pain. Unfortunately, when you're building content for third-party pages, you need more graceful degradation than focus-stealing dialogues.
You can't redirect SSL > non-SSL without a browser warning though, right? Unless you get a cert, at which point you may as well put it to use.
Node based SSL proxy:
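If anyone just wants the basic shape of the idea, it's roughly this (a minimal sketch, not any particular project – the upstream host and cert paths are placeholders, and binding port 443 needs privileges):

```js
// Minimal sketch: terminate TLS locally, then forward the request to an
// insecure upstream over plain HTTP and stream the response back.
var https = require('https');
var http = require('http');
var fs = require('fs');

var UPSTREAM = 'insecure-origin.example.com'; // hypothetical upstream host

https.createServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt')
}, function (req, res) {
  var headers = Object.assign({}, req.headers, { host: UPSTREAM });
  var proxied = http.request({
    host: UPSTREAM,
    path: req.url,
    method: req.method,
    headers: headers
  }, function (upstream) {
    res.writeHead(upstream.statusCode, upstream.headers);
    upstream.pipe(res);
  });
  req.pipe(proxied);
}).listen(443);
```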
And I whipped one up in PHP for an old PHP site I worked on, if anyone wants to see that. I shoved it behind Nginx so that I also get a file cache for the most-requested files.
For my project I purchased an extra SSL domain name (https://sslcache.se), as I had some concerns about serving user-generated content on my primary domain – concerns which are valid, as github.com recently acknowledged by moving their UGC pages to github.io.
And there are many scenarios in which you do want to allow user-generated content to include JS; off the top of my head, Google Maps does so to allow user maps to be extensible. The issue is how such content is managed safely, and enabling SSL and putting the content on another domain is a good thing. Google do the right thing and serve such content over SSL and via an iframe on a totally different domain (http://whois.domaintools.com/googleusercontent.com).
If by hotlinking you mean inline images that are an essential part of hypertext documents, then no! It's a great thing to support.
But the basic thing is that by not hosting, and by being just a proxy, we haven't expressed any ownership or liability over the content that passes through the SSL proxy.
And as a side benefit, we don't have to build out storage for this.
✝ for those who like to externalise their responsibility to determine whether their servers serve a request by just stomping around claiming people 'steal' bandwidth.
Please fix NSS and support TLSv1.2
As of now, IE and Opera are the only browsers I'm aware of that support TLSv1.2.
There is a known vulnerability, BEAST, against SSL 3.0/TLSv1.0.
With more widespread use of HTTPS – which isn't a bad thing – it would help if all browsers supported the latest security recommendations.
And there's not much prospect of that happening: it seems we're not willing to trade compatibility with less than 1% of sites for better security for 100% of users.
Could (or should) they support an option in the browser to require only the highest possible version of a protocol? Or is there some other fix required to mitigate the attack?
Even Google's image search displays insecure images, so I'd hope they get a pass.
That means insecure scripts, stylesheets, plug-in contents, inline frames, Web fonts and WebSockets are blocked on secure pages, and a notification is displayed instead.
That seems to me like a complete list and does not include images.
This is in the latest Firefox Nightly build, and available as a pref in older Firefox versions, so you can play with it too.
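(If I remember the pref names correctly, they're `security.mixed_content.block_active_content` and `security.mixed_content.block_display_content` in about:config.)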
It seems like Chrome really forced the ecosystem to move towards auto-updates and sandboxing. Each of those has transition impacts for developers and publishers.
Mixed content though, I've got to imagine that's a hard area for Google to lead on, since its transition challenges primarily affect ad integration.
This follows on the heels of the "disable third party cookies by default" row. I'm wondering if a) Google's business interests will prevent them from being a first mover on security and privacy in browser development, and b) if other browsers will start exploring these issues just to force Chrome to make hard choices.
My main dev environment is Firefox, and I hate it when things work in FF but not in Chrome.
For example, requiring SSL for all assets served on SSL pages is going to make the profits of CloudFlare, and other CDN providers with the same business model, spike. You have to have a paid plan ($20/mo to start) to get SSL CDN support, which basically means CloudFlare's free plan is now useless to anyone who enforces HSTS.
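(For context, enforcing HSTS just means sending a header like `Strict-Transport-Security: max-age=31536000; includeSubDomains`, after which browsers will only talk to the site over HTTPS – so whatever CDN fronts it has to support SSL.)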
It's a shame that they're making money from that basic consumer security, especially since SSL is neither expensive nor particularly demanding on performance.
If not, this is going to be a colossal pain in the ass.
Firefox 23 displays a grey shield icon in the address bar for mixed content.