Good. Also, I recently poked the bear on the Chromium and Mozilla dev security mailing lists, and they started discussing ways to push HTTPS in the browser UI. Hopefully this momentum continues!
The gray icon might as well not be there as far as consumers are concerned, and the blue vs. green crossing-guard icons really fail to indicate anything of use to an end user.
If you want to see a big list of SSL UI screenshots, I have them up at:
I disagree. It's not that I don't think secure websites are preferable; rather, I see Google's growing influence shaping the web the way Google wants it to be, and Google being perfectly willing to use that influence. You can argue that the things they are doing now are making the web better. But are we assured that this will always be the case?
What happens if this influence turns completely and more directly self-serving? For example, Google AdWords customers being given higher organic ranking, weighted by how much they spend?
At first glance it might appear that such a scheme would work against AdWords, but it really wouldn't: ad-click advertising just doesn't work for a lot of us, while organic search does.
"The decision could encourage more sites to turn on encryption, which makes them less vulnerable to hacking".
What? This is entirely wrong. It makes them more vulnerable to hacking. There is a whole lot more complex software and configuration to get right, and we know SSL doesn't have a great recent history of that....
Of course it helps secure the communications, which is presumably what they meant, but the statement the article actually makes is 100% wrong.
As somebody else pointed out recently in another thread, being able to steal session cookies can even help you attack the server directly, as authenticated users usually have more/different write access to databases and the like, making (e.g.) SQL injections easier. In this regard, even if you don’t consider it “hacking a website” if someone steals session cookies, HTTPS makes it more difficult to “hack websites” in the sense of “getting root access to the server”.
How that compares to the increased attack surface of the HTTPS implementation is of course up for debate.
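On the cookie-theft point: the usual mitigation is to mark session cookies Secure (never sent over plain HTTP) and HttpOnly, so HTTPS actually protects them. A minimal Python sketch using only the standard library - the cookie name and value here are just placeholders:

    from http.cookies import SimpleCookie

    # Hypothetical session cookie; name and value are placeholders.
    cookie = SimpleCookie()
    cookie["session_id"] = "opaque-random-token"
    cookie["session_id"]["secure"] = True    # browser only sends it over HTTPS
    cookie["session_id"]["httponly"] = True  # not readable from JavaScript
    cookie["session_id"]["path"] = "/"

    # Renders the Set-Cookie header a server would emit, something like:
    # Set-Cookie: session_id=opaque-random-token; HttpOnly; Path=/; Secure
    print(cookie.output())

Without the Secure flag, the cookie rides along on any plain-HTTP request to the same site, which is exactly what makes the session-stealing attack described above so easy.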
It makes YOU (the consumer) less vulnerable to "hacking" (MITM); it doesn't actually make the website less vulnerable, and, as you quite correctly pointed out, it makes it somewhat more so (due to the increased attack surface).
That's a large part of the reason HTTPS/SSL isn't more common: it doesn't benefit the website as much as it benefits its customers, and there are both real and perceived costs in deploying HTTPS.
So you have to put pressure on them (websites) to adopt secure defaults. Google are now helping hugely.
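One concrete "secure default" is HTTP Strict Transport Security, which tells browsers to refuse plain HTTP for a domain after the first HTTPS visit. A rough sketch with Python's standard library - the handler, port and max-age are illustrative, and in practice the header only counts when it is delivered over HTTPS (e.g. behind a TLS terminator or CloudFlare):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HSTSHandler(BaseHTTPRequestHandler):
        """Toy handler that attaches an HSTS header to every response."""

        def do_GET(self):
            body = b"hello over https\n"
            self.send_response(200)
            # Browsers remember this for max-age seconds and silently
            # upgrade future http:// links to https://.
            self.send_header("Strict-Transport-Security",
                             "max-age=31536000; includeSubDomains")
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8443), HSTSHandler).serve_forever()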
It's good to see CloudFlare are going to make this free. In planning the launch of my own new site/blog/thing (hopefully launching soon), the one thing that's really stopping me from considering SSL isn't the cost of certificates (which can be had for peanuts anyway if you don't care too much which CA you use); it's the ongoing costs and increased server load.
Right now, launching without CloudFlare would almost certainly result in the unfortunate death of my VPS, and SSL would only expedite that. OTOH, the minimum paid CloudFlare package would quadruple my hosting costs - I'm not running enterprise-scale infrastructure for my personal site!
If CloudFlare do make it part of their free package, I will definitely use SSL by default.
I can't reveal that at the moment. That will be part of the announcement in mid-October. I can say that this will not require anyone to install new root certs in browsers etc.
To quote it: "Second, at CloudFlare we've cleared one of the last major technical hurdle before making SSL available for every one of our customers -- even free customers. We're on track to roll out SSL for all CloudFlare customers by mid-October."
Wow, there are a lot of duplicate articles about this reaching the front page, one of which already contains a complaint about the mods changing the title. They could really do with merging these submissions together.
I've previously suggested a feature to show title change histories under the title, before the comments, because certain comments make no sense after the title is changed.
I'd also like to suggest a similar feature for merges, whereby when there are separate articles talking about the same thing, the canonical one gets used as the main link and the other submissions are retained, again, under the title, before the comments.
> We're on track to roll out SSL for all CloudFlare customers by mid-October. When we do, the number of sites that support HTTPS on the Internet will more than double
To be secure, won't this require your customers to set up HTTPS between CloudFlare and their hosting providers, which will require additional manual setup with their hosting provider, assuming they even support HTTPS? It seems rather optimistic to assume that enough customers can/will do this to result in a doubling of sites supporting HTTPS on the Internet.
You can use a self-signed cert between CloudFlare and your server, by the looks of it. I agree on the optimism point, though. Hopefully there's some way of telling whether your traffic from CF to the origin is secure.
Wouldn't want the next big community to be fake-secure to save a few quid.
Thanks for the link. I'm really surprised by the presence of the "flexible" option since it provides little more than a facade of security. (A self-signed cert is also insecure, though less so, unless there's some way to pin it on the CloudFlare side.)
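On the pinning point: if CloudFlare (or anyone) lets you record the self-signed cert's fingerprint out of band, a client can verify it on every connection instead of relying on a CA. A rough Python sketch - the origin host and expected fingerprint are placeholders, and I don't know what CloudFlare will actually do for origin pulls:

    import hashlib
    import socket
    import ssl

    HOST = "origin.example.com"   # placeholder origin host
    PORT = 443
    # sha256 of the self-signed cert's DER bytes, recorded out of band
    EXPECTED_FINGERPRINT = "0000...placeholder...0000"

    def fetch_fingerprint(host, port):
        # Skip CA validation (the cert is self-signed) but still grab the
        # certificate the server presents so we can compare fingerprints.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der_cert).hexdigest()

    if __name__ == "__main__":
        seen = fetch_fingerprint(HOST, PORT)
        if seen != EXPECTED_FINGERPRINT:
            raise SystemExit("certificate fingerprint mismatch: " + seen)
        print("pinned certificate matched")

Without that comparison step, "encrypted to whoever answered" is all a self-signed cert gives you.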
> "For now it's only a very lightweight signal - affecting fewer than 1% of global queries, and carrying less weight than other signals such as high-quality content - while we give webmasters time to switch to HTTPS," Google's Zineb Ait Bahajji and Gary Illyes said in the blog post.
Later, high-quality content will be carrying less weight than HTTPS.
Even for Wikipedia there are privacy implications of third parties knowing which page you are visiting, and integrity concerns in countries that want to censor certain topics (e.g. China).
That's a nice concept, but even with HTTPS the GET string is often leaked (e.g. referrer strings, tracking URLs (like Google's prior to this)).
It is technically encrypted in HTTPS traffic, but it isn't treated with very much respect, so if you actually have access to all of the HTTP and DNS traffic surrounding a request you can often recover the pages viewed.
Additionally, in a lot of these countries computers come pre-installed with a government root CA which they can use to impersonate sites like Wikipedia (although the USG does this too!).
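To make the "what actually leaks" point concrete: a passive observer sees the hostname anyway (via the DNS lookup and the TLS SNI field), while the path and query string travel inside the encrypted channel - but those can still escape through Referer headers, tracking URLs and server logs, which is the leak being described above. A small illustrative sketch with a made-up URL:

    from urllib.parse import urlsplit

    # Hypothetical page a user might visit over HTTPS.
    url = "https://en.wikipedia.org/wiki/Some_Sensitive_Topic?action=history"
    parts = urlsplit(url)

    # Visible on the wire even with HTTPS (DNS lookup, TLS SNI):
    print("observable:", parts.hostname)

    # Encrypted in transit, but prone to leaking via Referer headers,
    # analytics/tracking URLs and server logs:
    print("leak-prone:", parts.path, parts.query)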
DNS doesn't give the page you were on. Whilst some systems might have a government root CA on them, it's still quite possible to remove that - it's practically impossible to remove ISP-level monitoring.
Indeed. My static blog hosted on Linode behind Apache has survived a HN frontpage entry three times now. If I have to use HTTPS, does that mean I need a beefy server with lots of entropy?
Google, from 2010:
"On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead."
Entropy is a different matter, but I believe pretty much all virtualisation platforms have ways to ensure the VMs have enough entropy sources - so it should be fine.
No. My HTTPS blog hosted on Linode's smallest plan has survived a HN front page without any trouble. It's a myth that HTTPS causes significant resource overhead.
As for entropy, your server only needs a small amount of entropy to seed a CSPRNG, and the CSPRNG takes it from there.
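If anyone wants to sanity-check that on Linux, the kernel exposes its entropy estimate, and once the pool is seeded the CSPRNG can hand out as much keying material as you like. A quick sketch - the /proc path is Linux-specific and the 32-byte "session key" is just an example:

    import os
    import secrets

    # Linux-specific: the kernel's current entropy estimate for its pool.
    with open("/proc/sys/kernel/random/entropy_avail") as f:
        print("kernel entropy estimate:", f.read().strip())

    # Once seeded, the CSPRNG produces keys/nonces without "draining"
    # meaningful amounts of entropy.
    session_key = secrets.token_bytes(32)   # 256-bit random value
    print("sample key:", session_key.hex())

    # os.urandom() draws from the same underlying source.
    print("another 16 bytes:", os.urandom(16).hex())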
https://groups.google.com/a/chromium.org/forum/m/#!topic/sec...