Evolving Chrome's security indicators (chromium.org)
101 points by cryo on May 17, 2018 | 78 comments



I am all for secure connections, but I do lots of work with apps that run local webservers (inside your home network). AFAICT there's no non-techie-friendly way to make that secure. The only non-techie-friendly solution I know of is the Plex solution, but the Plex solution costs $$$$$$$

https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...

You can also think of this as an issue for any IoT device that wants to serve a webpage.

Let's Encrypt doesn't cover this use case because it only provides certs for domains, so every IoT device would need its own domain or subdomain, and that domain would need to be blessed in the list of domains Let's Encrypt doesn't rate-limit, since you need a cert per device.

Some company making a product (for example Plex) can pay to run the dynamic DNS and partner with a CA, but at the moment there is no free and easy way for an open source project to do this.

I wish I knew how to make it happen. Get a giant grant like Let's Encrypt got or something to enable a solution. I'd love all my projects running local servers to be able to use HTTPS but at the moment it's way too painful.


I actually just started using Let's Encrypt to issue certificates for web services (router config pages, Firewall, NAS server) on my local network. You _do_ need your own domain and DNS server so unfortunately it isn't free, but it _is_ definitely doable.

Basically I just set up [acme-dns][1] on a VPS, pointed the relevant _acme-challenge domains at that server via a CNAME entry, and gave each device on my LAN an account on the acme-dns server so they can fulfill challenges at will.

Unfortunately as you said that's probably a little too complex for the average user, not to mention domains and VPSes aren't free. Probably wouldn't be too expensive to offer as a service of some kind though; I'm only spending $3.50/month total, and I'm sure my VPS could handle _much_ more traffic than it currently does just fulfilling acme challenges for my LAN.

[1]: https://github.com/joohoi/acme-dns
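For the curious, the device-side half of this flow is just an authenticated POST to the acme-dns instance before each renewal. Here's a minimal Python sketch of building that call; the host, account ID, API key, and subdomain are all hypothetical placeholders, and the `/update` endpoint with `X-Api-User` / `X-Api-Key` headers follows the acme-dns README:

```python
import json
import urllib.request

# Hypothetical acme-dns instance running on the VPS (placeholder host).
ACME_DNS = "https://auth.example.com"

def build_update_request(user, key, subdomain, txt):
    """Build the POST that publishes a DNS-01 challenge TXT record."""
    body = json.dumps({"subdomain": subdomain, "txt": txt}).encode()
    return urllib.request.Request(
        f"{ACME_DNS}/update",
        data=body,
        headers={
            "X-Api-User": user,
            "X-Api-Key": key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Before each renewal the device publishes the challenge token its ACME
# client was handed; Let's Encrypt then looks up the TXT record at
# _acme-challenge.<device domain>, which CNAMEs into the acme-dns zone.
req = build_update_request(
    user="d420c923-placeholder",      # account UUID from /register
    key="placeholder-api-key",
    subdomain="8e5700ea-placeholder",
    txt="A" * 43,  # DNS-01 tokens are 43-character base64url strings
)
```

The nice property of the per-device accounts is that each device can only rotate its own challenge record, so a compromised NAS can't obtain certs for the router's name. Most ACME clients (acme.sh, or certbot with an auth hook script) can drive this for you.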


It's good that there is at least a way to do it, but it's still absurd that showing a page from a server in a LAN on a browser in the same LAN now requires third-party services on the internet and incurs running costs.

(* yes, you could simply tolerate the insecurity warning if you're ok with not using more recent JS/CSS features. I imagine though the warnings will become more obnoxious and the amount of features you can't use will grow.)

(* yes, you can install your own root CA cert. For admin pages, this is even a reasonable thing to do - however, you'll need to do it on any machine that is expected to open your page. So say goodbye to raspberry pi media experiments or kiosk systems.)


The problem is that the address associated with a publicly trusted cert _has_ to be universal. We can't give users certificates for `https://192.168.1.1/` or `https://my-router.local/`, because then attackers could get certs for those names as well, which would defeat the whole purpose of HTTPS.

DNS is well suited for allocating unique, memorable names to devices, but, as you said, it _does_ rely on third party services (registrars, at the very least) to operate. If we don't want to rely on third party services, we'll need to come up with an alternative to DNS that accomplishes the same goals.


You know, this has always bothered me to a certain extent. We have self-signed certificates, and we have CA certificates, but couldn't we have an in-between option? Say, a self-signed certificate which suggests to the browser that it could be installed locally.

The browser could still display a warning once: "Warning! We can't determine whether we should trust this site. It claims to be [such and such local service]. Would you like to trust this site? [Yes, temporarily], [Yes, permanently (installs local certificate)], [No, get me out of here!]"

Firefox already does something like this, but its wording is kind of vague, and the strong assumption from browser vendors seems to be CA or bust. The truth is, encryption is useful in some very common cases where a CA or a domain name is simply not feasible, and we need workarounds for those use cases that don't feel like a dirty hack for techies only.


I'd also like to find a solution for this.

Part of me wants to say that the solution is to just accept that it's not secure and the warning is valid, and services only available behind my firewall don't need to be accessed over HTTPS. But that doesn't help when browser vendors are also making new features only available on secure origins. How are you supposed to build a service worker to run on your local server when service workers only run on secure origins and there's no way to secure a local origin?


I've wondered if there is a startup idea there. SecureIoT.com that provides free certs for free projects and commercial certs for commercial projects? A few cents per device and some good sales people to get IoT companies to sign up for a managed solution?

I think the problem is few IoT devices (except maybe IP cameras and NAS) provide a webpage. Most provide an app, and both the app and the device talk to the cloud, not to each other.


Does Plex's solution support IPv6? I tried querying some IPv6 addresses in a similar way and it didn't respond with anything, but perhaps I wasn't doing it correctly.


The solution is for Let's Encrypt or a similar CA to start issuing certificates for direct IP addresses.


The parent was talking about a device on a home network, I definitely do not want a CA issuing a trusted cert for 192.168.0.1


Obviously I'm only referring to public IP addresses. Let's Encrypt can't attest anything about local IPs.


Which is a good idea, until we remember that most residential "local" networks have dynamic IPs. Honestly the best way to deal with this is to have a dynamic dns record pointing to a (sub)domain you own, then get a cert through that.


> Users should expect that the web is safe by default

I don't expect this of anything else in my life. Why in the world would I expect the web to be safe?

To be clear: the only thing this is assuring is that your connection is secure. Your content may not be. Hell, the web server may not be. There's no way a browser can know these things. And they're going to assure the user they're secure by default?

Lots of phishing sites use valid connections to trick you into giving your SSN, credit card, and other data. This is not safe or secure. But the browser is telling me it is.


You expect text messages to be intercepted in transit? You expect people to steal things out of your checked luggage? You expect your credit card number to be stolen whenever you give it to a restaurant to swipe?

All of these things happen with some frequency, sure, but I personally expect that they don't, which is why I'm disappointed and upset when they do happen.

And the fact that the browser can't assure any other security besides transport security on its end is exactly why it shouldn't be displaying an indicator saying "Secure," when all it means is "In this particular way, it is not insecure; for everything else I have no idea." If it knows it's insecure in a particular way, it should display a warning or possibly an error. If it doesn't, it's meaningless to claim "Secure." That's how the rest of the world works: the State Department gives you warnings about high-risk travel areas, but it doesn't say that any place is safe and crimes won't happen to you there.


Guess I'm the only person who never sends sensitive information by text, never checks valuables into baggage, and never hands my credit card to the person serving me? (Ok, this last one has become way less important since the rollout of Chip & PIN).


Sure, there are people who do that, and there are people who do all their web browsing in Tor. Such people are important and there should be tools (like Tor Browser) built for them, but they're also very much not the target audience of the default case. Chrome is a mass-market browser, not a browser built for people with specific uncommon security needs.

(What do you do at restaurants - pay with cash? Insist on walking up to the PoS device?)


Actually the target position is that https _won't_ be marked as "secure", but rather that http will be marked as "not secure". Can't really argue with that.


Oh I'm not arguing with that part, that's a good development. But that's "assume the web is just the web", not "assume the web is secure by default". The latter is a dangerous idea.


I take this to be an aspirational statement, not a descriptive one. Google is not saying the web is currently safe by default, but rather that it is their goal to make browsing a safe experience by default.


A place or thing can be considered "safe" without being absolutely safe.


Removing the lock entirely is just wrong.

It’s a really quick indicator that I’ve either set something up right or not.

Also if I’m on chase.com and I don’t see a lock of any kind I’m going to NOPE out. And I’m a tech person.

Imagine explaining to my parents that after all this training to look for the lock they now shouldn’t look for the lock, but look for this red text instead.

Bad move.


The problem with the lock is that people think it means a site is safe (i.e. contents are safe). That's not what it means and you're never going to educate enough people about what it really means, nor should you have to.

It's grossly misleading and it needs to go.


I think the best compromise is to at least show HTTPS in green. Not showing anything at all seems wrong somehow.

I understand why they don't want to show "Secure" for websites that may have already been breached and lost your plaintext password. But I don't understand why they need to remove https and http from the URL. Is that really a big UX step forward? Or one backwards? Sometimes removing too many interface elements causes a regression in intuitiveness and good UX.

Unless they're preparing for the replacement of the HTTP protocol, and they don't want people to freak out when they see the new labels? That could explain it. But other than that, I don't see a good reason for eliminating HTTPS/HTTP from the URL.


They aim to replace https with quic when it’s ready for prime time.


Do you have a source for that? I assumed the lessons learned from SPDY and QUIC eventually went into HTTP 2


> It’s a really quick indicator that I’ve either set something up right or not.

That indicator just changes to either showing nothing (good) or showing an info icon with "Not Secure" (bad). There's still an indicator there, it's just different.


Not only that, but the red text doesn't appear until it's too late: AFTER they've put their sensitive information into a form, which probably spirited it away with JavaScript before one can change their mind.


Not true: Currently the "info" indicator appears on first interaction (which _could_ be after they've auto-completed everything, but may not be). This is changing though: In Chrome 68 the info indicator will be there on all HTTP pages, without form interaction required. So this change just changes it from "info" to red warning.


If you type fast, you can probably get a whole username (or personal name, or street address) in without seeing the red warning. (That said, it looks like it _will_ show "not secure" by default, even before you type anything in, on HTTP; it just isn't emphasized unless you enter data.)


The biggest problem here is not that the green lock is being eliminated, but rather that HTTP pages will show a low-contrast grey message unless and until you type into an input.

By default, if you're not typing in a form, Chrome will show no major eye-catching colour-differentiation between HTTP and HTTPS. That's a major regression in user security.


I believe you're mistaken, conflating two different steps. There's still going to be a lock for HTTPS while the "Not Secure" message doesn't turn red for HTTP until you type in a text box.

The post indicates that by the time the lock disappears ("eventually"), then all HTTP sites will be marked as not secure in red. In the meantime it's removing the "Secure" text with the lock and HTTP sites will get their "Not Secure" turned red when you type in a text field. Those seem like fine steps.

> That's a major regression in user security.

Turning red is more functionality than today, not less.


What the poster you're replying to is saying is that the reduction of contrast between secure and not (prior to anything being typed into the page) is the reduction in functionality.

That is, there is now nothing eye-catchingly different between HTTP and HTTPS aside from a small monochrome marker; I think OP's concern is that it will get lost in people's vision next to the URL — they won't notice it. (And I somewhat agree w/ that.)


That's exactly the intent. Too many false positives will teach people to ignore the red marker, so putting it in place before the web catches up would be counterproductive.


> Turning red is more functionality than today, not less.

Firstly, I didn't say it was a regression in functionality, just a regression in security (due to the loss of contrast by removing the green).

Secondly, I actually did think the red was currently in place. I'm not a Chrome user, so it seems surprising to me that there is no warning currently in Chrome for entering data into insecure forms!? Firefox currently shows a red slash in the URL and an additional popup on the form input element itself which appears on focus rather than only after you've actually entered text. The Chrome team has such a good reputation for improving security on the web, so it seems bizarre to me they'd be behind on something so simple (and also not even attempting to catch up).


It's not easy to explain to your parents either. "Don't look for the padlock any more, just look for a Not Secure warning... which may not appear until AFTER you put in your credit card details... at which point it's not really helped you at all has it. Huh. I have no idea what Google was thinking here. Better call the bank and cancel your credit card every time Not Secure pops up."


To be fair, the "Not Secure" message is always there—it only changes color to red once you start typing into a form field. I do think it should be red by default, because the gray is not obvious enough, but I don't think it's necessarily any harder to explain than the green padlock.


It's kind of weird how it works. Users will be looking at the field they are typing in (or even at the keyboard, for some users). Then they'll look down for the next field, and so on until they reach submit. I think users will often miss the color change entirely.

I think it'd make more sense to just make it red if there are any input fields on the page at all.


Firefox shows an insecure warning in the insecure form field itself so the user will see it.


An attacker could probably emulate a form in JavaScript without using any actual input fields.

Displaying the indicator on user input seems much more difficult to bypass.


"If it says Not Secure, it's definitely not secure. If it doesn't say that, it's probably not secure anyway."


No, as of chrome 68, http will be marked as not secure without waiting for any user interaction at all. The change for http pages outlined in this post is to upgrade that from the grey info message to a red warning.


The more I think about this the more I'm actually on board with it. Users can't assume that a site is safe anymore because of a green padlock because HTTPS is so easy/cheap to implement, and this is a step toward re-training users to use a different means to confirm they are entering information into a correct page. There's too much risk for the green padlock to be used as a false sense of security.

Combine this with marking HTTP sites as explicitly insecure and I fail to see any downsides with this becoming the new norm (assuming other browsers follow suit).


> Users can't assume that a site is safe anymore because of a green padlock because HTTPS is so easy/cheap to implement

I don't think this has meaningfully changed today vs the past as you suggest. HTTPS has been cheap and _relatively_ (for an engineer anyway) easy to implement if you cared for quite some time, even before the advent of free SSL certificate services like Letsencrypt etc.

I certainly don't agree that widespread SSL/HTTPS has somehow devalued the significance of the green padlock as you are implying - the level of security it implies for your in-transit requests is still much the same as it always was, it just happens to be used on many more sites than in days past.

For this argument to hold, we would need to assume that for some reason in the past, only "good actors" of some kind used HTTPS due to its expense/complexity, and therefore the padlock was somehow certifying their good intent. This has never been the case, and HTTPS (perhaps with some small degree of exception for the newer Extended Validation certs...) continues to really only indicate your requests will be encrypted in transit only.


There's nothing lost from keeping the green lock and adding the red "not secure" warning in place of having no warning at all.

There's no reason to maintain a "no-indicator" state, UX or otherwise. Keep the green lock for sites which implement it correctly.

peterwwillis makes a good point in another top level comment -- and my own research in securing products suggests much of the same, specifically that many users don't assume things are secure. I'd extend that argument by stating most users don't assume things are insecure either unless the security of the system is specifically called out.

In other words, it's generally out-of-mind.

Considering this, you should be keeping green locks and red warnings going forward and never having a no-warning state except perhaps on private networks where the security of the connection is to be determined by the team which owns that private network. There's an entire industry of "Secured By" badges which CAs managed to market to draw attention to connection security in a space where standard indicators should be serving that role based on standard--not marketing--metrics.

(This will probably prepend a later letter I'll write up about Google Security's unilateral changes the past few years. This and HPKP are two that come to mind.)

Edit: A good point was raised by twitter user @akanygren on this topic, notably that since there's no assurance other vendors will share this same approach, this will immediately sow UX confusion. The corollary I'd attach to that is that with Chrome as a plurality player eliminating a decision-point in their UI, other browsers now gain a marketable competitive advantage by labeling secure connections as such and are dis-incentivized to follow suit because they stand to benefit by maintaining some variant of the status quo.

https://twitter.com/akanygren/status/997178362669010944

https://twitter.com/eganist/status/997186215249137665


There is actually appeal to having no indicator: the absence of an indicator doesn't inadvertently suggest anything about the site's overall security. The padlock icon we're so accustomed to can create a false sense of security, since everything about the site other than the connection may actually be insecure.

You may, for instance, be transmitting credentials that may be stored in plain text, be easily accessible to third-parties or may even be handed directly to black markets. The padlock itself says nothing about any of those details.

A better approach to no icon at all, I think, is simply “secure connection” wording. Trying to distill a lot of complexity and nuance into a single icon is what’s ultimately problematic. It works somewhat well, but not quite well enough.


> A better approach to no icon at all, I think, is simply “secure connection” wording.

The problem is, most people won't understand the difference between that and just "secure"; arguably, the people who understand what "secure connection" means are those for whom this change doesn't matter.


It's a fair point. I can't think of any perfect solution, only solutions that are less bad than others at conveying the 'Right Thing'.


Is this also going to cause Chrome to attempt HTTPS before falling back to HTTP?

That would be the more useful change. If HTTPS fails (due to hostname mismatch for example), fall back to HTTP and warn, but trying HTTPS first would be fantastic and eliminate one round trip for many of my sites that are not in the HSTS preload...


Fail-open security is useless against any actual attack. The attacker would just block the TLS connection then intercept the resulting plain HTTP request.

Google already defaults to linking HTTPS versions of pages in its search results, which is a much better solution IMO. (Attacker can't intercept the connection to Google because of HSTS, and can't intercept the connection to the site you visit because it's a direct HTTPS link.)


> Fail-open security is useless against any actual attack.

Except that if the user is not visiting for the first time, then HSTS comes into play if the security headers are set, and browsers can let the user know something is amiss.

We both agree that fail-open security is useless, but right now Chrome and other browsers default to going to port 80 for first visit instead of port 443. All I am asking is that the default becomes 443, and the fall-back, for first visit, is port 80.

This way I can stop running web servers on port 80 that do nothing but send a 303 with https as the protocol for that particular domain.
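For reference, the redirect-only server being complained about is tiny. A minimal Python sketch (the fallback hostname is a placeholder; a real deployment would more likely be a couple of lines of nginx or Apache config):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    """Answer every plain-HTTP request with a 303 to the HTTPS origin."""

    def do_GET(self):
        # Redirect to the same host and path, but over HTTPS.
        host = self.headers.get("Host", "example.com").split(":")[0]
        self.send_response(303)
        self.send_header("Location", f"https://{host}{self.path}")
        self.end_headers()

    do_HEAD = do_GET  # HEAD requests get the same redirect

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

# To serve for real: HTTPServer(("", 80), RedirectToHTTPS).serve_forever()
```

Note that a Strict-Transport-Security header can't help on this side: per RFC 6797, browsers ignore HSTS delivered over plain HTTP, so the header has to come from the HTTPS server on port 443 — which is exactly why the first-ever visit stays exposed to an attacker who blocks the redirect.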


Chrome and Firefox use HTTPS first for sites in their preloaded HSTS lists. (I don't know if Edge has a preloaded HSTS list.)

https://blog.mozilla.org/security/2012/11/01/preloading-hsts...


> but trying HTTPS first would be fantastic and eliminate one round trip for many of my sites that are not in the HSTS preload...

It's almost as if you didn't read my post.

Unless I can go register all my domains to be in the HSTS preload, I'd much rather Chrome tried HTTPS first, so I don't have to run a web server on port 80 that sends a 303 with https as the protocol.


Kind of not understanding the “why” here.

If they’re going to change a status indicator during form entry, the indicator needs to be right in the user’s face (i.e. floating over the cursor). Anything else might be missed. Even so, pointless dynamics; mark the page red always, and stop trying not to offend those implementing blatantly-unsafe forms.

Also, a minimum of a green default checkmark seems in order, even in this wonderful secure-by-default future. Continue to train people to demand better security and look for safer-web indicators.


And with each new generation of security features, we get bigger, better, more in-your-face "safer-web indicators", a veritable Lensman Arms Race of iconography.


Likely next step: mark only pages that are present in Chrome's preloaded HSTS lists as secure (including their new premium .app domain).


Most people don't use very many sites. We do, yes, but most will revisit the same sites over and over, and they'll be entering their personal/card details into fewer still. Why not have a browser popup each time you enter this info into a site for the first time? And a warning if the site is not on a browser-supported whitelist? Don't "phone home" with people's sites; browsers could maintain white/black lists.

Making users judge whether the site they're on is dodgy is like expecting people to only download code in source form and check it before compiling it. If my mum uses the same 4 shopping sites and one bank, it shouldn't let her log into another one - fake or not - without at the very least a "wtf are you doing" warning.


What's "premium" about .app?


I finally trained my mom to look for the green lock and verify the domain name. Now Chrome is gonna eliminate the green?


eliminating useful visual information is high style


It's not eliminating it per se, it's just flipping the logic: Instead of HTTPS sites having a green lock, now HTTP sites will show a "Not Secure" message. It's a different visual cue, but the same idea.


It's not useful, it's grossly misleading. People think it means the site is safe when that is not the case.


you're right of course. <form>s should be eliminated entirely


Green lock is probably easier to describe to someone not proficient as a web user but you can still just say, "Make sure it doesn't say Not Secure and verify the domain name".


I'd support this if every insecure page had a prominent, red, "Not Secure" message.

But unless you're entering a password, there's no color at all.

The difference between secure and insecure is far too subtle for non-experts like my mom.


This seems like a good move long-term. It always did seem a little misleading to me for Chrome to be labeling a site as "Secure" when really all it's talking about is the security of your connection to the site.

I wonder how they're planning to handle EV certificates. They don't seem to be mentioned anywhere in this post. I seem to recall at least one person at Google advocating for removing EV indicators entirely.


EV certificates are nothing other than a way for the CAs to make money now that people have realized that the CAs' core product is worth approximately $0.

Argument 1: this person was able to get an EV cert for "Stripe, Inc. [US]", an entity they registered in Kentucky, no relation to the Stripe, Inc. of California whose website is stripe.com. They were not able to get a certificate for stripe.com. (The CA revoked it, and then later apologized for revoking it because there was no reason by their policy to do so.) https://stripe.ian.sh/

Argument 2: the actual website for MasterCard's SecureCode is https://www.mycardsecure.com/ , whose EV cert is "Arcot Systems LLC [US]". The fact that it has a meaningless domain name is in no way fixed by it having a meaningless (but technically accurate, Arcot is the contractor for SecureCode) EV cert. How do you know you're actually supposed to type your personal information there?

Argument 3: the web's security model is based on origins (domain names), not on EV certs. If Stripe switches tomorrow to stripeiscool.com and a domain squatter gets stripe.com, my browser will still send cookies to stripe.com, even though it no longer has an EV cert, and it certainly won't send cookies to stripeiscool.com, even if it has an EV cert with the same organization. Even if EV were a good idea in the abstract, it lacks a plan to make it work with the web as actually deployed.


Counter-argument: domain names are notoriously bad at conveying the identity of real-world entities. How is a user supposed to know that my-bank.com is the domain for their bank, as opposed to mybank.com or my-bank.org? EV certs can serve much the same role that the "Verified" indicator does on social media sites; it provides some assurance that the name you're seeing on screen really _does_ belong to the person or company you think it does.

There are obviously issues with EV as it's currently implemented, but I believe the solution to that is to fix those issues, not to eliminate the indicator entirely.


A better long-term solution (although maybe too long-term) is to move to some auth mechanism that knows about your origin. When you get a bank account, you should get a U2F device (which should cost <$20, which should be far less than the amount the bank is already spending on securing your account) that you enroll then and there, and so if you end up at my-bank.org by mistake, it doesn't matter, your U2F device doesn't have a shared key with the origin and you can't log in. Or do the same sort of thing with WebAuthn. Or you scan a QR code with your phone and get a link to the bank's mobile website and bookmark it, or whatever.

Attack 1 makes me worry about whether this is solvable at all. There's a lot of "First Bank and Trust"s out there, and they'd all need an EV certificate for the same string, so avoiding the attack seems genuinely hard. Like social media (or curated app stores), the only way it can really work is if the "verified" marker creates first-class and second-class entities: the entities approved by the powerful as reasonable to do business with, and the entities that aren't. Someone makes a decision about which First Banks and Trusts are "real" and which ones aren't, and you can't appeal it.


That’s interesting. I wonder if it would help to show a couple more certificate fields alongside the authentic-sounding name, or if it would all just be equally ignored.


All equally ignored. Moreover, as the original comment points out, the browser doesn't care about any of it. A typical web site involves dozens, even hundreds of resources, each of which might have involved a separate connection and a separate certificate. Images, scripts, style, and of course dynamic content. But the EV display summarises just one: the one for the original document fetch.

A site you're looking at with an EV cert you're happy with might have an image that's supposed to say "Sorry, we're closed for maintenance" with the site's logo. But bad guys, who have a DV cert have substituted "Welcome, please use our new login system". They've also replaced the cool live map and fancy animated carousel below that with their "new" login form. So the site has the EV visuals, but bad guys who broke only DV can subvert it almost totally.


AFAIK you can configure this in chrome://flags to see what it looks like.

Do you know the thinking behind getting rid of the EV indicator? Is EV considered useless?

Edit: Just as I posted this, someone else answered my question: https://news.ycombinator.com/item?id=17093737


I find it somewhat worrisome that the hassle of obtaining and maintaining certs is now handwaved away by saying "we now have Let's Encrypt".

I think it's important to keep in mind that even if their certificates are free, they are still a permanent dependency of your site. Moreover the fact that they can offer certificates for free works because of the concerted industry effort going on currently to move the whole web to https.

They are well-supported, but for sites with no budget for commercial certs they will remain a single point of failure. If, after the transition to 100% https is completed, they should not be able to offer free certs anymore, what exactly would be plan B?


Chrome Android shows this text if you click the lock:

"Your information (for example passwords or credit cards) is private when sent to this site"

That sounds so wrong because it implies you can trust the site... but writing something clearer is hard!


Perhaps "This site will securely and privately receive your information"


I still am against hiding the protocol. Color https green and http red if you want to (though I think http deserves a less condemnatory color like yellow).

Hiding it, unhiding it, is too much magic. Yes, I believe web users need a certain level of technical understanding, so that they are savvy enough when they see it elsewhere, like in an email or some other document.

I am for hiding the inner workings of things, but I believe the URL is part of the user interface.


>Users should expect that the web is safe by default

In the age of Let's Encrypt I wouldn't say that HTTPS implies safety. I can register a phishing site in 5 minutes with fake credentials, then configure Let's Encrypt to get that little padlock icon. No questions asked.

HTTPS only guarantees that your traffic is not spied on or modified en route, but for ordinary users that wasn’t an issue in the first place.


"Encrypted" and "Not Encrypted" would be more accurate labels than "Secure".


They should have used the word Encrypted instead of Secure; "Secure" is misleading. Now I think they should at least keep the padlock green. Also, the next logical step should be blocking JavaScript on http sites by default.



