I was surprised this was still an issue, as I heard about IDN homograph attacks years ago.
Now I'm on Firefox and was fooled by the link. Thanks for the about:config.
That way we could still show the real characters and not the punycode.
Some kind of clear, visual indication does make sense to me, though.
> This 'users are idiots, and are confused by functionality' mentality of Gnome is a disease. If you think your users are idiots, only idiots will use it.
I buy that this isn't as easy as the old mechanism (click on the TLS lock icon next to the URL) but...
In the last few years, I've been somewhat pleasantly surprised at how much more attention my non-tech friends and acquaintances are paying to the topic of security.
No, they don't have it all figured out. But they are aware, and they are demonstrating some "pause and check".
How nice it would be to be able to point them to an obvious and straightforward representation of the cert chain.
Instead, we get continued fussing with the "lock UI" and further and further hiding away of the relevant details.
The dev tools are nice, and I like using the multiple profiles both for dev (logging in as multiple users) and for segregating my banking cookies. Otherwise I'd drop Chrome for Safari.
(I also wish they'd expose certificate metadata to plugins, so I could write my own cert checking/pinning code.)
My brother's been recommending Opera for a long time. I think it's time I listened to him and gave it a try.
If I remember correctly, one of Chrome's objectives from the beginning has been to remove a lot of functionality and clutter (oh yeah, the menu bar was another thing Chrome did not have, and that drove some people crazy).
Take OCSP, for example. Chrome disabled OCSP because in optional (soft-fail) mode it wasn't a perfect solution, and they didn't want to enable required mode because that could confuse the poor users. After Heartbleed, Chrome was only able to bundle a tiny fraction of the revoked certificates into its CRLSets, leaving millions of domains still vulnerable to blacklisted certificates. On Chrome you're just screwed: there's no way to turn OCSP into required mode to prevent such attacks.
On Firefox, however, you're able to force OCSP into required mode (security.OCSP.require in about:config), making it actually useful if you're sophisticated enough to understand what a failure may mean and you're actually able to block all of those revoked certificates. A huge security benefit in such a case.
Well, please, do explain where I'm mistaken here - is OCSP required not the most secure option? And it was rejected simply due to usability reasons on Chrome?
And the option to do so anyways for power users doesn't exist because Chrome is a browser for ease of use?
I'm not saying it's entirely ludicrous - for a good chunk of the user base CRLs are at least decent - but having an option for power users who desire more security should be available. And it's mandatory for any browser that wants to be used by power users.
Stop with the appeals to authority and make an actual argument if you disagree.
Yes, must-staple is the optimal solution, but OCSP required is the best available today.
Look at some of the numbers here: https://www.grc.com/revocation/crlsets.htm
How would this happen? Are you suggesting that network failures are something that shouldn't impact even advanced users who've opted themselves in to the potential pitfalls in order to gain security?
I'm not suggesting hard-fail be the default, I'm suggesting CRLs be the default with hard-fail as an option for advanced users who know how to resolve problems when they do occur.
Thanks all for pointing that out.
The ability to answer arbitrary HTTP requests on a server is sufficient to get a publicly-trusted certificate from any number of CAs. You do not need the credentials for the CA they've been using. Just create a new account. Future extensions for the DNS CAA record might be able to help against this if you keep your credentials away from your web server.
I'm saying you have to know what response to GIVE, don't you? You can't just give a random response...
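To flesh that out: in ACME's HTTP-01 validation (RFC 8555) the CA tells you exactly what to serve. It hands you a token, and the required response is that token joined with a thumbprint of your own account key, which is why a brand-new account works fine. A minimal sketch of the key-authorization computation (the token and JWK values below are illustrative, not real):

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # base64url without padding, as ACME requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def key_authorization(token: str, account_jwk: dict) -> str:
    # RFC 8555 key authorization: token '.' JWK thumbprint (RFC 7638).
    # The thumbprint is SHA-256 over the key's required members,
    # serialized in lexicographic order with no whitespace.
    canonical = json.dumps(account_jwk, sort_keys=True, separators=(",", ":"))
    thumbprint = hashlib.sha256(canonical.encode()).digest()
    return f"{token}.{b64url(thumbprint)}"

# Illustrative (not real) token and key values:
jwk = {"e": "AQAB", "kty": "RSA", "n": "0vx7..."}
print(key_authorization("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA", jwk))
```

Anyone who can make the server answer `http://<domain>/.well-known/acme-challenge/<token>` with that string passes validation, which is why the ability to control HTTP responses is sufficient on its own.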
However, he completely ignores the fact that some users may be sophisticated enough to handle the hard-fail scenarios mentioned. Well, I am. And so are many others. So what am I supposed to do if I want security and can handle hard-fails?
Not use Chrome seems to be the answer. Chrome fails to account for power users and developers who would rather have improved security and functionality.
The flexibility this brings you in privacy gains alone is immense. Just try to do something as basic as turn off WebRTC in Chrome and you'll see what I'm talking about. Firefox is the only viable answer if you care about privacy and security and have the ability to take things into your own hands.
Fact is, exploit price is a crappy way to do such a comparison, there are too many other factors that can play into it - like what browsers companies and governments have deployed.
When I say Chrome is more secure than Firefox, I am making a banal statement that most people in software security would roll their eyes at me collecting karma to say. As with the OCSP discussion downthread, I have the feeling that by starting a discussion of runtime security, allocator design, CFI, JIT-spraying, and sandboxing, I'm just going to give you more ammunition to make arguments about things you haven't read about.
So, here's the deal: I've set you up beautifully to write the comment where you demonstrate that you've got some subject matter expertise in browser security and exploit development and further show that I'm making an ass out of myself by arrogantly assuming you don't know what you're talking about. I'll give you a hint: it's even easier to write that comment than you might think, because I do not myself work in browser security, so if you do even a little bit you should be able to nail this easily. I invite you to write that comment now.
Otherwise: I'm done. Stick with Chrome.
Personally, I value privacy options and the ability to disable many potentially vulnerable features altogether over a slight possible mitigation for a zero day exploit.
Well, at least if you last had Security selected in the dev tools.
"There is a new message in your inbox on MijnOverheid. Please visit MijnOverheid to read this message."
And every time they also warn that they will never put a link in any email.
It would be great if more (all) websites started doing this.
If I'm clicking a link to unsubscribe you're not going to get me to start entering personal information.
To say nothing of unpatched/non-updated browsers.
- if you install a malicious browser add-on (or if the guy behind one you already have gets phished and pushes a malicious update).
- if any other part of your system/browser ever gets compromised.
- if you mistype the address and your browser searches on Google/Bing instead, you can be phished from the ad links at the top.
The only hurdle left would be TOTP or Yubikey.
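Worth noting that a TOTP code is just an HMAC over the current time window, so a realtime phishing proxy can relay a freshly typed code before it expires; it raises the bar without removing it (unlike an origin-bound Yubikey). A sketch per RFC 6238, using the RFC's published test secret:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, then dynamic truncation.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    # RFC 6238: HOTP with the counter derived from Unix time.
    t = time.time() if at is None else at
    return hotp(secret, int(t // step))

# RFC 6238 test secret; at t=59s the time counter is 1.
print(totp(b"12345678901234567890", at=59))  # → 287082
```

The code a user types is valid for anyone who can replay it to the real site within the time step, which is exactly what a live phishing proxy does.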
The article is about punycode. Using it, you can have a site name that displays as 'apple.com' even though the domain is really made up of various Unicode characters.
So even if you were exceedingly careful and made sure not to fall for googel.com and goog1e.com, you would still fall for the visually correct google.com.
If you're in Firefox: open about:config, search for "puny", and flip network.IDN_show_punycode from false to true. Now when you hover over this URL you'll see the untranslated punycode, which is the real URL.
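You can also inspect what hides behind the punycode yourself. A quick sketch using Python's stdlib codecs, applied to the demo domain from the article (xn--80ak6aa92e.com):

```python
import unicodedata

# The demo domain's IDN label, minus the "xn--" ACE prefix.
label = "xn--80ak6aa92e"
decoded = label[len("xn--"):].encode("ascii").decode("punycode")

# Print each character with its code point and Unicode name.
for ch in decoded:
    print(ch, hex(ord(ch)), unicodedata.name(ch))
```

Every character comes back from the Cyrillic block, even though the rendered string is near-identical to "apple" in many fonts.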
Edited: you're right, I thought you meant just checking that it's HTTPS.
I wonder what Facebook's heuristic there is, since they don't seem to block all punycode URLs. Maybe something about character distribution (all latin-like characters -> probably phishing)?
Edit: Actually, it might not be a block at all. I think it might just be a bug in Facebook's URL parser, since when pasted into messages the automatic hyperlink is set to http://invalid.invalid.
They have a horrible track record of horrific bugs due to negligence, stupid responses to vuln reports ("I couldn't reproduce your website launching calc.exe on my mac") and more generally a lot of senseless security decisions throughout the app last I used them (e.g. two factor defaulting to disabled when logging in from the mobile app -- wut?).
Disabling the rendering of punycode actually isn't helpful when you want to visit a domain that legitimately uses the Cyrillic alphabet and want to be sure of it, but someone has registered a similar-looking domain: the punycode for both looks equally like gibberish.
Some suggestions, maybe good, maybe bad:
* It may be as simple as adding a character set into the address bar
* Flag a warning if the domain name alphabet doesn't match the page content (as would be the case in this example) or maybe something else
Firefox 52.0.2 & Linux here, and the "L" in the URL looks like a capital I with serifs - quite noticeable. Perhaps different on Windows/macOS though.
<sarcasm>I guess I need to "upgrade to a modern browser" for websites to work correctly?</sarcasm>
As an aside, I still do not understand how "modern" browsers evolved to hiding portions of the URL or using a phony address bar i.e. "omnibox" to the right of the real address bar.
In the first case, it seems to offer no benefit other than to hide important details.
In the second case, it seems so overtly deceptive for newcomers to the www that I am surprised they could pull it off.
Maybe these things have changed recently as these monster programs are constantly changing. If so, pardon my ignorance.
Is it not true that users who do not understand the basics of www usage e.g., what is a domain, a URL, etc. are always going to be at risk of manipulation?
It is, and the general attitude seriously irks me. Technology is complex, but if they're hiding the actual details behind half-assed, inconsistent, lying abstractions, they're not helping anyone - the user will never develop a consistent mental model of what's going on if every other piece of software lies about some parts of it.
> Indeed. Our IDN threat model specifically excludes whole-script homographs, because they can't be detected
> programmatically and our "TLD whitelist" approach didn't scale in the face of a large number of new TLDs. If you are
> buying a domain in a registry which does not have proper anti-spoofing protections (like .com), it is sadly the
> responsibility of domain owners to check for whole-script homographs and register them.
> We can't go blacklisting standard Cyrillic letters.
Fixed in Chrome 58, so I wonder what the significant difference is.
The token's crypto takes the page's domain into account.
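That origin binding is the crucial part: the signed data includes the domain the browser actually loaded, so an assertion produced on a phishing domain doesn't verify for the real site. A toy illustration of the idea (real U2F/WebAuthn uses per-site public-key signatures rather than an HMAC, and these names are made up):

```python
import hashlib
import hmac
import json

def sign_assertion(device_key: bytes, origin: str, challenge: str) -> bytes:
    # The device signs over client data that embeds the browser's origin,
    # so the relying party can reject assertions made for another domain.
    client_data = json.dumps({"origin": origin, "challenge": challenge})
    return hmac.new(device_key, client_data.encode(), hashlib.sha256).digest()

key = b"per-site-device-secret"
real = sign_assertion(key, "https://apple.com", "abc")
phish = sign_assertion(key, "https://xn--80ak6aa92e.com", "abc")
print(real != phish)  # → True: the signature binds the origin
```

A phishing proxy can relay passwords and TOTP codes, but it cannot make the token sign for an origin the browser isn't actually on.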
It seems very limited in scope, essentially preventing the Unicode display of a domain that only contains the following "Latin-alike" Cyrillic characters: "асԁеһіјӏорԛѕԝхуъЬҽпгѵѡ".
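That check can be sketched roughly like this: if a label consists entirely of characters from that confusable set, show punycode instead of Unicode. This is my own approximation of the Chrome 58 behavior, not its actual code:

```python
# The Latin-lookalike Cyrillic characters quoted above.
LATIN_ALIKE_CYRILLIC = set("асԁеһіјӏорԛѕԝхуъЬҽпгѵѡ")

def looks_wholly_confusable(label: str) -> bool:
    # A sketch of the heuristic: a label made up entirely of
    # Latin-lookalike Cyrillic characters is suspicious and should be
    # displayed as punycode rather than as Unicode.
    return bool(label) and all(ch in LATIN_ALIKE_CYRILLIC for ch in label)

print(looks_wholly_confusable("аррӏе"))   # → True, the spoofed "apple" label
print(looks_wholly_confusable("привет"))  # → False, ordinary Cyrillic word
```

This also shows why legitimate Cyrillic domains are unaffected: almost any real word contains at least one letter outside the confusable set.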
Somehow I was expecting that Comodo was the culprit behind the valid cert, but I forgot how easy it is to ask for certs like this. Sometimes I think that Let's Encrypt is doing more harm than good.
Furthermore, the real problem is that the green lock is displayed for all HTTPS sites rather than only those with Extended Validation. HTTP should be red, HTTPS should be plain, and EV HTTPS should be green with a padlock.
It's a failure in browser UI.
It's not guaranteed to work, I didn't even test it, but it's perhaps a starting point for whatever you want to do with it.
This isn't a full solution yet anyway... a Russian user could still be phished for example, because their locale would match.
> This may or may not be the site you are looking for! This site is obviously not affiliated with Apple, but rather a demonstration of a flaw in the way unicode domains are handled in browsers.
Here is the archive version:
and the "apple.com" link: