You think you can't be phished? (hackaday.com)
192 points by ribasushi 185 days ago | hide | past | web | 134 comments | favorite



Firefox: open about:config, set network.IDN_show_punycode to true. Next time you open the page, the address bar will show https://www.xn--80ak6aa92e.com/ instead of https://www.apple.com/. Better yet, always type in or bookmark pages where you might enter sensitive information.
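For the curious, the mapping between the two forms is mechanical. Here's a sketch using Python's stdlib punycode codec to decode the label (the leading "xn--" is just the IDNA marker; the codec takes what follows):

```python
# Decode the suspicious label with Python's stdlib punycode codec.
import unicodedata

label = b"80ak6aa92e".decode("punycode")
print(label)  # renders like "apple", but every character is Cyrillic

# Show what the characters actually are.
for ch in label:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
```

Every code point comes out of the Cyrillic block, which is exactly why the rendered string is indistinguishable from the Latin "apple" in most fonts.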

I was surprised this was still an issue, as I heard about IDN homograph attacks years ago.


IDN homograph attacks are old, but - as the original article shows - people are still vulnerable to this when you manage to make a full name homograph entirely within one language in Unicode. The only fixes for it that come to my mind are dropping IDN altogether, disambiguating homographs on IDN level, or... disambiguating Unicode.


Ultimately the problem is related to Unicode. So let's disambiguate that.


This is strange. Maybe 7 years ago I tested this in all browsers and the only browser that didn't show the punycode was Internet Explorer.

Now I'm on Firefox and was fooled by the link. Thanks for the about:config.


Maybe the non-ascii characters could be coloured somehow, like with a different colour background.

That way we could still show the real characters and not the punycode.


But how would one mass-educate all the non-technical users on this? It'd be a dirty little inside secret.


If you color ASCII with one color and all the rest of characters with another, there still can be cross-charset spoofing. Just not if ASCII domain is the one being spoofed, but the whole point of IDN was to enable people to create non-ASCII domains, so the "solution" that does not work for all non-ASCII domains is not a solution that is compatible with IDN.


I mentioned that in the last thread we had about this. You'd have to be careful with the palettes for the colorblind, and the blind would need screenreader updates, I guess.

Some kind of clear, visual indication does make sense to me, though.


Why would Mozilla think it would be a good idea to set the default to false... at least for English-language locales?


The fact that I have to open developer tools to inspect the cert on Chrome is infuriating.


I'm reminded of a Torvalds quote from about a decade ago:

> This 'users are idiots, and are confused by functionality' mentality of Gnome is a disease. If you think your users are idiots, only idiots will use it.


For clarity, the feature isn't removed. You open the developer console (it has like nine keys bound to do this, F12 is the easiest), select the "security" tab, then hit the "view certificate" button.

I buy that this isn't as easy as the old mechanism (click on the TLS lock icon next to the URL) but...


But the point was/is to get "normal" users to pay attention. This just buries those details further away.

In the last few years, I've been somewhat pleasantly surprised at how much more attention my non-tech friends and acquaintances are paying to the topic of security.

No, they don't have it all figured out. But they are aware, and they are demonstrating some "pause and check".

How nice it would be to be able to point them to an obvious and straight-forward representation/presentation of the cert chain.

Instead, we get continued fussing with the "lock UI" and further and further hiding away of the relevant details.

Boo.


What are the other keys? I know about F12, but I have it mapped to something else. I'm using Gnome.


Any of Ctrl-Shift-{c,i,j} will pop up various tabs of the developer tools. There may be others I'm not aware of.


Didn't know about F12 - thanks for the tip!



Yes, it is infuriating. I didn't even realize it was in the dev tools until today. I thought they'd just dropped the functionality altogether.

The dev tools are nice, and I like using the multiple profiles for both dev (logging in as multiple users) and segregating my banking cookies. Otherwise I'd drop chrome for safari.

(I also wish they'd expose certificate metadata to plugins, so I could write my own cert checking/pinning code.)


It was one of those minor Chrome changes that shouldn't bother me, but ends up driving me slightly insane.


The dumbing down of Chrome is seriously driving me away. There used to be a ton of power-user-friendly options easily available. Now they're all hidden in favor of simplicity.

My brother's been recommending Opera for a long time. I think it's time I listened to him and gave it a try.


Interesting assertion. Back in 2008 when Chrome was released, that was exactly the same feeling people had about it. I remember some of us were sad to see the search bar and address bar combined, along with other stuff that Firefox, Explorer and other browsers had.

If I remember correctly, one of Chrome's objectives since the beginning has been to remove a lot of functionality and clutter (oh yeah, the menu bar was another thing that Chrome did not have, and it drove some people crazy).


There's a big difference in attitudes between browsers here.

Take OCSP for example - Chrome disabled OCSP because in optional mode it wasn't a perfect solution, and they didn't want to enable it in required mode because that could confuse the poor users. After the Heartbleed bug, Chrome was only able to bundle a tiny fraction of the revoked certificates into its CRLSets, leaving millions of domains still vulnerable to blacklisted certificates. On Chrome you're just screwed: there's no way to turn OCSP into required mode to prevent such attacks.

On Firefox, however, you're able to force OCSP required, making it actually useful if you're sophisticated enough to understand what a failure of it may mean and you're actually able to blacklist all of those certificates. A huge security benefit in such a case.


Without widespread adoption of a must-staple extension, OCSP is cosmetic. The decision to disable it didn't come from simplicity fascists; it came from people like Adam Langley. AGL isn't right about everything, but if you take the position that something he does with Chrome TLS is ludicrous, my default will be that you don't understand the issue as well as he does.


> Without widespread adoption of a must-staple extension, OCSP is cosmetic. The decision to disable it didn't come from simplicity fascists; it came from people like Adam Langley. AGL isn't right about everything, but if you take the position that something he does with Chrome TLS is ludicrous, my default will be that you don't understand the issue as well as he does.

Well, please, do explain where I'm mistaken here - is OCSP required not the most secure option? And it was rejected simply due to usability reasons on Chrome?

And the option to do so anyways for power users doesn't exist because Chrome is a browser for ease of use?

I'm not saying it's entirely ludicrous - for a good chunk of the user base CRLs are at least decent - but having an option for power users who desire more security should be available. And it's mandatory for any browser that wants to be used by power users.

Stop with the appeals to authority and make an actual argument if you disagree.


I don't understand the questions you're asking here, but, no, OCSP wasn't disabled simply due to usability reasons, but rather because it did not work.


It works just fine if you require an OCSP response and hardfail. Soft-failing is for usability and it's the cause of the problem.

Yes, must-staple is the optimal solution, but OCSP required is the best available today.

Look at some of the numbers here: https://www.grc.com/revocation/crlsets.htm


Soft failing isn't for "usability". OCSP simply doesn't work in situations where you would otherwise soft-fail, not because of any problem with the certificate, but because the protocol itself does not work.


How so? Replay might be a bit of a concern, but it's fairly time limited. The other stuff cited by Langley is just down to usability alone.


You are using the word "usability" to describe a situation in which extremely popular sites that are themselves doing everything right from a security perspective simply stop working --- because OCSP does not work. This is a silly conversation, and I'm opting out of it now.


> simply stop working

How would this happen? Are you suggesting that network failures are something that shouldn't impact even advanced users who've opted themselves in to the potential pitfalls in order to gain security?

I'm not suggesting hard-fail be the default, I'm suggesting CRLs be the default with hard-fail as an option for advanced users who know how to resolve problems when they do occur.


Edit: I'm wrong. Removing this comment regarding why certificate revocation checking is still useful, because someone pointed out you can create a new account to get a new certificate from a CA, and I didn't realize it was that easy to just sign up for a new account. (I was thinking of EV certificates mostly so I didn't think of this attack vector at all.) See [1] for background.

Thanks all for pointing that out.

[1] https://www.imperialviolet.org/2014/04/19/revchecking.html


> And to install a specially formed page on the site, you still need to be able to GET a copy of that page to install in the first place, and how the hell are you going to do that without ALSO stealing their CA login credentials, which I'm pretty damn sure will NOT be stored on every single random server you hack?

The ability to answer arbitrary HTTP requests on a server is sufficient to get a publicly-trusted certificate from any number of CAs. You do not need the credentials for the CA they've been using. Just create a new account. Future extensions for the DNS CAA record[1] might be able to help against this if you keep your credentials away from your web server.

[1]: https://datatracker.ietf.org/doc/html/draft-ietf-acme-caa
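The validation flow described above is essentially ACME's http-01 challenge (RFC 8555): the CA hands you a token, and whoever can serve it back at a well-known path on the domain proves control. A minimal sketch follows; the token and thumbprint values are made up, and a real ACME client would of course get them from the CA:

```python
# Minimal http-01-style responder (in the spirit of RFC 8555). Anyone who
# can make the server answer at this well-known path can satisfy the CA.
import http.server
import threading
import urllib.request

TOKEN = "tok-1234"                      # hypothetical token issued by the CA
KEY_AUTH = TOKEN + ".acct-thumbprint"   # token + account-key thumbprint (made up)

class ChallengeHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the key authorization only at the CA's well-known path.
        if self.path == f"/.well-known/acme-challenge/{TOKEN}":
            payload = KEY_AUTH.encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), ChallengeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The CA would fetch this over plain HTTP on port 80 of the real domain.
url = f"http://127.0.0.1:{server.server_port}/.well-known/acme-challenge/{TOKEN}"
body = urllib.request.urlopen(url).read().decode()
print(body)
server.shutdown()
```

Note there is nothing tied to any particular CA account here: an attacker with control of the web server can register a fresh account and answer the fresh account's challenge just as easily.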


> The ability to answer arbitrary HTTP requests on a server is sufficient to get a publicly-trusted certificate from any number of CAs.

I'm saying you have to know what response to GIVE, don't you? You can't just give a random response...


The CA tells you what response to give on that new account you signed up.


Thanks, that didn't occur to me since I was mostly thinking of EV certs for some reason.


There's nothing to stop you from creating a new account at the CA of your choice, asking for a validation token, and putting it wherever it needs to be.


I think he's actually right about the attacker being near you most of the time - compromising a data center is much harder than compromising a LAN.

However, he completely ignores the fact that some users may be sophisticated enough to handle the hard-fail scenarios mentioned. Well, I am. And so are many others. So what am I supposed to do if I want security and can handle hard-fails?

Not use Chrome seems to be the answer. Chrome fails to account for power users and developers who would rather have improved security and functionality.


What's your alternative? A customized Chromium fork? Every other browser is markedly less secure than Chrome. If exploit resilience is all you care about, you can theoretically use Edge, but now you're throwing away a lot of important TLS policy stuff that Chrome does and Microsoft is years behind on.


Firefox - they're years ahead on this sort of stuff. Their about:config gives you immense power to configure your browser to behave exactly how you want it to, unlike any other browser around, really.

The flexibility this brings you in privacy gains alone is immense. Just try to do something as basic as turn off WebRTC in Chrome and you'll see what I'm talking about. Firefox is the only viable answer if you care about privacy and security and have the ability to take things into your own hands.


Virtually nobody who works in browser security agrees with you about that. There are privacy wins to be had with Firefox, but they come at the expense of security.


[flagged]


Among other things, Chrome (and even Edge) are lightyears ahead when it comes to sandboxing, and were designed with sandboxing in mind. Mozilla is slowly catching up - they've started shipping some sandboxing features since Firefox 50 - but there's still a long way to go.


Compare the market prices for equivalent vulnerabilities in Chrome and Firefox.


Wouldn't that value mostly be based on popularity? Chrome is vastly more popular than Firefox, so an exploit that reaches a smaller user base is simply worth less.


No. Less popular browsers are also more expensive than Chrome, and the price difference isn't even close to linear. The fact is that Chrome exploits are much harder to write and command a higher price.

Start here:

https://medium.com/@justin.schuh/securing-browsers-through-i...


That link makes no mention of comparative exploit price, is that the correct one? It doesn't even contrast security features with Firefox...

Fact is, exploit price is a crappy way to do such a comparison; there are too many other factors that can play into it, like which browsers companies and governments have deployed.


I'm getting really no indication that you understand the issues here. I am wrong about lots of things, and so is the industry consensus, but when comments show those things happening, they usually don't take the form of "maybe the exploits cost more because Chrome is more popular".

When I say Chrome is more secure than Firefox, I am making a banal statement that most people in software security would roll their eyes at me collecting karma to say. As with the OCSP discussion downthread, I have the feeling that by starting a discussion of runtime security, allocator design, CFI, JIT-spraying, and sandboxing, I'm just going to give you more ammunition to make arguments about things you haven't read about.

So, here's the deal: I've set you up beautifully to write the comment where you demonstrate that you've got some subject matter expertise in browser security and exploit development and further show that I'm making an ass out of myself by arrogantly assuming you don't know what you're talking about. I'll give you a hint: it's even easier to write that comment than you might think, because I do not myself work in browser security, so if you do even a little bit you should be able to nail this easily. I invite you to write that comment now.

Otherwise: I'm done. Stick with Chrome.


Sure, if you'd like to talk about sandboxing and exploit mitigation mechanisms, then Chrome takes the cake. I have really no argument there. But that's not what you were talking about; you jumped to exploit pricing, which seemed a weird choice to me. Forgive me for responding to what you posted, I guess?

Personally, I value privacy options and the ability to disable many potentially vulnerable features altogether over a slight possible mitigation for a zero day exploit.


You just 10 minutes ago accused me of "talking out of my ass" (you later edited the comment) for saying that there were privacy wins to be had in Firefox, but they come at the expense of security. Now you write a comment that pretends that was your position all along, and implying that I somehow argued with that position. This is every asymmetrically-informed message board argument ever. Crazymaking.


Why would they even change that?


I open Safari just for that... It wasn't like that a few versions back.


You open a different browser to save one button press?


It isn't a single button press. Right-click -> inspect -> Security -> View Certificate. Although I have to agree opening a new browser for checking a certificate is hilarious.


F12 > View Certificate

well, at least if you last had the Security tab selected in the dev tools


On macOS, it's the "intuitive" command + option + i , which I can never remember (and then you still have to click on the stupid security tab, which is hidden if you don't have enough horizontal space on the dev tool frame)


It is much worse than the previously convenient click-on-lock-icon -> View Certificate. Even on Windows, F12 is usually an Fn key, so I have to press Fn+F12, which I never knew until the GP pointed it out to me.


What if you just don't click on any links in email? Particularly if they are really important sites. Just accomplish the proposed task another way. For example, if you get an email from Paypal, stating that you need to update a credit card or something, don't click their link, instead open a browser and enter "https://www.paypal.com" yourself, and go into your account information and look for your saved payment methods.

edit: typos


Our government (Netherlands) doesn't provide links in emails. They don't even provide the domain name (which could be turned into a link by the mail client); they refer to the website by its name only.

For example:

"There is a new message in your inbox on MijnOverheid. Please visit MijnOverheid to read this message."

And every time, they also warn that they will never provide any link in any email.

It would be great if more (all) websites started doing this.


Unfortunately they also have a habit of moving the login form around, especially the one related to business taxes.


The only time I click on links in emails is when I've just requested them. Like I've just set up an account and it says it is sending me an email to verify the address. If it is some email out of the blue I would never click the link or open the attachment. I actually enforce the attachment rule by configuring my mail client to only support "save" as an option for attachments.


A good way to get a lot of people with a phishing attempt is the 'Unsubscribe' button in an email.


Could you expand upon this? Do you mean the unsubscribe takes you to somewhere you need to then sign in with credentials to "unsubscribe"?


I've definitely seen websites that require you to login to unsubscribe. It's probably illegal (https://arstechnica.com/tech-policy/2012/09/log-in-to-unsubs...) but I still see it from time to time. Usually I just mark them as spam and don't bother logging in.


It's also super annoying when someone has accidentally entered your email address instead of their own. When you try to unsubscribe, you're asked to log in to an account that you never created.


They could put whatever is most effective at the destination of that URL. Once you're visiting a URL crafted by the attacker, you're already in deep trouble. Often that's all it takes, merely visiting the URL.


If you assume people have the attention span of gnats, I guess.

If I'm clicking a link to unsubscribe you're not going to get me to start entering personal information.


Crafted-content UAF vulns are still occasionally being found in browsers.

To say nothing of unpatched/non-updated browsers.


You don't need a trick URL for those.


The point is that simply clicking a dodgy link can, in some cases, own you. No credential acquisition required.


Yes but what does that have to do with the article?


Are you asking what a dodgy URL has to do with the article?


If they get you to click on a link, it doesn't matter what it looks like in the address bar. So what does a malicious link that could be on any domain have to do with domain spoofing?


If you hover over a link in a browser, the URL of the link will be shown as a tooltip, or in the bottom of your browser window.


On many sites (that have more than a simple newsletter), the email preferences are part of the account settings where you do things like change your password, avatar, possibly manage your billing information and shipping address, etc. You wouldn't want any random anonymous person to be able to access that info, so of course it requires logging in by default. While those systems should ideally have one-click unsubscribe functionality nowadays, there are probably quite a few that don't.


This is a good use for bookmarks, too.


What about links on websites? Do you also not click those?


If it's to my bank or email account, no, for the same reasons.


I meant it more as a rhetorical question: even if you really don't click on any link to any of your important accounts _ever_, you are still vulnerable:

- if you install a malicious browser add-on (or if the guy behind one you already have gets phished and pushes a malicious update).

- if any other part of your system/browser ever gets compromised.

- If you mistype the address and your browser searches on Google/Bing instead, you can be phished via the ad links at the top.


If you have a malicious browser add-on or your system/browser is compromised, you're pretty much pwnt, aren't you?

The only hurdle left would be TOTP or Yubikey.


I always hover over links before I click them to see where they actually lead in the status bar. It is less safe than your suggestion (since if I'm not careful I can fall for "googel.com" or "goog1e.com"), but is still often useful (even just to avoid going somewhere I'm not interested in going, not specifically against phishing).


I think you skipped reading the article.

The article is about punycode. Using it, you can have a site name that displays as 'apple.com' even though the domain is really made up of various Unicode characters.

So even if you were exceedingly careful and made sure not to fall for googel.com and goog1e.com, you would still fall for the visually correct google.com.


The status bar is completely worthless and unreliable from a security standpoint unless you disable JavaScript: JavaScript can intercept your click and navigate you anywhere it wants.


This does not work if the browser translates the punycode: whether the URL is in Gmail or, as in this example, in a web article, hovering over the link shows the decoded form, i.e. apple.com.

If you're in Firefox: open about:config, search for "puny", and change network.IDN_show_punycode from false to true. NOW when you hover over this URL you'll see the raw punycode, which is the real URL.


In mobile Chrome at least, the xn-- URL showed up when long-pressing instead of the apple one. Once clicked, it rendered as apple.


Which is why I look for the organization's name in the browser bar when I'm logging in to a high-value website (Google, Apple, my bank, etc.). For those who don't know the UI for extended validation certificates, see the difference in this screenshot: http://imgur.com/a/ycVwA.


Doesn't work very well if you're on a Google site: http://i.imgur.com/1JMASfH.png


This was posted the other day; checking that it's green isn't enough.

https://www.xudongz.com/blog/2017/idn-phishing/

Edit: you're right, I thought you meant just checking that it's HTTPS.


That is an ordinary DV cert. GP is talking about EV certs, which in theory assures you that Apple, Inc. specifically owns the domain name you're on, not just the operator of the server.


This is very easy to miss, even as a technical user. Still good advice.


Google doesn't use EV certs.


Amusingly, Facebook seems to block this link from being posted publicly (The site reports "There was a problem updating your status. Please try again in a few minutes" - private messages work fine however).

I wonder what Facebook's heuristic there is, since they don't seem to block all punycode URLs. Maybe something about character distribution (all latin-like characters -> probably phishing)?

Edit: Actually, it might not be a block at all. I think it might just be a bug in Facebook's URL parser, since when pasted into messages the automatic hyperlink is set to http://invalid.invalid.


The latest version of Chrome renders the URL in the original punycode, not as apple.com. The browser vendors all use their own algorithm for deciding when to render as punycode vs unicode:

https://www.chromium.org/developers/design-documents/idn-in-...


This is probably the most effective advertising to update to the newest Chrome release that I've ever seen.


If you use a password manager, it most likely won't auto-fill apple.com's passwords on https://www.xn--80ak6aa92e.com/
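That's the core of why password managers resist this attack: they match against the exact stored origin (where the hostname is the browser's ASCII/punycode form), not against what the address bar paints. A toy sketch of that matching rule, with a hypothetical `should_autofill` helper:

```python
from urllib.parse import urlsplit

def should_autofill(saved_origin: str, current_url: str) -> bool:
    # Match on the exact scheme and hostname; for the spoof site the
    # hostname is the raw punycode, which never equals "www.apple.com".
    cur, sav = urlsplit(current_url), urlsplit(saved_origin)
    return (cur.scheme, cur.hostname) == (sav.scheme, sav.hostname)

print(should_autofill("https://www.apple.com", "https://www.apple.com/login"))      # True
print(should_autofill("https://www.apple.com", "https://www.xn--80ak6aa92e.com/"))  # False
```

An exact string comparison is immune to homographs precisely because it never looks at how the characters render.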


(unless it's LastPass)


I may just be missing a joke here, but this isn't true.


If you're on a malicious site, LastPass will likely auto-fill all your passwords into the site. And also give them to the hackers who have root on all their cloud servers.

They have a horrible track record of horrific bugs due to negligence, stupid responses to vuln reports ("I couldn't reproduce your website launching calc.exe on my mac") and more generally a lot of senseless security decisions throughout the app last I used them (e.g. two factor defaulting to disabled when logging in from the mobile app -- wut?).


Pure shameless FUD.


:-(


I really hope things like this do not lead us to a mentality that anything that isn't the Latin alphabet is malware or spam. There are people in the world using non-Latin alphabets and allowing them to have domain names in their native alphabets is a good thing, we just haven't worked out how to do it securely yet.

Forcing the address bar to display raw punycode is actually not helpful when you want to verify a legitimate Cyrillic-alphabet domain: if someone registers a similar-looking domain, its punycode looks just as much like gibberish as that of the site you're looking for.

Some suggestions, maybe good, maybe bad:

* It may be as simple as adding a character set into the address bar

* Flag a warning if the domain name alphabet doesn't match the page content (as would be the case in this example) or maybe something else
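The second suggestion can be sketched with nothing but the stdlib: derive a crude "script" for each character from its Unicode name and warn when a single label mixes scripts. This is an assumption-laden toy (real code should use the Unicode Script property, not name prefixes):

```python
import unicodedata

def scripts(label: str) -> set:
    # Crude script detection: take the first word of each character's
    # Unicode name ("LATIN SMALL LETTER A" -> "LATIN"). A real
    # implementation should use the Unicode Script property instead.
    return {unicodedata.name(ch).split()[0] for ch in label}

spoof = "\u0430\u0440\u0440\u04cf\u0435"  # the all-Cyrillic "apple" from the article

print(scripts("apple"))        # only LATIN
print(scripts(spoof))          # only CYRILLIC: the spoof is whole-script,
                               # so a mixed-script check alone misses it
print(scripts("\u0430pple"))   # both scripts: this one would be flagged
```

This also illustrates why the attack in the article is hard: it is whole-script, so mixed-script heuristics alone can't catch it.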


>This affects the current version of Chrome browser, which is version 57.0.2987 and the current version of Firefox, which is version 52.0.2.

Firefox 52.0.2 & Linux here, and the "L" in the URL looks like a capital i with serifs - quite noticeable. Perhaps different on Windows/macOS though.


It looks like that in monospace fonts, I think; in a rendered sans serif the bars on the I aren't drawn, and it's indistinguishable from a lowercase L.


Text-only browser shows the IDN, not the phished domain.

<sarcasm>I guess I need to "upgrade to a modern browser" for websites to work correctly?</sarcasm>

As an aside, I still do not understand how "modern" browsers evolved to hiding portions of the URL or using a phony address bar i.e. "omnibox" to the right of the real address bar.

In the first case, it seems to offer no benefit other than to hide important details.

In the second case, it seems so overtly deceptive for newcomers to the www that I am surprised they could pull it off.

Maybe these things have changed recently as these monster programs are constantly changing. If so, pardon my ignorance.

Is it not true that users who do not understand the basics of www usage e.g., what is a domain, a URL, etc. are always going to be at risk of manipulation?


> Is it not true that users who do not understand the basics of www usage e.g., what is a domain, a URL, etc. are always going to be at risk of manipulation?

It is, and the general attitude seriously irks me. Technology is complex, but if they're hiding the actual details behind half-assed, inconsistent, lying abstractions, they're not helping anyone - the user will never develop a consistent mental model of what's going on if every other piece of software lies about some parts of it.


As much as I like Firefox, I don't really agree with their reason for not considering this to be a bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1332714

> Indeed. Our IDN threat model specifically excludes whole-script homographs, because they can't be detected programmatically and our "TLD whitelist" approach didn't scale in the face of a large number of new TLDs. If you are buying a domain in a registry which does not have proper anti-spoofing protections (like .com), it is sadly the responsibility of domain owners to check for whole-script homographs and register them.

> We can't go blacklisting standard Cyrillic letters.


I wanted to share the fake Apple URL with my team, and Slack expanded it to https://www.xn--pple-43d.com when I hit send.


Note that the URL people are excited about is the pure-cyrillic xn--80ak6aa92e, not xn--pple-43d.


Even though Safari is behind the curve for many web tech features, I've been pretty happy using it as my main browser for the last few months. On a MacBook Pro, none of the browsers even come close to competing with Safari when it comes to battery life. I still keep Chromium and Firefox installed, and Chromium is my go-to option for web development. But I'm happy to find that Safari has sane defaults when it comes to displaying URLs.


Safari is not fooled. http://i.imgur.com/2PyCWtz.png


Brave is not fooled either, which is interesting because it's based on Chromium.

Brave 0.14.1 libchromiumcontent 57.0.2987.133

Fixed in Chrome 58, so I wonder what the significant difference is.


Brave 1.0.19 on mobile is fooled, it displays apple.com.


Edge isn't fooled.


U2F as a second factor prevents this (and many other) kinds of phishing attacks.

The token's crypto takes the page's domain into account.



The fix is described here:

https://chromium.googlesource.com/chromium/src.git/+/08cb718...

It seems very limited in scope, essentially preventing the Unicode display of a domain that only contains the following "Latin-alike" Cyrillic characters: "асԁеһіјӏорԛѕԝхуъЬҽпгѵѡ".
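That check can be approximated in a few lines: if every character of a label comes from that Latin-lookalike Cyrillic set, fall back to displaying the punycode. A sketch (the character set is copied from the change described above; the function name is my own):

```python
# Cyrillic characters that can pass for Latin, per the Chrome 58 fix.
LATIN_ALIKE_CYRILLIC = set("асԁеһіјӏорԛѕԝхуъЬҽпгѵѡ")

def force_punycode_display(label: str) -> bool:
    # If the whole label is built from lookalikes, a spoof is plausible,
    # so show the raw xn-- form instead of the decoded Unicode.
    return bool(label) and all(ch in LATIN_ALIKE_CYRILLIC for ch in label)

spoof = "\u0430\u0440\u0440\u04cf\u0435"  # the Cyrillic "apple"
print(force_punycode_display(spoof))      # True: display punycode
print(force_punycode_display("москва"))   # False: ordinary Cyrillic word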


Verified by: Let's Encrypt

Somehow I was expecting Comodo to be the culprit behind the valid cert, but I forgot how easy it is to ask for certs like this. Sometimes I think Let's Encrypt is hurting more than doing good.


Given a choice between a safe web by default and nice looking urls, I'd choose the former.

Furthermore, the real problem is that the green lock is displayed for all HTTPS sites rather than only those with extended validation. HTTP should be red, plain HTTPS neutral, and EV HTTPS green with a padlock.

It's a failure in browser UI.


A password manager would be what (hopefully) saves me from this.


I thought about normalising homographs, then I tried out an implementation.

https://github.com/nalply/homoglyph_normalize

The idea is: get confusables.txt from Unicode and generate from that a JavaScript object which does the mapping.

It's not guaranteed to work, I didn't even test it, but it's perhaps a starting point for whatever you want to do with it.
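The shape of that generated mapping is simple; here's a hand-written sliver of it (a handful of entries picked from confusables.txt by way of illustration; the generated table in the linked repo covers far more):

```python
# A tiny hand-picked subset of Unicode's confusables.txt: Cyrillic
# lookalikes mapped to their Latin "skeleton" characters.
CONFUSABLES = {
    "\u0430": "a",  # CYRILLIC SMALL LETTER A
    "\u0435": "e",  # CYRILLIC SMALL LETTER IE
    "\u043e": "o",  # CYRILLIC SMALL LETTER O
    "\u0440": "p",  # CYRILLIC SMALL LETTER ER
    "\u0441": "c",  # CYRILLIC SMALL LETTER ES
    "\u04cf": "l",  # CYRILLIC LETTER PALOCHKA
}

def skeleton(label: str) -> str:
    # Replace each known confusable with its Latin lookalike;
    # leave everything else alone.
    return "".join(CONFUSABLES.get(ch, ch) for ch in label)

print(skeleton("\u0430\u0440\u0440\u04cf\u0435"))  # the Cyrillic "apple" -> "apple"
```

Comparing skeletons rather than raw strings is how you'd detect that a newly registered domain collides with an existing one.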



The good news is that it's hard to find characters that actually look like Latin characters. This attack uses Cyrillic, but there are no Cyrillic characters resembling g or d, so most websites are safe from it. Still, it is incredibly infuriating that the browser doesn't show the punycode unless you go looking for it.


As a temporary workaround, you can install a Chrome extension to block all IDNs from being loaded. More details at https://github.com/jiripospisil/chrome-block-idns


Would a possible solution be to check whether a URL contains 'ambiguous' letters and, if so, transform those letters to the more common versions and then check whether that domain already exists? If it does, give the user a warning.


I was affected in Chrome, then I went ahead and navigated to chrome://help/ and I was no longer affected. Gj, Google.


Non-EV HTTPS should lose its green color. The green padlock should only be displayed with EV certs.


Damn... so I've been vulnerable up until v58 was released a few days ago? WTF?


Well, it's an extremely challenging edge case to address. As noted, mixed-alphabet domains have been decomposed to their punycode for a while, but in the single-alphabet case it's harder to separate malicious from legitimate.

This isn't a full solution yet anyway... a Russian user could still be phished for example, because their locale would match.


Can somebody confirm that the link is safe to open?


Yep, it's just a small message, link out, and screenshot.

> This may or may not be the site you are looking for! This site is obviously not affiliated with Apple, but rather a demonstration of a flaw in the way unicode domains are handled in browsers.


Yes it is safe :)

Here is the archive version :

http://archive.is/UmARF

and the "apple.com" link :

http://archive.is/Udt7X


Made me update to Chrome v58 right away.


This article is too narrowly focused on IDN. appie.com has fundamentally the same problem, despite being pure ASCII.


Disagreed - i is distinct enough from l (if you look at the URL at all)


Capital I looks exactly like an l in my address bar. It's slightly thicker but you'd only notice that if there was an l next to it.


In FF, yes. In Chrome 57, it looked "normal" (it is now decomposed in 58).



