Thanks to the rapid adoption of TLS, HTTP/1.1, and CDN services like Cloudflare, it's hard to actually find IP addresses to test some of these tricks on. However, it seems like at least some of this stuff really does still work.
Here's one for example.com. Without the host header, it still misbehaves, but it at least does something.
Firefox even displays the IP address in dotted notation in the status bar on hover, so it parses the URL before displaying where the link goes.
While nice, I can't help but imagine the mayhem once someone finds a URL parsing bug that leads to remote code execution. Then again, URL parsing is decades old now; the odds that it still contains an RCE are probably low.
A quick test shows that Chrome, Firefox, and IE accept dotted, long, class A, and long-with-auth in decimal, octal, and hex notation. Firefox also accepts long overflow.
None of them accepts dotted overflow, class B, or class C.
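For reference, the alternate notations tested above can be generated mechanically from the 32-bit value of the address. A minimal sketch using Python's stdlib ipaddress module (192.168.1.1 is just a placeholder address):

```python
import ipaddress

ip = ipaddress.IPv4Address("192.168.1.1")  # placeholder address
n = int(ip)  # the address as a single 32-bit integer

print(f"dotted : http://{ip}/")
print(f"long   : http://{n}/")      # one decimal number
print(f"octal  : http://0{n:o}/")   # leading 0 marks octal
print(f"hex    : http://0x{n:x}/")
# "class A" style: first octet dotted, remaining 24 bits as one number
b = ip.packed
print(f"class A: http://{b[0]}.{int.from_bytes(b[1:], 'big')}/")
```

For 192.168.1.1 this prints http://3232235777/, http://030052000401/, http://0xc0a80101/, and http://192.11010305/ for the long, octal, hex, and class A forms respectively.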
Yes, exactly. The @ trick is being removed from browsers because what the author refers to as an interesting trick has been used for phishing and other scams.
Since when is destroying web standards acceptable? There are so many internet and intranet pages from 1994+.
In the name of security they change way too many moving parts lately to push their hidden agenda.
Using HTTP in a local LAN is fine. Using HTTP for non-login pages is fine. Using basic authentication (user:pwd@IP) is fine for certain use cases. Stop breaking things.
I am not using Firefox anymore; they went insane lately and don't care about their community anymore. I am worried about Chrome too: TLS-only for HTTP/2 is the wrong signal. So many times I couldn't open an HTTPS site because the browser thought the cert was broken - it was just a trivial page where I only wanted to read some text (not input anything) - just show me the page - something that wouldn't be a problem with HTTP.
If Firefox, Chrome or whoever want to destroy access to 1994-2017 websites, go to hell. Name your forked off thing TheMicrosoftNetwork (or what not) and see it tank rather quickly.
Keep in mind they're not breaking HTTP basic auth, they're just removing the ability to specify credentials in the URL, because its legitimate use is extremely rare while its use in phishing is common.
You used to be able to put HTTP basic authentication into the URL in the format http://username:password@hostname-or-ip.example.com/ - modern browsers are removing support for the "username:password@" part because people would put a legit domain in the username:password@ part to fool users. (e.g., http://cnn.com:article123456@fakenews.example.com/ which would actually take you to fakenews.example.com while sending a username of "cnn.com" and a password of "article123456")
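You can see how a URL parser treats that userinfo part with Python's stdlib urllib.parse (a quick sketch, using the hypothetical phishing URL from above):

```python
from urllib.parse import urlsplit

u = urlsplit("http://cnn.com:article123456@fakenews.example.com/")
# everything before the last "@" in the netloc is userinfo, not the host
print(u.hostname)  # fakenews.example.com
print(u.username)  # cnn.com
print(u.password)  # article123456
```

The host the browser actually connects to is whatever follows the "@", which is exactly what makes the trick effective against users who only glance at the start of the URL.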
what? O.o I use this in Keepass to store quick-login links for some sites that still require HTTP basic auth (though I haven't tested these in a while). Do you have any link on the deprecation?
> In Firefox, it is checked if the site actually requires authentication and if not, Firefox will warn the user with a prompt "You are about to log in to the site “www.example.com” with the username “username”, but the website does not require authentication. This may be an attempt to trick you.".
Huh, that's just security theater. The phisher could set up a website that does require authentication, thereby avoiding the warning.
Which stretches way past my laptop's viewable URL bar... and it takes me right to badsite.null (or a valid site like example.com). If you need HTTPS you can redirect on badsite.null's web server. Very wild.
These are interesting as curiosities, but most of them are not interesting from the point of view of deceiving users, and this was the case even when the article was written.
It's trivial to make users think they are not visiting evil.com, or just to serve the evil content from evil2.com instead. This means a blacklist model of web addresses will never work, and even the most naive computer user uses a whitelist instead, if they pay attention to URLs at all.
The username:password@evil.com/... trick is the exception, and it's good that browsers are working on ways to mitigate this.