
I can see your local web servers - jamesfisher
http://http.jameshfisher.com/2019/05/26/i-can-see-your-local-web-servers/
======
megous
Another approach is to create a useful extension like:

    
    
      https://addons.mozilla.org/en-US/firefox/addon/yt-adblock/reviews/
    

disguise the fact that you're inserting an iframe pointing at your web server
into every single page the user opens, by giving your variables and your
tracking domain misleading names and by waiting an hour after installation
(this may also help avoid the automatic tests Mozilla runs), and then just sit
back and log all the referers and IP addresses. It's a bit stealthier too, but
it needs users to visit their local web servers. In return, you also get the
full URL.

Nobody will report you or care about the report, and users are banned from
fixing the extension code locally even if they're able to review it
themselves. Bad reviews with some actual text fade away quickly, so if someone
warns your other users, the warning will be pushed out to page 2 after a while
by other useful one-word or empty reviews, and it will all work out.

~~~
kapep
Is there any way to get the source code of extensions from the Mozilla
website? I think some years ago you could look at the code in your browser via
a link in the version history, but I don't see any links at all now.

~~~
megous
Right-click the "Add to Firefox" button and use "Save As...". This will get
you an xpi file which you can unzip and inspect.

Many addons use some packing method and bundle all kinds of stuff into their
content scripts (jQuery, etc.), which can make them hard to review.

Some addons are quite horrifying (you see stuff like `<span
...>${someText}</span>` with missing escaping, etc.). I'm quite sure there are
some content scripts out there with XSS issues that can be triggered from the
page itself. This is great on pages like GitHub, where there's plenty of
user-controlled content.

So if you want a suggestion for a clever attack:

1] Make an extension for Facebook or Twitter or GitHub that reorganizes the
wall somewhat, and make a `mistake` like assigning some user-controlled
content via innerHTML (a sketch follows below). This will probably pass
review.

2] Suggest your addon to your target.

3] Post your payload as a message/tweet/whatever to your target. Now you have
extension-assisted XSS.

Pretty easy to add XSS to any page, with plausible deniability.
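
A sketch of the `mistake` from step 1] (hypothetical content script; the
selector and data attribute are made up):

    
    
      // content.js: "reorganize" the feed, but render user-controlled
      // text via innerHTML. A post containing e.g.
      // <img src=x onerror=alert(document.domain)> now executes.
      for (const post of document.querySelectorAll('.feed-item')) {
        const summary = document.createElement('span');
        summary.innerHTML = post.dataset.text; // the deliberate `mistake`
        post.prepend(summary);
      }
    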

------
gnomewascool
If you use uMatrix, you can easily block the localhost and local network
"sniffing" with the following rule[0]:

    
    
      * 127       * block    ### block access to IPv4 localhost 127.x.x.x
      * localhost * block
      * [::1]     * block    ### block access to IPv6 localhost
      * 192.168   * block    ### block access to LAN 192.168.x.x
    

In principle, you can use this without any other blocking, i.e. with the rule:

    
    
      * * * allow
    

and hence without disabling javascript on any sites.

[0] [https://github.com/ghacksuserjs/ghacks-
user.js/wiki/4.2.3-uM...](https://github.com/ghacksuserjs/ghacks-
user.js/wiki/4.2.3-uMatrix)

Edit: as pointed out by DarkWiiPlayer below, if you want to be able to access
the localhost websites from the same browser, you need:

    
    
      localhost localhost * allow
    

and similarly for the LAN. In full:

    
    
      127       127       * allow
      localhost localhost * allow
      [::1]     [::1]     * allow
      192.168   192.168   * allow

~~~
dredmorbius
Add all the RFC1918 unroutable private networks.

[https://en.wikipedia.org/wiki/Private_network](https://en.wikipedia.org/wiki/Private_network)

    
    
        10.0.0.0    – 10.255.255.255   (10.0.0.0/8)
        172.16.0.0  – 172.31.255.255   (172.16.0.0/12)
        192.168.0.0 – 192.168.255.255  (192.168.0.0/16)
        127.0.0.0   – 127.255.255.255  (127.0.0.0/8, loopback; not RFC1918 but worth blocking too)
    

[https://tools.ietf.org/html/rfc1918](https://tools.ietf.org/html/rfc1918)

Possibly also 100.64.0.0/10 for carriers.

[https://tools.ietf.org/html/rfc6598#page-8](https://tools.ietf.org/html/rfc6598#page-8)
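
In the uMatrix syntax above, something like this (a sketch; uMatrix matches
dotted prefixes, so the /12 and /10 ranges need one rule per second octet):

    
    
      * 10      * block    ### 10.0.0.0/8
      * 172.16  * block    ### 172.16.0.0/12 spans 172.16 through 172.31,
      * 172.17  * block    ### so it takes sixteen rules like these
      * 100.64  * block    ### 100.64.0.0/10 likewise spans 100.64-100.127
    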

~~~
johnp_
Possibly also the IPv6 ULAs:

[https://en.wikipedia.org/wiki/Unique_local_address](https://en.wikipedia.org/wiki/Unique_local_address)

Not sure if those can be expressed in uMatrix as a prefix rule.

------
jsty
"If you see any results like 192.168.0.4:3000 is available!, you should tell
your colleague to secure whatever she has running on that port"

Someone's going to access this page at $BIGCORP with an overly trigger-happy
IDS and get a fun morning meeting with IT to un-quarantine their machine.

~~~
chrisan
edit: I am a dolt. Thank you :)

I got this address as well, do you have anything running on .4?

It's just weird because I have .1 router, .2 AP, .3 pi-hole

then .10 is when I start my static IPs

and .100 is where my dhcp starts

nmap says that host is down as well

~~~
coob
That's just the example in the description, not the scan results.

------
gbuk2013
Judging by the "ReferenceError: webkitRTCPeerConnection is not defined"
message in my browser, it probably uses WebRTC to get your local IP subnet. I
installed an add-on to block WebRTC after I watched a presentation on this
tool [1][2], and I recommend you do the same, unless you actively use WebRTC
(and don't want the hassle of toggling a switch).

Unfortunately the protocol is vulnerable by design. :(

[1] [https://portswigger.net/daily-swig/new-tool-enables-dns-
rebi...](https://portswigger.net/daily-swig/new-tool-enables-dns-rebinding-
tunnel-attacks-without-reconnaissance)

[2]
[https://www.blackhat.com/asia-19/arsenal/schedule/#redtunnel...](https://www.blackhat.com/asia-19/arsenal/schedule/#redtunnel-
explore-internal-networks-via-dns-rebinding-tunnel-14332)
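
For context, the classic leak takes only a few lines (a sketch; modern
browsers mitigate it by masking host candidates behind mDNS names):

    
    
      const pc = new RTCPeerConnection();
      pc.createDataChannel('');
      pc.onicecandidate = (e) => {
        // "host" ICE candidates historically contained the local IP.
        if (e.candidate) console.log(e.candidate.candidate);
      };
      pc.createOffer().then((offer) => pc.setLocalDescription(offer));
    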

~~~
phiresky
The only thing blocking WebRTC gets you is that it hides which subnet you are
on, right? As in, an attacker can still just enumerate all of the common ones
(10.0.0.0/24, 192.168.0.0/24, and about five others) and get the same results
as with WebRTC enabled. So blocking it really just slightly increases the
obscurity, but not really the security.

~~~
gbuk2013
Sure, but it helps and it is cheap. I also don't use a common subnet and enjoy
uMatrix. Obscurity is a viable strategy as part of a layered defence. ;)

~~~
phiresky
True.

My main problem is that blocking WebRTC blocks some very interesting methods
to "re-decentralize" the web, which are only viable if most people don't have
it disabled.

~~~
chatmasta
This is a classic conflict that I’ve observed a lot. Similarly, widespread SSL
pinning and app sandboxing make it difficult to reverse engineer opaque
outgoing traffic. It might be more secure, but you also cannot easily inspect
what data a sandboxed, cert-pinning service is sending to the outside world.

I think it’s probably something like “security, privacy, anonymity... pick
two.”

------
founderling
I started a local web server listening on localhost:80 just to see what
happens, but this thing doesn't seem to detect it. It shows me "Scanning
localhost ... localhost complete."

Edit: My guess is that this thing can only detect servers that send a CORS
header permitting cross-domain access.

It could probably do much better detection if, instead of XHR requests, it
added script/css/whatever elements pointing at localhost to its own page and
detected whether those error out.
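
Something like this (illustrative; the exact error behaviour varies by browser
and by what is listening):

    
    
      // Probe a host:port with an <img> element instead of XHR. CORS only
      // blocks reading responses, not the load attempt, so the pattern of
      // fast error vs. timeout still leaks whether something is listening.
      function probe(host, port, ms = 2000) {
        return new Promise((resolve) => {
          const img = new Image();
          const timer = setTimeout(() => resolve('timeout: filtered or absent'), ms);
          img.onerror = () => { clearTimeout(timer); resolve('error: something answered (or refused) quickly'); };
          img.onload = () => { clearTimeout(timer); resolve('loaded: it served an actual image'); };
          img.src = `http://${host}:${port}/`;
        });
      }
    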

~~~
jensneuse
CORS is a security mechanism in browsers to prevent leaking user information
(e.g. cookies) when making cross-domain requests. CORS does not prevent
accessing the server at all: you can always curl a CORS-protected server, but
you won't be able to make a request that includes the user's cookies from a
disallowed domain.

~~~
dillonmckay
Hmmm...

I can see the request in server logs, but it seems CORS is preventing the
response.

I may be missing something.

~~~
kbenson
The way this is set up, it's enumerating systems and ports on internal
networks, not actually _accessing_ those systems and ports. The CORS headers
(or lack thereof) in the HTTP response of the _requested_ item are what
dictate whether it can be read. For non-simple cross-origin requests, the
browser first sends a preflight (OPTIONS) request to get the headers for the
endpoint, and only makes the real request if cross-origin access is allowed;
for simple requests like a plain GET there is no preflight, and the browser
just withholds the response from the page unless the CORS headers permit it
(same-origin requests skip these checks entirely).

What you see in the browser developer tools, if you open them up, is that a
bunch of requests are being made but denied because they fail CORS checks.

Where data is being leaked (in the security sense) is that there's a
difference in how the responses are handled. Either the request is allowed
(because the remote side's CORS is too loose), in which case the page shows a
message that the specific host/port combination is available; or it gets no
response and times out, in which case the scanner stops checking that host and
assumes there's nothing at that IP (that's when it prints the "unreachable"
message); or it continues on with the next port. If at the end there's been no
success and no timeout, it prints the "complete" message, and that means
there's probably something at that IP.
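
That inference looks roughly like this with fetch (illustrative only; the
article's actual code differs):

    
    
      async function probe(url, ms = 2000) {
        const ctrl = new AbortController();
        const timer = setTimeout(() => ctrl.abort(), ms);
        try {
          // A readable response means the remote side's CORS is wide open.
          await fetch(url, { mode: 'cors', signal: ctrl.signal });
          return 'available: CORS allowed the read';
        } catch (e) {
          return ctrl.signal.aborted
            ? 'timeout: probably nothing at this address'
            : 'blocked: something likely answered, but CORS hid it';
        } finally {
          clearTimeout(timer);
        }
      }
    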

An important thing to note is that CORS is not like a firewall: it doesn't
actually stop all traffic from happening, so it can sometimes be used to glean
information that's not necessarily meant to be exposed. That said, what the
page is showing is that the specific way CORS functions (that is, asking the
remote side whether it's accessible), combined with the fact that JavaScript
runs locally in your browser, creates some interesting interactions that can
raise security concerns.

As an idea of how this could be put to more nefarious ends: if it found
listening and accessible servers on localhost or the local network, it could
then try to identify them from the headers/content returned, and try to do
something with that.

Given that you could possibly even compile some network vulnerability
scanner/exploiter to WASM and use it for the subset of vulnerabilities it can
deliver over plain HTTP requests (a lot of work; it's probably easier to just
write your own shim and crib their exploit library), this could be weaponized
quite easily.

~~~
comex
That all makes sense, but the OP said that the scan _didn’t_ identify their
HTTP server. I wonder why.

~~~
kbenson
Well, "Scanning localhost ... localhost complete." which they report means it
found the sever, but did not allow access (CORS restricted). This is normal,
as the default behavior unless modified by a CORS header is that cross-origin
resource sharing is disallowed, and that includes same host but separate port.
I believe edit is correct, in that this demonstration is for finding poorly
configured local services that have a CORS header that is too open (perhaps
because there are multiple services meant to interact with each other).

------
bill_joy_fanboy
I'm interested in this because I don't fully understand the consequences here.
I'd like to gain a deeper understanding through some concrete examples.

With the PHP CLI, I can run:

    
    
        php -S localhost:8000
    

With Python3, I can run:

    
    
        python -m http.server 8000 --bind localhost
    

The demo fails for me in both cases, even though a request to localhost:8000
is sent. (EDIT: the server log in the terminal window does show that the
request reached the local server.)

My question is: What is the risk of running one of these servers and then
visiting some random web page?

~~~
maratd
> My question is: What is the risk of running one of these servers and then
> visiting some random web page?

It depends on what you're exposing on those ports. If it's something
sensitive, stop. Any web page can run javascript and as such, any web page has
access to every port and service that your machine has access to ... because
at that point, the web page is a program running on your machine with full
network access.

However, this entire "vulnerability" makes no sense to me. Even if I'm running
something on my machine or local network, I am _not_ going to rely on the
firewall as a security mechanism. That is profoundly stupid and is well known
to be profoundly stupid. So all those servers, including the ones I am
creating and running, will have their own security mechanisms. So you can ping
my server? So what?

~~~
echeese
If it's got CORS enabled you can do a hell of a lot more than ping your
server.

~~~
maratd
Wait, what? I think you mean the opposite. If it's got CORS enabled, then you
can't do anything unless the request originates from the relevant domain.

Anyway, do not rely on firewalls (and CORS is a firewall) as the _sole_
security measure. Do not create unauthenticated endpoints unless you want
everybody to use them.

~~~
echeese
To elaborate, I meant a permissive CORS policy, which is what I see most
often.

------
jasonkester
I only ever run my local dev server on port 80, and use a hosts file to assign
custom (fake) domain names to each of the sites I want to run.

I mentioned as much here a few years ago when I first came across this idea of
assigning (and remembering) random unique port numbers to every one of your
apps in development, and was surprised to hear that it's such a common
practice. It seems sub-optimal for a lot of reasons, beyond the obvious one
noted in the article.

The big one for me is that none of my apps need to know anything about
handling port numbers in URLs. They know their own domain name via a config
setting that can be flipped on the live server. It's the same pattern (with no
colons or numbers to worry about), so there are no edge cases to test.

~~~
ryanjshaw
Unless I'm missing something, this only works if you are running one site at a
time, or you have a single web server bound to port 80 that supports virtual
hosts. You also need application and/or configuration support to get this
working properly. And even then, if you run any software with an embedded web
server you are usually out of luck unless you want to fiddle with a reverse
proxy configuration.

For these reasons I, at least, simply run applications on different ports. The
problem isn't the port, it's the web browser allowing cross domain requests to
local networks by default (another reply here suggested it is WebRTC
specifically that is flawed).

~~~
jasonkester
IIS handles all this out of the box for me. I just bind the host name to the
site in question. It sounds like maybe this isn’t something that other web
servers do, which is surprising. The whole point of the operation is to
seamlessly run 30 odd websites on one dev machine without needing to fiddle
with port numbers or anything else.

~~~
londons_explore
In the *nix world, with the move to containerization, most applications now
assume they are the only thing running on the web server and therefore demand
custom config/settings. Node apps, for example, are their own server.

------
brasetvik
The page doesn't do a great job at explaining the mechanism (which from a
quick glance seems to involve WebRTC).

This is a better resource on this topic, which involves DNS rebinding:
[https://medium.com/@brannondorsey/attacking-private-
networks...](https://medium.com/@brannondorsey/attacking-private-networks-
from-the-internet-with-dns-rebinding-ea7098a2d325)

DNS-rebinding also gets around the cross origin request issue, which some
comments here mention.

------
systemtest
I used to scan the company network for Selenium servers and remotely open
"inappropriate" pages. Always nice to see a coworker wonder why a browser
window with images of Pooh opened up.

~~~
fredley
I hope you're not in China!

------
crishoj
Makes you wonder what other apps are accepting local HTTP connections (e.g.
vendor bloatware [1]).

[1]
[https://support.lenovo.com/th/th/product_security/len_4326](https://support.lenovo.com/th/th/product_security/len_4326)

------
gambler
When will software engineers finally understand that issues like these aren't
problems with some random service you run on your computer, but with the (lack
of) security model behind modern web browsing?

~~~
kyle-rb
Modern web browsing provides for this with same-origin policy. Same-origin
policy can be negated by the server if it sets an overly lax CORS directive.

So this is something that's secure by default, but can be broken if the
"random service you run on your computer" decides to break it. I don't think
that's an issue with the browser's security model.

~~~
gambler
Same-origin policy is _not_ a security model. It's a ridiculous
quarter-century-old hack. If you pause to think about it, domain names are a
horrible way to delineate security boundaries on the web.

~~~
kyle-rb
I disagree about domain names, but I don't think that's even relevant to this
discussion.

Do you have some kind of security model in mind that would work better than
same-origin policy in this case? I.e. cross-origin requests are still allowed
to happen somehow, but users are still protected against random services
intentionally disabling your security measures?

~~~
gambler
It is bloody obvious that when people spin something up on their local
machines they do not intend to make those pages available to every website
they visit. This _by itself_ suggests that there should be a dedicated
security context for such applications, and that restrictions on information
exchange should be based on that context, not on some random thing the
application does (like authentication, CSRF protection, or sending CORS
headers).

Scenarios like that should be the foundation of a sensible security model, not
an afterthought achieved by applying layers and layers of security duct tape
in every single instance.

~~~
brlewis
>an afterthought achieved by applying layers and layers of security duct tape

This is where you're missing the fundamental nature of the issue in this
article. The sensible security model is there by default. An additional layer
is added to make a resource available cross-origin, and the article merely
serves to remind people that making a resource available cross-origin is still
making it available cross-origin when the origin is localhost.

------
shakna
I can see the list of XHR requests made. I have half a dozen local web servers
running, and there are a bunch of other web servers running on my network; I
have a bad habit of spinning up a server on an ESP32 whenever I want to
remote-control something physical.

... Strangely, despite the XHRs hitting ports and IPs I know are running
unsecured web servers, the site sees nothing. Lots of "unreachable".

Firefox 67.0, Arch Linux (5.0.10).

~~~
sepbot
The requests should fail unless the servers you are running allow CORS as per
the headers.

~~~
shakna
There's half a dozen that are.

~~~
ypkuby
Do you have any locked-down policy on them, though? I assume this would only
work if you gave blanket access.

I noticed in my tests that it found one on port 3000 with blanket access, but
didn't see one on port 9999 with restricted access (policy => allow from
*.mydomains).

~~~
shakna
Yes. I'm terrible and have them set to allow all domains, and all methods.

~~~
dillonmckay
Maybe your browsers are up-to-date?

------
nothrabannosir
Anyone using uBlock Origin can add these custom rules to protect localhost:

    
    
      * localhost * block
      localhost localhost * allow
    

This should block any non-localhost from accessing localhost.

(Note: this only protects you superficially, based on DNS. What we'd want is
protection based on IP; otherwise you're still exposed to anyone pointing
their own DNS at 127.0.0.1. But it's something...)

------
mehrdadn
Here's a question I've had for a while: WHY in the world do web browsers not
block access to localhost? What exactly is the extremely compelling use case
that has prevented them from blocking this?

~~~
mjlee
There are a fair number of applications that expose a UI with a local web
server. I use the Ubiquiti Controller, but it's also quite common in the world
of Plex, etc. It's also a path used for local OIDC, such as with the gcloud
CLI.

~~~
mehrdadn
> expose a UI with a local web server

I'm not talking about UIs hosted on local web servers being able to send
requests to themselves; I'm talking about UIs hosted on REMOTE web servers
being able to send requests to local ones. It seems far worse than a random
cross-origin request to me, and for the life of me I can't imagine the use
cases.

~~~
linuxftw
That's not really how the internet works.

What is a local webserver? Running on your machine? Running on your LAN?
Running on your corporate intranet? How should a browser differentiate between
these things?

And what qualifies as a remote server? Did you know some very large enterprise
environments squat on public IPs for their internal intranets due to
address-space exhaustion (IPv4, anyway)? Just because something appears to be
on a public address doesn't mean it actually is.

~~~
mehrdadn
This makes no sense. I'm just talking about localhost, I don't care where the
physical computer is. It makes no difference if you're an enterprise with
software running on a private or public or whatever IP. Whatever the case, I
still don't see why you should be able to use JS to access a localhost
address.

~~~
linuxftw
But what if your webserver is bound to 0.0.0.0:80? Which IP ranges do you
block? Just the lo interface? Well, that wouldn't prevent this exploit as the
webserver would be listening on all of your IP addresses.

~~~
mehrdadn
What? Nobody cares what the listener is listening to. Nobody cares whether a
listener even exists. It's completely irrelevant. You just block the outgoing
request if it's sent to a localhost address from a non-localhost address.
What's so complicated about this?
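
Something like this hypothetical check is all I'm suggesting (a sketch, not a
real browser API):

    
    
      // Hypothetical policy: block requests into loopback from non-loopback pages.
      function shouldBlock(originHost, targetHost) {
        const isLoopback = (h) =>
          h === 'localhost' || h.startsWith('127.') || h === '[::1]';
        return isLoopback(targetHost) && !isLoopback(originHost);
      }
    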

~~~
linuxftw
If you're suggesting that anything not originating from an enumerated list of
localhost IPs cannot request a URL that resolves to an entry in an enumerated
list of localhost IPs, that would be possible, but it's not really
representative of the entire surface area of the effect demonstrated in the
article.

It's also entirely possible you have some development api running on a
particular port on localhost, and some app running in a container or VM that
wants to make calls to it.

~~~
mehrdadn
> is not really representative of the entire surface area of the effect
> demonstrated in the article

So let's let perfect be the enemy of good? What's the logic here? Let's leave
a huge security hole because we can't achieve perfection in all scenarios?

> It's also entirely possible you have some development api running on a
> particular port on localhost, and some app running in a container or VM that
> wants to make calls to it.

The VM should be on localhost or you should jump through a few hoops to
whitelist it somehow. I see no reason why this should be allowed by default.

~~~
linuxftw
> So let's let perfect be the enemy of good? What's the logic here? Let's
> leave a huge security hole because we can't achieve perfection in all
> scenarios?

The logic here is: don't hastily start implementing changes to how HTTP
requests currently work without a well-established plan for doing so. I think
there would be a good number of corner cases you'd need to account for to
implement this feature successfully.

Anyway, this problem seems pretty obvious, and I'm sure this discussion has
been had elsewhere already.

~~~
mehrdadn
> The logic here is don't hastily start implementing changes to how http
> requests currently work without a well established plan for doing so.

As if the point of this thread was to push browser vendors to hastily
implement this without thinking it through? Are you just trying to find
something to have an argument over? I'm tired of this.

------
rlue
Can anyone explain what kinds of attacks are possible here? A malicious script
on this website can identify that a service is running on a particular
endpoint (IP + port), and depending on the server's CORS policy, the script
may be able to submit HTTP requests to that service... am I getting it right?
I can see how that might be dangerous if the service responds to simple GET
requests with sensitive information, or has a well-documented REST API and no
authentication. Is this the scope of the vulnerability, or is there more to
it?

I tried this with a few different services running on my machine (a one-liner
WEBrick server in Ruby, Syncthing, a plain-text accounting program calling
beancount, etc.) and the script didn't detect any of them. I take it that
means these services all disallow CORS?

~~~
ericcholis
This recent one comes to mind:
[https://securityaffairs.co/wordpress/84803/hacking/dell-
supp...](https://securityaffairs.co/wordpress/84803/hacking/dell-
supportassist-flaw.html)

------
j0ev_5
I wrote some code a while back to find and fingerprint other devices on your
network, although it no longer seems to work in Safari:
[http://joevennix.com/lan-
js/examples/dashboard.html](http://joevennix.com/lan-
js/examples/dashboard.html)

The fingerprinting db it used can be found in the repo:
[https://github.com/joevennix/lan-
js/blob/master/src/db.js](https://github.com/joevennix/lan-
js/blob/master/src/db.js)

------
suyash
Can anyone share what measures we can take, as web developers, to secure our
local development environments?

~~~
deno
Custom DNS server with DNS rebind protection. E.g. if you’re running OpenWrt
you’re fine[1].

Also, just don’t test on localhost. You can use a proper domain (or claim one
in the .test TLD[2] if you’re fine with self-signed certs) and point it to
localhost.
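
E.g. point the name at loopback with a hosts-file entry (the name here is
illustrative):

    
    
      127.0.0.1  mycoolwiki.test
    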

If you’re going to use any redirect flow like OAuth/OpenID you’re going to
need this for testing eventually anyway.

[1] [https://openwrt.org/docs/guide-user/base-
system/dhcp](https://openwrt.org/docs/guide-user/base-system/dhcp)

[2] [https://en.wikipedia.org/wiki/.test](https://en.wikipedia.org/wiki/.test)

~~~
asdkhadsj
What do you recommend if you're running a local server? E.g. I've developed
programs before with the assumption that the user will be running them either
on their local machine, or perhaps on their local network.

Think self-hosted wiki, etc. I was never sure _(and thus have yet to properly
implement it)_ what would be secure but also a good UX. Normal auth + a
self-signed HTTPS cert would be simplest, I imagine, but I'm not clear whether
browsers widely accept that. I recall Sandstorm having issues in this area,
and it required a domain to run fully properly. Which seems... complex for a
minimal install requirement.

Thoughts?

~~~
deno
I actually wanted to do something similar one time and I figured there’s one
way to do it:

1) Get a domain name for the project, e.g. mycoolwiki.tld

2) In the installer/setup, provision a random subdomain for the user, e.g.
d2c8116f19d0.mycoolwiki.tld

3) Use the Let’s Encrypt DNS method to provision a cert

4) Point d2c8116f19d0.mycoolwiki.tld at the LAN IP

It’s not ideal, because you need some external infrastructure, and it assumes
no DNS rebind protection.

However, if your webapp has a client and a server, that is, one that
communicates via API only, you can actually do a lot better:

4) Set up the local server to accept CORS requests from
d2c8116f19d0.mycoolwiki.tld only

5) Host client at d2c8116f19d0.mycoolwiki.tld

Additionally,

6) Make the client a PWA with offline support

and/or

6) Offer a browser extension that uses a local copy of the client when the
user visits ∗.mycoolwiki.tld

Though for my use case I actually wanted ∗.mycoolwiki.tld/ipfs/<hash> to be
backed by IPFS, and to offer a generic extension that both verifies that the
IPFS gateway is playing nice and (if configured) redirects to a local gateway.

Also, offering an Electron client instead of the browser would work as well,
and it saves you getting the cert.

------
amenod
I was surprised to notice that this page correctly found my _local_ network IP
address, and was thus able to query the correct network range. I have now
disabled WebRTC [0] in Firefox (media.peerconnection.enabled is false), but
the page still tries the same range. Is there any other way they can get my
local network IP?

[0]
[https://stackoverflow.com/a/26850789](https://stackoverflow.com/a/26850789)

------
kj4ips
Some time ago, I made something using a similar technique to redirect to
router configuration pages. It used WebRTC to get the local IPv4 address, then
spawned a worker and timed it; if it came back fast, I assumed the address was
live and redirected the user, otherwise I tried a different common "router"
address. I was unaware of DNS rebinding at the time.

------
brlewis
Here's the tl;dr

    
    
      app.use(cors());
    

defaults to Access-Control-Allow-Origin: *
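
The fix is just as short: pass an explicit origin instead of taking the
permissive default (the origin here is illustrative; see the configuration
docs linked below):

    
    
      app.use(cors({ origin: 'https://app.example.com' }));
    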

If you know how CORS works, you already know that even if the resource is on
localhost, it's open to any web page, including pages not on localhost. You
won't find anything enlightening here.

If you don't know how CORS works but you're using the Express middleware for
it anyway, read the documentation:
[https://expressjs.com/en/resources/middleware/cors.html#conf...](https://expressjs.com/en/resources/middleware/cors.html#configuration-
options)

------
majke
My favorite is retrieving [http://wpad/wpad.dat](http://wpad/wpad.dat) with
ajax.

[https://bugs.chromium.org/p/chromium/issues/detail?id=123166](https://bugs.chromium.org/p/chromium/issues/detail?id=123166)

------
ludwigvan
Also, if you are a front-end developer and are on insecure WiFi (a coworking
space or public WiFi), make sure you only bind to localhost.

Otherwise other people on the network can see your front-end code, which you
are probably compiling with sourcemaps, which will give an attacker almost the
complete source code of your SPA.
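
With webpack-dev-server, for example, it's the `host` option (a sketch; check
your own tooling's equivalent):

    
    
      // webpack.config.js: keep the dev server on loopback only.
      module.exports = {
        devServer: {
          host: '127.0.0.1', // not 0.0.0.0, which exposes it to the LAN
        },
      };
    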

~~~
miguelmota
But frontend applications expose mangled JavaScript which can be reverse
engineered anyway.

~~~
ludwigvan
It can be done, but the code is usually uglified. No need to hand the plain
source to outsiders.

------
jeltz
His demo did not work for me, so his message doesn't really get across. Maybe
there were ways to exploit it, but if so, he failed. I got a CORS rejection.

~~~
freehunter
The demo may not have worked for you, but it does work and works for many
other people. Just because your particular browser is secured does not mean
the author failed.

------
rootusrootus
This just reminds me why I use Vagrant and only run a local webserver while
I'm actually doing development work on it.

------
jand
Funny enough, the site is reporting port 3000 to be running a web server. It
is not, according to nmap and my own knowledge.

Any ideas on this?

~~~
IanCal
If, like me, you saw the bright red text "If you see any results like
localhost:3000 is available!" as meaning :3000 was actually available: it's
just an example. The yellow box above it seems to be where the results would
actually appear.
~~~
jand
I must admit, it was exactly this. I read the text without using my brain
properly.

Please accept my sincere apologies.

~~~
IanCal
I did exactly the same thing and spent too long running various things to find
out what on Earth was using the port! That's the only reason I had a good
guess at what it may have been.

------
XCSme
I only get "Scanning localhost ... unreachable.", and there are at least 3 web
servers active on the network.

~~~
dillutedfixer
Do you have WebRTC disabled? I disable it in Firefox and it breaks this test.

~~~
XCSme
No, default Chrome settings.

------
ryacko
Firejail allows for the creation of sandboxed interfaces, which include the
ability to block local network access.

------
markstos
In the source code, he is attempting to serve images from
[http://localhost:4000/](http://localhost:4000/)

It looks like a mistake because the image URLs are unlikely to exist on YOUR
localhost at port 4000 when you load the page.

------
bg0
So does this mean that if I'm running a local dev machine with un-bundled
source code on my company computer:

Any person who joins the wifi network and goes to a website that sniffs this
out will have access to my computer's local server?

~~~
Cakez0r
Depends what server software you're running and how it's configured. The
specific configuration element you want to look for is usually called the
"bind address". If it's set to something other than "127.0.0.1" then it will
probably accept external connections.

If you know the port number your server is running on, you can also open up
command prompt / terminal and check with "netstat -an". Look at the local
address column and make sure your web server is listening on 127.0.0.1.
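
For example, in Node the bind address is the optional second argument to
listen() (a minimal sketch; other servers have an equivalent setting):

    
    
      // Bind explicitly to loopback so only local clients can connect.
      const http = require('http');
      const server = http.createServer((req, res) => res.end('ok'));
      server.listen(8080, '127.0.0.1'); // omitting the address often means 0.0.0.0
    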

~~~
bg0
Appreciate the insight. Looks like I see:

    
    
      tcp4  0  0  *.8080  *.*  LISTEN
    

The local address is *.8080, so it's bound to all interfaces.

------
shurcooL
> It is not sufficient security to only bind to 127.0.0.1 (the “loopback
> interface”)

What would be a better, more secure thing to do when you have multiple web
servers on one machine behind an SSL-terminating reverse proxy?

~~~
cosarara
The thing to do is not run a web browser on that machine. Run the servers in a
VM.

~~~
notatoad
But this seems to be able to see servers accessible to the local machine, so
if my dev server in a VM is accessible from my browser, isn't it accessible to
any webpage in my browser?

~~~
cosarara
The reverse proxy is accessible from your browser and is properly configured
not to accept random requests from any webpage (see: CORS). The others are not
directly accessible, only through the reverse proxy. Does that make sense?

~~~
notatoad
Not really, no. I still don't see what the reverse proxy or the VM are
bringing to the table here. If I'm understanding the necessary CORS config
here, it's simply to not send any Access-Control-Allow-Origin header, which
doesn't require a VM or a reverse proxy; most HTTP services do that by
default.

Simply being accessed through a reverse proxy instead of directly doesn't add
any additional security.
~~~
cosarara
Actually, you are right.

------
tryhackme
I covered how he is doing this here:
[https://tryhackme.com/room/xss#6](https://tryhackme.com/room/xss#6) (Look at
all the tasks first)

------
bronco21016
Does this not work on mobile? I’m not seeing any requests to my web services
show up on my server when opening this page in iOS Safari.

EDIT: In fact, the status of the local network scan doesn’t come up at all.

------
joseluisq
I had expected to see my servers exposed, but...
[https://imgur.com/a/hjBaYbG](https://imgur.com/a/hjBaYbG)

------
Glyptodon
On my machine the requests seem to go out, but the browser blocks the
responses... which seems weird; I'd have expected the opposite (the browser
blocking the outgoing requests).

------
terrik
Would this have implications for people who use Cryptomator, which uses
loopback connections to a local WebDAV drive in order to encrypt/decrypt
files?

------
yeuking
I had used that extension for quite a long time, but I removed it immediately
after seeing this news!

------
jasonhansel
I wonder if you could use the WebSocket API to scan for other (non-HTTP) open
ports.

------
nitwhiz
And why?

------
lostjohnny
Nope, you can't

    
    
        Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost/. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing)
    
    

Anyway

    
    
        TypeError: /(192\.168\.[0-9]+\.)[0-9]+/.exec(...) is null i-can-see-your-local-web-servers:169:41

~~~
chronial
The cross-origin check can be circumvented via DNS rebinding: when you request
mypage.com, my DNS returns the IP of my web server. On all subsequent
requests, it returns 127.0.0.1. Now localhost is on the same origin as my
page.

~~~
lostjohnny
It doesn't matter: you would need to control a DNS server the user relies on,
and you would need your server to send

    
    
        Access-Control-Allow-Origin: mypage.com
    

or

    
    
        Access-Control-Allow-Origin: * 
    
    

which is not a default anywhere AFAIK, and is domain-based, not IP-based.

And your server would have to be configured to respond to the mypage.com Host header.

~~~
chronial
Based on m12k's suggested interpretation of your comment:

> you should be in control of a DNS the user relies on

You always are when a user visits your domain – you control the DNS of your
domain.

> Access-Control-Allow-Origin: *

You don't need access-control headers, because you stay on the same domain.

> Your server would have to be configured to respond to the mypage.com Host header

Most servers listening on localhost ignore the host header.

~~~
lostjohnny
> You always are when a user visits your domain – you control the DNS of your
> domain.

I meant you need to control the poisoned DNS.

If I use 8.8.8.8 as my DNS, you can only work on the domains you already
control, which is kinda useless.

> You don't need access-control headers, because you stay on the same domain.

No, you don't

My localhost server only responds to the localhost and 127.0.0.1 Host headers,
not to mypage.com.

Nginx does that too by default:

[https://github.com/nginx/nginx/blob/master/conf/nginx.conf#L...](https://github.com/nginx/nginx/blob/master/conf/nginx.conf#L37)

But even if you did, you still haven't resolved the issue: you can't make a
call to a different domain without access-control headers, unless it's the
_same_ domain.

You can't load mypage.com and then fetch from www.mypage.com; even if you
resolve www.mypage.com to 127.0.0.1, the browser won't let you do it.

~~~
tgragnato
> But even if you did, you still haven't resolved the issue: you can't make a
> call to a different domain without access-control headers, unless it's the
> _same_ domain.

> You can't load mypage.com and then fetch from www.mypage.com; even if you
> resolve www.mypage.com to 127.0.0.1, the browser won't let you do it.

In this part you’re confusing what a rebinding attack is: by serving a DNS
response with a short TTL, an attacker is able to associate two different IPs
with the same query. So it'd be mypage.com and mypage.com (not
www.mypage.com), bypassing the browsers' same-origin restrictions.

[https://capec.mitre.org/data/definitions/275.html](https://capec.mitre.org/data/definitions/275.html)

~~~
lostjohnny
> In this part you’re confusing what a rebinding attack is: by serving a DNS
> response with a short TTL an attacker is able to associate two different IPs

But it doesn't really work.

I query my DNS, on my home router, not your DNS.

And the DNS on my home router queries the ISP's DNS, which caches requests.

I bet you can't go below a few minutes' resolution.

I had this problem when validating the Let's Encrypt DNS challenge: I had to
let certbot run for almost 20 minutes before my home router picked up the new
value.

When I'm at work, I use the company's DNS, which ignores non-standard TTLs,
caches the first answer forever (well... almost), and disallows external
domains that resolve to reserved IP addresses.

~~~
OvermindDL1
Wait, doesn't the certbot DNS challenge query the nameservers of the domain
being checked, not your local DNS resolver? Otherwise wouldn't my fast DNS
challenges fail?

~~~
lostjohnny
Not by default [1], but you can set it to what you prefer.

But in my case my network is configured to always reach the in house DNS
first, to keep latency low

[1]
[https://github.com/letsencrypt/boulder/blob/8167abd5e3c7a142...](https://github.com/letsencrypt/boulder/blob/8167abd5e3c7a1427b1cfb459f50b18b44e45d5b/bdns/dns.go#L244)

------
jsgyx
I use NoScript, you can't see shit.

~~~
julienreszka
You can't see shit either.

~~~
vbsteven
Yes, he can: he will see the modern equivalent of "This site is best viewed in
Internet Explorer", which in 2019 is "Please enable JavaScript to view this
page".

~~~
larkeith
Honestly, such notices are shockingly unusual. Most of the time (at least on
the sites I encounter) they don't bother with <noscript>; you just get a
broken and/or blank page.

I mostly use the web for reading blogs and articles, so the loss of dynamic
sites isn't troublesome for me, but it certainly would be for most users.

(Edit: some numerical context: I have enabled JavaScript for 194 sites over
the last five years, whereas I encounter several new sites daily.)

~~~
vbsteven
I also browse with noscript all the time and I get them quite often. Mostly on
product landing pages and Show HN demos.

~~~
larkeith
Hmm, I wonder if it's confirmation bias on my end, or just a difference in
what pages we each view.

~~~
npsimons
> Hmm, I wonder if it's confirmation bias on my end, or just a difference in
> what pages we each view.

Yes.

Joking aside, I will add that I've been a NoScript/FlashBlock user for quite
some time (more than a decade? I honestly can't remember), and while I run
into some things that are frustrating (I just had to disable NoScript for a
tab to order plane tickets), it is refreshingly uncommon.

Yes, you can browse with default-deny for JS and Flash.

------
ivolimmen
I only run locally on ports 8080, 9090, 9191, 8081, 8082 and 8083, so scanning
only port 3000 is a bit narrow...

~~~
y0ghur7_xxx
It's not scanning only port 3000:

    
    
        const portsToTry = [
          80, 81, 88,
          3000, 3001, 3030, 3031, 3333,
          4000, 4001, 4040, 4041, 4444,
          5000, 5001, 5050, 5051, 5555,
          6000, 6001, 6060, 6061, 6666,
          7000, 7001, 7070, 7071, 7777,
          8000, 8001, 8080, 8081, 8888,
          9000, 9001, 9090, 9091, 9999,
        ];
    

view-source:[http://http.jameshfisher.com/2019/05/26/i-can-see-your-
local...](http://http.jameshfisher.com/2019/05/26/i-can-see-your-local-web-
servers/) :125

