
A problem worse than Zoom? - minikites
https://lapcatsoftware.com/articles/zoom.html
======
guessmyname
I posted about another macOS app called “Dropshare.app” a few days ago [1].

I didn’t bother to write a blog post about it because my English is not good
enough.

Basically, anyone who uses this app is vulnerable to cross-infection and data
leaks. Assume that user John installed this app, and hacker Alice tricked
John into visiting her malicious website. On this website, she added code that
sends requests to http://localhost:34344/upload to upload
malicious files to any of the services that John’s computer is connected to
via Dropshare; this includes private servers via SSH, Amazon S3, Rackspace
Cloud Files, Google Drive, Backblaze B2 Cloud Storage, Microsoft Azure Blob
Storage, Dropbox, WeTransfer, and custom Mac network connections. The port
number is also static, saving the hacker the need to run a port scanner.

I already contacted Dropshare’s developers to fix the issue, but got no
response.

[1] https://news.ycombinator.com/item?id=20399551
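The drive-by request described above could look something like the following sketch. The port (34344) and path (/upload) come from the report; the helper name and everything else are illustrative, not Dropshare's actual API:

```javascript
// Hypothetical sketch of the request a malicious page could fire at a
// local Dropshare-style server. mode: 'no-cors' means the page's script
// can never *read* the response, but the request is still *sent* --
// and sending alone is enough to trigger an action on the local server.
function buildProbe(port, path) {
  // A real attack would also attach a file payload via the body option;
  // omitted here to keep the sketch minimal.
  return new Request(`http://localhost:${port}${path}`, {
    method: 'POST',
    mode: 'no-cors',
  });
}

// A page would then simply call fetch(buildProbe(34344, '/upload')).
const probe = buildProbe(34344, '/upload');
console.log(probe.method, probe.url);
```

Because the port is fixed, the attacker does not even need to guess where the server is listening.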

~~~
icelancer
>> I didn’t bother to write a blog post about it because my English is not
good enough.

Basically everyone who says this has better English than most American
technical developers. We appreciate your modesty, but please, write it up :)

~~~
asavadatti
Communicating in a non-native language is difficult even if you know it well.
It doesn't come naturally and is taxing on the brain.

~~~
abdusco
Certainly. I'm a non-native speaker myself and although the end result looks
somewhat convincing, it takes me an inordinate amount of time to write, look
up words and proofread the text. I'd guess a native would come up with a
comprehensible text at the first try, and it'd take a few tweaks until it
looks immaculate. The reason non-natives seem to write better is that we
simply spend more time working on it, and fix a lot of mistakes.

Just now, writing this, I had to look up if the saying went "inordinate
amount" or "unordinate amount". The difference between "pristine" and
"immaculate". It really adds up when writing longer texts.

~~~
edraferi
Regularly consulting reference materials and editing your work are hallmarks
of a good writer. It enables continuous improvement.

For what it’s worth, I think your examples are nuanced questions about fairly
sophisticated terms. I look up similar things all the time, and I am a native
English speaker, have a first-class education, and write regularly for my day
job.

Don’t sell yourself short!

------
tomxor
> The major browsers I've tested — Safari, Chrome, Firefox — all allow web
> pages to send requests not only to localhost but also to any IP address on
> your Local Area Network! Can you believe that? I'm both astonished and
> horrified.

I'm sorry, but WTF, who is this guy? Every web developer will know this, and
every developer should expect it, since it's just an extension of basic
networking knowledge applied to web browsers. It's not horrifying, it's
basics; a great many things have depended on this fact to function for a very
long time.

Yes functionality can work against you when abused, no this is not a special
case.

~~~
sqren
This is Hacker News. We all know this. That's not the point. The point is: is
it reasonable in 2019 that websites you visit can make requests to devices on
your local network? To be honest I'm not sure. But I sure think it's a
relevant discussion to have.

~~~
tlb
There's no uniform criterion for "local network". I can create a local device
at any address I want. These days most are within 192.168.0.0/16,
172.16.0.0/12, and 10.0.0.0/8, but not all.

As an example of legitimate use, Ubiquiti routers have a web app at
ubiquiti.com that opens connections to manage your local routers. It's all
authenticated with cookies. It seems like a good design.

~~~
amarshall
> There's no uniform criteria for "local network". I can create a local device
> at any address I want.

Certain subnets are _always_ private [1], and thus may safely be treated as
“local”. Of course, non-private addresses can also be local, but that’s
less common in a non-enterprise setting.

[1] https://en.wikipedia.org/wiki/Private_network
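The private ranges in [1] can be checked mechanically. A minimal sketch (IPv4 dotted quads only; a real implementation would also need IPv6 ranges and, as noted elsewhere in the thread, is still defeated by DNS rebinding):

```javascript
// Check whether a dotted-quad IPv4 literal falls in a private, loopback,
// or link-local range per RFC 1918 / RFC 3330 conventions.
function isPrivateIPv4(addr) {
  const parts = addr.split('.').map(Number);
  if (parts.length !== 4 ||
      parts.some(n => Number.isNaN(n) || n < 0 || n > 255)) {
    return false; // not a well-formed IPv4 literal
  }
  const [a, b] = parts;
  return a === 10                          // 10.0.0.0/8
      || (a === 172 && b >= 16 && b <= 31) // 172.16.0.0/12
      || (a === 192 && b === 168)          // 192.168.0.0/16
      || a === 127                         // 127.0.0.0/8 loopback
      || (a === 169 && b === 254);         // 169.254.0.0/16 link-local
}
```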

~~~
nojvek
This isn't a local network issue though, this is a cross origin issue that
Browsers definitely need to patch.

A script from the internet should not be allowed to interface with a script
from the local network (localhost, local intranet, etc.).

The browser should have strict sandboxes. This is like when you load a site
over HTTPS: browsers scream at you if you load an HTTP resource, saying it's
insecure.

~~~
tlb
Cross-origin is based on the domain name. It offers no protection against an
attacker poking your local IP addresses.

You can have multiple IPs for a domain name, so if I set "hack.tlb.org" to
include both a server I control and 192.168.1.1, I can repeatedly do fetches
from "hack.tlb.org" until one of them gets your router instead of my server.
And they're in the "same origin" for CORS purposes.
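For what it's worth, a common server-side defense against exactly this trick is to validate the Host header: even when hack.tlb.org resolves to 192.168.1.1, the browser still sends `Host: hack.tlb.org`, which the local device's web server can reject. A minimal sketch (the allowed hostnames are illustrative):

```javascript
// Host-header allowlist: a local service only answers requests whose
// Host header names an address it expects to be reached at. A rebinding
// request arrives with the attacker's hostname and is refused.
const ALLOWED_HOSTS = new Set(['localhost:8080', '192.168.1.1']);

function isAllowedHost(hostHeader) {
  return ALLOWED_HOSTS.has(String(hostHeader).toLowerCase());
}
```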

------
niftich
This complaint is real cute, but the trite answer is that this is how things
have worked for a long time. Awareness of it spreads for a while whenever
high-profile events receive media and blog coverage, and perhaps the
exploitability of this has increased compared to several years ago, when
products that opened up various HTTP-accessible servers were less common (or
secured by obscurity).

This isn't necessarily an excuse not to explore mitigations through consensus
in future browser behavior -- after all, that process of loose but eventual
consensus on incremental UX and airquote "security" improvements is how SOP,
CORS, and CSP came about [1] and how the cookie saga evolves [2][3].

But consider that legitimate uses of cross-domain requests to localhost exist
(e.g. an OAuth callback endpoint). Keep in mind, too, that users from all
walks of life are, perhaps unbeknownst to them, managing LANs of computing
devices running dozens of servers, often with modern encryption such that
communications between the program and the remote server are becoming harder
to intercept and oversee, and that they lack a comprehensive capability to
monitor, analyze, blacklist, whitelist, or snipe traffic in a way that's not
cumbersome or borderline user-hostile. Such is the world where we've arrived.
Etching away on one or two widely deployed corners of it won't fix the overall
landscape, even if it may significantly reduce the chance of "drive-by"
exploitation through websites accessed through commonly used browsers.

[1] https://news.ycombinator.com/item?id=12408328#12408680
[2] https://news.ycombinator.com/item?id=13689697#13691022
[3] https://news.ycombinator.com/item?id=19853090#19855518

------
saagarjha
> I believe that Apple also exempts Radar, its internal bug reporting app used
> by Apple engineers.

Safari whitelists a number of URL schemes used by its first-party and internal
apps:

    
    
      adir
      applenews
      applenewss
      itms
      itmss
      itunes
      itms-books
      itms-bookss
      ibooks
      macappstore
      macappstores
      rdar
      radr
      radar
      udoc
      ts
      st
      x-radar
      icloud-sharing
      help
      x-apple-helpbasic

------
Spivak
First, I think this is right, and that websites shouldn't be able to hit
localhost or private address spaces.

But this leads to a bigger question: what makes private address space special?
Not really all that much. Running an internal network using public addresses
isn't super common these days, but it isn't rare by any stretch. Does it make
any sense that any website on the internet is allowed to hit any other site
accessible by your machine that uses a public address? There is definitely a
security boundary being crossed here.

Say for example I run a web service that's private to my work's office. So I
spin up a machine on my VPS account, give it a public address, and lock down
the firewall to my office's address range. Someone running Spotify in the
browser shouldn't have to worry about a malicious page hitting a potentially
sensitive internal service.

Does it make any sense for me to have to establish a VPN connection to my VPS
for the sole purpose of giving it a private address so browsers will block it?
Ew. I could also configure a CORS policy, but we're talking about a service
that used a trick to bypass this protection -- and besides, nobody knows how
to set that up right anyway.

~~~
icebraining
By default, browsers _do_ block sites from doing dangerous things to other
sites, like sending authenticated API requests; they only let by stuff that is
supposed to be harmless, like hotlinking images. And then they have a
mechanism called CORS that lets those services say "this particular site can
make API requests and such".

The problem is that Zoom, since they didn't understand CORS and yet did want
to allow their site to make API requests, turned what should have been a
harmless action (GET an image) into a dangerous one.

Browsers could block everything, but all I think would happen is that Zoom
would just find some other silly (and potentially more dangerous) way of doing
the same thing, because they _want_ the site to be able to talk to the
service.

If you're writing your own service to be used on an internal network, you
don't need a VPN or anything. Just don't accept unauthenticated requests that
make changes, and ignore CORS.
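That opt-in can be sketched as a tiny origin allowlist on the local service's side (the trusted origin is a placeholder, and real CORS also involves preflight handling not shown here):

```javascript
// A local service decides which CORS headers to emit based on the
// Origin header of the incoming request. No header means browsers keep
// the response unreadable to the requesting page.
const TRUSTED_ORIGINS = new Set(['https://app.example.com']); // placeholder

function corsHeadersFor(origin) {
  if (!TRUSTED_ORIGINS.has(origin)) {
    return {}; // untrusted: emit nothing, the browser blocks the read
  }
  return {
    'Access-Control-Allow-Origin': origin,
    'Vary': 'Origin', // caches must not reuse the header across origins
  };
}
```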

~~~
eric_h
> Just don't accept unauthenticated requests that make changes, and ignore
> CORS.

The problem here is that it’s really pretty trivial to scan a local network
and get valuable metadata about the router and other devices on the network,
just using JavaScript and XMLHttpRequest. It’s not that the local services are
at risk of being exploited, but the whole (average, unhardened home) network
could be compromised by identifying devices with known exploits and, well,
exploiting them.

Now I’m trying to come up with a non-PITA way of isolating browsing from my
local network while still allowing direct access to my local network!

~~~
danShumway
> Now I’m trying to come up with a non-PITA way of isolating browsing from my
> local network while still allowing direct access to my local network!

uMatrix will protect you from most of this (with the exception of DNS
rebinding attacks).

I don't necessarily disagree with people who are frustrated that their browser
can do this, but I also think it's completely reasonable to make it easy for
browsers to send requests on an intranet. There are multiple devices in my
house that wouldn't work without that capability.

The "problem", to the extent that there is a problem, is that securing these
devices relies on developers doing the right thing -- and developers are
untrustworthy. Theoretically, it would be better to put users in control. But
that's not a specific problem with Intranet requests, that's a problem with
CORS in general as it applies to the entire Internet.

~~~
wool_gather
> it's completely reasonable to make it easy for browsers to send requests on
> an intranet

Agree. But shouldn't we distinguish a request that originates from the local
user's input into the browser from one that originates from a remote entity?
I'm slightly ignorant here; maybe this isn't technically possible?

------
tlb
It's always been the case, back to NCSA Mosaic in 1993, that web pages could
hit URLs of local web servers. Before javascript, you had to use an embedded
image, like:

<img src="http://192.168.1.1/cgi-bin/reboot-router">

And it didn't have to be port 80 -- you could try fuzzing someone's X server
with

<img src="http://127.0.0.1:6000/lsjdfjlk23jlrkj">

Fortunately, most protocols bail out on the first 4 bytes "GET ". One of the
reasons that Gopher support was phased out was that you could make a gopher
request contain more or less arbitrary bytes and attack many local servers.

Servers have always had the burden of defending against this.

~~~
la_barba
I believe some networking equipment lets you go to "www.routercompany.com",
which loads up the router's config webpage without you having to remember its
LAN IP.

~~~
colejohnson66
That’s just them hijacking DNS

~~~
phamilton
Yes but do browsers do their own DNS resolution? Would a browser know if a
domain resolved to a local ip?

I could pretty easily set up 1234.mymaliciousdomain.com to resolve to your
local network.

------
anaphor
How do you differentiate between a valid and invalid request to localhost /
the LAN?

Lots of websites will link to something like `http://localhost:9200`
(e.g. Elasticsearch) in the documentation.

So you decide to make it impossible to load that page in the context of a page
loaded from a public IP address. Great.

What is stopping them from tricking you into clicking it (or filling out a
fake form), which is basically the same thing?

You haven't really solved the problem. You've just made it slightly more
difficult.

The solution is:

a) fix your applications so that they do not expose unsafe endpoints that can
cause unintended side-effects merely by navigating to them

b) stop using session cookies (at least stop using them alone) to authenticate
actions. Use token-based authentication (like CSRF tokens)

Edit: and before you say "check the referer header!", no, that will not solve
the problem. The bad web page can simply not include the referer with
something like `rel="noreferrer"`

~~~
pmontra
> How do you differentiate between a valid and invalid request to localhost /
> the LAN?

With the same origin policy? I think the post advocates for something like
allowing localhost and private IP addresses only from those very same
addresses or from the URL bar. Any other page shouldn't be able to access
them.

This will probably break something but what's the case for a web app to
legitimately access local host? Maybe access to some local service installed
by the user and managed "from the cloud".

~~~
anaphor
Define "access". The SOP does not stop a page from initiating a request to any
other origin; it stops the page from _interacting with it_, so that, e.g.,
attacker.com cannot steal your session cookies from banking.com.

You can do all kinds of things that don't violate the SOP but initiate a
request:

- Link to another site

- Redirect someone to another page using Javascript

- Link an image from another site

- Put a form that submits data to another site

- Embed video/audio from another site

- Embed an iframe from another site

Do you propose disallowing all of those things?

There are plenty of legitimate use cases for all of these things.

If you wanted to do this, you would have to disallow _any_ type of link to
local servers.

~~~
SahAssar
Every one of those is solved by CSP/X-Frame-Options headers, CORS headers
combined with content-type checks (rejecting non-form content types), and
proper handling of HTTP methods. DNS rebinding is solved by HTTPS, and if you
are doing something sensitive over a network (even a local one) I'm going to
assume HTTPS is the proper way.

We have ways of handling these things; they just require a bit of
reading/implementation and are unfortunately non-default.

~~~
anaphor
Which is sort of exactly my point. There is no need to have a blanket ban on
any linking between local/non-local sites. You just need to make sure they are
set up to handle requests securely.

------
henvic
Great article.

There are legitimate reasons to open a webserver locally. However, the
benefits from these restrictions are too great not to consider some sort of
protection. Perhaps there could be an authorization request the user could
allow (similar to how we got rid of pop-ups) in the most natural way
possible (we don't want to break intranets, for example).

Another security-related bad pattern that annoys me is how some of this
authorization stuff steals your focus, making it impossible for you to ignore
it (you cannot move to another tab before deciding whether to allow
something).

Another thing is how sometimes it is not completely clear whether something is
an element of a website, of your browser, or of your system. For example,
imagine you have to type your user password for a random update to complete,
but you are browsing some website... Suddenly you see a prompt with your
username and a password field matching your system's... However, you can only
know for sure this isn't phishing if you cmd+tab and the prompt is still
there. Heck, the system should detect that you are in a window showing
unsigned/unsafe content and paint something outside the frame (like coming
from the top address bar) so you can easily identify that a prompt is legit
(because a website shouldn't be able to paint a portion of your screen outside
its window).

------
ben509
> In my opinion, web pages should not be allowed to make requests to LAN
> addresses unless the user has specifically and intentionally configured the
> browser to allow this.

Is there a way to know definitively if an address is "local" rather than
"wide"? Should that be more granular, e.g. host, LAN, WAN? How does that work
with bridged networking and such?

If I'm already browsing something on the LAN, it seems reasonable to be able
to browse other sites on the LAN. But then an overly broad definition of LAN
would seem to allow privilege escalation.

If I saw a private IP (192.168, 10, etc) or a .local domain, I'd assume that
was a LAN address, but that's a convention and depends very much on routing
being set up properly.

~~~
asveikau
> If I saw a private IP (192.168, 10, etc) or a .local domain, I'd assume that
> was a LAN address, but that's a convention and depends very much on routing
> being set up properly.

This convention on the address is actually backed by RFCs, e.g. RFC 1918.
There is a similar one for IPv6 (RFC 4193).

However, blocking traffic to private IPs without careful consideration seems
like it could block some legitimate use. So one does have to tread carefully
when special-casing those.

------
mdtusz
I knew that localhost could be accessed, but the fact that local IP addresses
on the LAN can be accessed is actually quite surprising to me. I suppose it
makes sense, but it definitely makes me much more concerned with the security
of local devices on my home and office networks now.

Are there any best practices for keeping things locally safe (i.e. LAN devices
like printers, testing boxes, TVs, etc.), beyond just treating them the same
way you would an externally facing machine?

~~~
outworlder
> and office networks now

Yeah, forget about home, this is a nightmare. Who knows how many devices are
in a corporate network. Internal networks are usually not as well protected as
the perimeter.

~~~
ithkuil
And yet you'd be surprised how many companies rely on a VPN (basically a
glorified office network). Good read:
https://cloud.google.com/beyondcorp/

------
redoPop
Jonathan Leitschuh shared the same complaint in his original writeup of the
Zoom zero-day, but also mentioned CORS-RFC1918, a proposal to obtain
permission from the user before allowing a public website to access a resource
that a DNS lookup reveals to be hosted on the private or local address space
defined by RFC 1918:

https://wicg.github.io/cors-rfc1918/

------
lasryaric
Wait wait WAIT! It is much more complicated than that.

You can make XHR (aka AJAX) requests only if the CORS policy allows it
(concretely, the local web server you are trying to access answers with
a specific HTTP header saying "I authorize the website xyz.com to send XHR
requests to me via the web browser of xyz.com's client").

Now, for everything outside of XHR (AJAX), you can send different types of
requests: <script src="..."></script>, but this only lets you load JS files;
<img src="..." />, but this only lets you load images, and you can't really do
much other than try to load images with that.

So if you get into the detail of each "web API" (XHR, <img/>, <script/>, etc.)
you will see that you are actually very limited.

------
unreal37
I worked in a company that used custom DNS names to identify the environments:

- www.mydomain.com

- stage.mydomain.com

- local.mydomain.com

The last one referred to the version of the app that developers ran on their
own machines. So they had a DNS-level entry that sent local.mydomain.com to
127.0.0.1.

This isn't a browser issue at all. I think the security issue is "applications
can install local web servers" and "some local web servers are insecure".

We already have XSS controls in place to prevent a domain from accessing the
contents of another browser window or an iframe.

It's not a browser issue. There are plenty of legitimate reasons for wanting a
browser to access a local web server. It might not be common, but it's not
illegitimate nor a security issue.

~~~
banana_giraffe
Yep, I have a subdomain under a personal domain pointing to a few specific
192.* addresses and localhost. It makes it easy to test HTTPS stuff without
having to jump through hoops (and with Let's Encrypt it's free).

------
iameli
> In general, there's no reason why a page on the internet should be allowed
> to access devices on your local area network. Of course, if the user enters
> a LAN IP into the browser location bar, this should be allowed, but that's
> not a cross-origin request.

What's a local area network? 10.x.x.x? That's going to break VPNs and
enterprise integrations in a variety of ways. With IPv6 it's even less
predictable.

The solution to this problem is CORS — accessing LAN servers, or any cross-
origin destination, requires affirmative consent from the LAN server in the
form of the Access-Control-Allow-Origin header.

~~~
felipelemos
If you can't control the LAN server, like in Zoom's case, CORS won't save you.
In fact, it did not save you one bit in this case.

~~~
iameli
Perhaps I should have said "The solution to this problem is _properly
implemented_ CORS." My point is that browsers already have a mechanism for
mitigating this particular problem, and I think the additional proposed
mitigation (restricting browser access to localhost/LANs) would break a lot of
legitimate usage without much benefit.

There's only so much browsers can do to mitigate hostile code running on the
machine. CORS won't save me if Zoom decided to wipe my hard drive, you know?

~~~
danielparks
Hmm… servers on localhost could be required to have CORS headers.

That also makes it much harder to extract interesting information from non-
HTTP servers running on localhost.

------
SCLeo
I don't see any problems here. Even though they are on the same LAN, they are
still on different hosts, and thus subject to CORS restrictions.

That is, as long as your devices on the LAN do not send an Access-Control-
Allow-Origin header, web pages are not capable of getting the actual response.
Also, the only HTTP methods available to them are GET (when preflight is not
required) and OPTIONS (when preflight is required), which are methods that are
almost always side-effect free and only return some value, which the script
cannot even get due to CORS restrictions.
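For reference, the "simple request" rule that decides whether a preflight happens can be sketched like this (simplified: it ignores media-type parameters and custom headers, which also trigger preflight). Note that form-encoded and text/plain POSTs count as simple, so a cross-origin POST can be sent without any preflight:

```javascript
// Classify whether a cross-origin request triggers a CORS preflight.
// Only GET/HEAD/POST with a safelisted content-type go out directly.
const SIMPLE_METHODS = new Set(['GET', 'HEAD', 'POST']);
const SIMPLE_CONTENT_TYPES = new Set([
  'application/x-www-form-urlencoded',
  'multipart/form-data',
  'text/plain',
]);

function needsPreflight(method, contentType) {
  if (!SIMPLE_METHODS.has(method.toUpperCase())) return true;
  if (contentType && !SIMPLE_CONTENT_TYPES.has(contentType.toLowerCase())) {
    return true; // e.g. application/json forces an OPTIONS preflight
  }
  return false; // sent directly, no preflight
}
```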

~~~
mehrdadn
You don't need to receive a response to do damage with a request...

~~~
SCLeo
But you cannot send a POST request without preflight. Preflight is done with a
separate request without your payload.

------
baddox
I do agree in principle that web browsers probably should not allow non-local
web sites to make requests to local IP addresses.

However, I don't really see that as the fundamental problem with the Zoom web
server. They just happened to use local web requests to externally trigger the
Zoom application, because it was probably the most convenient approach to
implement. But couldn't they, at least in theory, have had the Zoom
application snoop on the display output until it finds an image of a QR code,
and open a conference call based on the data in that QR code?

Obviously that's a more intensive listening mechanism, but my point is that
the fundamental problem seems to be that their application installs a backdoor
that is designed to expose the webcam without confirmation based on user
actions that do not necessarily imply intent (like clicking on a web link).
The local web request thing is really just an implementation detail: one that
probably should be fixed by browsers, but far from the only way Zoom could
have implemented this feature.

After all, the Zoom client could just keep a socket connection to Zoom's
servers and start a conference call whenever someone requests one. That's how
all native apps for conferencing/messaging work. They just usually require
confirmation from the user, and they usually (I hope) uninstall that process
when I uninstall the app, so people tend to be less upset.

------
alanfranz
DNS rebinding attacks leverage this very behaviour, and they have existed for
years. Nothing new. And I think fixing the approach is complex and error-
prone. I can still make the browser connect to myhost.mydomain.com and have it
resolve to 127.0.0.1 -- what then?

Of course, if your local webservers have a really open CORS header, that could
be a problem. But that's a matter for the local webserver, mostly. And DNS
rebinding still applies.

~~~
amarshall
To mitigate this, I configure my LAN’s DNS server to drop records which
specify local or private addresses. Of course, this doesn’t help outside my
LAN. In Unbound:

    
    
      private-address: 192.168.0.0/16
      private-address: 172.16.0.0/12
      private-address: 10.0.0.0/8
      private-address: 169.254.0.0/16
      private-address: 127.0.0.1/8
      private-address: fd00::/8
      private-address: fe80::/10
      private-address: ::ffff:0:0/96
      private-address: ::1/128

------
asaasinator
I recently posted a blog post that exposed a similar issue involving a Chrome
extension. The issue in particular is how LinkedIn makes local web requests to
try and identify which extensions you have installed:
https://prophitt.me/articles/nefarious-linkedin

~~~
ficklepickle
I enjoyed your blog post, good find! I like your writing style. Keep it up.

------
jscholes
This approach was used for years by Spotify [1], to allow websites embedding
their player to load content directly into a running instance of the desktop
app.

[1] http://cgbystrom.com/articles/deconstructing-spotifys-builtin-http-server/

------
vorticalbox
Is it not the job of the server, rather than the browser, to configure CORS?

~~~
outworlder
It's a shared responsibility. The server has to publish the whitelist (in
headers). The browser has to enforce it.

~~~
vorticalbox
That's what I thought: it's up to the server to define what is and isn't
allowed, so servers on the local network need to set their CORS headers
correctly.

------
wybiral
Pretty much all modern browsers allow access to localhost, even from TLS
pages. They treat localhost as though it were served over TLS, so it's not
even demoted to "mixed content".

And beyond that, they also allow mixed content for asset requests from TLS
pages to non-TLS URLs at any IP address (for instance, an AT&T modem
configuration page at 192.168.1.254).

This site detects all kinds of services running on your local network:
https://github.com/wybiral/localtoast

I wrote a more in-depth explanation of this behavior here:
https://davywybiral.blogspot.com/2019/05/always-secure-your-localhost-servers.html

~~~
uxp
> [Warning] [blocked] The page at https://wybiral.github.io/localtoast/ was
> not allowed to display insecure content from http://127.0.0.1/. (index.js,
> line 172)

You sure about your claims? Your own website exhibits a countering argument.

~~~
bradyd
That page worked in Firefox, Chrome, and Edge for me with only warnings in the
console.

Chrome v75.0.3770.100: "Mixed Content: The page at
'https://wybiral.github.io/localtoast/' was loaded over HTTPS, but requested
an insecure image 'http://192.168.1.254/images/att_globe_logo.png'. This
content should also be served over HTTPS."

Firefox v69.0b4: "Loading mixed (insecure) display content
“http://fritz.box/css/rd/images/fritzLogo.svg” on a secure page"

Edge v44.18362.1.0: "SEC7137: [Mixed-Content] The origin
'https://wybiral.github.io' was loaded in a secure context and loaded an
optionally blockable insecure image resource at
'http://fritz.box/css/rd/images/fritzLogo.svg'."

~~~
wybiral
The image requests only work because of mixed content, where browsers allow a
TLS page to include non-TLS assets. Those are the ones outside of localhost.

For localhost only, you don't get those warnings, because browsers treat
localhost as TLS even if it's not, such as here:
https://github.com/wybiral/wtf

------
pgib
Zoom got around the CORS restriction by requesting an _image_, which is not
subject to CORS. So there are some limitations on what can be done and how you
can access things. But you could certainly use this technique to do some
simple IP/port scanning on a user's local network.

~~~
mdavidn
Just to be clear: There's no need to "get around" CORS in the Zoom case.
Browsers simply allow cross-domain GET and POST requests. Period.

CORS mediates the ability of JavaScript running in one origin to _read_
responses from another origin. If Zoom wanted JavaScript on any site to read
data from the local installation, their local web server would need only
return appropriate CORS headers. This was not necessary for the use case of
joining a meeting.

~~~
jcheng
This is my understanding too, but judging from the comments in this thread,
either a lot of developers aren't clear on the difference between CORS and
CSRF or else they're referring to aspects of CORS I'm not familiar with.

------
bendbro
It is beyond sad that this was upvoted on HACKERnews. This is an intentional
feature of web browsers and a specified feature of web standards. Could it use
improvement? Maybe. Should we disable localhost requests from webpages? Abso-
fucking-lutely not.

"Is this possibility not surprising to you?"

> no

"It was surprising to me!"

> that is because you don't understand the web

"The problem is actually worse than this."

> I wouldn't call it a problem

"The major browsers I've tested — Safari, Chrome, Firefox — all allow web
pages to send requests not only to localhost but also to any IP address on
your Local Area Network!"

> Yes, that's called conforming to a standard and it took years of work to get
> them all to behave the same.

In other news, the sky is blue, trees are green, and shooting yourself in the
foot still makes you bleed.

~~~
wool_gather
I'm interested in learning more about the use cases and standard for this,
because, yes, I don't perfectly understand the web. Can you share your
knowledge and point to resources that I can read?

~~~
bendbro
CORS allows websites to specify which origins (other than their "same
origin") can access their resources. A simple example is how jQuery allows any
website to access jQuery scripts from their CDN.

CORS also works on the local network, or even localhost, as the author has
discovered for himself here. Uses in these spaces are less ubiquitous, but if
you have ever needed to set up a web enabled resource in these spaces, you may
need CORS. I'll give some theoretical uses here:

1. A company sells routers. They host a webpage at company.com that makes
requests to your router at <scary ip>.

2. A company sells a big, expensive hardware component that attaches to your
computer. To manage this component, they set up a website at company.com, and
the component sets up a website on your computer. Company.com makes requests
to localhost, to manage that big, expensive component.

The actual issue here is that companies setting up these websites at localhost
and in your local network do not securely set up CORS (see Zoom, other
issues). Although it would be unreasonable to kill these use cases, it would
be reasonable to require the user of the browser to check off that a localhost
or local network request is okay.

------
danielparks
Simply disallowing access to private network space is a non-starter, since
it’s used so frequently. For example, a typical use case is that an office has
a private IP space, e.g. 10.0.0.0/8, and various external services will link
into it, e.g. Gmail, Okta.

Disallowing access to localhost seems more plausible, especially if there’s an
exception for extensions so that things like 1Password can continue to work.

------
danShumway
We can debate over whether or not browsers _should_ work this way, but if
you're reading this and are technically inclined, the best immediate right-now
takeaway is that your intranet isn't perfectly secure.

As always, you should practice defense in depth and work to secure your
internal network from potential bad actors attacking from within your internal
network. NATs offer you partial security and make some attacks harder; but you
can't just throw up a private web server without authentication and say, "it's
on a NAT, so it's secure."

This is especially true in the IoT world, where the threat to your network may
not even come from a browser or website. Multiple layers of defense are the
way to go, because no single layer is impenetrable.

------
lasryaric
> The major browsers I've tested — Safari, Chrome, Firefox — all allow web
> pages to send requests not only to localhost but also to any IP address on
> your Local Area Network! Can you believe that? I'm both astonished and
> horrified.

I guess this should serve as a cue that something is off in what you are
writing. You did not just discover a major security flaw in every web browser
while not being an experienced (at least in web) software engineer.

------
gumby
> The major browsers I've tested — Safari, Chrome, Firefox — all allow web
> pages to send requests not only to localhost but also to any IP address on
> your Local Area Network! Can you believe that? I'm both astonished and
> horrified.

Wait, what? Is this sarcasm? This is by definition how networking operates.

------
jpttsn
Maybe we will look back at today’s web, the same way we look back now at OS
before protected memory was a thing.

~~~
slang800
I hope not. The whole point of the web is that you can link to anything and
make requests to any service. Forcing websites to be walled off from each
other just because some servers aren't secure would suck.

------
pgib
The author points out Safari's handling of this, but in reality, I think you'd
need to address this on a per-browser basis. Zoom said they added this
behaviour _because_ Safari added a confirmation which Chrome (and presumably
others?) did not have.

~~~
helper
Chrome has a dialog but it has a checkbox to say 'always open with this app in
the future'.

------
rococode
Reminds me of this post discussed on HN a couple months ago:

[https://news.ycombinator.com/item?id=20028108](https://news.ycombinator.com/item?id=20028108)

------
trollied
The Zoom bug and this resulting blog post are going to escalate things
massively in the white/black hat communities.

Scary times.

~~~
anaphor
Pretty much everyone involved in the infosec community has known and
understood this for the past 20 years or more. I'm not sure how this can make
things any worse. If anything it will make things better because now more
people are learning how the web is actually designed, right?

------
gruez
noscript ABE guards against this in its default configuration. Unfortunately,
I think the feature is only in the legacy addon version, not the newer
WebExtensions version for post-Quantum Firefox.

[https://noscript.net/abe/](https://noscript.net/abe/)

------
api
Meh. This is how networks work.

~~~
pat2man
We don't let websites access file:// urls, why should they be able to access
[http://localhost](http://localhost) urls? Or [http://*.local](http://*.local)
urls?

~~~
baddox
To be fair, this is a slightly different issue. External websites can
presumably _link_ to file:// or localhost URLs (the one in your comment works
fine from the HN website), but they can't transmit any information about the
resource back to their servers. That's also true of images (unless the server
serving the image allows it via CORS).

An evil web page at example.com/evil can certainly contain an img tag for
[http://localhost/me.jpg](http://localhost/me.jpg) or
[http://dropbox.com/private-photo.jpg](http://dropbox.com/private-photo.jpg).
You will see your private images displayed on their web site, but while that
may be disturbing (or even useful for phishing), the evil web page cannot
transmit the image data back to itself. For example, it can't use JavaScript
to load the image into a canvas, base64 encode the canvas, and POST it back to
itself, because the canvas becomes "tainted" as soon as the cross-origin image
is drawn into it, and the browser will not allow JavaScript code to export a
tainted canvas to any inspectable format.
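A browser-only sketch of that behaviour (the image URL is a placeholder, and this needs the DOM, so it won't run under Node):

```javascript
// Loading a cross-origin image into a canvas taints it, after which
// any read-back API throws a SecurityError.
const img = new Image();
img.src = 'http://localhost/me.jpg'; // placeholder cross-origin image

img.onload = () => {
  const canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  canvas.getContext('2d').drawImage(img, 0, 0); // canvas is now tainted

  try {
    canvas.toDataURL(); // attempt to read the pixels back out
  } catch (e) {
    console.log(e.name); // "SecurityError": the browser refuses to export
  }
};
```

Setting `img.crossOrigin = 'anonymous'` before assigning `src` requests a CORS fetch instead; if the image's server answers with a suitable Access-Control-Allow-Origin header, the canvas stays clean and `toDataURL` succeeds.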

~~~
kzzzznot
Really interested in this, can you provide some more depth/links to articles
about how this works?

~~~
SCLeo
Not necessarily more depth, but here is one from mozilla.
[https://developer.mozilla.org/en-US/docs/Web/HTML/CORS_enabled_image#Security_and_tainted_canvases](https://developer.mozilla.org/en-US/docs/Web/HTML/CORS_enabled_image#Security_and_tainted_canvases)

