
Shared Cache Is Going Away - luu
https://www.jefftk.com/p/shared-cache-is-going-away
======
snowwrestler
I always felt like this was mostly a dream anyway due to the diversity of
libraries, versions, and CDNs across the web. Everything would have to line up
perfectly, within the TTL, to get the performance advantage of loading from
cache. And even then it was only really an advantage on the first page load of
a site visit; subsequent pageloads would hit cache anyway from the first page.

And speaking of privacy... if everyone across the web is loading resources
from one CDN, that seems like an interesting stream of data for that CDN.

~~~
tyingq
I think it may have had more benefit in the jQuery heyday. Things are much
more fragmented now.

~~~
wpietri
Absolutely. And resources like disk space and bandwidth have gotten much
cheaper in the 13 years since jQuery was invented. Fewer cache hits, lower
cache value, and less cost savings all point in the direction of retiring this
feature.

~~~
noobermin
You say this, but people who have shit internet are acutely aware of how
CDNs no longer help things from the user's perspective, beyond the mere "CDNs
are better at delivering some assets than Joe Website."

It doesn't help that, relative to everything else, the churn in websites is
immense, making it more likely that you'll have to pull things in again. And
"relative to everything else" is quite a statement, since churn in software is
pervasive.

EDIT: that is, I'm just complaining, not claiming the status quo (or what was
before) was better, obviously.

------
wongarsu
> I'm sad about this change from a general web performance perspective and
> from the perspective of someone who really likes small independent sites,
> but I don't see a way to get the performance benefits without the [privacy]
> leaks

Maybe I'm missing something, but the obvious solution to me would be more
cache-control headers.

The only notable case where a shared cache is useful is resources on public
CDNs hosting libraries and other common assets. These could just send a
"cache-control: shared" header, or a "cache-sharing: true" header if adding new
values to existing headers breaks too many existing implementations. This puts
them in a shared cache; everything else gets a segmented cache.
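
Roughly (the "shared" directive is hypothetical; nothing in the Cache-Control
spec defines it today):

    HTTP/1.1 200 OK
    Content-Type: application/javascript
    Cache-Control: public, max-age=31536000, shared

Anything served without the opt-in would land in the per-site partitioned
cache.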

~~~
judge2020
I think the page that loads the resources would itself need the 'cache-
sharing' header, since other websites could still perform a timing attack if it
loads a CDN asset that specifies 'cache-sharing: true'. Even then, enabling
cache sharing would still leave you open to a timing attack, and the
effectiveness of a shared resource cache would dwindle as fewer and fewer sites
share that cache.

~~~
wongarsu
If Google Fonts serves Roboto with cache-sharing true that is unlikely to leak
any data. Sure, you can detect that I at some point visited some site that
uses Roboto, but that's vague enough to be useless.

There is some potential for leakage with uncommon assets. Maybe only a handful
of websites use JQuery 1.2.65 or Helvetiroma Slab in font weight 100. It's a
less severe vector than just testing if someforum.example/admin.css is cached,
but it's still leaking data. The CDN could mitigate that by only sending a
cache-sharable header on sufficiently popular assets, but depending on others
going out of their way to preserve privacy is probably a bad idea.

~~~
lozenge
If a website uses 10 common assets, that's often an uncommon combination. And
if you have 100 websites on your "targets list" (let's say, fetish websites,
or LGBT communities) then you could get a positive match on some of them.

~~~
wang_li
The ten common assets have to be uniquely uncommon for this to be a risk.
Tinymodeltrains.com might have a distinct combination of ten assets, but if my
browser caches two of them from my visit to reddit, three from hackernews, two
more from imgur, and the last three from pornhub, your tracking data will be
meaningless.

~~~
nitrogen
Not entirely meaningless; it's kind of like a Bloom filter. False positives
exist, but false negatives are unlikely. Combined with other data in the style
of the Panopticlick, one can obtain a target set to which to apply closer
scrutiny.

------
hyperpape
I think a lot of people are unclear on the threat model here. If I have it
correct, there's no way around it: either you live with the privacy leak, or
you disable the shared cache.

The threat is that when you navigate to a creepy website, it loads some library
and tracks the timing. They use that to infer that you've accessed some
resource from a sensitive site.
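
Roughly, the probe looks something like this (a sketch, not code from the
article; the URL and the 20ms threshold are just illustrative):

    // Time an opaque cross-origin fetch; a near-instant response suggests the
    // resource was already in the cache.
    async function probablyCached(url) {
      const start = performance.now();
      await fetch(url, { mode: 'no-cors' });
      return performance.now() - start < 20;
    }

    probablyCached('https://www.forum.example/moderators/header.css')
      .then(hit => console.log(hit ? 'likely saw the moderator pages' : 'probably not'));

With a partitioned cache, that fetch only checks the attacker's own partition,
so it misses no matter what the victim has cached on other sites.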

None of the workarounds with extra attributes are going to help, because they
rely on the web developer to

1. know about the attack

2. know that some library or asset is a realistic candidate for the attack,
and take appropriate action.

Neither one is that realistic. We developers are just too lazy to get stuff
like that right, even if we know about it. Cargo culting is the rule.

As for the effects, I suspect this will have a modest effect on the average
website. The sources I've encountered seem to cast doubt on the effectiveness
of the shared cache
([https://justinblank.com/notebooks/browsercacheeffectiveness....](https://justinblank.com/notebooks/browsercacheeffectiveness.html)).
I poked around the mod pagespeed docs and project, and couldn't find any
indication of how they'd measured impacts when they implemented the
canonicalization feature.

I wonder if you'll see a big impact on companies like Squarespace and Wix,
where there are a lot of custom domains that are all built using the same
stack.

------
willvarfar
Off the top of my head I can think of several ways to compromise on this by
making shared caching opt-in.

One way is for the requester to specify if the asset is shared. A new 'shared'
attribute on html tags and XMLHttpRequest would do this. Browsers enforce
cache isolation _unless_ the shared attribute is set, in which case it comes
from a 'shared' cache.
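
Something like this (the 'shared' attribute is hypothetical; no browser
implements it):

    <!-- opted in: allowed to hit and populate the cross-site shared cache -->
    <script src="https://cdn.example/jquery-3.4.1.min.js" shared></script>

    <!-- no attribute: always served from the per-site isolated cache -->
    <link rel="stylesheet" href="/moderators/header.css">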

So if the attacker requests a www.forum.example/moderators/header.css from the
_shared_ cache, but the forum software itself didn't specify it was shared so
it never got loaded into the shared cache, then nothing is leaked.

And as it would only make sense to opt to share stuff like jquery.js from a
CDN, the forum wouldn't naturally share that css file and so on.

The other approach is for the response to specify sharing, e.g. new cache
control headers. Only the big CDNs would bother to return these new headers,
and most programmers wouldn't have to change anything to regain the speedup
they just lost from going to isolated caches once the CDNs catch up and return
the header.

In either case, sharing can _still_ be an information channel if the shared
resource is sufficiently rare, e.g. the forum admin page is pretty much the
only software stuck on version x.y of lib z. The attacker can see if it's in
the cache, and infer whether the victim is a logged-in admin or not. Etc.

~~~
wpietri
I think the trouble with both of these plans is that it shifts cognitive load
to a lot of people who aren't expert in the topic. How many people would put
"shared" on something because it sounds good, or is the default in a template?
And even if they don't, how many brain-hours do we have to burn on people
understanding the complexity of an optimization that probably doesn't make
much difference to the average website?

~~~
halfdan
Isn't the bigger problem that the developer then chooses for the user whether
to leak information or not?

~~~
willvarfar
If the enemy is the developer then you've already lost. It's not like cache
sharing is how a developer chooses to unmask your anonymity when browsing
between sites; they have cookies to do that in much better ways.

A long time ago PHK wrote some very salient comments about HTTP 2.0 efforts
[https://varnish-cache.org/docs/trunk/phk/http20.html](https://varnish-
cache.org/docs/trunk/phk/http20.html)
[https://queue.acm.org/detail.cfm?id=2716278](https://queue.acm.org/detail.cfm?id=2716278)
etc. He puts forward the case for a browser-picked client-session-id instead
of a server-supplied cookie.

~~~
Majromax
> If the enemy is the developer then you've already lost.

It's not that the developer is the enemy.

Pretend I create a website called "Democratic Underground: how to foster
democracy under a repressive regime." I'm naive, or I want it to load quickly,
or I accidentally include a framework that is either of those two things -- one
way or another, some library versions end up cached.

Now, the EvilGov includes cache-detection scripting on its "pay your taxes
here" webpage. Despite my salutary goals, shared caching leaks some subset of
my readers to the government.

------
breck
From Google Chrome design doc:
[https://docs.google.com/document/d/1U5zqfaJCFj_URrAmSxJ0C7z0...](https://docs.google.com/document/d/1U5zqfaJCFj_URrAmSxJ0C7z0AilLLJ30lgAqShVWnck/edit)

> "early experimental results in canary/dev channels show the cache hit rate
> drops by about 4% but changes to first contentful paint aren’t statistically
> significant and the overall fraction of bytes loaded from the cache only
> drops from 39.1% to 37.8%."

What about exceptions for loading common JS libraries from a shared CDN? I'm
looking at the Google Chrome design doc and don't see how one gets around
this. Maybe I'm just missing something, but if not, it seems like they need to
dig more into performance from the perspective of the slower end of the
distribution; it could make a big difference there.

~~~
londons_explore
I too find their performance numbers hard to believe... More digging required
I think!

~~~
breck
After reading the other comments I think I was probably wrong. There are so
many more choices of libraries and versions nowadays that the chance of a
cache hit on CDNs has decreased.

------
mintplant
Good. I've been advocating for this since publishing a history-leaking attack
on Chrome's shared bytecode cache, which also doesn't rely on the network
(CVE-2019-13684 - see page 8 of [0]). Would also like to see this applied to
visited link state eventually. Shared state between origins inevitably leads
to information leaks.

[0]
[https://www.spinda.net/papers/smith-2018-revisited.pdf](https://www.spinda.net/papers/smith-2018-revisited.pdf)

~~~
Andrex
I've wondered about visited link states for a while, and I could easily see
them getting focused on soon as well.

------
mikl
Is it just me, or was shared caching not on its way out already? I mean, it
was great when every website had jQuery on it, but with the proliferation of
new JavaScript libraries, the chance of getting a shared cache hit must be
getting smaller.

Besides, Webpack and similar bundlers with tree-shaking abilities make it
practical to load just a subset of a large library.

And last (but certainly not least) there is the security angle. Imagine if
someone managed to sneak malicious code onto CDNJS or Bootstrap CDN: how many
nasty things might they be able to get up to, even if everyone remembered to
set crossorigin="anonymous" on their shared assets?

~~~
bepvte
That is why SRI exists.
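
E.g. (the hash is a placeholder, not a real digest):

    <script src="https://cdn.example/jquery-3.4.1.min.js"
            integrity="sha384-<base64 hash of the expected file>"
            crossorigin="anonymous"></script>

The browser refuses to execute the response if it doesn't hash to the declared
value, so a compromised CDN can't silently swap in malicious code.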

~~~
SamHasler
It's not clear from the Chromium Design Document[1] whether resources loaded
via Subresource Integrity (SRI) will have a shared cache or not. It's not
explicitly mentioned, so it's probably best to assume it's not until someone
has tested it.

[edit] The SRI spec github project has an issue for shared cache [2] that
seems to be coming to the consensus that there will _not_ be a shared cache
for SRI:

> _" it seems rather unlikely that we can ever consider a shared cache"_

[1] [https://docs.google.com/document/d/1XJMm89oyd4lJ-
LDQuy0CudzB...](https://docs.google.com/document/d/1XJMm89oyd4lJ-
LDQuy0CudzBn1TaK0pE-acnfJ-A4vk/edit#)

[2] [https://github.com/w3c/webappsec-subresource-
integrity/issue...](https://github.com/w3c/webappsec-subresource-
integrity/issues/22)

------
SigmundA
Why doesn't the browser just record the original download time for the resource
and simulate the same download speed when a different domain requests it for
the first time? Maybe even randomize the delay a bit.

Of course you get a false delay on the first load, but it still saves network
bandwidth while preventing information leakage.

~~~
ken
That's clever, but it still sounds like a way that information could be
leaked. Download the target resource, and then download it again concurrently
with a known unique resource, and see if the timing changes, for example.

It's an arms race where the browser would ultimately have to simulate every
consequence of actually downloading every resource over the slowest link in
the network. You're making the problem (and its solution) more complex but not
completely solving it.

~~~
SigmundA
Yes, I suppose adding a cache-breaker query string would trigger a true network
download that can be compared.

Although if the network hasn't changed much, the true and simulated timings
should be very similar, so how would you really know whether it's a real or a
simulated request?

------
Mathnerd314
> I have Firefox 70.0.1 and it doesn't seem to be enabled.

It's behind a flag, browser.cache.cache_isolation:
[https://hg.mozilla.org/mozilla-
central/file/tip/modules/libp...](https://hg.mozilla.org/mozilla-
central/file/tip/modules/libpref/init/StaticPrefList.yaml#l687)

Similarly Chrome has a bunch of feature flags, I'm not sure if they can be
enabled from the UI:
[https://cs.chromium.org/chromium/src/net/base/features.h?typ...](https://cs.chromium.org/chromium/src/net/base/features.h?type=cs&q=SplitCacheByNetworkIsolationKey&sq=package:chromium&g=0&l=34)

~~~
unilynx
chrome://flags/

(but I can't find this one yet)

~~~
zamadatix
Just a note: chrome://flags/ only has some of the flags; many must be passed
manually. [https://peter.sh/experiments/chromium-command-line-
switches/](https://peter.sh/experiments/chromium-command-line-switches/)

Still can't find this option as a flag though; it must be compile-time only.

------
james-skemp
I'm having difficulty determining how this impacts subdomains.

From what I can tell a.example.com, b.example.com, and example.com would all
have their own caches, correct?

We have multiple (sub)domains a|b|c.xxx.example.com that share a template, and
therefore resources (we're a .edu). If we're now looking at an initial load
hit for all of them, that may impact how we've been setting up landing pages
for campaigns.

I can't see us completely moving away from a CDN because of the other benefits
they provide.

~~~
maxyme
It depends on the implementation currently. Some implementations consider
subdomains isolated and others don't. See Chrome's implementation experiments:
[https://docs.google.com/document/d/1U5zqfaJCFj_URrAmSxJ0C7z0...](https://docs.google.com/document/d/1U5zqfaJCFj_URrAmSxJ0C7z0AilLLJ30lgAqShVWnck/edit?usp=drivesdk)

------
lovecg
I wonder how the dust will eventually settle once these happy, naive times of
using shared caches for great performance gains are in the past, anywhere from
CPUs (Meltdown, Spectre) to the web. Will we decide that the extra cost of
security is not worth it in all but a few critical applications? Or will we
accept it as the necessary tax?

~~~
vbezhenar
It should be safe by default, with an option to disable security in favour of
performance. Just like I can disable CPU patches now because I don't believe
in their severity.

------
ElectronShak
> Unfortunately, a shared cache enables a privacy leak. Summary of the simplest
> version: I want to know if you're a moderator on www.forum.example. I know
> that only pages under www.forum.example/moderators/private/ load
> www.forum.example/moderators/header.css. When you visit my page I load
> www.forum.example/moderators/header.css and see if it came from cache.

You would expect fewer requests to www.forum.example/moderators/private/ than
to, for example, www.forum.example/public. If you look at caching from the
server-load angle vis-à-vis security, it could be inexpensive not to cache
www.forum.example/moderators/header.css, so you would simply not allow
browsers to cache this resource.

If site A thinks that allowing the user's browser to cache a certain resource
puts them at a security risk, then this resource should be treated as not-
public.
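
For example, the forum could serve the tell-tale stylesheet with a standard
no-store directive (a sketch):

    HTTP/1.1 200 OK
    Content-Type: text/css
    Cache-Control: no-store

The probe then always sees a network-speed load, at the cost of moderators
re-downloading one small file on every page view.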

~~~
amaranth
That makes the experience worse for the moderator though, especially on mobile
networks. Caching isn't just about server resources.

~~~
viraptor
In this specific case, I think only marginally worse. If you're a moderator,
you use the resource every day, going through many entries. The slowdown on
the first request is both unlikely (it's already cached) and insignificant for
the task. This is not an "extra 100ms will cost you a customer" situation.

------
osrec
Perhaps a little 'allowcache' property on script tags/images/other resources
could be of use here to prevent leaking info?

Something like:

<script src='jquery.js' allowcache></script>

That way we can specifically say which items we're willing to share with other
sites and which ones we want an independent copy of.

~~~
ken
How would that help? You're incentivizing developers to mark everything
"allowcache" (to make their pages faster), and it's the users who will suffer
(via privacy attacks).

If your plan to fix this situation is "trust that developers are competent and
benevolent", we can achieve the same result by not doing anything.

~~~
osrec
By default, all existing code would not have the allowcache property, so would
load from an isolated cache. Those devs that explicitly care about speed for
certain resources, where leaks are not a concern (e.g. loading a lib from a
CDN) can set the allowcache property on those resources.

~~~
wpietri
I think there are two categories of developers who would use that. One is
smart, experienced people who have correctly evaluated both security and
performance concerns and decided to turn this on for a specific narrow case
where it's truly valuable to speed up first-time page loads.

The other is people who want things to go faster and flip a lot of switches
that sound fast without really understanding what they do, and then not
turning off the useless ones because they're not doing any real benchmarking
or real-world performance profiling. This group will get little or no benefit
but open up security holes.

Given the declining usefulness of shared caching (faster connections, cheaper
storage, explosion of libraries and versions), I expect the second group to be
one or two orders of magnitude larger than the first.

~~~
osrec
I agree with you, for now. But, I can imagine a future where library payloads
will increase significantly. In those cases, shared caching will be pretty
useful (I'm thinking along the lines of a ffmpeg WASM lib for web based video
editing apps - sounds crazy, I know, but I think we're heading in that
direction!). I could of course be totally wrong, and instead we just get
fancier browser APIs with little need for WASM... I guess we wait and see!

~~~
Andrex
If you're opening a video editing web app, I would expect a bit of loading
time the first time trying the app.

WASM modules also execute as they load (unlike JavaScript, which only executes
after being loaded), decreasing the value of relying on a cache in general.

> I can imagine a future where library payloads will increase significantly.

TBH I see the opposite; to use the focus of the article, jQuery was obviated
by browser improvements, the pace of which is not really slowing down.

------
buboard
CDNs only embolden developers to pile up more and more resources on a page.
Good thing to see it go away.

And maybe they made sense for things like JS in the 2000s but many super-cheap
hosting providers provide unmetered bandwidth nowadays. (and OF COURSE the
privacy/security things)

~~~
iforgotpassword
But I assume that unfortunately web devs will still keep using cdns for all
their libs since this is now industry standard and shows how pro you are.

~~~
wilsonrocks
Unless the libraries are bundled in, which is the case for most of the stuff I
use?

Exception - the analytics stuff I'm obliged to add.

~~~
iforgotpassword
You're right, I think I mostly see fonts from Google and trackers nowadays.

------
dna_polymerase
Great, now let's go and build websites that don't require 100 requests per
page.

~~~
unilynx
Well, this might remove one argument against splitting the JS bundles too far.

If you'll never benefit from, e.g., a shared jQuery library on a CDN anyway,
you might as well include a (reduced) version of it in your bundle.

~~~
james-skemp
That only makes sense as long as the shared library changes about as frequently
as your custom code.

If you're using the same framework-x.y.z library for months at a time, but
doing daily/weekly code changes and pushes, you're losing out on the
cacheability of the library.

But if your project is only being updated as frequently as the third party
libraries it uses, maybe it makes sense.

------
tofflos
Browser vendors are in a good position to make this call because they can use
telemetry to measure the effectiveness of shared caching. Personally, I doubt
shared caching is as effective as it used to be, and surely whatever
effectiveness remains would be decimated by any isolated-by-default policy that
required website authors to opt in to sharing. So, all in all, disabling the
shared cache strikes me as a reasonable option.

Browser vendors could choose to bundle some popular fonts and libraries, but
that comes with its own set of problems.

~~~
burpsnard
If you're pulling in things from a dozen or two different domains, it gets
expensive (in milliseconds) once you add up the client DNS lookups and TLS
negotiations.

------
Pxtl
In general, the ability for HTML to reference resources outside of the current
domain seems to be a privacy and security nightmare: XSS attacks, privacy
leaks, adtech tracking cookies, etc.

------
_bxg1
It's weird to me that sites get to know whether files loaded from the cache in
the first place. I guess you could time it, but that wouldn't be perfect.

~~~
buckminster
It's not intended. The fine article prominently links to a page that explains
how it works:

[https://sirdarckcat.blogspot.com/2019/03/http-cache-cross-
si...](https://sirdarckcat.blogspot.com/2019/03/http-cache-cross-site-
leaks.html)

------
tyingq
Guess this kills SXG? Or will they consider it "same site"?
[https://developers.google.com/web/updates/2018/11/signed-
exc...](https://developers.google.com/web/updates/2018/11/signed-exchanges)

~~~
progval
How would it kill it?

------
tolmasky
Does the browser at least dedupe these files internally? For example, it goes
through the motions of a real download and so forth but afterwards it just
stores things in a content-addressable fs. Or will I now have 50 identical
copies of React on my hard drive?

~~~
jayflux
I don’t think it’s that clever, the origin will now be part of the cache key,
so more likely 50 identical copies

------
moron4hire
The performance benefits of using the shared CDN copy of resources versus
hosting your own with HTTP Keep-Alive are vastly overstated. _In-theory_ , if
everyone were using the same version of the resources and everyone were using
the same CDN, you'd see a benefit (maybe). _In practice_ , there are too many
variables and you end up cache missing most of the time, anyway.

Besides, this was only ever a concern for bad devs loading tons of tracking
scripts and hacking together sites via copy-paste anyway. If you're really
concerned about performance, you should be building, tree-shaking, and
minifying all of your JS into a single file.

------
move-on-by
With HTTP/2 multiplexing, and upcoming HTTP/3 updates, I'm skeptical of how
much performance gain is actually achieved with a shared cache anyway. As far
as I'm aware, a TCP connection is still opened when using the cache, as well as
TLS with all its overhead. This is all just speculation, but it would be an
interesting experiment to compare how much (if any) benefit is seen pulling in
that common jQuery script while still loading custom JavaScript from your own
host, versus bundling it all together, versus just loading them both separately
from your own host. HTTP connections are certainly not free.

~~~
icebraining
> As far as I’m aware, a TCP connection is still opened when using the cache,
> as well as TLS with all its overhead.

If the cache-control headers say it expires in the future, the browser will
not usually make any request; it will just load it from disk. Hence the typical
practice of setting an expiration date very far in the future, and just
changing the URL when the resource is updated (thereby forcing the browser to
request the new representation).
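
For example, for an asset with a content-hashed name like /assets/app.3f9c2d.js
(name made up):

    HTTP/1.1 200 OK
    Content-Type: application/javascript
    Cache-Control: public, max-age=31536000, immutable

Until that max-age expires, the browser serves the file straight from disk with
no connection at all; when the asset changes, the build emits a new hash and
thus a new URL, so the stale copy simply stops being referenced.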

------
asdfasgasdgasdg
I wonder if a compromise could be caching things only if they are widely used
across public sites. A browser vendor could use telemetry or crawling to
aggregate information about commonly used resources across the web. The
browser could cache these resources, even proactively. It's certainly more
complex than the shared cache, but it could achieve a broadly similar end. Then
again, maybe the vendors' telemetry is telling them that first site load is not
that common and that the shared cache doesn't move the needle that much. That
wouldn't be surprising to find out.

~~~
finchisko
Please no for telemetry of such kind. How would you distribute such a list?

------
jacobkg
“What does this mean for developers? The main thing is that there's no longer
any advantage to trying to use the same URLs as other sites. You won't get
performance benefits from using a canonical URL over hosting on your own site
(unless they're on a CDN and you're not) and you have no reason to use the
same version as everyone else (but staying current is still a good idea).”

~~~
GetOutOfBed
Isn't this just a direct quote of the 4th paragraph?

------
hannob
From a security and privacy perspective there are already good reasons to
self-host JS code and other external artifacts instead of sourcing them from
CDNs. In some situations it's faster even without this change (if it's not
already cached), because you can fetch it from the same host over the existing
connection via HTTP/2.

So self-host those JS files, and also use fewer of them if possible.

------
izacus
This might be a crazy idea but... why is it that browsers haven't implemented
something like Java Maven's package cache and proxy yet?

Basically the website says "I need com.google.angularjs:2.0.1" and the browser
grabs and caches the package for all future usages? It seems to work very well
for Java... why hasn't there been any such initiative for the web?

~~~
icebraining
That's essentially what the browser cache is... except you indicate the package
by URL, since there's no central repository of packages.

~~~
izacus
So why isn't there a central repository of packages?

------
microcolonel
If the idea is to track a browser, can't you just use DNS resolution time? Are
they looking at per-host DNS caches?

~~~
progval
Less effective, because browsers don't cache, recursive resolvers do, and they
are often shared; and it may be harder to tell the difference between a cache
hit and a cache miss in DNS (responses can be very fast).

But I guess it could work too

------
voidmain
Browsers could safely pull a list of very commonly requested, content-
addressable resources from various CDNs and pre-cache them (independently of
any request). That would even help with first-request latency, and for mobile
(where bandwidth is expensive) you could do the pre-caching on Wi-Fi.

------
tasty_freeze
> I know that only pages under www.forum.example/moderators/private/ load
> www.forum.example/moderators/header.css.

Correction: only moderators and anyone who visits the author's page which
loads header.css for everybody. And any other page which is doing the same
speculative probe.

------
axilmar
Anyone care to explain why, in the presented example, the resource
www.forum.example/moderators/header.css is accessible to anyone and not only
to clients with moderator access?

~~~
majewsky
On many web applications, static files like CSS/JS files and non-user-
generated images are not served by the application server, but directly from
the filesystem. This conserves CPU resources and might also improve network
throughput, because one less application is involved in the path.

------
bandrami
Do people still use nopkg and the like? I thought leftpad showed the problem
there.

------
nwah1
The general trend with regard to the most popular javascript and css libraries
is that their features have eventually made it into web standards.

We've known that cache can fingerprint forever. This change won't be that bad
if it encourages greater adoption of web standards.

------
stefan_
Great, now just ban third-party origin resources.

~~~
londons_explore
[http://mysite.com/proxy.php?url=http://othersite.com/adscrip...](http://mysite.com/proxy.php?url=http://othersite.com/adscript.js)

------
fnord123
Isn't this solved with Firefox containers?

~~~
progval
Yes, but only if users know about containers, and they do proper
compartmentalization. Which they don't.

------
LordHeini
As a side note:

CDNs, which are usually the use case for global caches, are also kind of
critical when it comes to the GDPR and other privacy laws.

Having no global cache may kill off the usefulness of CDNs (which is somewhat
doubtful given the amount of stuff available). But you are not allowed to use
them anyway unless the site is plastered with some allow-all-the-things popup.

~~~
nly
CDNs will always be useful as long as the speed of light exists and DDoS is a
threat.

~~~
LordHeini
I meant more in the context of caching for fonts and JS libraries, like the
article mentioned.

Afaik this is the main use case of CDNs.

I am pretty sure that there are way more pages with Google Fonts than with
Cloudflare protection.

And even for the sole purpose of DDoS prevention, the privacy issue still
holds. Sadly that means popups, redirects, or other user-unfriendly crap on
the pages.

------
dedalus
What if the cache key to use is sent as a response header?

~~~
Nullabillity
This already exists as the ETag: [https://developer.mozilla.org/en-
US/docs/Web/HTTP/Headers/ET...](https://developer.mozilla.org/en-
US/docs/Web/HTTP/Headers/ETag)
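
The flow, for reference (tag value made up):

    HTTP/1.1 200 OK
    ETag: "33a64df5"

    GET /style.css HTTP/1.1
    If-None-Match: "33a64df5"

    HTTP/1.1 304 Not Modified

The server picks the tag, the browser echoes it back on revalidation, and a
304 means the cached copy can be reused.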

------
vwuon
>When you visit my page I load www.forum.example/moderators/header.css and see
if it came from cache.

Why can your page know if a certain resource came from cache? Can't that hole
be plugged, instead?

~~~
Semaphor
Timing attacks. Not really pluggable.

~~~
mindslight
It's eminently pluggable if you stop running hostile general-purpose code on
our own machines, giving it a large poorly-defined attack surface! That's the
eventual answer here. Websites have a perfectly cromulent place to run
whatever code they'd like - _on their own servers_. If you knew someone was
trying to kill you, you wouldn't invite them into your home for a party so
they could easily tamper with your medicine cabinet.

------
throwaway_n
Can't we just show a "This site may harm your computer" message whenever a
site is recording too much timing data? The page is justifiably considered
malware at that point.

For example there's nothing preventing someone from timing all my keyboard
events for keystroke biometrics:
[https://en.wikipedia.org/wiki/Keystroke_dynamics](https://en.wikipedia.org/wiki/Keystroke_dynamics)

~~~
maxyme
Consider the requestAnimationFrame API. It gives you a 60 Hz timer (even
higher on high-refresh-rate displays) but is used for a ton of animation-
related tasks as well as games. That said, it can effectively be used as a
timer, which in this case would likely be precise enough.

What do you do in the case where a ton of websites use this API for legitimate
animations?

------
rrss
So, based on the response to Spectre and friends ("intel knowingly sacrificed
security in the pursuit of performance and everyone should sue them") [0][1],
is the correct response here "browser vendors knowingly sacrificed security in
the pursuit of performance and everyone should sue them?"

0:
[https://news.ycombinator.com/item?id=20867672](https://news.ycombinator.com/item?id=20867672)
1:
[https://news.ycombinator.com/item?id=20873452](https://news.ycombinator.com/item?id=20873452)

~~~
mlyle
It's not exactly the same as having bought a processor, and then having to
give up a significant fraction of performance to have it be secure, while
other processor manufacturers had much smaller performance penalties...

------
finchisko
Why not inline such a critical resource directly in the HTML? For CSS and JS
it would work just fine. And for fonts, I don't think they leak any private
information.
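
For example (contents invented):

    <style>
      /* contents of /moderators/header.css pasted straight into the moderator
         pages, so there's no separate cacheable URL for anyone to probe */
      .mod-toolbar { display: flex; }
    </style>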

