One benefit of doing it on the client is that the client can cache the result of an include. For example, instead of having to download the content of a header and footer for every page, it is downloaded once and re-used for future pages.
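For what it's worth, here is a minimal sketch of that kind of client-side include, written in TypeScript as a hypothetical <html-include> custom element (the tag name and class are made up, not a standard API). The fragment is fetched like any other resource, so ordinary HTTP caching on the fragment (Cache-Control headers) is what gives the download-once, reuse-everywhere behaviour:

    // Hypothetical usage: <html-include src="/fragments/header.html"></html-include>
    class HtmlInclude extends HTMLElement {
      async connectedCallback() {
        const src = this.getAttribute("src");
        if (!src) return;
        // The browser's normal HTTP cache handles reuse across pages,
        // provided the fragment is served with suitable Cache-Control headers.
        const res = await fetch(src);
        this.innerHTML = await res.text();
      }
    }
    customElements.define("html-include", HtmlInclude);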
How big are your headers and footers, really? Is caching them worth the extra complexity on the client, plus all the pain of cache invalidation (and the two extra requests in the non-cached case)?
I’m willing to bet the runtime overhead of assembly on the client is going to be larger than the download cost of the fragments being included server- or edge-side and cached.
If you measure download cost in time, then sure. If you measure it in terms of bytes downloaded, or server costs, then no: caching would be cheaper.
Not necessarily; compression is really effective at reducing downloaded bytes.
In server terms, the overhead of tracking one download is going to be less than the overhead of tracking the download of multiple components.
And for client-side caching to be of any use, a visitor would need to view more than one page, and the harsh reality is that many sessions are only one page long, e.g. on news sites, blogs, etc.
To be fair, it was pretty complicated. IIRC, it required using JavaScript to instantiate the template after importing it, rather than just having something like <include src="myinclude.html">.
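That matches my memory of the old HTML Imports API (since removed from browsers): the import only loaded the document, and you still needed a script to dig the template out and stamp it into the page. Roughly, assuming the page contains <link rel="import" href="myinclude.html"> and the fragment wraps its markup in a <template>:

    // Grab the imported document off the <link> element (non-standard today,
    // hence the cast) and clone its template into the main page.
    const link = document.querySelector('link[rel="import"]') as any;
    const importedDoc: Document = link.import;        // the parsed myinclude.html
    const template = importedDoc.querySelector("template");
    if (template) {
      document.body.appendChild(document.importNode(template.content, true));
    }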
As an EATER of food, what is the benefit of CRISPR/GMO?
The answer, after a good 40 minutes of searching, is... nothing.
It's a technology 100% in service of lazier/sloppier industrial-scale food production, and of IP-restricting the food supply in favor of shareholder X or Y.
"but we can make tasteless US tomatoes on even more inappropriate cropland!"
...
Great for my stock portfolio to screw over developing countries, but useless for me as a first-world eater of food.
Some US food products are banned for concerns about safety, but they're hardly unique - the US also bans some food products from the EU and UK that are considered unsafe in the US.
None of that has to do with whether or not countries should allow CRISPRed livestock to be raised domestically.
no GM crops, no milk with growth hormone (nearly all of it), no beef with growth hormone (nearly all of it), no chlorinated chicken (nearly all of it), no washed eggs (nearly all of them)
and now pork will end up on that list too
> None of that has to do with whether or not countries should allow CRISPRed livestock to be raised domestically.
I couldn't care less if US'ians want to eat shit (here, literally)
There are also trust issues the other way. I've seen a lot of contention between developers, security teams, and marketing about putting third-party code on the first-party site, or proxying third-party domains through it, for analytics, tracking, ad attribution, etc.
It seems like this requires very high availability for the refresh endpoint. If that endpoint is unavailable, the user can end up effectively logged out, which could lead to a confusing and frustrating experience.
It doesn't require a TPM, though. It just says it CAN use one, if one is available. If it is ever changed to require a TPM, then that will be a problem.
One way you could potentially combat that is to make it so that a single short-lived token isn't enough to accomplish more dangerous tasks like that.
Many sites already have some protection against that, for example by requiring you to enter your password and/or 2FA code to disable 2FA, change privacy settings, update an email address, etc.
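As a rough illustration of that kind of step-up check (the names and the five-minute window are made up, not taken from any particular site): the short-lived token gets you through ordinary requests, but sensitive actions also demand a recent password/2FA confirmation.

    interface Session {
      userId: string;
      lastStrongAuthAt: number; // ms timestamp of the last password/2FA check
    }

    // Arbitrary example window; real sites tune this per action.
    const STRONG_AUTH_MAX_AGE_MS = 5 * 60 * 1000;

    function requireRecentStrongAuth(session: Session): void {
      if (Date.now() - session.lastStrongAuthAt > STRONG_AUTH_MAX_AGE_MS) {
        // The caller would redirect to a password/2FA prompt instead of proceeding.
        throw new Error("step-up authentication required");
      }
    }

    // Only sensitive endpoints call the check; a stolen short-lived token
    // alone is not enough to reach them.
    function disableTwoFactor(session: Session): void {
      requireRecentStrongAuth(session);
      // ... actually disable 2FA here ...
    }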
Right. The idea is that the short-lived cookies would have a very short expiration, so even if you get access to one, it isn't very useful.
> The proof of possession should happen at the start of each connection. With HTTP3 you shouldn't need a lot of connections.
That could possibly be workable in some situations, but it would add a lot of complexity to application-layer load balancers or reverse proxies, since they would somehow need to communicate that proof of possession to the backend for every request. And it makes HTTP/3 or HTTP/2 a requirement.
I think imitating TLS (and who knows how many other protocols) by coupling the asymmetric key with a symmetric one instead of a bearer token is the obvious upgrade security-wise. That way you could prove possession of the PSK with every request, keep it short-lived, and (unlike bearer tokens) keep it hidden from callers of the API (rough sketch below).
That said, the DBSC scheme has the rather large advantage that it can be bolted on to the current bearer token scheme with minimal changes and should largely mitigate the current issues.
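To sketch the PSK idea in TypeScript with Node's crypto: assume a short-lived symmetric key was already agreed at sign-in (e.g. by proving possession of the device's asymmetric key); each request then carries a MAC and a timestamp instead of the secret itself, so callers of the API never see the key. The function names and the 60-second replay window are just placeholders.

    import { createHmac, timingSafeEqual } from "node:crypto";

    // Client side: sign the request with the PSK; only mac + ts go on the wire.
    function signRequest(psk: Buffer, method: string, path: string, body: string, ts: number): string {
      return createHmac("sha256", psk)
        .update(`${method}\n${path}\n${ts}\n${body}`)
        .digest("hex");
    }

    // Server side: recompute and compare, rejecting stale timestamps to limit replay.
    function verifyRequest(psk: Buffer, method: string, path: string, body: string, ts: number, mac: string): boolean {
      if (Math.abs(Date.now() - ts) > 60_000) return false;
      const expected = signRequest(psk, method, path, body, ts);
      if (mac.length !== expected.length) return false;
      return timingSafeEqual(Buffer.from(expected), Buffer.from(mac));
    }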
If DNS is wrong, that server can get a domain-validated certificate.
What I am imagining here is that you set a cookie with Domain set, rather than using __Host, possibly because you need the cookie to be accessible on multiple subdomains, and then someone sets up a CNAME that points to a third-party hosting service without thinking about the fact that this would leak the cookie.
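To make the difference concrete (example.com and the values are placeholders), this is roughly what the two Set-Cookie variants look like. Browsers only accept the __Host- prefix when Secure and Path=/ are present and Domain is absent, so that cookie stays pinned to the exact origin that set it, while the Domain= variant is sent to every subdomain, including a CNAME that quietly points at a third-party host.

    import { createServer } from "node:http";

    createServer((req, res) => {
      res.setHeader("Set-Cookie", [
        // Leak-prone: visible on *.example.com, CNAMEd third parties included.
        "session=abc123; Domain=example.com; Secure; HttpOnly; SameSite=Lax",
        // Pinned to this origin: no Domain attribute is allowed with __Host-.
        "__Host-session=abc123; Path=/; Secure; HttpOnly; SameSite=Lax",
      ]);
      res.end("ok");
    }).listen(8080);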
You could have similarly secure handling of cookies on your server.
For example, the server could verify the cookie and replace it with a marker like 'verified cookie of user ID=123', so that the rest of the application never has access to the actual cookie contents.
This replacement could happen at any level: maybe in the web server, maybe in a trusted frontend load balancer (which holds the TLS keys), etc.
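A tiny sketch of that verify-and-strip idea as a Node front end (the x-verified-user-id header name, the ports, and lookupSession are all made up for illustration; a real setup would terminate TLS here and authenticate the hop to the backend):

    import { createServer, request } from "node:http";

    // Placeholder: verify a signature or hit the session store, return a user id.
    function lookupSession(cookieHeader: string | undefined): string | null {
      return cookieHeader?.includes("session=") ? "123" : null;
    }

    createServer((clientReq, clientRes) => {
      const userId = lookupSession(clientReq.headers.cookie);
      const headers = { ...clientReq.headers };
      delete headers.cookie;                      // the app never sees the raw cookie
      if (userId) headers["x-verified-user-id"] = userId;

      const upstream = request(
        { host: "127.0.0.1", port: 3000, path: clientReq.url, method: clientReq.method, headers },
        (upstreamRes) => {
          clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
          upstreamRes.pipe(clientRes);
        }
      );
      clientReq.pipe(upstream);
    }).listen(8080);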