One benefit of doing it on the client is that the client can cache the result of an include. So, for example, instead of having to download the content of a header and footer for every page, it is downloaded once and re-used for future pages.
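
As a rough sketch of what that could look like (the <html-include> element and the fragment path here are made up, not a standard API), a tiny custom element can fetch a fragment and let the browser's normal HTTP cache handle the re-use:

    class HtmlInclude extends HTMLElement {
      async connectedCallback() {
        const src = this.getAttribute("src");
        if (!src) return;
        // fetch() goes through the browser's HTTP cache, so a header or
        // footer fragment is downloaded once and re-used on later pages.
        const response = await fetch(src);
        this.innerHTML = await response.text();
      }
    }
    customElements.define("html-include", HtmlInclude);

    // Usage: <html-include src="/fragments/header.html"></html-include>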


It’s amazing how people vociferously argue against this. If it were implemented, we would be arguing over something else.


How big are your headers and footers, really? Is caching them worth the extra complexity on the client, plus all the pain of cache invalidation (and the two extra requests in the non-cached case)?


I’m willing to bet the runtime overhead of assembly on the client is going to be larger than the download cost of the fragments being included server- or edge-side and cached.


If you measure download cost in time, then sure. But if you measure it in terms of bytes downloaded, or in server costs, then no: caching on the client would cost less.


Not necessarily; compression is really effective at reducing downloaded bytes.

In server terms, the overhead of tracking one download is going to be less than the overhead of tracking the download of multiple components.

And for client-side caching to be of any use, a visitor would need to view more than one page, and the harsh reality is that many sessions are only one page long, e.g. on news sites, blogs, etc.


To be fair, it was pretty complicated. IIRC, using it required JavaScript to instantiate the template after importing it, rather than just having something like <include src="myinclude.html">.
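
For comparison, roughly the kind of boilerplate the old (now removed) HTML Imports needed, as a sketch with made-up IDs:

    // Pull the imported document off the <link rel="import"> element and
    // instantiate its <template> by hand.
    const link = document.querySelector('link[rel="import"]') as any;
    const importedDoc: Document = link.import;
    const template = importedDoc.querySelector("#my-include") as HTMLTemplateElement;
    document.body.appendChild(template.content.cloneNode(true));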


Then you should want regulations about how the pigs are raised, not a ban on the use of CRISPR.


> Then you should want regulations about how the pigs are raised

We have those. EU animals have "five freedoms".


As an EATER of food, what is the benefit of CRISPR/GMO?

The answer, after a good 40 minutes of searching, is... nothing.

It's a technology 100% in service of lazier/sloppier industrial-scale food production, and of IP-restricting the food supply in favor of shareholder X or Y.

"but we can make tasteless US tomatoes on even more inappropriate cropland!"

...

Great for my stock portfolio to screw over developing countries, but useless for me as a first-world eater of food.

No proof of existence of a benefit.


Uh. Healthier animals.

This specific approval is for a gene therapy to prevent PRRSV infection - a major porcine virus and one that regularly infects pigs in the EU.

It has nothing to do with mistreatment of animals or factory farming.


poor husbandry is the primary objection to US food products

the chicken has to be chlorinated because it has literally been produced covered in faeces

this would seem to enable it to become even worse


So don't import US food products if it scares you. That's a separate issue from whether to allow CRISPRed livestock.

Again, this disease regularly affects pigs in Europe and causes immense animal suffering.


> So don't import US food products if it scares you.

this is exactly the position of the EU and UK governments

and is one of the few policies that is universally supported by their populations


The EU and UK both import food from the US.

Some US food products are banned over safety concerns, but the EU and UK are hardly unique there - the US also bans some food products from the EU and UK that are considered unsafe in the US.

None of that has to do with whether or not countries should allow CRISPRed livestock to be raised domestically.


no GM crops, no milk with growth hormone (nearly all of it), no beef with growth hormone (nearly all of it), no chlorinated chicken (nearly all of it), no washed eggs (nearly all of them)

and now pork will end up on that list too

> None of that has to do with whether or not countries should allow CRISPRed livestock to be raised domestically.

I couldn't care less if US'ians want to eat shit (here, literally)


It also disadvantages any apps that compete with Google's own apps.


There are also trust issues the other way. I've seen a lot of contention among developers, security teams, and marketing about putting third-party code on, or proxying third-party domains through, the first-party site for analytics, tracking, ad attribution, etc.


It seems like this requires you to have very high availability for the refresh endpoint. If that endpoint is unavailable, the user can end up effectively logged out, which could lead to a confusing and frustrating experience.


It doesn't require a TPM, though. It just says it CAN use one if one is available. If it is changed to require a TPM, then that will be a problem.


One way you could potentially combat that is to make it so a single short-lived token isn't enough to accomplish more dangerous tasks like those.

Many sites already have some protection against that, for example by requiring you to enter your password and/or 2FA code to disable 2FA, change privacy settings, update an email address, etc.
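
A minimal sketch of that kind of check (framework-agnostic, all names hypothetical): record when the user last re-authenticated, and only allow sensitive actions within a short window after that.

    const REAUTH_WINDOW_MS = 5 * 60 * 1000; // e.g. 5 minutes

    interface Session {
      userId: string;
      lastReauthAt?: number; // set when the user re-enters password/2FA
    }

    // A stolen short-lived token alone isn't enough; the caller also has
    // to have re-authenticated recently.
    function canPerformSensitiveAction(session: Session): boolean {
      return (
        session.lastReauthAt !== undefined &&
        Date.now() - session.lastReauthAt < REAUTH_WINDOW_MS
      );
    }

    // e.g. before disabling 2FA or changing the account email:
    // if (!canPerformSensitiveAction(session)) redirectToReauth();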


Right. The idea is that the short-lived cookies would have a very short expiration, so even if you get access to one, it isn't very useful.

> The proof of possession should happen at the start of each connection. With HTTP3 you shouldn't need a lot of connections.

That could possibly be workable in some situations, but it would add a lot of complexity to application-layer load balancers or reverse proxies, since they would somehow need to communicate that proof of possession to the backend for every request. And it makes HTTP/3 or HTTP/2 a requirement.
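
To illustrate that plumbing (the header name and setup are hypothetical, not part of any spec): the load balancer would verify the proof of possession on the connection, strip any client-supplied copy of the header, and forward the result to the backend on every request.

    import * as http from "node:http";

    // Set only by the load balancer after it has verified the
    // connection-level proof of possession; any client-supplied copy
    // must be stripped before the request is forwarded.
    const PROOF_HEADER = "x-session-key-verified";

    http.createServer((req, res) => {
      if (!req.headers[PROOF_HEADER]) {
        res.writeHead(401);
        res.end("proof of possession not verified by the front end");
        return;
      }
      res.end("ok");
    }).listen(8080);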


I think imitating TLS (and who knows how many other protocols) by coupling the asymmetric key with a symmetric one instead of a bearer token is the obvious upgrade security-wise. That way you could prove possession of the PSK with every request, keep it short-lived, and (unlike bearer tokens) keep it hidden from callers of the API.

That said, the DBSC scheme has the rather large advantage that it can be bolted on to the current bearer token scheme with minimal changes and should largely mitigate the current issues.
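
As a sketch of the per-request idea (this is not the DBSC protocol; the key handling and names are illustrative only): hold a short-lived symmetric session key that is never sent on the wire, and authenticate each request with an HMAC instead of presenting a bearer token.

    import { createHmac, randomBytes } from "node:crypto";

    // Rotated frequently, and never exposed to API callers or sent over
    // the wire.
    const sessionKey = randomBytes(32);

    function signRequest(method: string, path: string, body: string): string {
      return createHmac("sha256", sessionKey)
        .update(`${method}\n${path}\n${body}`)
        .digest("hex");
    }

    // The server, holding the same key, recomputes the HMAC and compares
    // it to the value sent with the request.
    const proof = signRequest("POST", "/api/transfer", '{"amount":100}');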


The cookie jar isn't the only place the cookie could be leaked from. For example, it could be leaked from:

* Someone inspecting the page with developer tools

* Logs that accidentally (or intentionally) contain the cookie

* A corporate (or government) firewall that intercepts plaintext traffic

* Someone with temporary physical access to the machine who can use the TPM or secure enclave to decrypt the cookie jar.

* A mistake in the cookie configuration and/or DNS that leads to the cookie getting sent to the wrong server.

This would protect against those scenarios.


That last one should largely be solved by

1) TLS

2) making your cookie __Secure- or __Host- prefixed, which then requires the Secure attribute.

If DNS is wrong, it should then point to a server without the proper TLS cert, and your cookie wouldn't get sent.
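
For reference, this is what such a cookie looks like (the value is a placeholder). Browsers reject a __Host- cookie unless it has Secure and Path=/ and no Domain attribute, so it can only be set by, and sent to, that exact host over HTTPS.

    const sessionCookie =
      "__Host-session=abc123; Secure; HttpOnly; Path=/; SameSite=Lax";
    // e.g. res.setHeader("Set-Cookie", sessionCookie);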


If DNS is wrong, that server can get a domain-validated certificate.

What I am imagining here is that you set a cookie with Domain set, and not __Host-, possibly because you need the cookie to be accessible on multiple domains, and then someone sets up a CNAME that points to a third-party hosting service without thinking about the fact that doing so would leak the cookie.

Sure


Oops, your developer accidentally enabled logging for headers. Now everyone with access to your logs can take over your customers' accounts.


You could have similar secure handling of cookies on your server.

For example, the server could verify the cookie and replace it with some marker like 'verified cookie of user ID=123', so the application software as a whole doesn't have access to the actual cookie contents.

This replacement could happen at any level: maybe in the web server, maybe in a trusted frontend load balancer (which holds the TLS keys), etc.
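
A sketch of that idea (all names hypothetical): the trusted front end verifies the session cookie, deletes it from the request, and passes the application only an internal marker, so application code and its logs never see the raw cookie value.

    interface IncomingRequest {
      headers: Record<string, string>;
    }

    function replaceCookieWithMarker(
      req: IncomingRequest,
      verify: (cookie: string) => string | null, // returns a user id, or null
    ): void {
      const cookie = req.headers["cookie"] ?? "";
      delete req.headers["cookie"]; // the application never sees the raw value
      const userId = verify(cookie);
      if (userId !== null) {
        req.headers["x-verified-user-id"] = userId; // e.g. "123"
      }
    }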

