
I think there's a fundamental security/privacy principle that we don't really understand broadly enough across the industry -- that if you allow someone to know whether or not you're hiding/disabling something, they can often just force you to change the setting.

Just as one example, active permissions that can be revoked after being granted aren't perfect, but they're a big step up over manifests, because they're more work to exploit and they let users retroactively revoke a permission even after an app has checked that it was allowed.
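
A rough sketch of what that means for app code, using the standard Permissions API with geolocation as the example (the start/stop helpers are hypothetical):

    // The app has to handle the permission changing under it at any time,
    // instead of relying on a one-time manifest grant.
    function startUsingLocation() { /* hypothetical app logic */ }
    function stopUsingLocation() { /* hypothetical cleanup */ }

    async function watchLocationPermission() {
      const status = await navigator.permissions.query({ name: "geolocation" });

      if (status.state === "granted") {
        startUsingLocation();
      }

      // Fires if the user later revokes (or grants) the permission.
      status.onchange = () => {
        if (status.state !== "granted") {
          stopUsingLocation();
        }
      };
    }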

Not to pick on the Privacy Budget specifically, but I worry that proposals like this don't really get that larger principle yet -- that it's still something we haven't quite internalized in the privacy community. If a site exceeds the privacy budget, it shouldn't get told. It should just get misinformation.
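
As a hypothetical sketch of the distinction (this is not how the Privacy Budget proposal actually works; the threshold, costs, and fallback value are all made up):

    // Refusing is a signal; misinforming is not. From the page's point of
    // view, the second branch is indistinguishable from a boring machine.
    const BUDGET_BITS = 10;   // made-up budget
    let bitsSpent = 0;

    function reportDeviceMemory(realValue: number): number {
      if (bitsSpent >= BUDGET_BITS) {
        return 8;             // plausible, widely shared value -- no error, no flag
      }
      bitsSpent += 2;         // made-up identifying-information cost
      return realValue;
    }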

It's like autoplay permissions. Autoplay permissions on web audio are awful, because you can just keep trying things until you get around the restriction. What would be better is to auto-mute the tab, because that would be completely invisible to code running on the page.
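
For instance, a blocked autoplay attempt is directly observable to the page, while a silent tab mute isn't (sketch using standard media APIs; the URL is a placeholder):

    // Autoplay blocking is visible: play() returns a rejected promise,
    // so the page can just keep retrying on the next user gesture.
    const audio = new Audio("/track.mp3");
    audio.play().catch(() => {
      document.addEventListener("click", () => audio.play(), { once: true });
    });

    // If the browser instead silently muted the tab, play() would resolve
    // normally and nothing on the page would reveal that the user hears nothing.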






Agreed, for things like autoplay. But dual-use capabilities like feature detection, which enable fingerprinting as a side effect, can't be replaced by randomized misinformation, because that would randomly break legitimate sites.

The only privacy-conscious approach would be no feature detection at all, or something very coarse-grained like "I support HTML2021".


Sort of.

You can't lie that you do support something, but you can lie in the opposite direction. And for sites that legitimately need a feature to function, you don't get much benefit -- if a site genuinely needs Chrome's Native File Access, saying that you don't have it just means the site won't work.

But there's a grey area past that, which is that sites that don't need a feature, but are just using it to fingerprint, can have that feature broken without repercussion. If a news site requests Native File Access, and I say "I don't support that", then whatever.
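
Concretely, a site that actually needs the feature just lands in the same fallback path it already needs for browsers that never shipped it (sketch; showOpenFilePicker is the File System Access entry point):

    // Standard feature-detection pattern. A browser that lies and says
    // "I don't support that" behaves exactly like an older browser here.
    async function pickFile(): Promise<File | undefined> {
      if ("showOpenFilePicker" in window) {
        const [handle] = await (window as any).showOpenFilePicker();
        return handle.getFile();
      }
      // Fallback: a plain <input type="file">, or just do without.
      return undefined;
    }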

This puts engineers in an interesting position. You can't just break your site whenever a browser doesn't support the full range of features you'd like, because:

A) You want to support a wide range of browsers, and if your news site doesn't work in multiple browsers, you're just losing potential audience.

B) A fingerprinting script that just rejects every browser that doesn't support everything isn't an effective fingerprinter. At that point, we've basically arrived at the coarse-grained approach you're talking about.

The problem with this approach is that when a site requests capabilities, you need some way to figure out whether or not they're actually required, and whether or not you can lie about them. Permission prompts are... well, there are probably UXs that work, but most of them are also probably too annoying to use. In practice, I suspect that manually tagging sites is not an insurmountable task -- adblockers already kind of do this today.
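
To illustrate what per-site tagging could look like -- a hypothetical extension content script, not something any adblocker ships verbatim:

    // Hypothetical: for sites tagged as not needing the real value, report
    // a common one instead. Legitimate uses elsewhere keep working because
    // the lie is applied per-site, like a filter-list rule.
    const SPOOFED_SITES = ["news.example.com"];  // maintained like a filter list

    if (SPOOFED_SITES.includes(location.hostname)) {
      Object.defineProperty(Navigator.prototype, "deviceMemory", {
        get: () => 8,  // plausible, widely shared value
      });
    }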

One thing to push for with Client Hints is that it really, really needs to be an API that's interceptable and controllable by web extensions.
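
Something along these lines needs to remain possible (an MV2-style webRequest sketch; it assumes the blocking webRequest permission, which is exactly the kind of hook that's at risk):

    // Strip Client Hints request headers before they leave the browser.
    chrome.webRequest.onBeforeSendHeaders.addListener(
      (details) => ({
        requestHeaders: details.requestHeaders?.filter(
          (h) => !h.name.toLowerCase().startsWith("sec-ch-")
        ),
      }),
      { urls: ["<all_urls>"] },
      ["blocking", "requestHeaders"]
    );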

The same thing is true of fonts today -- if you lie and say you already have a font that you don't, congratulations, your text rendering is broken. But you can still lie in the other direction about fonts you do have, and you can still standardize your installed fonts to a smaller subset to make your browser less unique.
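
For context, font enumeration usually works by measuring rendered text against a fallback, which is why shrinking your installed set to a common baseline helps (sketch):

    // Classic trick: render the same string with and without the candidate
    // font in front of a generic fallback. A different width means the
    // candidate font is installed.
    function hasFont(candidate: string): boolean {
      const ctx = document.createElement("canvas").getContext("2d")!;
      const probe = "mmmmmmmmmmlli";
      ctx.font = "72px monospace";
      const baseline = ctx.measureText(probe).width;
      ctx.font = `72px "${candidate}", monospace`;
      return ctx.measureText(probe).width !== baseline;
    }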


And all of this incidental complexity wouldn't exist if we had a sane document-based web that didn't allow webmasters to run scripts in the browser.

I've written about this in the past, but we do really need at least one user-accessible, general computing environment that protects against these kinds of privacy attacks. It doesn't need to be the web, but I don't know of a better, currently-usable platform.

I often hear proposals that the web should just be for static documents, and I'm fine with that, but very rarely are those proposals followed up with alternative ways for ordinary people to run untrusted code. The assumption seems to be that if the web didn't exist, users would responsibly vet every binary on their computer, rather than downloading them en masse from dozens of sources. And just looking at the smartphone app market, I don't think that assumption is true.

Again, not to say that a better alternative platform couldn't exist, but who's working on it? The native desktop platforms I see almost all do a worse job than the web at protecting against fingerprinting. It's almost universally better for privacy to use Facebook in a browser instead of downloading their native phone app.


I don't think sandboxing is the solution. It gets in the way of functionality, and you still have to trust the developer: they can abuse the permissions their program legitimately needs, and they can use dark patterns. You should never run untrusted code, so there is no need for a platform that facilitates it.

What we need are standardized protocols, strict consumer-protection laws, and trustworthy software repositories, so users can get software they can trust without having to vet it.

> Again, not to say that a better alternative platform couldn't exist, but who's working on it?

~Nobody is working on such a platform because the app web exists. Remove it and there is a lot more incentive to create a replacement. Regression to the mean alone practically guarantees that it will be superior.


> You should never run untrusted code so there is no need for a platform that facilitates it.

I don't believe this is a practical philosophy, given the way ordinary people use both the web and apps today. If you can convince me that you have a plan to make everyone stop installing the Facebook app en masse, I might be persuaded to change my mind. But I regard the advice "don't run untrusted code" as a bit like saying "stick to abstinence to avoid pregnancy". The advice isn't technically wrong; we just have good evidence that it doesn't work for society in general.

I also think that "untrusted" is being used as a really broad catch-all here. Trust isn't binary. I trust a calculator app to give me the correct answer to a multiplication problem; I don't trust it to store my banking information.

What sandboxes do is let us set up boundaries for apps that we partially trust. Of course, sandboxes don't remove the need to teach consumers not to blindly trust everything. But they're not designed to solve that problem, just to make it easier to manage. There is no way to remove the need to educate consumers; trust is too broad a topic to divide every app into a single "trusted" or "untrusted" bucket. So what sandboxes do is embrace that grey area rather than ignore it.

In the physical world, if I'm securing a bank or an organization, there are going to be people I distrust so much that they can't even enter the premises. There are going to be people I trust enough to walk around while supervised. There are going to be people I trust enough to be unsupervised, but not enough to give them keys to my vault or server room. And finally there will be a minuscule number of people I trust to have full access to everything.

Imagine if instead, our policy was, "you shouldn't let untrustworthy people into your building in the first place, so security measures past that point are useless." Would we be able to build a database of 'trustworthy' people who could have access to the vaults of every bank they walked into?

When we get rid of sandboxes, we're still using a permissions system -- all we've done is made that permissions system less granular and harder to moderate, because we've removed our ability to say nuanced things like, "I trust Facebook to connect to the Internet, but not to read my contacts."



