
Google recently introduced a new “secure” login page which forces my Chromium-based browser (qutebrowser) to fake a Mozilla Firefox user agent in order to work.

As such, I think if detection comes in some other form, it might be harder to trick some sites into working properly.

Despite all the PR about "privacy", they don't seem to intend to prevent detection at all and are introducing even more surface for detection with client hints. Basically the opposite of what the title implies.

Client Hints (https://wicg.github.io/ua-client-hints/) move a passive fingerprinting vector to an active one, i.e. information must be explicitly requested by the site and then the browser can choose how to respond.

The default level of information exposed drops to just the browser name and major version, which is only sent to sites on HTTPS and with JavaScript enabled.

Additional hints are only sent on subsequent requests by the browser if the site sends the matching header in its initial response and the browser chooses to send a value. The current set of proposed hints defines the same amount of information as is currently exposed in Chrome's User-Agent string.
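
(To make that flow concrete, here's a rough sketch of the exchange in TypeScript using Node's built-in http module -- not normative, just illustrative; the hint names follow the current draft:)

  // Rough sketch of the Accept-CH handshake: the server opts in to extra
  // hints, the browser decides on later requests whether to send them.
  import { createServer } from "http";

  const server = createServer((req, res) => {
    // Low-entropy hints (brand + major version) arrive by default over HTTPS.
    const brands = req.headers["sec-ch-ua"];
    // Higher-entropy hints only show up after the site asked for them in an
    // earlier response, and only if the browser chose to comply.
    const platform = req.headers["sec-ch-ua-platform"];

    res.setHeader("Accept-CH", "Sec-CH-UA-Platform, Sec-CH-UA-Full-Version-List");
    res.setHeader("Content-Type", "text/plain");
    res.end(`brands: ${brands ?? "n/a"}, platform: ${platform ?? "not sent yet"}`);
  });

  server.listen(8080);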


Yes, but the fear remains that all sites will just always request everything from ua-client-hints. It is also totally unclear how browsers will handle this. I think making this permissioned will just add to the PrivacyNagOverload. Also, browsers will continue to lie in the hints because sites will always make broken assumptions or even try to do mischief with the info.

I think the only winning move is not to play: Freeze the User-Agent and do not provide a replacement. Or at the very least, make the replacement based on actual feature bits, not version numbers and device models.
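
(For what it's worth, "actual feature bits" could look like plain capability checks in page code -- a sketch, with the storage example made up:)

  // Decide behaviour from capabilities the page can probe directly,
  // not from a version number parsed out of the User-Agent string.
  function pickStorageStrategy(): "service-worker-cache" | "plain-fetch" {
    if ("serviceWorker" in navigator && "caches" in window) {
      return "service-worker-cache";
    }
    return "plain-fetch";
  }

  // The anti-pattern this would replace: version sniffing that breaks as
  // soon as the UA string is frozen or faked.
  // const major = Number(navigator.userAgent.match(/Chrome\/(\d+)/)?.[1]);
  // if (major >= 76) { ... }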


This is part of what the Privacy Budget (https://github.com/bslassey/privacy-budget) proposal aims to tackle. Freezing the User-Agent string reduces the amount of information exposed by default. UA Client Hints mean the site has to explicitly request the additional information. The browser then makes a choice about how to allocate and enforce the budget. You're right, though, that how that works and how it would be exposed to the user in their browser are still open questions! More permission pop-ups certainly aren't the answer.

That would still cause pages to do evil things if users set their privacy budget to "0/paranoid" or anything below "11/just gimme all".

Just as with adblockers users will be nagged about "please turn that dial to 11". On average nothing will improve except for users who are able enough to get around those shenanigans even now.


I think there's a fundamental principle in privacy/security that we don't really understand broadly enough across the industry -- that if you allow someone to know whether or not you're hiding/disabling something, they can often just force you to change the setting.

Just as one example, active permissions that can be revoked after being granted aren't perfect, but they're a big step up over manifests, because they're more work to exploit and often let users change permissions retroactively, after an app has already checked whether they're allowed.

Not to pick on the Privacy Budget specifically, but I worry that proposals like this don't really get that larger principle yet -- that it's still something we haven't quite internalized in the privacy community. If a site exceeds the privacy budget, it shouldn't get told. It should just get misinformation.

It's like autoplay permissions. Autoplay permissions on web audio are awful, because you can just keep trying things until you get around the restriction. What would be better is to auto-mute the tab, because that would be completely invisible to code running on the page.
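
(Concretely -- a sketch using the standard Web Audio API; the retry-on-gesture part is exactly the kind of workaround pages ship today:)

  // The autoplay block is visible to the page, so it can keep retrying:
  const ctx = new AudioContext();
  if (ctx.state === "suspended") {
    document.addEventListener("click", () => {
      ctx.resume().catch(() => { /* try again on the next gesture */ });
    });
  }
  // A browser-side tab mute, by contrast, changes nothing the page can
  // read: ctx.state still says "running", playback still "succeeds",
  // the user just hears silence.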


Agreed, for things like autoplay. But dual-use features like feature detection, which also enables fingerprinting, can't be replaced by randomized misinformation, because that would randomly break legitimate stuff.

The only privacy-conscious way would be no feature detection at all, or a very coarse-grained approach like "I support HTML2021".


Sort of.

You can't lie that you do support something, but you can lie in the opposite direction. And for sites that legitimately need that feature to function, you don't get much benefit -- if a site genuinely needs Chrome's Native File access, saying that you don't have it just means the site won't work.

But there's a grey area past that, which is that sites that don't need a feature, but are just using it to fingerprint, can have that feature broken without repercussion. If a news site requests Native File Access, and I say "I don't support that", then whatever.

This puts engineers in an interesting position. You can't just break your site whenever the full range of features that every browser supports aren't available, because:

A) You want to support a wide range of browsers, and if your news site doesn't work with multiple browsers you're just losing potential market.

B) A fingerprinting metric that just rejects every browser that doesn't support everything is not an effective fingerprinter. At that point, we basically have the coarse-grained approach you're talking about.

The problem with this approach is that when a site requests capabilities, you need some way to figure out whether or not they're actually required, and whether or not you can lie about them. Permission prompts are... well, there are probably UXs that work, but most of them are also probably too annoying to use. In practice, I suspect that manually tagging sites is not an insurmountable task -- adblockers already kind of do this today.

One thing to push for with Client Hints is that it really, really needs to be an API that's interceptable and controllable by web extensions.
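
(Something like the existing webRequest API would do -- a Manifest V2-style sketch, assuming the usual webRequest/webRequestBlocking permissions:)

  declare const chrome: any; // WebExtension global; typed properly via @types/chrome

  // Strip high-entropy client-hint request headers before they leave the browser.
  chrome.webRequest.onBeforeSendHeaders.addListener(
    (details: { requestHeaders?: { name: string; value?: string }[] }) => ({
      requestHeaders: (details.requestHeaders ?? []).filter(
        (h) => !/^sec-ch-ua-(platform|model|arch|full-version)/i.test(h.name)
      ),
    }),
    { urls: ["<all_urls>"] },
    ["blocking", "requestHeaders"]
  );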

The same thing is true of fonts today -- if you lie and say you already have a font that you don't, congratulations, your text rendering is broken. But you can still lie about not having fonts, and you can still standardize your installed fonts to a smaller subset to make your browser less unique.
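
(The measurement trick behind that, roughly -- a sketch of how a page probes for an installed font, which is why lying "yes, I have it" shows up as broken rendering while lying "no" mostly doesn't:)

  // Classic width probe: if text set in `"SomeFont", monospace` measures
  // differently from plain monospace, the font is (probably) installed.
  function seemsInstalled(fontName: string): boolean {
    const g = document.createElement("canvas").getContext("2d")!;
    const sample = "mmmmmmmmmmlli";
    g.font = "32px monospace";
    const baseline = g.measureText(sample).width;
    g.font = `32px "${fontName}", monospace`;
    return g.measureText(sample).width !== baseline;
  }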


And all of this incidental complexity wouldn't exist if we had a sane document-based web that didn't allow webmasters to run scripts in the browser.

I've written about this in the past, but we do really need at least one user-accessible, general computing environment that protects against these kinds of privacy attacks. It doesn't need to be the web, but I don't know of a better, currently-usable platform.

I often hear proposals that the web should just be for static documents, and I'm fine with that, but very rarely are those proposals followed up with alternative ways for ordinary people to run untrusted code. The assumption seems to be that if the web didn't exist, users would instead be responsibly vetting every binary on their computer, rather than downloading them en masse from dozens of sources. And just looking at the smartphone app market, I don't think that assumption is true.

Again, not to say that a better alternative platform couldn't exist, but who's working on it? The native desktop platforms I see almost all do a worse job than the web at protecting against fingerprinting. It's almost universally better for privacy to use Facebook in a browser instead of downloading their native phone app.


I don't think sandboxing is the solution. It gets in the way of functionality and you still have to trust the developer: They can abuse the permissions that their program legitimately needs and they can use dark patterns. You should never run untrusted code so there is no need for a platform that facilitates it.

What we need are standardized protocols, strict consumer protection laws, and trustworthy software repositories so users can get software they can trust without having to vet it.

> Again, not to say that a better alternative platform couldn't exist, but who's working on it?

~Nobody is working on such a platform because the app web exists. Remove it and there is a lot more incentive to create a replacement. Regression to the mean alone practically guarantees that it will be superior.


> You should never run untrusted code so there is no need for a platform that facilitates it.

I don't believe this is a practical philosophy given the way ordinary people use both the web and apps today. If you can convince me that you have a plan to make everyone, en masse, stop installing the Facebook app, I might be persuaded to change my mind. But I regard the advice "don't run untrusted code" as a bit like saying "stick to abstinence to avoid pregnancy". The advice isn't technically wrong; we just have good evidence that it doesn't work for society in general.

I also think that "untrusted" is being used as a really broad catch-all here. Trust isn't binary. I trust a calculator app to give me the correct answer to a multiplication problem, I don't trust it to store my banking information.

What sandboxes do is let us set up boundaries around apps that we partially trust. Of course, sandboxes don't remove the need for consumers to be taught not to blindly trust everything. But they're not designed to solve that problem, just to make it easier to manage. There is no way to remove the need to educate consumers; trust is too broad a topic to divide every app into a single "trusted" or "untrusted" bucket. So what sandboxes do is embrace that grey area rather than ignore it.

In the physical world, if I'm securing a bank or an organization, there are going to be people I distrust so much that they can't even enter the premises. There are going to be people I trust enough to walk around while supervised. There are going to be people I trust enough to be unsupervised, but not enough to give them keys to my vault or server room. And finally there will be a minuscule number of people I trust to have full access to everything.

Imagine if instead, our policy was, "you shouldn't let untrustworthy people into your building in the first place, so security measures past that point are useless." Would we be able to build a database of 'trustworthy' people who could have access to the vaults of every bank they walked into?

When we get rid of sandboxes, we're still using a permissions system -- all we've done is made that permissions system less granular and harder to moderate, because we've removed our ability to say nuanced things like, "I trust Facebook to connect to the Internet, but not to read my contacts."
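
(On the web side, that kind of partial, revocable trust already has a rough shape in the Permissions API -- a sketch, with geolocation standing in for "read my contacts":)

  // Partial trust at the feature level: use the capability only if granted,
  // and notice when the user later revokes it.
  async function maybeUseLocation(): Promise<void> {
    const status = await navigator.permissions.query({ name: "geolocation" });

    if (status.state !== "denied") {
      navigator.geolocation.getCurrentPosition(
        (pos) => console.log("got a fix:", pos.coords.latitude, pos.coords.longitude),
        () => console.log("no fix; fall back to the generic experience")
      );
    }

    status.onchange = () =>
      console.log("geolocation permission is now", status.state);
  }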


> Client Hints (https://wicg.github.io/ua-client-hints/) move a passive fingerprinting vector to an active one, i.e. information must be explicitly requested by the site and then the browser can choose how to respond.

And how much of a hurdle is that, exactly?


That's not an active fingerprinting vector; especially in the context of HTTPS, it's still a passive one with a minor protocol detail.

> Google recently introduced a new “secure” login page which forces my chromium based browser (qutebrowser)

That's an embedded Chromium browser, and according to [1] you should really be switching to OAuth for login rather than faking a UA string.

[1] https://security.googleblog.com/2019/04/better-protection-ag...


It isn't - in that blog post, Google writes:

"However, one form of phishing, known as “man in the middle” (MITM), is hard to detect when an embedded browser framework (e.g., Chromium Embedded Framework - CEF) or another automation platform is being used for authentication. MITM intercepts the communications between a user and Google in real-time to gather the user’s credentials (including the second factor in some cases) and sign in."

In other words, the OAuth sign in is intended for applications which want to access the Google APIs in some way.

qutebrowser (as an application) doesn't need access to your Google Account in any way. It doesn't care about your Google authentication data, and it doesn't use Google APIs.



