[dupe] Google forbids login with niche Linux browsers (omgubuntu.co.uk)
160 points by neoromantique on Dec 21, 2019 | 85 comments




Sounds like it's

> However, one form of phishing, known as “man in the middle” (MITM), is hard to detect when an embedded browser framework (e.g., Chromium Embedded Framework - CEF) or another automation platform is being used for authentication.... Because we can’t differentiate between a legitimate sign in and a MITM attack on these platforms, we will be blocking sign-ins from embedded browser frameworks starting in June.

(tl;dr: you have to use OAuth now)

https://security.googleblog.com/2019/04/better-protection-ag...

because Falkon and Konqueror use embedded Chromium via QtWebEngine 5.

(If it's not triggered by switching to a Firefox UA string, though, sounds like detection isn't that robust[1])

[1] https://old.reddit.com/r/kde/comments/e7136e/google_bans_fal...
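
For anyone wondering what "use OAuth" means here in practice: Google points native apps at the standard authorization-code flow from RFC 8252, where you open the user's system browser instead of an embedded webview. A minimal sketch (the client ID and loopback port are placeholders, not real values):

    // Sketch of the OAuth authorization-code flow for native apps
    // (RFC 8252, loopback redirect). CLIENT_ID and the port are placeholders.
    const CLIENT_ID = "YOUR_CLIENT_ID.apps.googleusercontent.com";
    const REDIRECT_URI = "http://127.0.0.1:8400/callback";

    const params = new URLSearchParams({
      client_id: CLIENT_ID,
      redirect_uri: REDIRECT_URI,
      response_type: "code",
      scope: "openid email",
    });

    // Open this URL in the *system* browser, not an embedded webview, then
    // exchange the code your loopback listener receives for tokens.
    const authUrl = `https://accounts.google.com/o/oauth2/v2/auth?${params}`;
    console.log(authUrl);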


Is this okay, even assuming it is true? I'm sure they could improve security somewhat using some form of remote attestation, so that (hypothetically) only non-jailbroken phones, and PCs whose UEFI firmware is signed by a key Google trusts, could log in.

Start with a service open to everyone, and when you have enough market share, shut out everyone except major players. And of course, they don't give users the choice on whether they want this added security.


> so (hypothetically) only non-jailbroken phones and PCs with UEFI signed by a trusted (by Google) key could log in.

They could also ban everyone without their DNA on file, but I don't see any evidence for a move toward that particular scenario either :)

Sounds like there are known riskier clients that have to use a different authentication channel (OAuth) instead. There's a line somewhere at which requirements become onerous, but this doesn't seem like it to me.


First they came for the API users, but I said nothing because I wasn't an API user.

Then they came for the text-based browser users, but I said nothing because I wasn't a text-based browser user.

Then they came for the small browsers, but I said nothing because I didn't use a small browser.

When they came for me, there was no one left to speak.

Google's behavior is clearly anti-competitive. At this point it is in everyone's interest for Google to be broken up into multiple companies. One might even figure out a way to be good at search again.


But the "they" in this story is hostile third-parties trying to steal users' Google password.


I didn't mean this as a slippery slope example (though that certainly applies), but as an example of the same thing, just to a greater degree. If we're not okay with that, then why is this okay? It's the same kind of imposition, telling users what they may use.


Even "hackers" are willing to give away control of their devices for so called security so no wonder things like that are happening and are the norm. Most of the people argue here that Apple selling totally locked devices that you can't open with even 100 security alerts and total wipeout is totally good because of security. And mind you this is hacker news where people ought to be curious, tinker and improve.

Of course, the irony is that these changes come in the name of security from Google, which doesn't care about my data security or privacy and keeps following me around the internet like a pervert. I guess you can be a peeping Tom in real life too, so long as you claim to keep your collected photos secure from other perverts and only sell the ones with blurred-out faces. What's the harm if you can't see the face?


>Google is known for A/B testing changes to its various web services all the time, so this specific hiccup could resolve itself in time.

>“Couldn’t sign you in. This browser or app may not be secure. Try using a different browser. If you’re already using a supported browser, you can refresh your screen and try again to sign in.”

A/B testing security? Is this what peak A/B looks like?

Can we go back to giving users a consistent experience and develop software based on some other type of reasoning, rather than this A/B gaslighting?


It's really bad outside of just Google. My wife was complaining about Instagram and certain features not working properly, like sharing a story she was tagged in, which she has done before, except she was tagged in some new way. Then she saw a celebrity unable to do things my wife can do, things that celebrity could do months ago, like adding music to stories.


A/B testing is pretty much the gold standard. It took medicine a thousand years of bloodletting to get there.

Of course it's not always possible, which is what economists and nutritionists and a few other sciences struggle with. But measuring security incidents and correlating them with browser use seems sensible.

There is still plenty of room for theory, such as coming up with what to test.


Some changes are slow-rolled so if they cause an unexpected regression, impacted users are minimized.

It's possible a change is causing an unexpected side effect with blocking these browsers (maybe a specific known browser config has a problem and they botched the UA matching so it's too broad?)
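
For reference, the gating logic behind a staged rollout is usually something like this (a hypothetical sketch; the hash function and bucket count are arbitrary):

    // Hypothetical staged-rollout gate: each user lands in a stable bucket,
    // and the feature is enabled for the first `percent` buckets only.
    function inRollout(userId: string, percent: number): boolean {
      let hash = 0;
      for (const ch of userId) {
        hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // cheap stable hash
      }
      return hash % 100 < percent;
    }

    // Ramp from 1% to 100% while watching regression metrics.
    console.log(inRollout("someone@example.com", 10));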


You get a consistent experience with a consistent user agent. Anything else is best effort: trusting developers to implement standards correctly, and hoping they all made similar decisions where the standard leaves things open.


This amounts to a "best viewed in Chrome" philosophy. It's an argument against Open Standards draped in worry about security.

It's no different than Google's policies of restricting apps on jailbroken phones, a practice that was ostensibly introduced for security, but quickly expanded to be used for DRM and to discourage users from taking control of their own devices.

The whole point of Open standards, the only reason we built these things in the first place, was so companies like Google couldn't decide which browsers worked with the web.

You might as well rephrase your first sentence to read, "you get a consistent experience with a current Chrome browser" -- and I have no doubt that there are people at Google who would be happy to have that policy if they thought they could get away with it.

After all, why should Google's security team trust that Mozilla's browser is secure, given that they don't get to pre-vet new versions? Even if a current version is implementing the standard correctly, there's no guarantee that a future Firefox version won't have problems. And Mozilla is already refusing to deprecate old APIs with Manifest V3, when according to Google that deprecation is super necessary to make extensions secure.


You also get a consistent user experience with a current Firefox browser; it might be slightly different from the one you get with a current Chrome browser. There's a reason a lot of companies standardize on a single browser for their internal web applications.

The purpose of avoiding browser monoculture was to avoid a world where a single company deploying a single closed-source browser owned the gateway to the World Wide Web. The goal was never to make it easy for an arbitrary number of browsers to operate independently of each other; in fact, browsers are very complex software to get right, and service providers have a responsibility to make it hard for users to get their accounts compromised even if they bring a browser with implementation errors into the loop.

I'd be more concerned if Google was blocking, say, Firefox, or if the block was harder to overcome than changing the UA. This block is inconsequential. Change the UA and be done with it.


> The purpose of avoiding browser monoculture was to avoid a world where a single company deploying a single closed-source browser owned the gateway to the World Wide Web. The goal was never to make it easy for an arbitrary number of browsers to operate independently of each other

Making it easier to build browsers is how we avoid a monoculture. If we were OK with there being two browser engines (Firefox's and Chrome's) we wouldn't need to make all of these standards. Mozilla and Google are perfectly capable of collaborating on their own in private, and it would be faster for them to do so; it's just everyone else that would be left out.

Open standards are why we can have interesting browser experiments like Beaker. It's important that there be multiple software projects pushing the web forward. Of course browsers are hard to build, just like Operating Systems are hard to build. It doesn't follow that there should only be two of them.

> Change the UA and be done with it.

To be honest, you're right on this point. This is a kind of pointless debate, because what will actually happen here is that all of the insecure browsers you're worried about are just going to take Vivaldi's route and, by default, spoof a different user agent for Google login screens. End users don't know or care what user agent is being sent to the remote server.

Let's be honest: the KMail devs are not going to throw their hands up in the air and say, "well, I guess we just abandon the project now." They're going to push a change that invisibly spoofs the user agent for all of their users.

Because, again, you can't trust an insecure browser. If you can't trust a browser-maker to implement the standard correctly, you also can't trust them to voluntarily go along with a scheme that will make a nontrivial portion of the web unusable for the majority of their current user-base.
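
If anyone is curious what "Vivaldi's route" amounts to, it's essentially a per-site user-agent table consulted before each request. A hypothetical sketch (the UA strings and domain list here are made up for illustration):

    // Per-site UA override, roughly what Vivaldi-style workarounds do:
    // report a mainstream UA to the affected domains only.
    const DEFAULT_UA = "Mozilla/5.0 (X11; Linux x86_64) NicheBrowser/1.0";
    const SPOOFED_UA =
      "Mozilla/5.0 (X11; Linux x86_64; rv:71.0) Gecko/20100101 Firefox/71.0";
    const SPOOF_DOMAINS = ["accounts.google.com"];

    function userAgentFor(url: URL): string {
      const spoof = SPOOF_DOMAINS.some(
        (d) => url.hostname === d || url.hostname.endsWith("." + d),
      );
      return spoof ? SPOOFED_UA : DEFAULT_UA;
    }

    console.log(userAgentFor(new URL("https://accounts.google.com/signin")));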


While it doesn't follow that there should be only two browsers, it also doesn't make sense that Google should expose its end users to other people's lab projects if that increases the security risk to those end users. Security is a balancing game and always in flux.


This is really bad. First: UA is unreliable, and is a complete mess. Second: it can kill competition. Third: it should be possible for anyone to write a browser. We need more of them, not less.

The web is one step away from requiring something like a Secure Boot UEFI key as permission to access it.

EDIT: Oh, this could also explain my constant token-refresh issues with Evolution accessing Google Calendar.


> We need more of them, not less.

While not sacrificing security, of course.


What security? The very narrow instance where your browser allows a MITM attack, and for some bizarre reason the attacker can't change the user agent to match Chrome's?

My feeling is that if Google's security practices require trusting an attacker not to change a user agent, then frankly I'm skeptical these changes are actually about security. Because the Google security team is smart, and I assume they wouldn't do stupid things.

But I am cautious about jumping to "this is malicious." There is certainly enough anecdotal evidence for a reasonable person to claim that Google targets competitors and tries to use changes like this to hurt them. However, it's not (currently) enough evidence for me. I am still naive enough to believe there is some semblance of good will at the company.

But I feel very comfortable saying that this security change won't do much good, and the Google security team probably just thought, "why not turn it on?" without caring about potential consequences to competitors, because they don't think about anything outside of Google's ecosystem. I generally don't get the feeling that Google engineers are malicious, just that they're thoughtless and/or careless. I don't get the feeling that they're trying to mess up the web ecosystem, just that they act impulsively and feel very strongly that people shouldn't be questioning them; and that when people do question them, they tend to dig in their heels and become very condescending very quickly.

But again, I know there are Edge devs and Vivaldi devs that would call me naive.


If the browsers in question don't correctly implement all necessary standards to guard against XSS, frame-busting, and MITM attacks, Google will do what it can to protect its users against foot-shooting.

Changing the UA is equivalent to "voiding the warranty," so I'm not surprised Google isn't taking extraordinary measures. At some point, if your users really want to shoot their feet, there's only but so much you can do to stop them.


If a browser doesn't implement the standards to guard against MITM attacks, what makes you think it implements the standards to guard against user-agent manipulation during the MITM attack?

You've misunderstood what I'm getting at here. It's not the user that's going to purposefully change their agent -- it's that a browser that is insecure to the point that you can't trust it to log in is also insecure to the point that you can't trust its user agent to be reported correctly.

The entire security exercise is pointless because compromised browsers lie. They don't respect user preferences. An attacker who intercepts and modifies a request isn't going to suddenly start being honest with you when you ask what browser that request came from.


The code path for changing the UA and the code implementing XSS protection are different code paths.


No, User-Agent is no longer a forbidden header for JavaScript fetch requests[0].

To be fair, both Chrome and Firefox have outstanding bugs where they haven't yet implemented the current spec. But there is no reason to assume that a spec-compliant browser will block JavaScript from setting the User-Agent for a request. It's likely to allow it, because allowing it is the correct behavior.

Even if it weren't the correct behavior, it's silly to assume that a browser that doesn't implement XSS protection is suddenly going to get security right when it comes to freezing the UA in request headers. I don't think there's a world where a browser maintainer says, "it's too much work for me to respect CORS, but I really want to make sure I'm following this obscure forbidden-headers list".

[0]: https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_...
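
Concretely, this is what the spec says a compliant engine should allow (a sketch; as noted above, Chrome and Firefox currently still strip the header):

    // Per the current Fetch spec, User-Agent is not a forbidden header name,
    // so page script may set it per-request. Chrome and Firefox still ignore
    // it today due to the open bugs mentioned above.
    async function fetchWithCustomUA(): Promise<void> {
      const res = await fetch("https://example.com/", {
        headers: { "User-Agent": "SomethingCustom/1.0" },
      });
      console.log(res.status);
    }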


Software security is not a super-goal. Acceptable risk levels vary with circumstances. If you browse Wikipedia on a kiosk-style machine, it doesn't really matter whether it's mining Bitcoin or not.

Or if you're visiting North Korea, then perhaps using your host's prescribed software stack may be preferable to bringing your own.

Even logging into an account from an insecure machine can be acceptable if you're using a low-value account. Some people do use throwaways regularly.


Google's assumption for its users' accounts, though, is that they're used as "keys to the kingdom." They're not optimizing the security experience for throwaway accounts.


I felt compelled to once again post the classic quote that's taken on a whole new meaning in this coming digital dystopia:

"Those who give up freedom for security deserve neither."

The vision of "security" that these large companies have is to secure complete control over every aspect of your life, and the sooner the population realises that and stops believing their propaganda, the better. Google's continued destruction of the Internet through things like obfuscating URLs, purposefully degrading search results, and invasive reCAPTCHA-based surveillance is absolutely disgusting.


Whose security? If Google can track everything I do online, that's not 'secure' in my book, so Chrome is out.


Security is distinct from privacy. The four mainstream browsers - Chrome, Firefox, Edge and Safari - have the most secure software, regardless of their producers' business models and data hygiene.


> The four mainstream browsers - Chrome, Firefox, Edge and Safari - have the most secure software

I disagree that they are "the most secure" browsers, let alone software. They fail to isolate remote scripts properly; that people were able to execute timing attacks against the CPU (Spectre et al.) shows that they are not really very secure.

Browsers which don't execute JavaScript and advanced CSS (Lynx being one extreme example) are going to be much more secure by default.


There are four major dimensions to security: attack surface; depth of defense, or how much an attacker can do once they're in; proactive measures to find security bugs (e.g., fuzzing); and code quality.

You're focusing on attack surface. But from a security standpoint, attack surface is probably the least important factor. Every sufficiently large application has a hole in it, and all attack surface does is crudely control how likely it is to stumble across that hole. Defense in depth, by contrast, lets you keep the attacker from doing bad things such as installing ransomware on your computer just because your HTML parser had a buffer overflow.

The major browsers spend a lot of time sandboxing their scripts in separate processes, and then dropping capabilities of those processes using techniques such as pledge(2), giving them much better defense in depth. They also put a lot more effort into finding and closing security bugs through techniques such as fuzzing. No one questions their much larger attack surface, but they put much more effort into mitigating attack vulnerabilities.

I should also bring up Spectre because you did. At its core, Spectre allows you to read arbitrary memory in your current address space, nothing more. As a result, it basically means that you can't build an effective in-process sandbox... which everyone already knew to begin with. What Spectre did was show how easy it is to do such arbitrary memory reads, since you can proxy them through code as innocent as an array bounds check. There are mitigations for this, which require rebuilding your entire application and all libraries with special mitigation flags... guess which browser is more likely to do that?
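
For anyone who hasn't seen it, the Spectre v1 pattern being described looks like this (shape only; an actual leak also needs a high-resolution timer to probe the cache afterwards, which browsers have since restricted):

    // Textbook Spectre v1 gadget: the CPU may speculate past the bounds
    // check, perform the out-of-range read, and leave a cache footprint
    // keyed on the secret byte, which a later timing probe can recover.
    function gadget(arr: Uint8Array, probe: Uint8Array, i: number): void {
      if (i < arr.length) {
        const secret = arr[i]; // speculative out-of-bounds read
        void probe[secret * 4096]; // cache line now depends on `secret`
      }
    }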


This is kind of a strange analysis. Somewhat infamously, Dan Bernstein, a pioneer of these privilege-separated defensive designs, forswore them in a retrospective paper about qmail. Really, though, I'm not sure I'm clear on the distinction you're drawing between attack surface reduction and privilege separation, since both techniques are essentially about reducing the impact of bugs without eliminating the bugs themselves.

You might more coherently reduce security to "mitigation" and "prevention", but then that doesn't make much of an argument about the topic at hand.


What I meant by "attack surface" here is probably a lot narrower than what you're used to. I'm using it to focus on the code size concern. I was trying to visualize it in terms of "how many opportunities do you have to try to break the system" (as surface area) versus "what can you actually do once you've made the first breach" (as volume), and didn't fully coherently rewrite the explanation to excise the surface area/volume distinction I originally made.


Google actually has additional security checks that require JavaScript, and they won't let you log into a secured account with JavaScript disabled.

https://m.slashdot.org/story/347855


> Security is distinct from privacy.

No, it's not. Security is not a goal in itself; it cannot be. Security is only about guaranteeing other goals; there is no security absent all other goals. What it means for software to be insecure is that it doesn't ensure your goals are met. For many, privacy is an important goal. If the software that you are using compromises the privacy that you value, then that software is not secure.


I am much more concerned about someone being able to impersonate me (security) than about someone knowing what I'm doing (privacy). This doesn't mean I'm unconcerned about the latter.

If secure software compromises privacy in ways that concern you, it may not be the right software for you to use, but it is still secure (and potentially more secure than other software that you feel better protects your privacy).


> I am much more concerned about someone being able to impersonate me

Well, great?!

> (security)

Erm ... no?

> than to know what I'm doing (privacy)

Privacy is not about what your software knows, it's about who else gets access to that information. Software allowing access to your information to parties other than the ones that you intended is a vulnerability class commonly called "information leak".

> This doesn't mean im unconcerned about the latter.

And thus it is, as per the common understanding of the word, a security concern.

> If secure software compromises privacy in ways that concern you

That's just logical nonsense. You might as well be saying "If secure software kills you in ways that concern you, [...]".

> it may not be the right software for you to use, but it is still secure

So, let's assume your browser had a bug where for some reason, every website could read all the data in the browser. Like, could access the storage, cookies, cache, history, page contents, everything. But no write access. This is obviously purely a privacy violation ... but, according to your definition, not a security problem, right?


> And thus it is, as per the common understanding of the word, a security concern

Yes, but not when talking about cyber-things. Generally, we only enter the realm of security if the information leak is secret or unintentional, neither of which is the case here.


> Yes, but not when talking about cyber-things.

Yes, precisely there.

> Generally, we only enter the realm of security if the information leak is secret or unintentional, neither of which is the case here.

So, you are telling me the user is intending the information leak? I'm not sure I understand: You say it's not a security matter if the "leak" is intentional. But then, if a user is transmitting information intentionally ... why would you call that a leak?

Or do you mean the leak is intended by Google or whoever and that is why it's not a security problem?! But then, what if a hacker intentionally installs a back door on your system and uses that to leak your information ... then that wouldn't be a security problem either, would it? Or is that where the "secret" part comes in, and it would only be a security problem if the hacker didn't tell you that they stole all your data?


Yes, it's a security problem when they can do something without your permission. (So I'd argue it's less a leak and more a disclosure when they do have your permission, as is the case here). If it was done secretly then it would be a security problem, but without secrecy or lying, it's simply Google not living up to your privacy preferences.


> Yes, it's a security problem when they can do something without your permission. (So I'd argue it's less a leak and more a disclosure when they do have your permission, as is the case here).

Well, but do they actually have your permission?

> If it was done secretly then it would be a security problem, but without secrecy or lying, it's simply Google not living up to your privacy preferences.

Well, for one, are they not doing their things secretly? Is the mere fact that you can find out about it enough to call it "not secret"? Does the mere fact that you didn't refuse, when you didn't even really have an option to refuse, count as permission?

Let's suppose a food manufacturer puts a new pudding on the market. Included with the package is a 500-page explanation of everything that you need to know about it. Somewhere in those 500 pages, all the ingredients are listed, most of them under their most unusual names. Among the ingredients is a strong carcinogen, one that doesn't contribute anything to the taste, the look, or anything else you would value. All it does is make the pudding cheaper to produce.

Now, a biochemist could obviously know what is going on if they were to read the 500 pages, so it's not secret that the carcinogen is in the pudding. Also, the packaging says that you agree to the conditions of use in the 500 pages if you open the package, so you gave them permission to feed you that carcinogen.

Would you agree, then, that this pudding is not a health safety risk, it's simply the manufacturer not living up to your health preferences?

Also, I don't really understand how permission can make something not a security problem. It seems like that's all backwards. I generally would first check a product for security problems, and then give permission based on the presence or absence of security problems. And one of the security risks to check for would be software leaking information to wherever I don't want information to leak. Why should the fact that the manufacturer of some piece of software announces, or doesn't announce, that they leak certain information have any relevance to whether I consider the leak a security problem? If I don't want my information in the hands of Google, then how am I any more secure against that leak just because Google told me about it?


Remember when I mentioned "cyber"? That's because I'm using the terms in the context that professionals do in cybersecurity contexts. What that means is that the pudding analogy is irrelevant.

It's not a good analogy for a whole host of other reasons, but that's one of them.

You're using a nonstandard definition of computer security. That's your prerogative, but don't be surprised if it continues to cause confusion for those you interact with.


> Remember when I mentioned "cyber"? That's because I'm using the terms in the context that professionals do in cybersecurity contexts. What that means is that the pudding analogy is irrelevant.

1. Well, one thing that IT security professionals surely don't use is "cyber"; that's a term from the snake-oil corner of the industry.

2. People in IT security most definitely do not distinguish between security problems that the manufacturer intended as a feature and security problems that were caused any other way. You create a model of things you want to protect, and if a property of a product violates that, then that is the definition of a security problem in your overall system, obviously. The only difference would be whether you report it as a vulnerability or not, as that would obviously be pointless for intentional, publicly announced features.

> It's not a good analogy for a whole host of other reasons, but that's one of them.

Really, even that would not be a good reason, as it smells of essentialism.

> You're using a nonstandard definition of computer security. That's your prerogative, but don't be surprised if it continues to cause confusion for those you interact with.

No, I am using the exact standard definition, and the only sensible one at that. It obviously makes no sense to have a definition of "security" that says nothing about whether your system is secure. If you consider Google having access to your data a threat in your threat model, then whatever properties of your system that give Google access to your data is a security problem in your system, it's as simple as that.

The only thing that matters is whether your overall system reaches its protection goals or not, not whether some component by itself would be considered vulnerable in some abstract sense. And that obviously applies in the opposite direction as well: If you run some old software with known vulnerabilities that you can not patch, but you somehow isolate it sufficiently that those vulnerabilities can not be used by an attacker to violate your protection goals, then that system is considered secure despite the presence of vulnerabilities.


I, for one, am happy that three months ago I decided to start my transition away from Google products and services. I subscribed to a Fastmail account and couldn't be happier. Firefox is my browser of choice on both desktop and mobile, so I don't have any of these issues, but for how long, until Google decides otherwise?

I'm still quite a heavy user of Maps; it's hard to step away from all the convenience, but OsmAnd is already installed on my phone and I'm getting used to it too.


I moved most of the important stuff away the other year. Went with ProtonMail and then Fastmail, since the latter's IMAP support was better. I really prefer to use a native email client over the web interface; I don't need to be online to sift through my archive.

Ended up keeping a Google account because I get value out of YouTube, and of course I don't have a choice at work since most startups seem to go G Suite by default (I'd take the MS offering over that too; they're not in the ad business).

At risk of going off topic, I hope that if one day I have children (if at all), I will remember enough of the old internet back in the late 90s/early 2000s to be able to describe just how weird and wonderful that world used to be. Not all of it was good or great but they'll be forgotten relics of an ancient era, in tech terms. You know, like getting on the internet by picking up a free AOL CD from the local supermarket; not being able to use the internet and make a phone call at the same time; cracks and keygens with the epic graphics and music; making a really bad website with frames and tables with Macromedia Dreamweaver or MS FrontPage...

Feels like these days that the internet, in the mainstream (so not HN or similar), is just an advertising and surveillance platform.


> I'd take the MS offering over that too; they're not in the ad business

Have you seen Windows 10 (even the Pro version)?


I have, and I've used it. I thought it was tasteless to put ads in Windows Explorer and elsewhere, but at least it was all out in the open. Telemetry in and of itself isn't bad if they don't connect it to an IP or account or other kind of fingerprint; they just look at what is interacted with and what isn't, without knowing who did it.


OSs aside, the pro tools are very good


I also kept my account; hard to ditch it with everyone around me sharing Google Docs. I still use it once in a while, and I have plenty of stuff in Drive that I need to take out. Funnily enough, Office Live might be an alternative...


> plenty of stuff in drive that I need to take out.

Check out this tool: https://rclone.org/. It's a command-line sync tool that fully supports Google Drive.


I've just recently switched a lot of things over to Office. You get the Exchange server, of course, and now that it's OAuth-based I get much better integration with my iPhone and computers without having to set up app-specific passwords.


Try http://invidio.us to get rid of YouTube, or at least its cookies, tracking scripts, and account.


Thanks for the trip down memory lane; hopeful times they were.


I was actually forced off chromium recently, because it became very laggy after an apt-get upgrade. Firefox has been pretty nice.

I still use google search and gmail in that environment, though. I use DDG for personal work, but for work work, google search is noticeably more efficient. And work requires me to use a gmail account.


I use DDG too. In many cases I have to resort to google to find better results, usually I do it with !s ;)


It’s surprisingly easy to switch. The most striking thing for me when I happen to use Google is the number of ‘hits’ that are adverts. Either the volume has increased or I wasn’t noticing them all before.


Google is kind of a mixed bag. There's plenty to mistrust about them. IMO, they still run the hardest-to-hack email service out there. Who else has anything comparable to their Advanced Protection program?


Unsurprising. When I was using an Android phone with Firefox, Google search always served me the “bare bones” degraded layout; changing the UA got me the same content as Chrome.


They don't do that any more, after they were called out hard enough on deliberately sending broken layout to Firefox.


I don't see why they would need to be called out on anything regarding this. I just thought "oh, Google looks like shit" and would prefer using DuckDuckGo even more.


That's not true. For example, search results for businesses are still different, and many things, like reviews, are missing in Firefox.


Good to know they are monitoring the User-Agent header for security.

A Chrome user changes her User-Agent header in Developer Tools, logs in to Gmail and finds that Google has sent her a "Security alert" e-mail message that a "new device" has logged into her account.

Maybe the e-mail should inform her that Google has stored a new device fingerprint (https://en.wikipedia.org/wiki/Device_fingerprint) and that the ways in which Google can use it are not limited to "security". It will be used to further Google's business: online advertising services.

Sure, Google can argue these fingerprints will be used for "security purposes" and are not being gathered for online advertising services, but what are the real risks to Google of "less-than-perfect" Gmail security?

With over 60% of webmail users on Gmail, how easy would it be for a user to protest Gmail insecurity by moving to a competitor, one who would not also be taking fingerprints?
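
To make "device fingerprint" concrete, the browser-observable signals involved are roughly these (a naive, hypothetical sketch; real implementations combine far more signals and hash them server-side):

    // Naive sketch of the kind of signals a "new device" check might
    // combine; real fingerprinting uses far more than this.
    function naiveFingerprint(): string {
      return [
        navigator.userAgent,
        navigator.language,
        `${screen.width}x${screen.height}x${screen.colorDepth}`,
        Intl.DateTimeFormat().resolvedOptions().timeZone,
      ].join("|"); // a real implementation would hash this, not store it raw
    }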


Is it just checking the user agent?


Yeah... I've seen this before. It was a company called Microsoft with a browser named Internet Explorer 6, and it caused a decade of browser wars.


Vivaldi found out they are being targeted by Google, see: https://news.ycombinator.com/item?id=21758814


> simply changing the browser user agent in an ‘excluded’ browser to that of a supported one, like Firefox, instantly lifts the bar and lets the app/service/site load.

> Where, surprise, surprise, it works fine without any major issues.


> You are trying to sign in with a browser or app that doesn't allow us to keep your account secure.

Sounds like it's not just a question of whether the page renders correctly?


Google almost seems like it's trying to be Microsoft number two. It can't stop itself from being overly competitive. I predict it will get hit with a serious US government antitrust case that forces it to allow more competition, just like Microsoft, within the next five years. Blocking ad blockers, controlling Android, banning unions; it's sad to see them come to this.


Unlike Microsoft of the 90s, Google knows everything our lawmakers do online. Every sordid search, the location and participants in every late-night tryst. Pretty much anything that could embarrass or bring down anyone who might oppose them is available to them through our phones, browsers, and search histories. I think we are a long way away from any serious repercussions for Google's actions.

/tinfoil hat off


If there's been one positive development in politics in the last few years, it's the disappearance of the sex scandal.

Sure, anything too extreme, funny, or illegal would still be problematic. But someone’s run-of-the-mill pornhub playlist is unlikely to make much of a difference, even on the Republican side. The reputable (and influential) media wouldn’t even report it.


This is exactly the concern many have with anyone holding the keys to too many silos of data.


Well, "works fine." I'd suspect the issue is more Google is aware of ways an attacker can spoof someone into logging into their Google accounts in a context where those UAs leak credentials. The issue night be it works fine for the user and also the hacker or phisher who just got valid auth tokens.


Which is nothing new for Google; they've blocked valid browsers from accessing content all the time. I remember using Inbox in Safari for about a year before they stopped making me play tricks with my User-Agent.


Google has been hostile to niche Linux browsers for a long time. The worst offender in my experience is their reCAPTCHA service. Almost every site which uses it will hand you one of those horrible "click on all the pictures of cheese" challenges, where after every click it takes 3-5 seconds to decide whether it wants to give you more pictures.


This is incompetent behaviour.

(I'm sure someone from Google will now argue no no it's malice not incompetence!!)

If they're worried about capabilities, test on capabilities.

Filtering on user-agent strings - which is what they seem to be doing - is the act of an incompetent, and it doesn't matter how big the company is.

Can Google not hold on to competent people any more?


There have been quite a few threads on HN about why Google uses a combination of capability and UA sniffing. The tl;dr is it's the Wild West out there and both approaches are unreliable (also, capability testing requires pushing more code than the client can use and harms latency).
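
For reference, the difference being argued about, in two lines (a sketch; both checks are best-effort, and the second is trivially spoofable):

    // Capability test: ask whether the feature actually exists.
    const hasWebAuthn = typeof window.PublicKeyCredential !== "undefined";

    // UA sniff: guess from a self-reported, trivially spoofable string.
    const looksLikeChrome = /Chrome\//.test(navigator.userAgent);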


I am absolutely certain that the intention is malice - but it's incompetent malice.

It weakens their brand every time, making users either think that they're bad at building products, or hate them for trying to force them into their (stupid) ecosystem.


I'm using Waterfox and gmail still seems to be working. For how much longer though?


If they could stop putting me in captcha hell for not connecting to their servers, that would be phenomenal.


Devil’s advocate, whitelisting is often much more effective than blacklisting.


Whitelisting or blacklisting UA strings is meaningless. Any actually malicious person is going to spoof that without even giving it much thought. It's only the honest people who are affected.

Funnily enough, Google itself used to advocate against detecting UA strings (I don't know if they still do). Mind you, groups within Google don't always listen to each other.


Indeed, I'll happily (and honestly) report my user agent as PHP to you, and add my email address to it so you can reach out if there are any problems. But if I notice you're blocking my proxy, the one that removes your crap and turns a multi-megabyte news page (which I'm paying for) into something that loads instantly even on GPRS/EDGE, then I'm just going to spoof it.


The devil has enough advocates already.

This is incompetent behaviour, however you slice it.


Webmasters could help to fight back and give Chrome users a crappy experience on their website (e.g. add 3 seconds to every page load), until Google fixes this.


> Webmasters

Now that's a word I haven't heard in a long time...



