> As I looked at the permissions and what our extension actually needs to operate, I noticed a great opportunity to reduce our permissions requests. We do not need to request access to data on https://*/* and http://*/*. Instead, we can simply request data access for https://*.pushbullet.com/*, http://*.pushbullet.com/*, and http://localhost/*. This is a huge reduction in the private data our extension could theoretically access. A big win!
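(For concreteness, the change described above amounts to a few lines in the extension's manifest. Below is a minimal sketch in Chrome's Manifest V2 format; the previous version of the permissions array would have contained "https://*/*" and "http://*/*" instead of the narrowed hosts, and the non-host entry here ("notifications") is an illustrative assumption, not taken from Pushbullet's actual manifest.)

    {
      "manifest_version": 2,
      "name": "Example extension",
      "version": "1.0",
      "permissions": [
        "https://*.pushbullet.com/*",
        "http://*.pushbullet.com/*",
        "http://localhost/*",
        "notifications"
      ]
    }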
While I agree with the larger part about the lack of transparency of what they want you to fix, this is an amazingly huge oversight, and the fact that the extension review process got an established, popular extension to go "Wait, we don't actually need to request access to every website ever" is a point in favor of the review process - and, unfortunately, a (weak) argument in favor of the review process taking the attitude that they get lots of crap and don't have the time to explain to all the authors of crap what they're doing wrong. How did the extension ever ask for this in the first place?
Also why do you need http://localhost/? Is the extension running a web server on localhost with native code? If so, can you use the specific mechanism/permission for communicating with native code via a subprocess (because it turns out communicating with a web server on localhost is very hard to do securely)? If not, what's it for?
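(The mechanism referred to here is presumably Chrome's native messaging API, which talks to a registered native host over stdin/stdout rather than a localhost web server. A rough sketch of the extension side, where the host name "com.example.desktop_app" is a made-up placeholder:)

    // Requires the "nativeMessaging" permission in the manifest.
    // The name must match a native messaging host manifest that the
    // desktop application installs on the machine.
    var port = chrome.runtime.connectNative('com.example.desktop_app');

    // Messages from the native process arrive as parsed JSON.
    port.onMessage.addListener(function (msg) {
      console.log('Native app says:', msg);
    });

    port.onDisconnect.addListener(function () {
      console.log('Native host exited or is not installed.');
    });

    // Send a JSON-serializable message to the native process.
    port.postMessage({ type: 'ping' });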
I'm sympathetic to the broader argument here, but given the provided information, all of this is consistent with an extension that should be kicked off the app store within 14 days.
(Among other things, if you have an approved extension with https://*/* permissions and active users, malware authors will offer to buy your extension for a very high price. So it's definitely in the public interest to make sure there are as few of those as possible and that they're only in the hands of people who have the ability to understand why the friendly person offering them way too much money for their extension isn't just being nice.)
But you're completely ignoring the point that even without the all-http(s) permission they will still be kicked off the store, so that has nothing to do with the issue at hand.
If localhost is the issue, Google could literally respond exactly the way you did, "why do you need http://localhost/?", and the problem is gone.
This isn't about permissions at all. This is about communication and whether it's worth putting effort and trust into a company that acts like this as SOP.
That's the thing I'm sympathetic to - having fixed the bug, it's frustrating that it's not clear what the next steps are.
But given that they had the bug, Chrome was absolutely in the right to deny them the first time. And while I don't like Chrome's position that they're too busy to explain to everyone what they're doing wrong, if "oh hey, we don't actually need access to literally every website, imagine that, oops!" is what you expect out of good, competent, non-malicious extension authors, I have a lot of trouble disagreeing with their conclusion, in practice.
(Would it be better if Chrome said "There are 200,000 extensions on the Chrome store, each and every one of them deserves attention, we need to spend at least 30 minutes looking at each one and composing a response, we have a ten-person team, we'll get back to you within 5 years?")
I can only assume that this isn't the result of a human flagging Pushbullet like this; I expect it's an automated system. And if that automated system can make a decision to flag the extension, it could also include in the email specifically what caused that flagging to happen.
At this point I'm really starting to become unsympathetic to the idea that they can't tell you what you're doing wrong because it'll enable people to figure out ways to work around the flagging algorithm. I kinda just don't care anymore. If you're going to run a platform like this and capriciously threaten to kick people off of it, it's pretty antisocial to hide the reasons why.
(And yes, I know, Google has no obligation to do anything differently here. But I can perfectly well think they suck for their current practices.)
I agree, and will go further: I believe they actually should have such an obligation. It should be a consumer right to find out why you were denied access to a platform (or a restaurant, or a barber shop, or a wedding cake maker, or whatever else someone operating a business open to the public is refusing you service from). And whatever reason that is must be something that either 1) has nothing to do with the customer and can be shown to have nothing to do with the customer (such as "we do not have enough workers to take more orders, and we aren't serving anyone else either"), or 2) is clearly and obviously "correctable" (so something like "because you are gay" doesn't count). If the platform (or service or business) refuses to tell you, or their reason isn't "correctable", there should be heavy penalties attached for what amounts to opaque discrimination.
(FWIW, I appreciate the idea that eventually it could be useful to ban a bad actor entirely from something to exclude the possibility of future harm: the past tense of "correctable" is "avoidable", and it needs to be very clear exactly what was done wrong and what could have been done differently for someone to end up banned from the usage of a platform, service, or business. The alternative, where people get to apply secret and obscure reasons to refuse service, is madness.)
I was ready to get behind this line of reasoning, but then I thought about e.g. comment sections. Anyone who has moderated a forum will know that trolls find every possible way to walk right up to the line without technically crossing it.
If you say "personal attacks aren't allowed", they'll say "I wasn't attacking him personally, I just suggested that anyone who says [the thing he said] should consider getting evaluated by a doctor for mental retardation."
At some point you just have to say "I know what you're doing, you're clearly not acting in good faith, and I have to ban you." Are all of those people now potential lawsuits?
Yeah, this is the issue. They're not going to catch every dirty thing extensions (perhaps unwittingly) do. If they are able to detect one or more problems with an extension, there's a good chance that there are other problems that were undetected. Forcing extension authors to go back and think about what they could be doing better rather than just telling them the minimal set of changes that are needed seems like it can only result in more well-behaved extensions across the board.
There should at least be some appeals process that involves a human, then. In this case they fixed legitimate instances of requesting too many permissions (including removing some functionality), and were still rejected. What if the reason they were flagged was a legitimate usage of a permission that the automated (or manual) system wasn't aware of? At the very least there should be a way for a flagged extension author to go through a process where they explain why they need each permission they request, and, if they are still rejected, be told why their explanation wasn't good enough.
This is especially important given Google's market power in the browser industry. What if the extension is from a Google competitor? Google could ban the extension without a legitimate reason, just to hurt their competition.
> If you say "personal attacks aren't allowed", they'll say "but I wasn't attacking him personally, I just suggested that anyone who says [the thing he said] should consider getting evaluated by a doctor for mental retardation."
1) That pretty clearly qualifies as a personal attack
2) There is a significant difference between a forum and a business critical software distribution platform.
> That pretty clearly qualifies as a personal attack
Okay, let's take the obscurity up one level: I used to moderate a very small internet forum, where there was one user who had certain mannerisms in his writing. I also had reason to suspect this user was mildly suicidal.
A troll began to mimic these mannerisms as a way to make fun of that user, and talk about how horrible their life was. The troll wasn't overt about it, but if you knew the history between these two, you could tell what was going on. The troll of course feigned innocence, but as he'd been given warnings before, I made the call to ban him.
I firmly believe I was acting within the forum's stated policies—but I am not 100% confident a judge would agree. If these types of actions could lead to lawsuits, I don't know what I would have done.
---
> There is a significant difference between a forum and a business critical software distribution platform.
Great—but where's the line? Is YouTube critical, for example?
It sounds like this case would comply with the proposed rules to me. You informed the bad actor of what he was doing wrong, the behaviour was correctable, and he continued to exhibit said bad behaviour, so you banned him.
It's common enough to have laws that say only businesses of a certain size are fully obliged to comply; GDPR has this for some clauses, for example. I guess the idea is along the lines of: the market determines whether you're critical by letting you become a huge company?
> I can only assume that this isn't the result of a human flagging Pushbullet like this; I expect it's an automated system
There isn't one. Take a quick look at stupidly popular apps on the Play Store: almost all of them require completely unnecessary permissions, literally in conflict with the bullet points that Google listed in their email to PushBullet.
Also note that almost all of them have 10-50x the amount of downloads compared to PushBullet.
The odds just spell bullshit.
Their "automated scanning process" didn't just happen to pick the popular app that gives users more control and platform independence from Google.
Because really that's what it's about, PushBullet is dangerous to a closed ecosystem like Google wants.
(Also, I believe that given the permission system, and if an automated scanning system was in fact in place, it should be super easy to point out exactly what is wrong, like compiler errors. And since the permissions are granular, there's no "gaming the system" about it: you either get the permission or you don't.)
> If you're going to run a platform like this and capriciously threaten to kick people off of it, it's pretty antisocial to hide the reasons why.
I don't believe Google gets to be "antisocial"; they're not a person. They have existed for ~20 years and are made of shifting people, none of whom really controls the whole. Insofar as it can be "antisocial", I wonder if it even vaguely "realises" that its interchangeable parts are the same entities as its consumables.
That's a bit extreme. Google sends out emails every few months about new policies that may trigger checks. They definitely aren't very good at making automatic checks that are relevant or have much relationship with the policies they publish. But that is a bit different from strategically attacking specific apps, beyond choosing the X thousand most popular apps to affect the most users per presumed threat.
(I get various compliance spam and ignore it, and among things with small numbers of users and/or owners who lost interest, a small percentage a year fall out of the stores. The only differences are that we don't care enough to forward the threats Google sends us, and no one would bother to amplify them.)
> And if that automated system can make a decision to flag the extension, it could also include in the email specifically what caused that flagging to happen
Unless it's a black box machine learning model that just says yes/no but can't tell you why
I think it's more likely that their flagging is heuristic-based and could easily be defeated by a malicious extension author, and that giving detailed feedback would make reverse engineering the rules trivial.
Law enforcement has a solution for that: Do tell people what the rules are and what they are doing wrong. Don't tell them how they got caught and what methods were used to catch them.
Telling people that they are going to be severely punished within days by a very powerful global entity, without telling them which specific rules they are in breach of or what they have to change is a dystopian nightmare.
Being audited isn't an explicit accusation that you've violated anything either; at best, it's an implicit suspicion which the tax authority is fully entitled to probe, and we as taxpayers are given reasonable opportunity to corroborate our claims.
Absolutely. My point is simply that you can tell people what the rules are and which ones they may have broken without revealing absolutely everything about how you detect any malicious behaviour.
> flagging is heuristic based and could easily defeated by a malicious extension author
Well, if the reason is the "https://*/*" permission or something like that, and it is flagged and reported automatically, and everyone learns not to use it, it's a win-win. Benign extensions will have smaller targets on their backs, and malicious extensions will become less malicious OR will have to jump through hoops to achieve what they're trying to achieve (and will become easier to detect).
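(To illustrate how mechanical this particular check is, here is a toy Node.js script that flags over-broad host patterns straight out of a manifest. This is obviously not Google's actual pipeline, just a sketch of the kind of specific report a rejection email could contain:)

    // Usage: node check-permissions.js path/to/manifest.json
    const fs = require('fs');

    const BROAD = ['<all_urls>', 'https://*/*', 'http://*/*', '*://*/*'];

    const manifest = JSON.parse(fs.readFileSync(process.argv[2], 'utf8'));
    const flagged = (manifest.permissions || []).filter(function (p) {
      return BROAD.indexOf(p) !== -1;
    });

    if (flagged.length > 0) {
      console.log('Over-broad host permissions requested: ' + flagged.join(', '));
    } else {
      console.log('No wildcard host permissions requested.');
    }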
This isn't something vague or hard like OCR, voice-recognition or image classification.
It's a finite list of app permissions.
"Unless it's a random number generator that just says yes/no but can't tell you why" would be just as valid a non-excuse: it amounts to using the entirely wrong solution for the problem, and, surprise, the solution doesn't work very well.
> Unless it's a black box machine learning model that just says yes/no but can't tell you why
Recently there's been some interest in Explainable Artificial Intelligence (XAI) which addresses the problem of "black box" machines. There are techniques such as SHAP and LIME that can help the ML designers provide explanations for why any specific observation has been "rejected". That said, I'm not sure if that's what Google uses here.
Sure, it's anti-social. But is it more or less anti-social than running a huge software platform that advertises the ways to exploit itself?
I think it's a very interesting question. Who is more deserving of a good experience - uninformed users who might get taken advantage of by malicious extensions, or developers for the platform, who definitely do get taken advantage of by virtue of running afoul of rules they can't read?
Even if the average damage to a user is very low, the ratio of users to developers is massive.
> But is it more or less anti-social than running a huge software platform that advertises the ways to exploit itself?
Security through obscurity is no security at all. Google is not doing its users a favor by hiding the criteria it uses to determine whether an extension is malicious or not. Also, just because Google won't publish the criteria, that does not mean they can't be discovered by someone with enough determination. I think that a malicious actor is more likely to spend the necessary time and effort to do so than a developer who is more interested in creating an extension that will serve its users well. If anything, these types of policies drive away developers who act in good faith and leave the marketplace with more developers who are looking to exploit the users.
It seems likely to me that it's impossible to have an extension system that allows useful extensions that users want while also being completely secure against malicious actors. Security by obscurity is an important tool in the abuse fighting toolbox, because it allows you to have cheap heuristics while increasing the costs for malicious actors.
Not really, because malicious actors don't care about their reputations, accounts, etc.
A reputable developer has a reputation to maintain (by definition), which makes Google's threat to permaban them a threat indeed.
A disreputable developer doesn't care about their reputation (again, by definition). They can create a new throwaway account every day and apply using the same (or slightly, easily, altered) code with different permissions every hour until they get permabanned, and start again tomorrow.
So the "obscurity" can be discovered easily through experimentation by the bad guys, but is still obscure for the good guys. This is not a good outcome.
The point is making it more expensive for malicious actors. You can't make it impossible to get malicious extensions in, but you can make it harder. It's the same as captchas. You can also defeat captchas, but adding them reduces abuse a lot. As with captchas you're making it harder for good actors too. The difficult part is finding a good balance.
> The point is making it more expensive for malicious actors.
Malicious actors don't care about expenses nearly as much as benign ones do. So the point ought to be, if you want me to fix something, tell me what's broken.
Except that in this case, it's misplaced, and causing benign actors far more pain than malicious actors. If they want to hurt malicious developers, they need to flag extensions as untrustworthy for:
1. age < 180 days
2. dau < 1000
3. some rule around user reports of malice on uninstall?
and this gets a bright warning banner on the top of the page, and it can't be discovered through the chrome store until crossing these thresholds.
But this is not email, where literally everybody can send things. Raise the barrier at the submission level, where you must be authenticated, to get rid of automated trial-and-error attempts; I bet they can.
But when an extension/developer is there, when it has a long history in the store and is used by 1M users, hey, give some human feedback. It won't break your automated anti-spam/scam rules.
Getting kicked off the platform happens on HN itself a lot. I get flagged on HN on an almost regular basis (and my ability to reply/talk back is taken away) for speaking my mind (no curse words, just opinions that others strongly disagree with) about Google, HN itself, Amazon et al. If I ascribe an ulterior motive to someone in power, I get kicked off. Worse, sometimes I get attacked personally. So I think it's pretty ironic to complain about being kicked off on HN. This is probably my 50th username.
Who is referring to getting kicked off HN? I believe the post you're responding to, which said "If you're going to run a platform like this and capriciously threaten to kick people off of it," -- the "it" referred to Google.
> given that they had the bug, Chrome was absolutely in the right to deny them the first time
Sure, but if that bug gets fixed, why reject the fix?
Also, if that's the bug that was the concern, it would certainly be nice to have something in the email along the lines of "We don't want extensions to ask for access to every website anywhere on the Internet, so please narrow your http and https permission request."
> they're too busy to explain to everyone what they're doing wrong
If a human flagged this, how much longer would it have taken to add the sentence above to the email? Or even write that instead of the useless generic boilerplate that's in the email?
If, OTOH, an automated bot flagged this (which is what I suspect, and what others seem to suspect too), why didn't whoever wrote the automated bot put at least some kind of clue in the script that writes the email? Something like "if bad thing #6 is found, add text XYZ to the email".
> we have a ten-person team, we'll get back to you within 5 years?
It is TOTALLY not OK to have a ten person team (and I have no way to know if they even have that).
Considering the fact that the browser platform is now a real service required by billions of people, they should be responsible to have enough staff to handle extensions properly.
Aside from the amount of money the Chrome extensions make them, the amount of influence it buys them, and the amount of damage it can do, it should be required of them to have a real team to handle it.
Imagine if I ran a private airline that provided service for 4 billion people around the world, and the entire team that inspects, approves, and monitors the pilots and co-pilots had ten people. Would you think that is acceptable, or would you expect multiple governments to intervene?
> Considering the fact that the browser platform is now a real service required by billions of people, they should be responsible to have enough staff to handle extensions properly.
Extensions don't have to exist at all. At least for a very long time, there were no extensions for Chrome on Android (not sure about now, but that may still be the case).
I suspect google would love to just deprecate all extensions entirely, they're probably a massive headache for them (as this thread shows).
A lot of things Google used to do that attracted geeks like us are really a problematic thing when the average masses of people are using them. Google is learning that the hard way, and the process of fixing that pisses off us geeks.
Apple took a different path of locking things down more aggressively earlier, and a lot of geeks (myself included) hated them for it. Now, after we're seeing how the modern world is so incredibly tech illiterate and how security issues are affecting the world, I think Apple got this right and Google got it wrong, and Google knows it and is trying to fix it, which is pissing us off in the same way Apple used to piss us off (but we apparently got over it).
Yes. So more importantly, these may or may not even be the kinds of things that the system flags. So we can’t even say that Google’s system is working correctly and is flagging things correctly. I’m happy to concede that the plugin needs to fix vulnerabilities, but it seems like it’s distracting from the main argument, which is how the hell do devs know what to fix in these review processes. I’ve had similar experiences with Apple, Google, and especially Facebook in the past.
By "the bug" I mean the fact that they're asking for permissions on any site when they don't need them - which is a bug (and a serious one) whether or not Chrome notices.
I'm not sure I get your point. Yes, that permission was overreaching, but they fixed that and were still rejected, left in limbo without Google's system giving them enough specific information to reliably mitigate whatever it is the system did not like.
They haven't fixed the localhost thing which is in a roughly similar category. Google's response was vague and poor but the problems this extension had are quite serious. More serious than, say, Zoom's recent ones.
The theoretical problems that are also hypothetical, because the browser vendor isn't specifically saying that this is what they take issue with, even after the developer tried to mitigate it? And which are apparently pretty common, with the prescribed best practice being buggy and badly documented, as per the thread below?
I'm fine with putting some blame on the extension developers here, but this communication by Google is pretty abysmal, and for all practical purposes more worrying to me than overreaching permissions that have apparently not impacted security in the real world at this point. The thread below indicates that the localhost thing was/is pretty standard for some use cases; if Google wants that changed, they could just communicate it clearly, openly, and with a good upgrade path, not with vague, unspecific "do something or you're out" messages. Even a few links to documentation that likely didn't exist when this was first implemented would be enough to turn the odd messaging into something actionable.
I just did not see any indication of the permission actually being abused by this specific extension, hence the hypothetical. That's not meant to dismiss that it is an issue but in this specific case I think the communication is worse than the potential problem given that the devs don't seem to have a negative track record and actively work on mitigating it.
As for the redundancy, I blame my lack of coffee for that, apologies.
> Would it be better if Chrome said "There are 200,000 extensions on the Chrome store, each and every one of them deserves attention, we need to spend at least 30 minutes looking at each one and composing a response, we have a ten-person team, we'll get back to you within 5 years?"
Yeah, but with 100 people it would take only a few months, and after that it would take far fewer people to maintain everything. Also, they could just have their algorithm do the flagging and then have a team of 10-20 people to handle users. It's just that they don't care (enough). Not all 200,000 extensions are being killed in the next 14 days. Also, if this is done for security reasons, then put this responsibility with your security team and make that team bigger.
I see a lot of threads on HN discussing complaints about how locked down Apple is. I suspect many of the extensions that are problematic for Google never would have even been allowed into Apple's ecosystem in the first place.
In general, Safari's extension system is more restricted than Chrome's, and changes that Chrome is attempting to make to be more similar to Safari's approach result in fierce backlash exactly like this thread (e.g. https://www.wired.com/story/google-chrome-ad-blockers-extens...).
> we have a ten-person team, we'll get back to you within 5 years?
I'm of the opinion that a company with revenues > 30 billion dollars per quarter could afford to increase the ten person team ten-fold to get the response time to a reasonable number.
The tool could tell them what the problem is. There are fine grained permissions, and Google could say which of those fine grained permissions are used badly. OR, they could just restrict those permissions themselves in the browser.
If it did, it'd be easy for malware authors to work around the scanner. The system we've got right now isn't great, but I've yet to see any better ideas.
How does that make any sense? There's some set of permissions that's safe enough to approve, but too dangerous to tell you it's safe enough to approve?
I don't understand what you're talking about. If the scanner looks for extension with permissions for all your web browsing activity, so malware authors stop asking for permissions for all your browsing activity...isn't that great news?
I understand that's the reason they give. I just don't believe them. At some point you have to assume good faith. Maybe that point is when the item in question has over a million users and a good rating.
Couldn't malware authors start from the other direction? Create a no-op extension with no permissions and gradually add things until it's no longer approved.
Only years later, as in the case of a certain Vietnamese hacking group that did exactly this starting at the end of 2015: they didn't have their apps yanked until Nov & Dec of 2019, with another batch located & yanked only last month, well after any and all damage was already done to those who used the apps.
Honestly I wouldn't give them the excuse. They, more than anyone else, have the resources to either:
- automate this with a message that tells you what thing is dangerous
or at least:
- automatically escalate to a human who will spend 30 real minutes on it, for extensions that are popular/have many users (that sounds like the basic thing any business would do!)
> But given that they had the bug, Chrome was absolutely in the right to deny them the first time.
This is not a given at all.
Google's communication was and is absolutely unclear about what their problem is.
And if you take their communication at its word (dumb idea), they have been pretty clear that this was in fact not "the bug", and that it has in fact not "been fixed".
Regardless of your (and my) opinion about apps requiring access to every site on the www, Google did exactly the opposite of indicating that this was the reason PushBullet is threatened with getting kicked off the store.
How many of those extensions have 1M+ users, though? It's called prioritizing. The fact that they treat popular extensions just like one that Tom wrote for him and his family to share pictures is poor management. They are making money, so there should be more accountability on Google's part, even if they don't "have" to.
Google's customer service is pretty terrible across the board and it seems their review process is no exception, but even so, reading the account of the above issue with domain access does not imbue in me a sense that the developer of this extension is competent. It would not surprise me in the slightest if there are many many other issues with this extension's code.
> ... all of this is consistent with an extension that should be kicked off the app store within 14 days.
No, it absolutely isn't. You can't take someone's livelihood off the app store within 14 days when they have every interest in complying with your policy but you just don't feel the need to even tell them what the policy is. The fact it was approved in the first place is Google's fault, not the app developer's, and so is totally irrelevant to this saga.
If you can hit the Reject button you can fill in a text box that says why. There's no excuse for that.
The outcome seems to be that this extension fixed the bug and will still be kicked off the Chrome Web Store. That's probably not a net benefit for users _as a result_, even setting aside the unclear communication.
Imagine using a build system that doesn't give any error messages, except to say at the end "build failed, there was a syntax error". This sucks as a tool. Everyone would avoid it if they could, even if it found real bugs.
We shouldn't accept this level of information from services either.
> ... an amazingly huge oversight ... a point in favor of the review process
This seems like a rosy interpretation. It's not actually clear whether http permissions really are the reason the extension got flagged, and the fact that fixing them didn't affect the review feedback implies that the process isn't working the way you're imagining.
And both Grammarly and LastPass have had security bugs that let any website worm their way into the extension and access all the data from the extension (anything you've ever typed, for Grammarly, and all your passwords, for LastPass). Extensions with wide-ranging access are useful, and there's a reason Chrome has support for it, but they're also very very hard to get right, even if your entire business is writing a security-focused browser extension.
You could go the approach Firefox is going on mobile where there are currently six vetted extensions. As it turns out, they all need access to every website (or fine-grained APIs, perhaps). But... there are six of them. https://blog.mozilla.org/addons/2020/04/14/april-extensions-...
Do you have a link for that claim about LastPass? I use the extension and am wondering if I shouldn't use a PM extension that's more reliable in terms of security. Any recommendations obviously welcome.
The LastPass issues are all pretty old at this point - I mostly mention it to drive in the point that getting this stuff right is hard. (For what it's worth, the researcher who found those issues has good things to say about LastPass: https://twitter.com/taviso/status/1167311357957435392 and also fairly negative things to say about 1Password, which is what I happen to use.)
IMO a password manager is an extremely critical piece of software, and I'm only OK with one if I trust its security model.
There are a couple whose security models I do trust. However, merging those security models with random extensions that may or may not have full run of all code executing in the same context as my password manager is a hard no. It's baffling to me that any legit password managers go to the trouble to write and support browser extensions, given the risk. It's betting your reputation for security on a very small amount of user convenience.
The sad thing is that there are so many bad extensions, and they are run by professionals (crime gangs). They can figure out how to game the system (since there are no humans involved, that is definitely possible). So this half-assed approach by Google just helps malicious actors: they just need to stay under the "radar" and game the system.
Google is trying a "half-assed" approach, which does not help users, does not help developers, and does not help Google either.
The right approach is to do these reviews fully, with a human involved at some point. Of course, these things are not free. So they could require a $1000-or-more per year subscription if an extension requires more than basic permissions. They already require at least 60K per year for using an API. Or something like that.
But it could also be that Google does not care about real security, just the appearance of security. They need to sell ads, and all the other things they do (Chrome, Android, G Suite, etc.) are just a smoke screen.
The problem is that extension behaviour is very limited without the https://*/* permission.
Say you have an extension that implements spelling check or grammar check. It needs access to every single website to find the text fields it wants to add functionality to.
Same thing with a password manager extension: it can't find the login boxes without the https://*/* permission.
If you want to read data off any page users are on, or add UI elements to every page users are on, or modify style sheets on any page, you need that global permission.
About the only extensions that don't need global https permissions are extensions designed to only work on a finite set of websites (facebook improvement extensions, or reddit improvement extensions), or things like push-bullet that don't really need to be a chrome extension in the first place; they could be implemented as a system tray applet.
This isn't the fault of extensions, this is the fault of chrome for not providing ways to do things without full wildcard https://*/* permissions.
Pushbullet actually has a desktop application with a tray applet. But if you keep your browser running 24/7 it might well be worthwhile to use an extension instead of another piece of software that installs separately, and has more access to your computer.
It should be possible to have an extension interface that allows a spellchecker/grammar checker to get a button on text input fields such that if you click it, then the extension is activated on that field at that time. That seems like a much better Least Privilege design to me than giving the extension access to... literally everything you do.
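(Chrome already has a partial version of this: the activeTab permission grants access to a page only after the user deliberately invokes the extension. A minimal sketch using the Manifest V2-era APIs current at the time of this thread; the file names are placeholders:)

    // manifest.json would declare, instead of any host permissions:
    //   "permissions": ["activeTab"],
    //   "background": { "scripts": ["background.js"] },
    //   "browser_action": { "default_title": "Check spelling" }

    // background.js: the extension only gains (temporary) access to the
    // current tab when the user clicks its toolbar button.
    chrome.browserAction.onClicked.addListener(function (tab) {
      chrome.tabs.executeScript(tab.id, { file: 'spellcheck.js' });
    });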
>if you have an approved extension with https://*/* permissions and active users, malware authors will offer to buy your extension for a very high price.
This is interesting, and I didn't realize this was a thing.
Are there any lists of extensions with these permissions, or ways that I, as an average consumer, could easily audit my list of extensions for this?
I'm using Firefox, and all I have to do to view permissions for an extension is to go to about:addons and click an extension; there will be a "Permissions" tab that lists what permissions the extension requests. It's similar in Google Chrome, IIRC. Other than that, both browsers list the permissions an extension requests when installing it, and again when the extension requests new permissions.
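(If clicking through every extension is too tedious, a tiny audit extension holding the "management" permission can list everything at once using Chrome's chrome.management API; a sketch:)

    // Requires the "management" permission. Logs installed extensions
    // whose host permissions cover every website.
    var BROAD = ['<all_urls>', 'https://*/*', 'http://*/*', '*://*/*'];

    chrome.management.getAll(function (items) {
      items.forEach(function (ext) {
        var hosts = ext.hostPermissions || [];
        if (hosts.some(function (h) { return BROAD.indexOf(h) !== -1; })) {
          console.log(ext.name + ' can read data on all sites');
        }
      });
    });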
Yeah. I've refused to install extensions in the past because Chrome tells me the extension wants to read all data on all web pages... No thanks. (Except those extensions I KNOW must have those permissions.)
It's a very good thing that the app store may nudge people into changing permission scopes.
Hope extension authors manage to meet the deadline and get their extension re-approved.
I agree. This is Hacker News, so it is easy to see why devs would feel otherwise, but as a non-dev, I represent the end users.
Why would anyone think it is appropriate for Google to reveal their hand, and allow blackhat operators to build apps right up to the max limit of permissions? (Which is what would happen if the limits were revealed by Google via white-glove customer service.)
If goog did provide guidance on permissions, goog would literally have to audit every app in the store, or come up with a way to separate bad actors from good ones.
So, I'm sorry. No. If it's between one hacker's inconvenience, or in the extreme case livelihood... and the retirement savings bank accounts of many grandmas, I am going to side with the grandmas.
Google is doing many things wrong. Keeping the "red line" of allowable permissions secret from data-hungry developers... is not one of them.
> If goog did provide guidance on permissions, goog would literally have to audit every app in the store, or come up with a way to separate bad actors from good ones.
This makes no sense. For the sake of the grandmas, Google already needs to audit every app in the store and separate bad actors from good ones.
How in the world would making it more clear how to write more secure extensions possibly worsen the extension store's malware problem?
> How in the world would making it more clear how to write more secure extensions possibly worsen the extension store's malware problem?
Unfortunately, information that helps the good guys get their extensions past the audit check is exactly the same information that helps the bad guys get their extensions in too. The bad guys simply move onto the next security flaw that Google hasn't anticipated.
Maybe the bad guys use some common tactics to get their scam extensions in the store which good guys don't, which is easy for Google to detect and flag. If you release a list of known no-no's, the bad guys just get smarter and avoid them.
This obviously skews in favour of refusing some good extensions to keep most bad ones out.
In terms of Google already auditing every app, check out the source code for Dark Reader https://github.com/darkreader/darkreader. It's fairly complex. I can only imagine how many extensions are as, or more, complex than that. I wonder how much auditing is done manually vs automated.
> Unfortunately, information that helps the good guys get their extensions past the audit check is exactly the same information that helps the bad guys get their extensions in too.
This is just an assertion that I'm wrong. It can't possibly persuade me or the people upvoting me. Would you be persuaded by me just asserting you're wrong?
> In terms of Google already auditing every app, [e.g.] the source code for Dark Reader [is] fairly complex.
Firstly, Apple is able to do it, there's no reason to make excuses for Google. See anecdotes elsewhere in the thread about how Apple attaches screengrabs, explains rejections by phone conversation, even decompiles apps to point to exact methods/lines of code in apps they reject from the iOS App Store, even small free ones: https://news.ycombinator.com/item?id=23170498
And anyway, without reading a single line of Dark Reader's source code, I can deduce plenty of permissions it shouldn't need, e.g. "cookies" (which it doesn't ask for). Without reading a single line of PushBullet's source code, one can easily deduce it shouldn't need access to "https://*/*", and indeed, it doesn't, yet it asked for it.
Can you explain concretely (not just by pointing to unnamed "no-no's") how could harmful side-effects result from Google telling PushBullet that "https://*/*" specifically was in violation?
> If you release a list of known no-no's, the bad guys just get smarter and avoid them.
Firstly, isn't bad guys avoiding no-no's exactly what we want?
If you're saying that there may be some, probabilistic red flags that Google uses to find possible bad guys—sure, that could be true, I have no idea and you don't either, you admitted it was speculation. But in this case, "Request access to the narrowest permissions necessary to implement your product’s features or services." is not a probabilistic red flag, it's a hard rule.
Again, concretely how could harmful side-effects result from Google pointing out the specific violation?
> If goog did provide guidance on permissions, goog would literally have to audit every app in the store
You’re talking about vetting suppliers and products in order to ensure they’re selling safe products that consumers want. That sounds like an ordinary part of every retailer’s job to me.
Huh? How many products sold at Macy's allow their purchasers to go completely bankrupt just by buying them?
How many products ask for permission for visibility into your bank account info? Your personal travel history? And how many disclose how they are using this info in a way you can understand?
When devs start signing their apps with their full name and information about where they live, let's talk.
Hit it right on the head. The localhost thing coupled with the http thing makes me cringe.
And to know they went from https://* to just one domain, yikes indeed. Then they left localhost. Hell, that probably made the case worker's knee jerk even harder because of such a dramatic change; I can't imagine they spend much time on each case.
Yes, 14 days is not a lot of time, but this is the ecosystem that needs to change. So the maintainer should have started out being more transparent about their architecture, period.
The fun is over, Google doesn’t care about loyalty or gestures of effort, this is an emotionless process. They care about provable, auditable compliance.
When you think about it, for Google this is just a little sad story about an extension that didn't make the cut. But it's a small price to pay in exchange for evidence that they are "enduring self-inflicted wounds", which they can use as ammunition in avoiding the billions of dollars in fines they face for violating user privacy laws.
Yeah, it sounds like they were working on a "it works, ship it" approach. That's not good enough when there are security implications to what you're doing.
The toughest thing with Google is getting in touch with a human there. It's virtually impossible unless you know people working there. And this isn't just about scale. Take cloud services: getting in touch with GCP customer service is a lot more difficult than with other providers.
There is a native app for Pushbullet on Windows that you can install to add more features, and this is how the extension is able to communicate with it. I know there was a problem getting duplicate push notifications from the app and Chrome itself, and that was a way to remediate that.
At least that's what I remember from the time I used it.