> Google Play uses your app bundle to generate and serve optimized APKs for each device configuration, so only the code and resources that are needed for a specific device are downloaded to run your app. You no longer have to build, sign, and manage multiple APKs to optimize support for different devices, and users get smaller, more-optimized downloads.
But since all this logic sits on Google's servers and may involve signing many APKs for a single app and version, Google has decided it needs your signing key for this feature. That is already odd, because you could also imagine a model where you give Google not the key but a service: Google presents you an APK, and you sign it. You could then inspect it retroactively and run scanners on it, if you want to. The key stays yours, and you would know what Google is up to with your application.
If you have a problem with giving Google your signing keys, you can simply avoid this feature. But apparently there is a fear that Google wants to make the feature mandatory, which would give them the ability to alter basically any app on the Play Store as they see fit. Or they might in fact be forced to by governments. Already, many providers like Facebook take down public posts because a local government disliked them. What if a government told Google "please install this altered Signal app on this person's device"? And yes, Google's apps already run as system apps, so they could already do something like that, but such an implementation is much harder to make consistent across different vendors.
The reason for digital signatures is that they make a claim: "As a representative of organisation A, the binary with shasum XXXX is our work. We stand behind it." Why would I generate a private key, then share that private key with Google? If Google wants to claim that a binary they're shipping to users is the same as the one they received, they don't need my private key to do that. They can make their own signature, with their own key. Using a key I generated and then handed to them is just dangerous security theatre. Google is asking me to vouch for binaries they sign and serve. But I can't vouch for those binaries: I didn't produce them and can't make any claim about their provenance.
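To make the "signature as a claim" point concrete, here is a toy sketch. Real APK signing uses asymmetric keys via `apksigner` (RSA/ECDSA); in this stdlib-only illustration HMAC-SHA256 stands in for a signature, which is enough to show that the claim binds to exactly one binary and breaks on any modification:

```python
# Toy illustration of a signature as a claim about one exact binary.
# HMAC is a stand-in here; real APK signing is asymmetric (apksigner).
import hashlib
import hmac

def sign(private_key: bytes, binary: bytes) -> bytes:
    # The developer vouches for exactly this binary's digest.
    return hmac.new(private_key, hashlib.sha256(binary).digest(),
                    hashlib.sha256).digest()

def verify(private_key: bytes, binary: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(private_key, binary), signature)

key = b"developer-private-key"   # hypothetical key material
apk = b"original apk bytes"
sig = sign(key, apk)

assert verify(key, apk, sig)                     # untouched binary: claim holds
assert not verify(key, apk + b"tamper", sig)     # modified binary: claim fails
```

The moment anyone else holds `key`, the claim "organisation A produced this" stops being checkable, which is the objection above.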
> If google wants to claim that a binary they're shipping to users is same the one they received, they don't need my private key to do that. They can make their own signature, with their own key.
IIRC this is how it works by default for new apps. Uploading your existing signing key is only necessary for backwards compatibility to allow you to update existing apps that have already been published using that key.
> IIRC this is how it works by default for new apps.
Personally, I'd like to see Apple, Google, and possibly Microsoft take this to what I think is the obvious conclusion: developers and independent software vendors submit source code, artwork and other assets, sufficient metadata, and build instructions to the store; the store builds and publishes the applications and makes them available to users. F-Droid builds and publishes using its own keys, and while there are delay problems for some time-sensitive apps (most notably NewPipe, an application to watch YouTube videos), it works out quite well for the most part. I can't imagine why Apple and Google couldn't run what are essentially multiple build runners in parallel to cut this delay down to something like an hour at most.
In return, at least for Android (Apple is a bit of a special case), I would like to see it made possible for F-Droid or something similar to update apps without requiring user intervention. Not sure how the technology would work exactly, but my understanding (please correct me if I am wrong) is that the Google Play Store has super cow powers, and I think it should be able to "bless" other applications with the same super powers.
> The model also shields to a certain extent against conflict of interests (the product is the user, i.e. ads/tracking/hostile maintainership takeover)
Can you explain how? Since I've published things to F-Droid and since they also control signing and building (just like Apple and Google in this article), they can freely modify and change what's published on their store.
Just like with Google and Apple, you need to inherently trust them that they don't let people with access tamper with your app.
The question is more of where the money is coming from. Google gets paid by advertisements so that's where their loyalties are.
F-droid is funded by contributions and donations, and they need both. They also have everything out in the open, which brings extra scrutiny.
And the last part is just culture. F-droid is a community project with clear set goals. Google also has clear set goals, they just don't happen to align with their users for the most part.
One example would be basic apps and games like flashlights, editors, sudoku, minesweeper, note-taking apps etc., of which 95% on the Play Store are ad/tracking infested. I just skip those and install F-Droid's "recommendation", problem solved. Also: Fennec is great (a rebuild of Mozilla Firefox for mobile), and they also offer older versions in parallel (also via their archive). Their Firefox "Fennec" build, while not a fork, nevertheless has some tweaks, optimizations, and brought-back add-ons (via collections; you need to read the whole thread, it's long though), much appreciated:
> I have just submitted a Fennec update to 81.1.1. Should be available soon™. This version brings a lot of changes, like a new UI and modular codebase. The bad news:
> Mozilla now tracks you even more actively using proprietary 3rd party services. I removed all tracking I found. (Firebase, Adjust and Leanplum libraries were replaced with stubs, so some analyzers can erroneously report their presence in the APK.)
> The new UI may break your habits and disappoint you. (IMHO it’s not as bad as one could conclude from reading r/Firefox.)
> Android 5.0 or later is now required. Mozilla decided so.
> x86 devices are not supported anymore. I stumbled upon linkage errors and gave up. Help is welcome.
> The good news is that Fennec F-Droid is alive and continues to be truly free software.
> Can you explain how? Since I've published things to F-Droid and since they also control signing and building (just like Apple and Google in this article), they can freely modify and change what's published on their store.
I think the perspective is that the distribution shields its users from possible upstream shenanigans (think of the stories we used to hear about popular free and open source Chrome extensions getting bought and sold and ending up injecting ads into Chrome's start page).
> The model also shields to a certain extent against conflict of interests (the product is the user, i.e. ads/tracking/hostile maintainership takeover)
What I find difficult to wrap my head around is that the Debian model (I know other distributions do this as well, but I have to give it some name) is very difficult to scale. We basically need maintainers at every single Linux distribution who will (I imagine) go through all the changesets/diffs and painstakingly build the deployable artifacts for their distribution. I can't imagine a single maintainer being able to maintain more than a dozen or so packages, and there is a lot of duplicated effort. The Play Store has about three million apps. I know we want to be able to escalate to a human when necessary, but I imagine some automation is necessary.
As I write this, I can see the contradiction in what I am asking for... if the store builds, signs, and distributes binaries using the store's credentials, it still cannot vouch for the quality of the application. ...
I was just thinking that if the app stores had access to the source code and the build instructions maybe that would help somehow but I didn't think it through.
Everything is standardized and automated. There's no need for human interaction. You can tweak your code if it fails to build. The important thing is that it's easier for Google/Apple to inspect your app if they have the code. (Maybe.)
For example they can simply refuse to release/publish anything if the code looks shit/obfuscated. They can explicitly ask questions about sections of code.
But since probably 99.9+% of "app review" is already automated ... likely there's no point in spending resources on creating a "GitHub clone" for submitting code to the various app stores.
The signing keys are more important for the security model of the device than for people to confirm that an apk was actually created by a particular corporation. Every single android user makes use of the former feature. There are 1B+ android users. I'd wager that well under 10,000 have ever checked the signature on an apk file themselves.
Most developers will let Google just generate the keys for them.
It is one of those situations where the stated excuse is far-fetched and much more complicated than the simple "we 'need' the ability to modify your app before it ships" — especially since the article gives a plausible scenario where that would occur (could be a totalitarian regime, could be an NSL from the FBI[1]).
This is a case where having a history of doing the "good and correct" thing would help reassure people that you aren't being nefarious. Sadly, lacking such a history, I don't believe much of anything Google says any more.
It is sad, really, but there isn't anything app developers can do except leave the platform.
Yes, I think this will be the way that Google satisfy e.g. the Australian government's mandate to aid intelligence and law enforcement agencies to surveil in a targeted way. Not "everyone gets a subverted copy of Signal", but "these three people get a subverted copy of Signal."
And also the European anti-encryption proposals [0]
Lawful intercept was done on IronChat, PGP-SAFE and Ennetcom, which gave direct access to the communications of criminals and is now successfully used in court. In the most recent case, the tactic of subverting all copies was successfully used by the police to decrypt Sky ECC and get access to 0.5 billion messages used primarily by criminals.
In this comment [1] regarding Sky ECC the question was raised
> How could I get a court order to get blanket access to Signal?
To which we now know the answer: on Android you don't; you ask for a court order covering a list of specific phones you want to listen in on.
I’m baffled that Human Rights & Civil Liberty organisations are not standing up more fiercely against the new Play Store policy.
It's just a little bit weird that Google designed the Play Store and Android around app signing if they then have to ask for those keys. They control the OS and the store; couldn't they just make devices trust the key of Google's app-repackaging service? This would be easier for everyone, and more honest for the user, who would get packages signed by whoever actually built them.
Why? They control the Play app itself anyway. Isn't verification done by the privileged "Google Play Services" background service? That's basically userspace, which is where Google pushes security updates (because carriers and phone makers don't).
I see, so the limitation is that app updates have to keep using the same key, and that's enforced by the OS? Couldn't the Play Store uninstall then reinstall in that situation, to update to the new key?
Is it? They control the store, which can install apps remotely, and most developers are handing them their private keys for convenience. You also have to trust that Google gave you the right one every time you install a new app.
What is the practical advantage of having apps signed by long-lived keys that were handed to Google without your knowledge over a Google key bundled with the store?
Either way, you keep the same option of installing an alternate store like F-Droid or downloading APKs if you don't trust Google.
Smaller apks are great, but throwing away the security model to get smaller apks really isn't. :(
Has Google come out with a way to compress string localization information yet? That would make a big difference for apps that support lots of languages, and last I checked (which was a few years ago, so happy to be wrong), Google didn't have a good solution there.
There's support for APK splits by locale, where you can upload a separate APK for each language which keeps the size down.
The AAB system criticized in this article will fix your issue too - Play Store delivery will automatically split APKs into per-locale slices and only download languages you need on your device.
> There's support for APK splits by locale, where you can upload a separate APK for each language which keeps the size down
How does this work if the user changes the locale on their phone? I guess you need to redownload the APK? I would still prefer compressed localization for all languages, and uncompress the needed one at install/start/sometime if it needs to be mmaped. If the phone locale is changed, uncompress the new one at some point.
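The "compressed localization for all languages, uncompress the needed one" idea above can be sketched in a few lines. This is a made-up format, not anything Android actually ships: every locale's string table is zlib-compressed into one bundle, and only the active locale pays the decompression cost, so a locale change needs no redownload:

```python
# Sketch of shipping all locales compressed and decompressing on demand.
# The bundle format here is hypothetical, not an Android resource format.
import json
import zlib

def pack_locales(tables):
    # Compress each locale's string table independently.
    return {loc: zlib.compress(json.dumps(strings).encode())
            for loc, strings in tables.items()}

def load_locale(bundle, locale):
    # Only the requested locale is decompressed (e.g. at app start,
    # or again after the user switches the phone's locale).
    return json.loads(zlib.decompress(bundle[locale]))

bundle = pack_locales({
    "en": {"app_name": "Flashlight", "on": "On"},
    "de": {"app_name": "Taschenlampe", "on": "An"},
})
assert load_locale(bundle, "de")["on"] == "An"
```

The trade-off the thread hints at is that Android `mmap`s its resource table, which is why a transparently compressed format was never the default there.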
> The AAB system criticized in this article will fix your issue too - Play Store delivery will automatically split APKs into per-locale slices and only download languages you need on your device.
Sure. The system fixes a problem Google (or its predecessor, not sure about timing) created, and let linger for over a decade, by handing control of signing keys to Google. Not a great fix. Sure, de facto, Google could patch any app via Play Services which has lots of permissions, but if they control the signing keys, they can more easily meddle.
Files in zip archives can be compressed, but don't have to be. If you compress resources.arsc, there are bad runtime consequences; much worse than the package bloat that results from not compressing it.
Are you saying that the SDK tools that build apks (aapt? I'm not sure) only compress files selectively? And what kinds of consequences are there if you do compress resources.arsc? I suppose it never keeps a copy in memory but reads it a lot, and if it's compressed, you're taking a performance penalty from it having to decompress the whole thing on every access?
Yes. Take a random APK and look at it with a zip lister. If resources.arsc is compressed, someone didn't use the SDK tools to build it (and I'd be shocked).
What you described is about my understanding of what happens. I would hope it doesn't uncompress the whole thing on every access, but only far enough to read the value; but I'm not sure. Note that it's not just the app itself that uses the app's resources; other elements of the OS use them too.
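You can check this yourself with the stdlib `zipfile` module. The sketch below builds a tiny zip on the spot (the entry contents are placeholders, not a real APK) to show the stored-vs-deflated distinction the comments above describe:

```python
# Checking whether zip entries are stored or deflated, the way a zip
# lister would show an APK. The archive here is built in memory with
# placeholder bytes; it is not a real APK.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    # The SDK tools leave resources.arsc uncompressed so the runtime
    # can mmap it and read values without inflating anything.
    z.writestr(zipfile.ZipInfo("resources.arsc"), b"\x02\x00\x0c\x00" * 64,
               compress_type=zipfile.ZIP_STORED)
    # Most other entries, like classes.dex, are deflated to save space.
    z.writestr("classes.dex", b"dex\n035\x00" * 64,
               compress_type=zipfile.ZIP_DEFLATED)

with zipfile.ZipFile(buf) as z:
    methods = {info.filename: info.compress_type for info in z.infolist()}

assert methods["resources.arsc"] == zipfile.ZIP_STORED
assert methods["classes.dex"] == zipfile.ZIP_DEFLATED
```

Running the same `infolist()` loop over a Play-built APK is a quick way to confirm the claim that `resources.arsc` ships stored, not deflated.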
I know nothing about the real situation, but commenting off the cuff based on what I read here, it sounds like either:
1) google just sign these apks with their own certs.
2) google should present the developer with a google public key that the developer signs, allowing google to sign an apk with a google key that has a chain of trust to the developer.
>a service where Google presents you an apk, and you sign it.
This doesn't work if the idea is to dynamically generate APKs for the vast Android ecosystem. In theory they could dynamically upgrade apps to be compatible with future OS versions etc.
To add some more context: when you provide an app bundle, the Play server looks at your device's display size, density, OS version, and architecture. If your OS version is new enough (Android L and later, I believe), it can send you a bunch of split APKs. That means one big APK with DEX code, one with just the arm64-v8a native code, one with just English translations, one with just xxhdpi assets, etc. In theory a developer could build all these split APKs, sign them, and upload them to Play.
However, on older devices that don't support split APKs, Play must compose and sign one custom "fat APK" with all of the stuff specific to your configuration, on the fly. There are a lot of different options for this (you can generate them using bundletool if you're curious). The upload size of all these redundant APKs alone would be a huge burden on developers.
This isn't to say that there couldn't be a way to do APK splitting while maintaining the integrity of the app signing system. My guess is that it wasn't a high priority to do so.
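The server-side matching described above can be pictured as a simple filter: each split declares which configuration dimensions it targets, and a split applies when every targeted dimension matches the device. This is a toy, not bundletool's actual algorithm:

```python
# Toy version of Play's split selection: pick the split APKs whose
# targeting matches one device's configuration. Names and the "targets"
# schema are made up for illustration; bundletool's real logic is richer.
def select_splits(splits, device):
    chosen = []
    for split in splits:
        # A split applies when every dimension it targets matches the
        # device; the base split targets nothing, so it always applies.
        if all(device.get(dim) == val for dim, val in split["targets"].items()):
            chosen.append(split["name"])
    return chosen

splits = [
    {"name": "base.apk",            "targets": {}},
    {"name": "split_arm64_v8a.apk", "targets": {"abi": "arm64-v8a"}},
    {"name": "split_x86.apk",       "targets": {"abi": "x86"}},
    {"name": "split_en.apk",        "targets": {"locale": "en"}},
    {"name": "split_xxhdpi.apk",    "targets": {"density": "xxhdpi"}},
]
device = {"abi": "arm64-v8a", "locale": "en", "density": "xxhdpi"}
assert select_splits(splits, device) == [
    "base.apk", "split_arm64_v8a.apk", "split_en.apk", "split_xxhdpi.apk"]
```

For pre-L devices the same selection runs, but the chosen pieces are then merged and signed into one fat APK on the fly, which is where Google's need for the signing key comes in.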
it doesn't matter, they could just push an android update that bypasses your signature if they really wanted to. granted that's a bigger deal, but they control the ecosystem in google play and hold the signing keys for android and google play itself, you already trust them.
Except verifying outside of Android tells you nothing about whether the application as installed on an actual device has been tampered with, so you don't really gain any security from this.
they could push an update that makes it appear like the author signed version is on the device, but in reality a different version is run.
they control the keys to the software delivery channel and the operating system. large parts of this are closed source. users and developers trust them and that's just how it is.
that said, it's not pretty and i hope it's just a backwards compatibility stopgap as discussed upthread.
I seriously doubt 99% of Google Play developers worry enough that they would take the time and money to run a signing server, and that would introduce a lot of complexity for Google.
> that would introduce a lot of complexity for Google
Oh well, they're imposing it upon everyone else.
Google controls what apps are distributed and run on over 40% of phones and tablets in the US.
Users deserve the right to know if what they're downloading and installing is what they're supposed to be getting. Developers deserve the right to know that what's shipping to their customers is what they intended. Billions of people are vulnerable if the Play Store's infrastructure gets, or is, compromised, or if its owners or governments decide to do something nefarious.
Users, right now, are quite happy trusting Apple (paragon of privacy and security) and F-Droid (the main open source store) to sign their apps for them, so there doesn't seem to be much reason for Google to waste extra effort not following them.
What's the business reason for Google to not follow Apple in this respect?
The percent of developers is likely small, but that’s the wrong stat, as we don’t care very much about the ~90% of devs that have one or two apps with a handful of downloads. Rather, this is about attacking big targets - a small number of apps with massive numbers of downloads.
Put another way, the percent of total app installs coming from devs who have the resources to run a signing server is likely much higher.
How is an automated signing server better security anyway? Google can still sign whatever they want, but now every dev has a massive security hole in the form of a server, reachable from the open web, that can sign code.
> Which would give them ability to alter basically any app on the play store as they deem fit.
Google already controls the operating system, the Play Store, and the SDKs you used to develop your app in the first place. If they wanted to alter your app there is already ample opportunity to do so, what additional trust do you gain by managing your own signing key here?
I don't want to defend Google here, but in theory, since Google controls the OS, it can also make it lie to you.
So you tell the OS to "show me this app's signature", and the OS can just lie and show you the expected signature. You want to copy the app to an SD card so you can check it on your Linux PC? The OS can copy the "legal" app.
Also yeah, it seems code signing won't affect anything if the OS wants to be malicious. "Super Secret Messaging App" asks the OS to load encrypt.so, its custom encryption library, and the OS can deliver a no-op library and say "Here it is!". The app wants to check the file's hash, the OS can intercept the hash method's return value and change it to the expected one...
> in theory, since Google controls the OS, it can also make it lie to you
Google controls Android, but it does not control every other OS and every piece of hardware.
If someone downloads an apk with their own custom Google Play client, running on their own computer, they can check whether it was tampered with. In the past a tampered apk from Google servers would have been signed by wrong key (because the proper key is controlled by developer), pointing to Google as culprit. Now it will be signed by "developer's" key (shared with Google), creating plausible deniability for Google and US intelligence services.
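The out-of-band check described above is, at its core, a digest comparison. In practice you would use `apksigner verify --print-certs` to inspect the signing certificate, but even a plain hash comparison against a digest the developer published elsewhere catches silent substitution — assuming the developer publishes one, which most don't:

```python
# Sketch of checking a downloaded APK against an out-of-band published
# digest. The byte strings are placeholders for real files; in practice
# you'd hash the .apk on disk and compare against the developer's
# published value (or use `apksigner verify --print-certs`).
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

downloaded = b"apk bytes fetched with a custom Play client"

# Hypothetical digest the developer published on their website.
published_digest = sha256_hex(b"apk bytes fetched with a custom Play client")

assert sha256_hex(downloaded) == published_digest          # matches: untouched
assert sha256_hex(downloaded + b"!") != published_digest   # any change shows up
```

The comment's point is that under the old model a per-developer key made a mismatch attributable to Google; once the key is shared, the same check can pass on a binary Google altered.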
> "Super Secret Messaging App" asks the OS to load encrypt.so, its custom encryption library, and the OS can deliver a no-op library and say "Here it is!". The app wants to check the file's hash, the OS can intercept the hash method's return value
This sounds extremely labor-intensive. Who will write all those no-op libraries? Who will pay for it?
Yes I'm aware of why giving Google our signing keys is a stupid policy...
As to the labor- and cost-intensity issue: the examples mentioned were about what happens if Google gives up the fight over end-to-end encryption under regimes that demand it (e.g. China, Australia). There's your answer as to who's writing, or at least paying...
If the signature does not match that tells you the app was tampered with, but the inverse is not true when your "adversary" controls the compiler, installer, and the operating system itself. Reflections on Trusting Trust (https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...) provides a good explanation as to why.
Also what is this "device configuration" thing exactly? Vector drawables help avoid the issue of having unneeded resources in an apk entirely. You definitely should be using them for icons if you aren't yet. Other resource types, like strings in languages you don't speak or layouts for tablets, are small enough that their impact on the apk size is usually negligible.
Given that Google has a history of accidentally breaking things in YouTube that only impacts Firefox, I'm 100% certain they can be trusted to muck around in apps written by others.
Think of the opportunities. Next time Google releases a new social media system they can automatically add it into every existing Android app as a login option!
Google dropping their payment system again? Not a problem, they can just change everyone's billing code.
Or when they do the monthly random feature deprecation on Google cloud they can just modify any code that accessed it, across all apps!
Why bother testing when your app code could be changed at any time by Google. The time and cost savings will be massive.
Idk sometimes seems like YouTube’s just been breaking YouTube lately, even outside of Firefox.
I think they must have recently changed playback sync to be cloud-first, as jumping back 10s in a video has recently been badly glitching for me both in Chrome and on the iPad app. Often jumps back 20 mins to the start of a video, or where a prior session’s playback had been saved, thereby losing progress you’d made. Not the end of the world but frustrating when you’re watching a lecture and want to catch a detail you just missed. Really breaks continuity.
I’m sure maybe it consolidates implementation making each client simpler and probably satisfies a couple buzzword checkboxes from the business side but why would I possibly trust picking up playback across devices when I can’t trust it on one?
They claim that because Google strips the developer signature and signs it themselves, they can modify the app and re-sign it. They suggest that an authoritarian regime could coerce Google into serving modified versions of eg. E2E encrypted messaging apps to people of that regime’s choice as a condition of doing business there.
Does anyone know if the iOS App Store has the same vulnerability? I know that they do clever things like universal apps and App Clips, but I’m not sure if they achieve it by stripping developer signatures and re-signing. Alternatively, since all signing certificates must be issued by Apple could they technically re-sign any app anyway if they’re coerced into holding onto the private keys they issue? I’ve never written an app in their ecosystem so I’m not sure exactly how it works or if they have an opportunity to do that.
Apple issue the certificate but you never supply them with the private key. There's nothing stopping them from issuing a certificate with their own key though. It's not like Android where the signing key has to match otherwise apps can't be updated (amongst other things).
They must be doing some re-signing on their side because the binary you upload is huge and it goes through optimisation on Apple's side so the user has a much smaller download.
They do something but it's not really comparable to Google's signing system. In the case of Google, developers voluntarily give Google the keys (or have them generated), and it's explicitly clear that Google will take your uploaded APK file and sign it themselves.
With Apple it's a bit more of an unknown. In terms of Bitcode, I can't see how Apple could take your binary and "recompile" the Bitcode into a device-specific format whilst still preserving the signature.
From a 2 minute test of an iOS app I work on, I can see that what's downloaded on an M1 Mac has changed quite a bit from what I uploaded. The most obvious thing is the code signature on the binary itself has been replaced with one issued by Apple and isn't the one my build server added.
> I can see that what's downloaded on an M1 Mac has changed quite a bit from what I uploaded. The most obvious thing is the code signature on the binary itself has been replaced with one issued by Apple and isn't the one my build server added.
So yes, Apple is already doing what Google wants to.
It's silly, because if you control the OS you control the app. They can push an OS or trusted app update that reads/writes the app's private data, or changes the shared libraries the app depends on, or with a little more work reads/writes the app's memory. Anyone claiming to provide protection from Google on a phone Google has remote root access to is selling a theatrical experience.
Google do not control the signing keys for Android for any phones other than their own Pixel line. So whilst true in theory, in practice the open source nature of Android with OEMs in the middle distributes the power around.
My impression is to implement things like the Play Store, Google Play Services has effective root access. I can't find any great sources for that though.
With older versions of Android, what Google could do was pretty limited (in terms of messing with the core OS). While Google Play Services has a lot of permissions, it all fits into Android's permission model and does not run as root. Package installation is done via communication with the Android framework's PackageManager class and the corresponding /system/bin/installd daemon. Silent installations and automatic updates are also handled via PackageManager, using a permission that system apps can obtain.
Overwriting most core OS files (eg. shared libraries) in a persistent way, even with an exploit, would be difficult since the entire /system and /vendor volumes are signed using the device manufacturer's dm-verity keys.
However, with Android 10+ shipping with APEX modules [0], Google's ability to push core OS changes to existing devices might be changing. I'm not sure if any devices ship with the unflattened (ie. updatable) type of APEX modules yet, but I'd suspect these would be signed by Google instead of the device manufacturer and would be distributed through Google Play.
This isn't about Google's control over the OS. Of course Google fully controls Android, so they can compromise it at any time. But if such a compromise were detected, Google would lose trust.
The move away from developer signing towards Google signing will make it harder to detect such an event.
My argument is that a hypothetical compromise of an app never gives google more power than they could hypothetically have now. I also don't see why it would be harder to detect. Why do you think that's the case?
This is a slippery slope fallacy. Your government has enough power to detain and execute you. Does that mean, that you should give them even more power?
Even if one OS component (Google Services) is centrally controlled and can be used to attack you, this does not mean, that you should make other parts less secure. Real-world attacks are complex and backdoors are fragile and prone to being detected. Embedding a backdoor in proprietary code of Google Services is easier than embedding it into AOSP. Hijacking a specific application is easier yet.
Google Playstore is a walled garden, like Apple's. The walls are only growing higher and higher. Once Apple adds a layer of bricks, Google follows and vice versa.
They are very different. Google provides open source alternatives. On Linux I can use Chromium. On my Android I can install F-Droid, or just install APKs manually.
If you want to update all your F-Droid apps at once on a non-rooted Android, you need to go through all of them one by one. Basically: click upgrade, click "Install", wait for Android to do its stuff, do the same for the next app. It is extremely impractical and most of the apps I installed from F-Droid are severely out of date because of this Android restriction.
Fortunately F-Droid has an "Upgrade all" button which will download all the APKs in the background, but the click&wait loop sequence cannot be avoided.
Not that this is perfect either, but if you know how to grab the APKs and developer mode is enabled, you can script the installs using adb from a computer. I agree there is room for improvement to make this process more seamless, but it is still much more flexible than what is being offered by Apple.
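The adb-scripting approach mentioned above can be as small as a loop that builds one `adb install -r` per downloaded APK. The sketch below only constructs and prints the commands (the download directory is hypothetical); pass each list to `subprocess.run` to actually install, with a device in developer mode attached:

```python
# Sketch of batch-updating sideloaded apps over adb: one `adb install -r`
# per APK. Prints commands instead of running them, so the sketch stays
# side-effect free; feed each list to subprocess.run to really install.
from pathlib import Path

def install_commands(apk_dir: str) -> list:
    # `-r` reinstalls over the existing app, keeping its data,
    # which is what an update needs.
    return [["adb", "install", "-r", str(p)]
            for p in sorted(Path(apk_dir).glob("*.apk"))]

for cmd in install_commands("/tmp/fdroid-apks"):  # hypothetical download dir
    print(" ".join(cmd))
```

Note this still requires the updates to be signed with the same key as the installed app, so it works with F-Droid's own builds but not as a way to mix sources.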
Except it's not mandatory like the Apple's app store is. You could as well distribute your app straight from your website, and some developers actually do that. It's a very important difference and it's one of the main reasons I can't imagine myself using an iPhone.
Google has done the same with YouTube to create lock-in for content creators, just like with the Play Store. My pet conspiracy theory: they're also doing the same with the Chrome browser, by creating as many useless features as possible.
Chrome adding features is spray-and-pray monopoly maintenance. I.e., once you pass a certain market share and have a larger dev team, wasting your competitors' time by maintaining a high feature-addition pace becomes a strategy in itself.
I'm not saying they aren't good and useful features... to someone. But the net result is that if Google adds more features, quickly, and they're adopted on the web, competitors have to spend more money keeping up, and Chrome becomes more dominant.
So when adding features is a strategic advantage, why would you limit the features you add?
I don’t think it’s so malicious. I think they just see a feature they’d like in the browser, and add it.
Google appears to be built on the idea of creating a feature or a service, sharing it, making some excitement, and then moving on to the next thing. I think a lot of these concepts are just engineers making things because they can get sign off on it.
> I think they just see a feature they’d like in the browser, and add it.
I'm not sure. Their official policy is that almost everything should be an add-on. Even things like "don't let random sites install search engines into my browser" should not be a setting, but should be a plugin that works around the browser to make it happen.
Every content creator I follow seems pretty desperate to get away from YouTube. Creating a system where critical content creators are rewarded by an algorithm that requires burnout behavior continually does not seem like a long term stable business design.
> Creating a system where critical content creators are rewarded by an algorithm that requires burnout behavior continually does not seem like a long term stable business design.
Sadly, history has shown that this is completely sustainable. Content creators that burn out will be replaced from among the legion of up-and-comers who are eager for their own shot at the spotlight, and are happy to sacrifice their well-being to do so. If ruthlessly exploiting youthful naivete weren't sustainable, then the games industry would have folded decades ago.
It should be regulated. Creators for all intents and purposes are employees of YouTube and should at least be paid minimum wage, get holidays and sick pay. It's time YT gets Ubered.
Such systems will always work much better for undifferentiated labor than specialized labor. Uber’s workers need protection because they’re so replaceable; anyone with the same class of car in the same city can replace them.
YouTube creators are irreplaceable and wildly unequal in their reach and impact. The issue here isn't that content creators need a minimum wage; the issue is that the algorithm needs tweaking. That's possible with regulation, but minimum wage and holiday pay won't do it, especially since they're not paid by the hour anyway.
YouTubers would be much better served looking at what NFL players and similar organizations do to protect players, rather than at what factory workers did to protect themselves, since their labor looks more like that of sports players than of service workers.
Again, the walled garden has an exit. But why would you leave it when you have to fend for yourself outside of the walls, and everything is given to you on a silver platter within them?
Well, with Android's walled garden, competitors outside of it are not allowed to compete on a level playing field with the Play Store.
User installable 3rd party mobile app stores cannot implement automatic upgrades, background installation of apps, or batch installs of apps like the Play Store can. These limitations are designed by Google and are implemented in Android.
If the user tries to install an app on their own, they're shown scary warnings and must adjust arcane settings, but if they use Google's Play Store, no scary warnings are shown and no settings need to be adjusted. They're told they're "protected" by Play Protect, but aren't shown scary warnings about the fact that the Play Store is the main distribution method for malware on Android[1] when they go to install apps with it.
Continuing with the actual garden analogy, that's like trying to compete with the garden itself by planting your own flowers and the garden making it ugly with weeds. The point Apple and Google are making is that Google's walled garden is Android, as Apple's walled garden is iOS, and the app stores are simply features of those products, and thus to compete with them you need to compete with the entire product. And whether or not that's actually the case is currently being decided in Epic v Apple.
Why do I care? Google can already modify the behavior of an app without the developer's permission; they can just push an update to Android that changes the behavior of that app. It's "reflections on trusting trust" all over again.
An OTA Android update which modified your apps would break them when those apps later try to update themselves and find a different signature in place. This would out Google as hostile immediately, since no other party could feasibly have swapped out your apps.
While not preventative, even one such attack would likely get enough media coverage that it would irreversibly destroy trust in Google's Android.
My Android updates do not come directly from Google, but from my mobile provider. Google has no control over those, but it does have direct control over the Play Store.
Third, it's distasteful to point the finger at the other person instead of simply taking responsibility for your actions.
Why not just use HN as intended? If another commenter is wrong or you feel they are, the way to respond is with correct information, neutrally and respectfully. If you don't want to do that, not responding at all is the other good option.
To a first approximation, the internet is wrong about everything anyhow, so for sanity's sake we all need to learn how to let go. Believe me, I know that's not easy, but it's what we all have to do if we want a forum that doesn't suck.
tldr: Google's new app bundle signing might be (well, almost certainly is) a precursor to Google being able to replace parts of your app, or modify it on the fly, for certain targeted users or within certain targeted countries, at Google's whim, with users being none the wiser.
Google might do this for a lot of reasons, and none of them seem to be good. FWIW, Google promises not to change the functionality of your apps.
Finally, the intention appears to be that this will set a new norm and become mandatory for all apps.
> Google might do this for a lot of reasons, and none of them seem to be good.
As a Play Store developer, I give Google the benefit of the doubt. Before you assume a nefarious purpose, consider that all Android phones connecting to the Play Store (by definition) have an auto-updating root-privileged process. Why would Google need to impersonate an application developer? This is fundamentally why CommonsWare's scare tactics don't resonate with me: the application has fewer privileges than the system and the app store; the calls are coming from inside the house!
But there are more common and mundane reasons. Honestly, a lot of people lose their private signing key, and if that happens, no more updates to your app. With App Signing, Google can issue a new key for you. They want to make this ability consistent across the whole store; that's why they're making the change.
They can also optimize the app bundle the device downloads from the store, since the store knows the target screen size, locale, CPU architecture, etc. The current workflow forces the developer to build and upload separate APK configurations by hand, so this is also an improvement.
Wouldn't a simple solution to this be a double signing of one and the same app by both Google and the app's author?
That way, if Google changes the app and signs it, while the author only signed the unchanged app, then the author's signature would no longer validate on the new, changed app.
The whole point of this feature is to allow Google to modify the APK by stripping out unneeded resources to reduce file size. If you require both a signature from Google and a signature from the developer, the modified versions would not pass validation.
The issue is that this inherently requires users and developers to trust Google to only make innocuous changes.
If it's just "sign a thing, but allow some parts to be crossed out later while still being able to verify the signature", that's not that difficult to implement.
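That "cross things out but keep the signature verifiable" scheme can be sketched as a signature over a manifest of per-file hashes: strip a file, and the signature over the manifest still holds as long as every remaining file matches its entry. A toy Python sketch of that structure (HMAC stands in for a real asymmetric signature here, and the file names are made up; actual APK signing is considerably more involved):

```python
import hashlib
import hmac
import json

DEV_KEY = b"developer-signing-key"  # stand-in; real APK signing uses an asymmetric key pair


def sign_manifest(files):
    """Sign a manifest of per-file hashes instead of the archive as a whole."""
    manifest = {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    return manifest, hmac.new(DEV_KEY, payload, hashlib.sha256).digest()


def verify(files, manifest, sig):
    """Allow files to be stripped (crossed out), but not altered or added."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    if not hmac.compare_digest(sig, hmac.new(DEV_KEY, payload, hashlib.sha256).digest()):
        return False  # the manifest itself was tampered with
    return all(
        name in manifest and hashlib.sha256(data).hexdigest() == manifest[name]
        for name, data in files.items()
    )


bundle = {"classes.dex": b"dex code", "res/xhdpi.png": b"hd art", "res/ldpi.png": b"ld art"}
manifest, sig = sign_manifest(bundle)

stripped = {k: v for k, v in bundle.items() if k != "res/ldpi.png"}  # store drops ldpi assets
patched = dict(stripped, **{"classes.dex": b"patched dex"})

assert verify(stripped, manifest, sig)      # stripping still verifies
assert not verify(patched, manifest, sig)   # modification does not
```

This only handles whole-file removal; splitting or rewriting a file, as the sibling comment notes, would not survive verification.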
This would completely change the signing structure, but it's feasible. It doesn't work for splitting files, but maybe that's OK. I think BlackBerry would have you sign every file in a build, but man, was that a pain; it took forever for some reason.
Not without changing the nearly unchangeable internals of the operating system. And it takes so long to get people to update to new OS versions that it'd be years before this could be deployed.
"The whole point of this feature is to allow Google to modify the APK by stripping out unneeded resources to reduce file size."
Why couldn't Google just ask the developer to sign the modified app after Google makes its changes (which the developer should only do if they approve the changes)?
PITA, most likely. More round trips. More complexity. More work for the user. It also means that the bundling process cannot be improved and you can't extend it to support new configurations without the involvement of the user. There are a bazillion locales and device configurations out there, with more created every day.
In some cases, there are 100+ artifacts. I'd wager that far more developers care about the extra effort correctly splitting and signing a mountain of artifacts than the hypothetical threat model described in TFA. And Google probably would prefer the less error-prone method of doing it internally rather than risking devs doing it wrong and shipping broken apps to some device configurations.
What would be the point, if Google's signature would still validate? If an author wants to distribute an app with a different signature outside the app store, they already can.
I suspect this is more about Google asserting greater long-term control over apps in the store than anything else. If Google holds the keys it makes it harder, if not impossible, for an author to say 'no' to some unspecified future change(s). As an app developer having watched Google play this game over the last decade, I'd bet money on it.
They probably just want to be able to patch in OS api compatibility shims so they can make faster changes.
For app bundles they want to be able to remove unneeded assets and make device specific builds. Think high dpi only. I think ios has a similar feature called app thinning.
Since Google already controls the Play Store, the Android operating system, and the Android SDKs, they can already do this regardless of how your app is signed.
Four days ago I commented on a thread about Google and Apple app store domineering, saying that Google at least seemed to be building an alternative-friendly OS, where players like F-Droid could work on almost all devices[1].
Really, really hoping we are not entering some new capitalist platform-control hell, like the one this article seems to be describing.
I recently finished the book "The Age of Surveillance Capitalism." If you *really* want to understand Google - the real Google, not the PR spun version - then this book is a must read.
Every small step Google takes is good for the mass consumer, the 99% of people, but much worse for the content creator. Google is slowly destroying itself.
Is it possible for the developer to just make detached signatures of compiled/assembled pieces of the app bundle, include it in the bundle, and then at runtime self-check and tell the user if the app is unmodified?
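The check itself is easy enough to sketch, assuming a hypothetical layout where the developer embeds expected digests alongside the code (Python, illustrative names):

```python
import hashlib

# Hypothetical: digests the developer computed at build time and shipped in the bundle
EXPECTED = {"classes.dex": hashlib.sha256(b"original dex").hexdigest()}


def self_check(installed):
    """Return True only if every listed file matches the digest the developer shipped."""
    return all(
        name in installed and hashlib.sha256(installed[name]).hexdigest() == digest
        for name, digest in EXPECTED.items()
    )


assert self_check({"classes.dex": b"original dex"})
assert not self_check({"classes.dex": b"patched dex"})
```

The catch is that whoever can modify the bundle can also modify the embedded digests, or the check itself, so this raises the bar rather than closing the hole.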
I don’t want to be pedantic, but it would be more apt to say: “Apple and Google, the US Steel and Standard Oil of our era” since Apple doesn’t own Google.
In response to a now deleted comment about if they constitute monopolies:
Hm, I'm sympathetic to where people are coming from. Treating Apple apps as a market distinct from Android apps doesn't feel technically true, but I think it's more than true enough.
More generally, I think people have a sense of what fair play is and of how large companies shouldn't be as free to throw their weight around, laws be damned. And that whole feeling gets lumped under "monopoly".
[0]: https://developer.android.com/guide/app-bundle