The result is a chaotic market. App devs have to use multiple push mechanisms for different devices: on Huawei phones, Huawei's own push service is the most reliable; on other phones, you might want to use SDKs from Tencent/Baidu/Alibaba so their apps (WeChat/Baidu Search/Taobao/Alipay) can "wake up" your app to receive push. Battery life becomes miserable and push becomes unreliable. It's gotten to the point where the government has started "regulating" the market (the Unified Push Alliance, http://www.chinaupa.com, is led by CAICT, a think tank of the Chinese government).
It isn't very different from preemptive/cooperative multitasking.
Are clocks too unreliable, or is there too much NAT because everybody's still too lazy to implement IPv6?
FCM (and Apple push) run a single keep-alive TCP connection and incoming packets will cause radio and application process wakeup only when necessary.
This change significantly improved Android battery life (remember that HN Apple lovers really loved to bash Android battery life before FCM push happened). The fact of the matter is that a lot of developers didn't care one bit about battery life and abused Android background services when there was no need to (e.g. scheduling periodic wakeups to check for a preset alarm needed once every 24 hours), and if we learned anything from this experiment, it's that you simply can't trust developers to respect users anymore.
For one it is hard to measure, and many devs don't really know/care about it.
FCM is a boon in this regard. So are the restrictions that Android is adding to background processing. They are technically not that necessary ... except that you can't trust any app not to abuse it, so the new model is so much better.
Also, thanks Google for finally forcing apps to target the latest version or so of the OS, so they have to adopt all these new optimizations whether they want to or not.
But mainly, it makes little business sense for Google to support that. It can only make Android look worse, for little gain to them.
IPv6 would solve the problem for a single push service, but with every app deploying its own push service, you get back to tragedy-of-the-commons problems.
As has been seen again and again on Android, you absolutely cannot trust app developers to be good citizens and stop running background services that quickly drain your battery (not to mention even more shady stuff), and there's no way you'll be able to trust developers with push services not to do the same.
If you need proof, just look at China, it is the wild west of push services.
That's your problem: if the radio is off and you're using UDP, the message will simply be lost. You can leave the radio on permanently, but you'll kill the battery.
And Google is doing virtually that by enforcing push via their services and controlling Doze, making some apps receive notifications with (sometimes significant) delay.
What's more, I would argue that in quite a lot of cases NOT having immediate push notification could work wonders on our attention span, but this is slightly less technical problem :-)
What if there were open source software one could set up on a server that would provide push services for all these apps, and that would interact with one open source client running on Android?
That way one would have the energy saving benefits of only handling one server connection, but the privacy benefits of a private server for oneself/friends/people one trusts.
This is definitely not easy and would require coordination between many open source projects and also additions in Android to run on a system level (maybe in lineageos and similar), but I really think it would be worth it.
Once multiple processes start doing it, without cooperating together, your battery is going to be shot.
Apps also should never send anything private over push notifications. It should be only a ping telling the app to check its event source.
Fking this. Ideally, this would also involve a standard API on the backend for how to send push notifications. E.g., something like:
1) App on phone queries OS for selected push provider.
2) Phone OS returns some metadata about the push provider to the app, including a backend URL for sending messages.
3) App sends that URL to its own backend server.
4) When a push message should be sent, the app's backend server invokes that URL with the message to be sent in some standardized way.
If, as is likely, push services require that developers register their apps with them before use, this could be expanded by having the phone OS return a list of providers instead of just one.
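A minimal sketch of what step 4 might look like under this hypothetical scheme. The wire format, field names, and token semantics are all assumptions for illustration; no such standard exists today.

```python
import json

def build_push_request(device_token: str, payload: dict) -> bytes:
    """Serialize a push message for POSTing to the user's chosen provider.

    Field names here are invented for illustration only.
    """
    return json.dumps({
        "token": device_token,   # opaque handle issued by the phone OS
        "message": payload,      # ideally just a wake-up ping, nothing private
    }).encode("utf-8")

# The app's backend would POST this body to the provider URL it received
# from the phone (step 3), using any ordinary HTTP client.
body = build_push_request("device-token-123", {"ping": True})
```

Because the provider URL comes from the phone at runtime, the backend never needs to hard-code a specific push vendor.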
From my testing with IPv6 enabled VOIP for inbound calls, compared to FCM with calls coming from an IPv4 only server, only the former setup would reliably get calls to a test device on the first ring. If you need very low latency push notifications, IPv6 and FCM/Apple Push Notification are needed for best performance. Thus far, inter-carrier roaming generally does not support IPv6, so you still must support legacy IPv4 to a degree.
>I’m closing this issue. I think current master successfully disables analytics. Thanks for bringing this to my attention.
This is just not true unless there is some serious fuckery going on with the websocket implementation. Both require full duplex communications on the transport layer.
Reading this it seems that there is a known solution to this problem. Does Mastodon offer something like private messages? Or is there a messaging app that doesn't use FCM, but works in a way described in article?
I don't use Signal, but I don't think that delay is typical, could there be an issue with the connectivity to their servers?
I have seen this issue on Motorola Android phones with known dying LTE Modems (eg: these phones couldn't maintain a data session for more than a few minutes, GPS off WiFi is also unreliable).
You'll need an XMPP server though, and to use the E2E encryption you'll need a server which supports certain XEPs. The app dev also runs a server at conversations.im that is, of course, compatible. It costs €8/yr though. That said, there are several public servers with OMEMO support, plus self-hosting is an option.
There's plenty not to like about Signal, but reliability is second to none.
At Qbix, we thought deeply about this problem. Ideally, to maintain anonymity, each notification would come from some random endpoint on the cloud. But that's not how the Internet works these days: you have to connect periodically to SOMETHING. So you can choose your own notification providers.
The trouble is if you have many apps, then your device is constantly connected to many servers.
What we settled on for now is a background process with WebSockets, but perhaps this works better. The operating systems weren’t designed for this use case and the phones aren’t optimized for it. For example, how would you do the same on iOS?
However, what IS possible is tunneling through the native iOS VoIP notifications support and encrypting your notifications. You can even process them on the client side with some “IFTTT” type logic. To do that for Android, however, we had to implement this background process approach. But it’s a hack.
It's likely in the users benefit to only have 1 notification provider
I am starting to realize that DNS is the main source of problems on the Internet. We need to replace it with a DHT using something like Kademlia.
It was originally designed to have human-readable domains on the Internet. But it’s become a glorified federated search engine, essentially, whereas we can have lots of different search engines, one per app or location. Today, there is very little benefit to having human-readable URLs. If it’s anything much longer than a hostname, you won’t even say it or read it. So it only works for a tiny subset of internet resources but serves to centralize control in the hands of a few large websites. It leads to centralized databases that attract the NSA and advertisers — and all the stuff from Cambridge Analytica to the Equifax breach are downstream results that can be fixed by apps switching from DNS to a DHT.
In a DHT, anyone with very different incentives can add as many nodes as they like. If the node ID assignment isn't designed very carefully, an attacker can even dominate whatever part of the DHT they're interested in and deny or modify information, or just log it.
Sure, the DNS system isn't immune to state actors, especially the US. But a DHT is vulnerable to everyone willing to spend some money spinning up hosts.
Now, as for the incentives. In fact, DNS does get poisoned, and that's why DNSSEC was invented. The exact same thing can be done on a DHT with private keys and certificates authenticating the author of a document. Except you don't have the kind of dynamics that lead to centralized databases and breaches. Instead, we can even have content addressing, so the exact same resource can be cached even if it would have been "on a different domain". And you know that resource hasn't been changed from under you. Talk about incentives: there is a huge incentive for hackers to access these giant centralized honeypots and change what's being returned from a URL, or do phishing based on a slightly similar domain name.
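A minimal sketch of the content-addressing idea, with a plain dict standing in for a real Kademlia routing layer; the signing/certificate part is omitted.

```python
import hashlib

dht = {}  # stand-in for a distributed Kademlia table

def put(content: bytes) -> str:
    """Store content under the hash of its bytes; the key IS the address."""
    key = hashlib.sha256(content).hexdigest()
    dht[key] = content
    return key

def get(key: str) -> bytes:
    """Fetch and verify: rehash locally, so no resolver has to be trusted."""
    content = dht[key]
    if hashlib.sha256(content).hexdigest() != key:
        raise ValueError("record was changed from under you")
    return content
```

Any cache can serve the record and any client can verify it locally, which is why the same resource stays valid even when it would have lived "on a different domain".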
Look up the MaidSAFE project and tell me: how exactly is the Kademlia DHT there vulnerable to everyone willing to spin up hosts? You can still use certificates, and domains can still sign everything. The domain system is basically a search engine, which also has its own incentives.
And finally, Capitalism is the 2nd best system for organizing people and information, the first being open source. The Web, Wikipedia, WebKit, MySQL, PHP and Linux have long ago beaten AOL, Britannica, MSSQL, ASP.NET and Windows NT Server.
Are the FCM service and the iOS equivalent also IP based or can they use some lower level, more energy efficient protocols to wake up the phones?
There are several problems that make replicating FCM hard, which is one reason why Google tries to push you to using it (wordplay definitely intended).
It all boils down to IPv4. IPv4 address space is exhausted, so carriers have had to deploy carrier-grade NAT in front of cell towers, often multiple levels of it. NAT requires translation tables held in RAM, and carrier-grade boxes are very expensive in general, so to keep the machines alive they need to aggressively garbage-collect dead translations; otherwise they'd run out of RAM.
This is the origin of the 'keep alive' problem - NATs want to close your connection to free up their own resources, and you want the connection to stay open so you can receive push messages. So phones have to wake up every so often and send keepalive packets or do a connection rebuild.
Google and Apple have an interesting solution to this problem: it's a mix of learning and data analytics to figure out what NAT timeouts each carrier uses, plus cutting deals with carriers directly to adjust the timeouts for their IP ranges specifically. Therefore you cannot compete directly with Google or Apple on energy efficiency. This is something a lot of hackers don't realise. Getting to the level of efficiency FCM has is very hard, takes a lot of work, and basically requires you to be a giant company. This is why it's an OS-level service.
They also use very tight protocols, batch things together and of course provide oodles of server side disk space for buffering messages to disk until the devices return, lots of other MQ type things that are hard to do at scale.
IPv6 solves this problem by allowing carriers to lose the NAT boxes. No more machines that need stuff in RAM for every TCP connection, which means connections are no longer a scarce resource that must be culled from time to time. If you only have to maintain connections to devices that support IPv6 you could theoretically maintain very long lived connections if the kernel, radio and server cooperate. Of course there are still timeouts: the TCP connection requires some state server-side, so the servers will kill off connections from time to time because devices may roam across different IP addresses. But it should be a lot more stable and not require cutting deals anymore.
In theory, seems like they could just sidestep the NAT entirely. Co-locate a front-end server inside the carrier's private network.
Then the phone and front-end server talk to each other with private addresses. Once it hits the front-end server, everything can be multiplexed over a single TCP connection back to the Google data center. (You'd need a way to discover the server, but that's doable. For example, the regular server can refer you based on cell carrier, source IP, etc.)
Google already has reasons to want servers co-located, such as serving static web content and making sure new TCP connections warm up fast. Apple too, somewhat. So the hardware might be there already and this might be just a configuration change, possibly including the addition of a private address.
Point being, the economies of scale here might be even greater. They might have a nearly free way to do it, and without the need to convince the carrier to give up precious NAT resources.
Haven't most big landline and mobile carriers deployed IPv6 on their core networks by now for other benefits? If so, one could create an alternative cloud notification service based on IPv6.
The real problem (afaik) is that people change address whenever they change network (so it doesn't all boil down to v4, either), so you need regular keepalives. If every app starts doing keepalives individually, your phone will be using its radio every 30 seconds 24/7 (assuming you have four apps that need push, and those devs want a maximum delay of 2 minutes, and their timers are roughly evenly distributed). By having a single server to keepalive, the issue is avoided.
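A quick sanity check of the arithmetic in that claim (4 apps, 120-second keepalive periods, timers evenly staggered):

```python
apps = 4
interval_s = 120   # each app's keepalive period (2-minute max delay)

# Evenly staggered start offsets within one shared period:
wakeups = sorted(i * interval_s / apps for i in range(apps))   # 0, 30, 60, 90
gaps = [b - a for a, b in zip(wakeups, wakeups[1:])]
gaps.append(interval_s - wakeups[-1])   # wrap-around gap back to offset 0

assert all(g == interval_s / apps for g in gaps)   # radio fires every 30 s
```

With a single shared connection, the radio instead wakes once per period, no matter how many apps subscribe.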
My understanding is that essentially every mobile network in the world uses CGNAT. I happen to have AT&T, T-Mobile, and Verizon SIMs handy at the moment, and all 3 of them are behind NAT.
Having the possibility of choice is a good thing, but in this particular case I don't think it would be good if 3rd-party notification providers were available. This should be an integral part of the OS, not a field for competition; knowing that Google itself can harvest data from notifications, we can't exclude the possibility that companies would appear that are interested only in that activity, while enticing users with pretty and simple notification design or whatever else.
We mitigate this problem by asking users to make an exemption from battery optimisations for our app. It has worked fairly well.
This is why we were migrating to FCM from our own custom built MQTT solution that was invented back in 2012.
Edit: Would anyone care to explain how I am wrong in addition to downvoting me? If Doze blocks network connections to your app when it is backgrounded, how can it receive MQTT events?
There might be hacks, but I'd rather take Google's approach than work around it on each and every release.
Having done quite a bit of testing with IPv6 enabled VOIP for inbound calls as compared to FCM with calls coming from an IPv4 only server, only the former setup would reliably get calls to my device on the first ring.
On this subject, many people including myself have been failing (hard) to convince Moxie that dropping FCM (or GCM, or whatever the name) is the way to go for Signal, if anyone wants to give a hand...
The article refers to not wanting to use Google's solution for non-technical reasons.