all whitelisted from battery optimisation, and I've not noticed any huge drain (I expected it to be unusable and have to go back to google play services or micro-g).
Edit: I missed that the original post was talking about keep-alives and you were probably referring to them. But how can app-level optimisations help with keep-alives?
I suspect the hard bit for normal apps is keeping the app active on the device while it is in the background. Google has their FCM receiver running as a low-level service, so it can make sure it is always running.
Unfortunately messages are only plaintext.
In particular, I would suggest getting to know MQTT, an open protocol with several implementations (both open and closed source) that is known for its flexibility and scalability (if nothing has changed, it is used to power the Facebook Messenger platform, but don't quote me on that).
VerneMQ is the server implementation I would suggest if you are interested in the topic.
Regarding MQTT in general: to me it sounded as if you were not adding information but criticizing this project's choice of WebSocket instead of MQTT. It was a misunderstanding.
To add to the discussion, maybe you could help me a bit with an issue I'm having. I'm currently working on a service which is starting to use MQTT for microservice communication. Initially I used a WebSocket server for this, then moved over to RabbitMQ a couple of years ago, and now I have added a Mosquitto server which is working but has not yet replaced RabbitMQ. This service also has one public Node.js WebSocket server for web browser clients to connect to; this WebSocket server receives the messages via RabbitMQ and dispatches them to the clients. I also want to replace that WebSocket server with another Mosquitto instance, so that MQTT basically becomes the main way of communicating between servers and also with clients.
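For what it's worth, Mosquitto can serve plain MQTT and MQTT-over-WebSockets from a single broker, so browser clients could talk to the same instance the microservices use. A minimal mosquitto.conf sketch (ports, paths, and the plugin filename are assumptions, and the websockets listener requires Mosquitto built with libwebsockets support):

```
# Plain MQTT for the microservices
listener 1883

# MQTT over WebSockets for browser clients
listener 9001
protocol websockets

# Authentication (e.g. via a custom auth plugin)
allow_anonymous false
auth_plugin /mosquitto/plugins/custom_auth.so
```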
I chose Mosquitto because I have been using it at home without any problems for a while now and have a custom authentication plug-in for it which I wanted to reuse.
VerneMQ is a candidate, but Mosquitto is working just fine, so I haven't looked into it again. I like how lean Mosquitto is and how it's configured inside a container. Plus I can understand the source code and eventually modify it.
Would you advise me against using Mosquitto in a production system? Which benefit would I have by using VerneMQ?
My apologies, again.
My advice would be to have one single broker no matter what, so don't replace the WebSocket server with another instance of Mosquitto; just use the same existing instance.
Then, the nice thing about working with a protocol is that you don't need to care about the implementation, so just keep using Mosquitto as long as it supports the load and then, eventually, swap in VerneMQ if necessary.
Be careful with your custom authentication logic: if you migrate, you will need to port it over.
VerneMQ provides plugins for auth, so it is not a huge deal, but you need to think a little bit ahead.
But again, if mosquitto is working fine just keep it!
Sometimes you don't need a cargo van or a jumbo jet to get the job done when a skateboard will do.
I only saw android support, anything for iOS in the works?
New messages in the queue are then notified (gotified) via the websocket.
The payload can be picked up and managed via REST API as well.
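To make the flow concrete, here is a sketch of what a message push over Gotify's REST API looks like: a POST to /message authenticated with an application token. The server URL and token below are made-up placeholders; only the request is constructed, not actually sent.

```python
import json
from urllib import request

# Hypothetical Gotify instance and application token (placeholders, not real values)
GOTIFY_URL = "https://gotify.example.com"
APP_TOKEN = "A1b2C3d4"

payload = {"title": "Backup finished", "message": "Nightly backup completed", "priority": 5}

# Build the POST request; request.urlopen(req) would actually send it
# (which obviously needs a reachable server).
req = request.Request(
    f"{GOTIFY_URL}/message?token={APP_TOKEN}",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
print(req.data.decode())
```

Connected clients would then receive this payload over the WebSocket, and it can also be fetched again via the REST API.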
I'm building a similar solution for a slightly different use case, so I wanted to have a look under the hood. There's nothing I can see that would stop you from using this for iOS, so maybe they just don't have an iOS dev on the project.
Edit: Ah, here's why. https://github.com/gotify/server/issues/87
I actually wound up just using an SMS gateway provider with a simple API you can just fire a get request to.
I liked it so much that I even wrote a small Python library for it: https://gitlab.com/stavros/pysignald/
Not saying I disagree, but I don't know that it changes a whole lot in this case.
Also, application/x-www-form-urlencoded bodies are literally identical to query strings anyway, so if you can construct a query string, you can easily construct a POST body using that Content-Type too.
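To illustrate the point: the encoding for a query string and for a form-urlencoded POST body is byte-for-byte the same, so one helper produces both.

```python
from urllib.parse import urlencode

params = {"user": "alice", "q": "push notifications", "page": "2"}

query_string = urlencode(params)        # what you'd append after '?' in a GET URL
post_body = urlencode(params).encode()  # the exact bytes of an x-www-form-urlencoded POST body

print(query_string)
print(post_body.decode() == query_string)
```

The only differences are where the bytes go (URL vs. request body) and the Content-Type header on the POST.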
Modern browsers typically have a limit of around 100-200k; command-line tools such as curl and wget also have their limits.
IE famously limited GET requests to about 2k bytes.
Even wget and curl are probably more limited by your CLI argument length limits than by the binaries themselves. E.g. I can craft a multi-megabyte GET request in a file and pass it to `curl -K` and it works just fine (I just did so to verify, a bit over 3 MB; Google complained a bit, but it responded). Even if I screwed that up somehow / it silently truncated, I can absolutely do something with netcat and know it won't truncate.
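For reference, the file passed to `-K` is just curl options written one per line without the leading dashes, so the long URL lives in the file rather than on the command line. A sketch (the URL is a placeholder):

```
# saved as bigrequest.conf, used as:  curl -K bigrequest.conf
url = "https://example.com/search?q=someVeryLongQueryString"
```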
So, assuming a degoogled Android device, could this notification server replace FCM if more apps used it? This server uses WebSocket; what does FCM use for its connection? Is WebSocket battery-efficient enough?
What do you mean it's "integrated in kernel"? Are you saying google play services includes a kernel module?
It's a plain-old TCP connection in GMSCore, but has special permissions to ignore various power save modes.
I personally wish Google would work with mobile networks to replace it with something lower level (e.g. based on the same mechanism used to initiate phone calls), because a TCP connection left open for many hours gets unreliable as NAT drops it and as the phone migrates from one mobile network to another (e.g. across country boundaries).
Play services can also instruct Google's servers to stop sending notifications when the device is in deep doze and re-open the connection when the device is active again -- thus saving power (messages are queued at Google's servers)
I really like the idea of Google providing a single, efficient connection for push-notifications, but having it bundled with Firebase (and therefore with Google Play Services) is an unfriendly approach.
But yes, it's essentially implemented the same across iOS/Android and browsers like Chrome, FF, Edge etc. There are services such as Amazon SNS or Google Firebase that provide a single interface which can talk to all the slightly different implementations.
In a nutshell the device registers to the server and receives an identifier token. Then a server can send messages to that identifier and the message will be displayed on the screen. Some providers allow for Icon, Audio and some Actions to also be sent as part of the message payload.
Exactly that. A device registers with the server and then there's a REST API or CLI binary that you can use to send it push notification messages.
Why would you need a self-hosted (or really, any other) push-notification service other than the official one from Apple/Google?
They're optimized (for each respective platform), no need to maintain a server, and free. If you need "cross-platform", you can use Firebase(?) that abstracts that away.
I don't see Google Play allowing these kinds of apps in massive numbers, though. Applications abusing background processing to poll for messages were one of the main reasons why Android got blamed for being worse than iOS in standby battery consumption. Warranted or not, Google/Android gets blamed for abuse of power saving in the end.
$ git clone https://github.com/gotify/server
$ cd server
$ env GOARCH=386 go build
It’s the go gopher... gophing about.
What is the point of building alternative push notification services that can’t even properly rise above the bar of technical feasibility?
An alternative is for the mobile device to poll from time to time, but that is either quite power-intensive or, with a long polling interval, slow.
Instead one has an idle connection open to which the server can send messages.
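The difference can be sketched with a pair of connected sockets standing in for a long-lived TCP connection (illustrative only, not a real push client): the push-style client blocks on recv() and only wakes when the server actually writes, instead of waking up on a timer to ask "anything new?".

```python
import socket

# socketpair() stands in for a long-lived client<->server TCP connection.
server, client = socket.socketpair()

# Server side: a message becomes available and is pushed down the idle connection.
server.sendall(b"new-notification\n")

# Client side: blocks until data arrives -- no periodic wakeups needed.
data = client.recv(1024)
print(data.decode().strip())

server.close()
client.close()
```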
For example: let's say you wish to send a notification to user A when user B sends them a message. In this scenario, you can call another FCM function, using the token user A generated. Of course you can use the web dashboard to send notifications, but then you would not be able to send event-based notifications reliably and in real time. Other services also work in a similar manner.
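As a concrete illustration, this is roughly the JSON payload a backend would POST to FCM's HTTP v1 endpoint (https://fcm.googleapis.com/v1/projects/&lt;project-id&gt;/messages:send, authenticated with an OAuth2 bearer token). The device token is a made-up placeholder; only the payload is built here, nothing is sent.

```python
import json

# Placeholder for the registration token user A obtained from FCM on their device
device_token = "example-device-token"

body = {
    "message": {
        "token": device_token,
        "notification": {
            "title": "New message",
            "body": "User B sent you a message",
        },
    }
}
print(json.dumps(body, indent=2))
```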
Imagine if every program on the users phone had their own schedule for checking for updates against remote hosts. That sounds terribly inefficient and wasteful of power, as the applications would need to "wake up" (as in, consume processor cycles) to initiate and perform a check.
If you get to the stage of sending millions of notifications per day then you'll need to start paying,
I got so scared when I looked at the Swagger. The authentication uses the exact same variables as mine and is somewhat similar. The names of the endpoints are also very similar, and the project bears a resemblance to mine.
That was really, really awkward for a moment.
I'll use the project though (different use case); it looks really interesting. Coincidentally, it's for the same project I mentioned.