I'm playing with the idea of creating a Web UI and launching it automatically from the Rust server by opening a browser and pointing it at localhost. No Electron bloat.
Has anyone tried this, any thoughts?
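A minimal std-only sketch of that launch flow, under assumptions: the browser-opening command is platform-specific (`xdg-open` is the Linux convention; macOS uses `open`, Windows `start`, and a crate like `open` abstracts this), and the accept loop is elided.

```rust
use std::net::TcpListener;
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Bind to the loopback interface only, with port 0 so the OS picks a free port.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let url = format!("http://{}/", listener.local_addr()?);

    // Platform-specific browser launch; ignore failure since the server can
    // still be reached by pasting the URL manually.
    let _ = Command::new("xdg-open").arg(&url).spawn();

    println!("serving on {url}");
    // ... accept loop serving the Web UI would go here ...
    Ok(())
}
```

Binding to 127.0.0.1 (rather than 0.0.0.0) keeps the server off the network entirely, which is the first half of the security story discussed below.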
> Web browser communicating with a Rust local server: too hacky, insecure (DNS rebinding attacks)? And it does not support native features like tray icons.
Personally I don't agree that it's hacky, and while DNS rebinding attacks are feasible, doesn't your application just need to check the Host header against a whitelist to protect itself?
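For context: DNS rebinding works because the attacker's page re-resolves its own domain to 127.0.0.1, so the TCP connection lands on your server while the Host header still carries the attacker's domain. A minimal sketch of such a whitelist check (the allowed names are assumptions; adjust to how you actually bind):

```rust
/// Reject requests whose Host header is not a loopback name we expect.
/// Under DNS rebinding the browser connects to 127.0.0.1, but the Host
/// header still names the attacker's domain, so this check stops it.
fn host_is_allowed(host_header: &str) -> bool {
    // Bracketed IPv6 literal, e.g. "[::1]" or "[::1]:8080".
    if let Some(rest) = host_header.strip_prefix('[') {
        return rest == "::1]" || rest.starts_with("::1]:");
    }
    // Otherwise strip an optional :port suffix ("localhost:8080" -> "localhost").
    let host = host_header.rsplit_once(':').map_or(host_header, |(h, _)| h);
    matches!(host, "localhost" | "127.0.0.1")
}
```

Whatever HTTP layer you use, run this before routing and answer 403 on a mismatch.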
I hadn't considered the security part yet, to be honest. I'm open to suggestions for methods to make sure the application is secure.
This prevents malware from accessing your app while avoiding leaking authentication cookies to other HTTP services on localhost.
Security-wise, that is.
It at least used to be the case that this could be gotten around with Flash, though that may be fixed by now, and many people won't run strange Flash content anymore anyway.
Another way, if you're using WebSockets: with pings (carrying cookies), you can establish that the latency is too low for a switched physical network.
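If I read the suggestion right, the idea is that loopback round trips complete in tens of microseconds while even a fast switched LAN takes noticeably longer, so an unusually low ping RTT is evidence the peer really is local. A sketch of the decision step only; the WebSocket ping/pong plumbing is omitted, and the 200µs cutoff is an illustrative assumption, not a measured constant:

```rust
use std::time::Duration;

/// Illustrative cutoff: loopback RTTs are typically well under this, while
/// even a fast switched LAN tends to exceed it. Tune per machine.
const LOOPBACK_RTT_CUTOFF: Duration = Duration::from_micros(200);

/// Decide from a batch of WebSocket ping round-trip times whether the peer
/// plausibly sits on the same machine.
fn looks_like_loopback(rtt_samples: &[Duration]) -> bool {
    // Use the minimum RTT: scheduling noise only inflates samples, never
    // deflates them, so the best sample is closest to the true wire latency.
    rtt_samples
        .iter()
        .min()
        .map_or(false, |&min| min < LOOPBACK_RTT_CUTOFF)
}
```

This is a heuristic, not a proof; it complements, rather than replaces, the Host-header and token checks discussed elsewhere in the thread.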
What will you do in a native application?
You will put custom drawing code (C/C++/Rust/Go) in the WM_PAINT/onPaint/onDraw method of the widget. If in Sciter, you can do event_handler::handle_draw(graphics* pgfx), since Sciter applications are native ones.
This will be the fastest and most lightweight way of doing such things.
Now, what will you do on Web platform (browser or Electron)?
Calling the server (over TCP in a browser, RPC in Electron) to provide a drawing is clearly not an option.
So you will make some <canvas> element where you will draw that thing. <canvas> is a bitmap-based thing, so you will have that bitmap allocated both in CPU memory and in GPU memory. Plus you will have some nontrivial setup to position that <canvas> where you need it.
Therefore your app will have an increased memory requirement at the very least (compared with a native/Sciter app).
Note that modern browsers use a separate process for each tab. So you will have at least three processes running your application. That also means memory and CPU consumption for the RPC between them all.
And what if the thing you need to draw is not that trivial and will be heavy for script to handle...
And so you will start to add a JIT to your script engine. That will need more memory and CPU (at least for bootstrap). Or some smart people will propose you use WebAssembly for that, so you will load a WASM VM into your application...
See where it goes?
To create obstacles for yourself so you can overcome them heroically, right?
If that is for you personally, then you can do whatever you want. But you want to put that burden on your users ...
And so you will have as many machines as users converting that needless payload to heat, without doing anything beyond what native applications can already do …
To the point where users need (yet another) datacenter to access/view/use even trivial apps with “reasonable” responsiveness. There are already services being offered to this end.
I enjoy noticing the system tray spin on cue when I boot up a laptop or plug in a phone as it syncs over WLAN. It's quite a cool feeling seeing files sync automatically through the air as if by magic, with no separate Dropbox/Google/OneDrive/Nextcloud service needed.
I can't do that with only Syncthing on Windows.
It also supports a super bloat-free option where entire desktop apps can be run from Gists. All apps run from and share the same executable (and re-use its dependencies), so the Gists only need to contain their app-specific scripts and dependencies, giving each app a tiny footprint:
You do have to add authentication to protect against DNS rebinding, but I solved that by adding a random token to the GET parameters when I launch the page. (You then cache this inside a cookie.)
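A sketch of that token scheme, kept std-only. The function names are mine, not the commenter's, and make_token derives its entropy from the clock and process id purely so the example has no dependencies; real code should use a CSPRNG such as the `rand` or `getrandom` crate.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Make a one-shot token to embed in the launch URL, e.g.
/// http://127.0.0.1:8080/?token=... . WARNING: clock + pid is NOT a
/// secure entropy source; this is a placeholder for a real CSPRNG.
fn make_token() -> String {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_nanos();
    let pid = std::process::id() as u128;
    format!("{:032x}", nanos ^ (pid << 64))
}

/// Check the token from the ?token= parameter (or, on later requests, from
/// the cookie it was cached into). A constant-time comparison would be
/// better in real code; plain equality is enough to show the flow.
fn token_matches(expected: &str, presented: Option<&str>) -> bool {
    presented == Some(expected)
}
```

The server generates the token once at startup, appends it to the URL it opens the browser with, and rejects any request that presents no token or a wrong one.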
Anyway, the reason this would be desirable is that the user's browser is likely already open (and doesn't have to be Chrome), and one of the critiques of Electron is that each Electron app is essentially another instance of Chromium running; they don't share any resources.
But a browser that's already open just has to open a new tab, which you can close when you don't need it, allowing the otherwise lightweight daemon to run with minimal performance impact.
You are right that your choice of features plays into it, but I don't believe that's all there is. Even in the best case (all of the features you want are available in all the browsers you care to support), you still should test in those browsers (ideally on all of the platforms/devices you support).
You also have to make the initial decision about what browsers/platforms/devices you support in the first place, and then choose when to re-evaluate your support. None of that is free.
Please don't get me wrong: I think it's a fine path (and one I have chosen myself), but it's definitely not without downsides.