This is definitely the way to go! The browser is not only good at network requests and sandboxing, but also comes with the most powerful layout system and rendering engine.
So yes, leveraging browsers just for the front-end of a local-first app is a good direction to go. Apps generally tend to bundle their own browser runtime to get this component. Even the lighter-weight Electron alternatives tend to either implement part of the browser runtime themselves or at least need separate webview instances. But I already have a browser installed, so why can't my local apps integrate with my installed browser to expose their functionality? This project looks like a good step in that direction.
> but also comes with the most powerful layout system and rendering engine
If that were the case, there wouldn't be the flurry of JS frameworks which try to "fix" the DOM. The DOM was originally created to render static text-heavy documents; baking such a limited document-layout system into browsers was a mistake in hindsight. These days, the DOM should be "just another" on-demand loaded JavaScript UI framework sitting on top of a handful of lower-level rendering APIs.
> If that were the case, there wouldn't be the flurry of JS frameworks which try to "fix" the DOM
jQuery was part of the generation that tried to "fix DOM manipulation", which browsers have now caught up to (hence you don't see jQuery as widely used anymore).
The problem the frontend ecosystem is trying to address right now is the architecture of code at scale, and managing complicated state.
People don't choose a JS framework depending on how it mutates/reads the DOM; they choose a JS framework depending on how it handles complexity around mutating state and whether it gives them Good Enough patterns for managing bigger code bases.
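To make that concrete, here is a minimal sketch of the kind of state-management pattern frameworks standardize. This is illustrative only, not any real framework's API; signal, get, set, and subscribe are made-up names:

    // A tiny observable value: the "state" part frameworks compete on.
    function signal(initial) {
      let value = initial;
      const subscribers = new Set();
      return {
        get: () => value,
        set(next) {
          value = next;
          subscribers.forEach((fn) => fn(value)); // notify on every change
        },
        subscribe(fn) {
          subscribers.add(fn);
          fn(value); // run once with the current value
        },
      };
    }

    const count = signal(0);
    count.subscribe((v) => console.log('count is now', v));
    count.set(1); // logs "count is now 1"

The value of a framework is less this mechanism itself and more the conventions it imposes around it across a large codebase.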
Well, except when it's chosen by cargo cult, which is probably most of the cases.
Strange then that frameworks advertise how fast they are at rendering, mutating, and creating objects in the DOM, and one of the main JS benchmarks everyone likes to measure their performance by is literally a benchmark about DOM manipulation: https://github.com/krausest/js-framework-benchmark
Oh wait. It's not strange. Because state manipulation is a largely solved problem, and even the least performant state manipulation is blazingly fast. However, presenting components in the browser's DOM is orders of magnitude less performant than anything you can throw at state manipulation.
And every single framework is busy solving one single problem: how do we touch the DOM as little as possible?
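In plain JavaScript, that one problem looks roughly like the following sketch (renderAll and renderDiff are made-up names; the diff version ignores removals and keyed reordering):

    // Naive approach: blow away and rebuild the whole list on every
    // state change, paying DOM-creation and layout costs for every row.
    function renderAll(list, items) {
      list.innerHTML = '';
      for (const item of items) {
        const li = document.createElement('li');
        li.textContent = item.label;
        list.appendChild(li);
      }
    }

    // What frameworks effectively do: compare against the previous
    // state and patch only the nodes whose data actually changed.
    function renderDiff(list, prev, next) {
      next.forEach((item, i) => {
        const li = list.children[i] ||
          list.appendChild(document.createElement('li'));
        if (!prev[i] || prev[i].label !== item.label) {
          li.textContent = item.label;
        }
      });
    }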
Which popular frontend framework advertises its DOM mutation speed on its front page, exactly? Most frontend frameworks I come across nowadays advertise their developer experience and their features, not their DOM mutation speed.
And when I say state manipulation I'm not talking about performance, I'm talking about the architecture and people working with the codebase. No one cares about state manipulation performance because as you said, it's a solved problem.
What do you think words like "performant framework for building web user interfaces" (Vue), "rendering is blazing fast, because Lit touches only the dynamic parts of your UI" (lit) and similar mean?
No, they don't mean "our architecture and state manipulation".
> No one cares about state manipulation performance because as you said, it's a solved problem.
Indeed. That's why in all modern frameworks, without exception, the main focus is how to touch the DOM as little as possible, because it's slow as molasses.
From your own example, here is the text from https://vuejs.org/ as of today:
> The Progressive JavaScript Framework
> An approachable, performant and versatile framework for building web user interfaces.
> Approachable - Builds on top of standard HTML, CSS and JavaScript with intuitive API and world-class documentation.
> Performant - Truly reactive, compiler-optimized rendering system that rarely requires manual optimization.
> Versatile - A rich, incrementally adoptable ecosystem that scales between a library and a full-featured framework.
Yes, performance is mentioned. But it's hardly the main selling point, and they don't even mention DOM manipulation, they're talking about the rendering in general.
Lit isn't even a framework, it's a "web components library".
> all modern frameworks
Yeah, "all modern frameworks" being one framework + one library?
Or, like most people in the frontend space [1], you only understand the absolute surface level of the technologies you use? And unless it's spelled out in the exact words you expect, you don't understand the concepts?
[1] I'm a frontender myself, and the amount of people I meet who don't even have the most basic understanding of the tools and technologies they use is mind-boggling.
The issue most JavaScript frameworks try to fix with the DOM is the fact that it is not a language and does not have very good templating support. (Even web components in their plain form are pretty bad; no one wants to write document.appendChild everywhere.)
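For illustration, this is what the complaint looks like in practice; the <template> part is a hedged sketch of the closest thing the platform offers without a framework:

    // Building even a trivial list imperatively takes a statement per node.
    const ul = document.createElement('ul');
    for (const name of ['alpha', 'beta', 'gamma']) {
      const li = document.createElement('li');
      li.textContent = name;
      ul.appendChild(li);
    }
    document.body.appendChild(ul);

    // The <template> element helps a little, but there is still no
    // built-in way to bind data into the markup.
    const tpl = document.createElement('template');
    tpl.innerHTML = '<li class="row"></li>';
    const row = tpl.content.firstElementChild.cloneNode(true);
    row.textContent = 'delta';
    ul.appendChild(row);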
But no one has dethroned HTML and CSS as a fully baked cross-platform engine, because they have been battle-tested and optimized to death. And not really for lack of trying; UI is just actually very complicated.
"UI is just actually very complicated" ... yeah, it's not like we've been developing UIs for decades prior to creating them in web tech.
I think it's unfortunate that there's a generation that just takes that statement as a given, without any deeper understanding of the tech that underlies such things. UIs can be simple, they can also be complicated. Much of that depends on the sophistication and flexibility of the layout approaches you provide.
Building UIs on top of a DOM, especially with layout controlled by this CSS monster - well yes, THAT's bound to be complicated. Unfortunately it's in this awkward zone where the implementation is complex, but it doesn't provide the corresponding payoff for the developer experience.
This is very polished and cool-looking. I find this project's level of polish very inspiring.
It's lovely to see someone has captured this idea and expressed it in the right way to make it interesting to many people. I really hope this mode of desktop apps can take off, at least to the level where the community has something to explore for a while to see if it works. I made something like this for Chrome browsers a while ago: nodejs backends, vanilla front-ends, built-in packaging using pkg. It's just a nice approach: https://github.com/dosyago/graderjs
And I made a demo using the venerable MS Paint clone JS Paint^0. The dev experience was great: I literally just dropped the front-end code into the right folder, compiled it, and wham, "desktop JS Paint" on 3 platforms, haha.
Using the ubiquitous local browser as the rendering/API engine for desktop just seems smart. And it's technically interesting, because you get to think about how you can step back from the browser, the platform, the front-end, and the back-end and come up with a general API that addresses all of it, which is kinda cool.
> but also comes with the most powerful layout system and rendering engine.
It's not powerful by any stretch of the imagination. If anything, it's the world's most inefficient layout and rendering system, struggling to render anything beyond the most primitive things.
I am not sure how you came to this conclusion. JS and the DOM are fast. Aside from arithmetic, JS is just as fast as Java now and only 2-4x slower than C++. The two big limitations from a pure processing perspective are the garbage collector and massive repaints of large data on large layouts.
Native GUIs are fast, but they are not powerful. You have to specify a fixed resolution and fixed size, and more often than not calculate the layout yourself, or use very limited auto-layout features.
The web browser gives you a full package with standardized commands to control all those aspects, which is also portable between different implementations.
If your rendering needs are limited to a few buttons and a canvas on a fixed size, sure, go native GUI. But if you need to support multiple devices and resolutions, on-the-fly resizing, and layout of multiple complex screen sections, the browser is an unbeatable platform.
I use both svelte and WinForms at my day job.
The designer works quite well initially for laying out the groundwork but after a while it becomes a burden:
- components randomly disappearing but still being there when running the program
- designer crashes
- Small changes that require you to manually drag around 60% of the form, to add or remove one field
Svelte ain't perfect and it requires more scaffolding initially but you get:
- actually good data bindings and state management (in many places you would need event handlers for winforms)
- hot reload (very big win)
- the ability to do greater layout changes with a few css lines (in combination with hot reload quite pleasant to style)
- mass styling without selecting every component every time you want to change something
- native async/await integration in the ui framework
plus the rest of the benefits (not DX-oriented):
- GPU-rendered instead of CPU-rendered
- runs on any OS (including phones)
- advanced responsiveness via CSS
For smaller apps (a dozen input fields or so) the WinForms designer (which has been on life support for well over a decade now!) will get the job done better than anything else out there.
If you want GPU rendered, WPF has you covered. I strongly dislike XAML, even though I like JSX (while disliking data handling in react in general).
The thing about responsiveness is I can make 6 UIs in WinForms faster than I can fix cross platform CSS bugs.
The real issue is WinForms isn't cross platform. :(
Easy to use initially, because of the designer. But as the application scales, it becomes more and more painful: think of components randomly disappearing but still being there, designer crashes, and small changes that require you to manually drag around 60% of the form to add or remove one field.
I think the poster is trying to say that too many styling options have made for worse UX.
Users can get used to ugly and consistent. On the web and mobile there is minimal consistency in what a button even looks like, or where site options are to be found; every site looks different and every company has its own style guide.
> Worse developer experience and worse style options though
But better user experience.
Also many styling options are counterproductive for the UX.
Yeah... JS and the DOM are incredibly fast, but that does not mean applications written for that platform are fast. Many JS developers have absolutely no idea how these technologies work and are reliant upon several layers of abstractions, each progressively slower than the next.
As an analogy: cryptocurrency is in theory a good idea, but it's rife with fraud because most people playing with crypto are speculators who have no idea what they are doing.
You're totally right about the layers of abstraction... still, JS and the DOM could be incredibly fast (and in some ways they are, as there are huge optimizations behind the scenes for that to work as well as it does), but they remain slower and more energy-hungry than (almost) any native interface, even if you leave out all the layers of abstraction.
By knowing and working with more technologies than just web tech.
"The most the most powerful layouting system and rendering engine" struggles to render even a few dozen elements on screen without junk, tearing and consuming as many resources as a mid-sized game.
> JS and the DOM are fast.
DOM is slow as molasses. There's a reason why all frameworks go to great lengths to touch the DOM as little as possible. A few billion dollars of development and hundreds of thousands of man-hours have optimized it beyond any reasonable expectations, but it's still unbelievably slow because of many architectural decisions rooted back in the 90s.
At its core, it's a system designed to render a page of text with a couple of images in one rendering pass, and no amount of hacks on top of it will make it a high-performance layout and rendering engine.
> Aside from arithmetic, JS is just as fast as Java now
This has nothing to do with either layout or rendering.
> The two big limitations from a pure processing perspective are the garbage collector
Has nothing to do with either layout or rendering.
> massive repaints of large data on large layouts.
Where "large data" is measly thousands of elements.
Massive repaints in DOM happen basiclly on any and all layouts and layout changes. And these changes are triggered by almost anything that happens in the DOM.
There's a reason why any reasonable animation you can reliably do with DOM is to literally rip the element out of the layout context and render it independently because "the most powerful layout and re-rendering engine" cannot cope with re-calculating and re-rendering the layout for the entire page when you move elements around.
From the last article: there are only two properties (transform and opacity) that can be handled by the compositor alone; all others trigger either a reflow + repaint, or a repaint.
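A minimal illustration of that constraint (the function names here are just for the example):

    // Animating `left` forces reflow + repaint on (nearly) every frame,
    // because the property participates in layout.
    function animateLeftProperty(el, x) {
      el.style.left = x + 'px';
    }

    // Animating `transform` (or `opacity`) can be handled by the
    // compositor alone, with no reflow or repaint.
    function animateTransform(el, x) {
      el.style.transform = 'translateX(' + x + 'px)';
    }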
My biggest learning about performance is that developers don't know how to measure it. The training is absent, the effort is too great, and the objectivity just isn't there. So they guess, which typically just means making things up to qualify an unfounded assumption. Guessing at performance is wrong more than 80% of the time, and when it is wrong there's a decent chance it is wrong by one or more orders of magnitude. This is one of the key things that separates developers from product stakeholders.
The DOM is an in-memory object accessed via a standard API. Let's not overthink this. The interesting thing about DOM access is that Firefox has held stable for at least the last 6 years, showing no significant performance loss or increase. Chrome, on the other hand, is less than 40% as fast as it used to be, but its execution of access via string parsing mechanisms, like query selectors, is several times faster than it used to be. To run your own tests see this micro-benchmark test:
> My biggest learning about performance is that developers don't know how to measure it.
So have you measured it and compared it to anything else? Judging by the fact that you think that "JS is fast" has something to do with rendering and layout, my guess is that you haven't.
> The DOM is an in-memory object accessed via a standard API.
This has nothing to do with rendering, layout, and doesn't make it fast (compared to other ways of doing UIs) in the general case.
> string parsing mechanisms, like query selectors, is several times faster than it used to be.
Again. This has literally nothing to do with either layout or rendering.
> To run your own tests see this micro-benchmark test:
I said it: "A few billion dollars of development and hundreds of thousands of man-hours have optimized it beyond any reasonable expectations, but it's still unbelievably slow because of many architectural decisions rooted back in the 90s."
Oh wow, you can select elements quickly. What does this have to do with the actual performance of things that matter? Or with the rest of your claim about rendering and layout?
It's funny that you claim something about people guessing, and then use and talk about things that are completely irrelevant to your claims.
Your comment shows that you have no practical knowledge of the web ecosystem, and everything you know about it comes from blog articles that contribute nothing useful in real-world use. The reality is that the web is fast enough (even with all the tweaks and different approaches of frameworks, libraries, etc.), and it is the first choice for building a new cross-platform product and for migrating legacy projects. It makes all the business sense as well. Your pedantic arguments are not going to reverse that trend.
> Your comment shows that you have no practical knowledge of the web ecosystem
You're talking to a person with 20 years of frontend development experience. But sure, do go on with your assumptions.
Also, I have no idea what the "web ecosystem" has to do with the patently false claim of "the most powerful layout system and rendering engine", but do go on.
> The reality is that the web is fast enough
I never claimed it wasn't. But, again, without clarification of what "fast" is, or what "enough" is, it's nebulous, and wrong for a very wide variety of use cases.
> it is the first choice for building a new cross-platform product and for migrating legacy projects.
I have no idea what this has to do with any of the things in this discussion.
> Your pedantic arguments are not going to reverse that trend.
Java applets and ActiveX were also the bee's knees and the best thing since sliced bread, and drove businesses and brought in billions of dollars in revenue.
All this has literally nothing to do with the technology and how bad or good it is.
I have also been writing for the web for over 20 years. This doesn't really mean anything, though. That is why measures are all that matter. Bad measures are still monumentally better than no measures at all.
The sad reality is that most people writing for the web today cannot do so without a framework. They have no idea how the layers underneath actually work. If you want to understand performance, you must measure for it in multiple different ways and have something meaningful to compare it to. All modern browsers provide fantastic performance measuring tools in their developer tools. It's how I got my OS GUI (in a browser) to execute within 60ms of page load.
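For reference, the built-in way to take such a measurement looks roughly like this (buildRows is a hypothetical stand-in for whatever DOM-heavy work you want to time):

    // Hypothetical DOM-heavy work: append 10k rows to a list.
    function buildRows(n) {
      const ul = document.createElement('ul');
      for (let i = 0; i < n; i++) {
        const li = document.createElement('li');
        li.textContent = 'row ' + i;
        ul.appendChild(li);
      }
      document.body.appendChild(ul);
    }

    // Bracket the work with marks, then read the measured duration.
    performance.mark('render-start');
    buildRows(10000);
    performance.mark('render-end');
    performance.measure('render', 'render-start', 'render-end');
    console.log(performance.getEntriesByName('render')[0].duration + ' ms');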
No it's fuckin' not.
We have devices running literally billions of operations per second, orders of magnitude faster than what we had just a few years ago, yet they struggle with rendering websites, which comes down to presenting some good-looking text.
It's insane how my PC can compute an entire 3D world with millions of triangles, 120 times a second, but it lags when I open a few websites because some front-end dev crammed some 'cool' parallax effect onto them, or because Facebook (who literally invented React) can't handle it well enough to not make memory leaks everywhere.
Has the usability of the web moved forward over the past few years? Sure. But compared to what computers can actually do, it's insane how bad things are nowadays.
With a modern CPU and DDR5 memory you should be capable of running no slower than 10 billion DOM operations per second in Firefox. Usability is not performance.
You mean that the front-end devs aren't actually responsible for the rendering, but the browser devs are?
Would you apply the same logic to game optimization?
That it's not the responsibility of game devs, and instead we can shift all the blame to the GPU SDK team?
> You mean that the front-end devs aren't actually responsible for the rendering, but the browser devs are?
Not at all. Quite the opposite, in fact. My position is that the browser is fast enough, and that any slowness is exactly the fault of the site devs. You said the browser wasn't fast enough.
Previous poster: The reality is that the web is fast enough
The benchmark creates 1000 rows that look like this:
<tr>
<td><a onclick={select this row}>random short text</a></td>
<td><a onclick={remove this row}><span /></a></td>
<td></td>
</tr>
So, less than 10k elements in total.
The fastest of the fastest attempts takes 36 milliseconds to render this. For what is essentially static markup with zero complex logic or complex interactions.
In comparison: 1000 actors with complex interaction and behaviour, complex animations and lighting take 4 milliseconds to render (in total significantly more work than the measly 5-6k static elements on the page): https://youtu.be/kXd0VDZDSks?si=SswSZLNFlRd7adsM&t=586 (at 9:46)
I'm not saying everything should be Unreal Engine. But the web is on the lowest of the lowest end of the spectrum when it comes to performance.
36 ms is a very small amount of time (faster than the rod flicker fusion frequency, though not the cones), and 10K elements is far more elements than even a complex web page is likely to have.
Can you give me some examples of real-world web pages that have 10K DOM elements on them, or anything like it? Running document.querySelectorAll('*').length on my personal amazon.com home page gives 3163 (obviously this is going to vary somewhat for different people), and amazon.com's front page is pretty damned complex.
> I'm not saying everything should be Unreal Engine.
I'm saying that almost nothing needs to be Unreal Engine. You are confusing "fast" with "fast enough".
To render less than 10k objects on a screen given the current state of hardware? It's an eternity.
The problem is, these things compound. That is why "my page doesn't have 10k elements" misses the point, and why Google gave up and now calls 2.4 seconds to render content "fast, actually": https://blog.chromium.org/2020/05/the-science-behind-web-vit... (this is, of course, about more than just the DOM being slow).
Given that it takes that much time to render a static page with a number of elements that shouldn't even register on a clock, you run into hundreds of other problems: layout shifts in the DOM are extremely expensive, avoid them; animations in the DOM are extremely expensive, avoid them; we can't re-render fast enough when the window is dynamically resized, so there's tearing; we can't update the DOM fast enough because updates are extremely slow, so we fight the DOM and come up with crazier and crazier solutions to touch it as little as possible; and so on and so forth.
On the same machine a game engine re-renders the entire world with thousands or millions of objects with complex computations and interactions from scratch in under 10ms.
> You are confusing "fast" with "fast enough".
I'm not. I'm tired of your "fast enoughs" that cannot reliably render a static web page without consuming more time and about as many resources as a modern video game.
And then I hear the idiocy of "it's the most advanced rendering and layout engine" or "string parsing is so much slower than DOM operations" and other claims by people who have no idea what they are talking about.
> To render less than 10k objects on a screen given the current state of hardware? It's an eternity.
When it's so fast that a human being doesn't even perceive it, it's not an "eternity". In fact, it doesn't matter. At all.
> I'm tired of your "fast enoughs" that cannot reliably render a static web page without consuming more time and about as many resources as a modern video game.
That's nice, but I'm not sure why I should care what you're "tired of".
I'm old enough to remember rants virtually identical to yours when people first started using C rather than hand-tuned assembly language.
> DOM is efficient
> No it's not, here is the data
> Something something it doesn't matter because it's fast enough.
So you agree that the DOM is slow? Or, by this logic, can I call any terrible code 'efficient', because if I run it on modern hardware it will still be faster than 'good' code run on machines from 20 years ago?
But also, it's not like all this inefficiency is free; every millisecond spent running inefficient code requires power. Multiply that by the trillions of operations computers do every day, multiply that by billions of computers worldwide, and we end up with a waste of resources that literally changes the planet. Not to mention the e-waste of all the hardware we force out of usage "because it's too slow".
Just open a bunch of windows and you will get to 10k page elements. The primary window shows the page load time in a bold red font. The page load time includes all network calls, all initial script execution, state restoration, and graphical rendering. Total load time should always be around 1 second, of which most is visual render. The script execution typically takes about 60ms or so, but you can see the complete breakdown in the browser's performance tab. The CSS could use a lot of cleanup. I pulled all of this code from a browser-based, highly distributed OS I am working on.
Also, on that site you can easily check the element count in the console using the following custom DOM methods:
document.getNodesByType(0).length; // all nodes
document.getNodesByType(1).length; // all elements
EDIT
I just got to 10,000 visible elements on the site and everything still loads in about 850ms, give or take 50ms, on my 7-year-old desktop. Base load time for a new user on the same machine is about 550ms, so the difference in load time is not significant. The real significance is page repaint on a fully loaded page. Drag and drop of any one window is noticeably slower.
To reset state execute the following and refresh the page:
I took 'web is fast enough' to mean 'the current state of the web is fast enough'.
But if we are sticking to the actual internals of web browsers, I don't doubt they are quite 'state of the art'. It's just that the outcome for the end user sucks.
It's almost as if 3D rendering (vertices and shading) is an embarrassingly parallel problem which is quite trivial to make faster by throwing more hardware at it.
General layout and text rendering are not like that, and there are no "free" improvements anymore with single-threaded CPU speeds plateauing.
Yes, there are contrived examples where DOM rendering speed makes a difference, and also a fair amount of real-world crapware (much of it written by companies that should know better) where shitty code continues to hit the DOM unnecessarily hundreds or thousands of times even after the page is allegedly "loaded", but that is not the fault of the DOM.
I think the misunderstanding is how to hit the DOM. If you use static methods, the performance cost is a memory cycle, so there can be many wasted steps and it's still negligible. If access is via query selector, then there is a string parse operation, which is a few orders of magnitude slower. That does not mean the DOM is slow. Even Lamborghinis are incredibly slow if you never move out of first gear.
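To illustrate the distinction being drawn (the '#status' element here is hypothetical):

    // Paying the selector-parse cost on every access:
    for (let i = 0; i < 1000; i++) {
      document.querySelector('#status').textContent = 'tick ' + i;
    }

    // Resolving once and keeping the reference: subsequent access is
    // just a property write on an in-memory object.
    const status = document.querySelector('#status');
    for (let i = 0; i < 1000; i++) {
      status.textContent = 'tick ' + i;
    }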
Come on, people, make an effort to learn how insanely fast computers are, and how insanely inefficient our software is.
String parsing can be done at gigabytes per second: https://github.com/simdjson/simdjson If you think that string parsing of selectors is the slowest operation in the browser, please find some resources that talk about what is actually happening in the browser.
For those of us who understand how these things work, the incredibly slow performance of query selections is not surprising. What is interesting, though, is comparing the numbers between different browsers.
Corretto is OpenJDK with Amazon branding; I left it out on purpose.
Still, a dynamic language winning out over a strongly typed one is convincing only to those who don't have a clue about how compilers work, or how to write winning micro-benchmarks.
While still far from perfect, benchmarks comparing actual, practical usage, like the amount of requests served by web servers for example, are much better indicators IMO.
Yeah, everyone says that and stops there, which is absolutely useless. Benchmarks are at least objective ways to measure something. And there are no "correct" benchmarks. Unless you have better metrics or another way to prove things, please stop repeating these meaningless words.
I've literally pasted benchmarks measuring actual work (web server requests per second) in a comment below.
But besides, the critique isn't meaningless even without providing a better benchmark;
if your benchmark measures things that are trivial no matter the language (like stack-based operations) but ignores things that actually differ meaningfully (like handling of heap objects), then criticizing such an approach is a perfectly fair and valid objection.
These benchmarks include startup time _and_ processing time when comparing languages. I don't believe that tells a very compelling story, given JS is still slightly slower than most of the Java metrics, unless you are looking for a new language to write your lambdas in.
* Users expect consistency within their apps, and will complain loudly if platform A has something that platform B has not
* the native platforms’ rendering technologies are so different that you basically need a developer or team dedicated to each platform, with their own test infrastructure, etc.
* even if you can afford to do the above, herding four teams using different technologies to do a coordinated feature release schedule is like herding cats
In any case the native platforms hardly do much better. There are plenty of Windows, Mac, Android, iOS applications from ten years ago that don’t work well today, for instance.
Browsers have extremely good backwards compatibility; five or ten years should really not net you any problems. They do carry quite a bit of bloat, however, though so do most native toolkits these days.
It would be interesting if one's webapp could also accept browser extensions as add-ons to the app's functionality, such that users can create their own 'packages' for your webapp just as they would browser extensions. Or does this sound stupid? (Obviously the biggest risk here is malevolent extensions; don't make medical apps out of this :-)
I don't understand how this is materially better than Tauri (more generally, frameworks that bind to the platform webview).
At least with the platform webview, you can:
1. Have some knowledge about what engine versions were available on what OS versions and make compatibility decisions accordingly.
2. Know that the browser engine is always there.
When it comes to trade-offs between "static linking" (Electron) and "dynamic linking" (Tauri), this is like cowboy linking. You have literally no idea what browser will open. I guess this isn't materially different from a remotely hosted backend, but I still wonder about the merits.
And why communicate over a WebSocket, rather than a custom protocol that you can handle and register with the OS more easily? I like the general idea, but this seems inferior to a webview in almost every way.
I do see how you might get things like extensions more easily, but I'd rather petition the platforms to add extension support and close other gaps directly in their webview runtimes.
And is it just localhost under the hood? If so it reminds me of this Zoom security debacle a few years ago:
Since I had a similar idea to build something like this, here's my opinion of the pros of this concept:
1. We web developers should always be building to a common specification, and not to the quirks of a particular web engine (which now include webview quirks too).
2. JavaScript is a shitty language. There are other, better languages for application development, which allow you to harness the full power of the OS platform.
3. It's more in line with the Unix philosophy: write programs that do one thing and do it well, write programs to work together, and write programs to handle text streams, because that is a universal interface.
4. If an application is built with point 3 in mind, a lot of old code can be reused and/or given a new UI with this concept.
5. Really tiny, as it doesn't increase the size of the application code base by much.
If I understand correctly, it's WebSocket on top of an embedded C/C++ web server (civetweb).
Also, the application must find and launch the installed browser. In my opinion, this part is very fragile. On my system, this function (webui/src/webui.c):
Agreed, finding browsers can be tricky, but the Node chrome-launcher package has a fairly good approach that you can patch and extend to the other vendors.
In graderjs I solve this by searching for the installed browser, and then prompting the user to download it (obviously just the first time!) if we couldn't find it. So the download and install happen as part of the application startup process, and it's baked into the framework. I think it's a nice solution where the application installs any components that it might need.
I think another nice solution would be to use Playwright, because that usually downloads all the browsers that you might want, and I'm sure that can be customized. Playwright is a reliable source and it'll put them in a location that's easy to find.
Do you have "open URLs with a browswer" turned off in your OS? Because that's the universal way to open "whatever the user has set as their default browser".
> Do you have "open URLs with a browswer" turned off in your OS?
But the application works differently. It needs the browser in kiosk mode, which means it should be able to run the browser binary with specific arguments.
civetweb will run cross-platform as an HTTP server.
The issue is when you need to write code for each OS, e.g. a CGI for Windows/macOS/Linux/BSD; you will have to know the native API for each OS and use them, which could be hard to maintain if cross-platform is a must.
I wish there were something lower-level that is truly cross-platform, e.g. a C library that runs on each OS and lets me do filesystem/network/etc. It does not seem to exist.
Very different. Tauri directly embeds/links to the webview runtime, and calls its APIs directly to set up the window.
WebUI finds a browser installed or already running, and launches it pointed at a localhost server hosting custom content.
Kind of, but also not. Tauri uses the default engine on the platform it runs on; this zig-webui project seems to be able to use every major browser on every major platform. For example, Firefox, Chrome, and Edge all seem to be available for Windows, Linux, and macOS in zig-webui, while Tauri would use a specific engine for a specific platform (by default, at least).
They are working on a Servo implementation, but that's a long way off.
Tauri has problems on Linux: webkit2gtk has lower performance, etc. I know I have problems with three.js, others with SVG.
Additionally, communication speed between backend and frontend is a bottleneck (serialization to string), and you must use custom protocols to communicate/send large data effectively; a zero-copy binary protocol isn't possible at the moment.
Unless you're referring to using tao and wry as libraries in your own application, but I guess that would be kind of cheating, as it's not really a part of Tauri per se in that case.
I would have thought that, at the browser level, access to WebUI would not necessarily be a given, as it seems that access happens outside the browser engine. Glad to be corrected.
In this case it's relevant because it means that your app will run on Windows even if the user hasn't installed anything else, since it can just pick up Edge.
I'm seeing lots of comparisons to Tauri, which is exciting because while I am using Tauri to build an app, I'm not actually that enthusiastic about Rust.
But I feel a lot of comparisons are really just comparing the most straightforward case, loading HTML + JS in a webview, without comparing other things like:
- (Auto) Updating.
- Embedding other files (and actually distribute them).
- Plugins (I guess plugins can be made for the various language bindings and implemented via event handlers).
- Disabling Edge's annoyances (i.e. it looks like Edge's menu that shows up when selecting text is happening, I guess its image hover thing is there, and maybe more annoyances).
- The menu on macOS is the menu of the launched browser.
This does definitely seem to have its uses and I'll keep an eye on it.
Side note: embedding files for distribution is definitely possible, it just takes a little bit of work. I sent in a PR a few months ago to make it possible.
Reminds me of the approach of CLOG (Common Lisp Omnificent GUI [1]) and its ancestor GNOGA (The GNU Omnificent GUI for Ada [2]).
They also integrate basic components and even a graphical UI editor (at least for CLOG), so you can essentially develop the whole thing from inside CL or Ada.
I don't know the relation to the official repository, which might still be https://sourceforge.net/projects/gnoga/ but Blady-Com is the current most active developer in both.
Off-topic rant: can anyone explain why the syntax highlighting of Zig on GitHub turns every single word brown? I know they incorporated tree-sitter, and for some reason the default syntax rules really dislike coloring identifiers in black.
Look at the code example in this project README or its source.
For a long time something bugged me about the language whenever I glanced at it, and it's all because of GitHub and its stupid syntax coloring. I hate the rainbow-vomit approach to syntax highlighting that's all the vogue these days. It should be a sprinkle on top, not the main course.
A better name for the color would probably be orange, at least in the dark theme. Look at... variable names, field accesses, basically literally everything that is not a keyword or symbol. It's not all supposed to be colored. There's no plain white text anywhere!
I did a basic C demo where I compared this to Tauri/webview:
https://github.com/petabyt/webui-demo
I'm not planning to make any apps in HTML/CSS, but if I had to, WebUI would be my choice.
The fact that the only thing WebUI needs is a web browser is a bit of a problem for me, so I decided to go with Tauri, which uses the system's webview, for my HTML-based application:
https://github.com/christianhellsten/ollama-html-ui/
The biggest problem with HTML/CSS apps is that their look and feel are different from native GUIs. This is less of a problem for chat and document-based applications (Slack, Word, etc.), IMO. With Tauri, I can also customize the window and integrate native features to mitigate this. I'm not sure WebUI can do this.
It's not confidence-inspiring that https://webui.me throws security warnings. I'd want folks who make applications "that make applications" to take security a little more seriously.
Wow, coupled with WasmGC, this cross-platform GUI approach could be the way to go for cloud-enabled local-first applications [1][2].
Personally I would love to see the D language supporting these two features, given that D has GC by default and still does not have a polished cross-platform GUI framework.
[1] WasmGC – Run GC languages such as Kotlin, Java in Chrome browser
This seems like way too much work when people have been doing things like this in different ways for years (just look at Replit, it runs a full IDE and runs on 4gb Chromebooks).
Maybe I'm misunderstanding this project, but you don't have to reinvent the wheel.
This seems amazing. There is another similar project called Tauri, but it uses webkit2gtk, which has its rough edges.
Something integrated with the user's preferred browser sounds amazing.
Is there any plan for mobile?
Does it open the user's default browser with all extensions loaded?
I wish that Apple and Microsoft would work together towards making it easier for people to make cross platform desktop applications.
It's largely uneconomical for most companies to develop and maintain separate apps using two different teams with minimally interchangeable skills.
Tools like Electron are, essentially, not a choice.
I also wish that wasm would mature sooner so we could have more memory- and CPU-efficient (threaded, statically compiled) web applications (for apps like Slack, messengers, Jira, and so forth).
I know people keep saying "wasm is not a replacement for JavaScript", but perhaps it could be an alternative. I know I'd like to write a web application in Rust, but as it stands there is little advantage to that, particularly without threads and DOM access.
We have good cross-platform desktop UI solutions: Java Swing/JavaFX, C++ GTK/Qt/wxWidgets, Rust egui/iced/Slint, and of course HTML/CSS/JS Electron/Sciter.
The problem is that cross-platform UI will never look as good as native UI, because it's not native UI: control sizes are different, colors and elements don't blend the same way, and some things (like menu bars) are completely different on Mac, Windows, and Linux. It's not a slowness issue (if your UI isn't responsive on today's computers, you're doing something very wrong), it's not a usability issue, it's not even an appearance issue (even Java UI can be made to look OK fairly easily in embedded systems). It's an "I'm accustomed to native UIs and everything else I use is native UI, so any non-native UI looks out of place" issue.
If that were the problem, then Electron wouldn't help in any way either. In fact, Electron is even less interested in platform conventions than something like Swing. There isn't even default styling that tries to copy macOS/Windows that you can apply.
The bigger problem is actually desktop-mobile compatibility today. No one writes apps only for Windows and macOS. You want an app that works on iOS, Android, Windows, and macOS, and typically on the web as well, so people can occasionally access it from an unfamiliar computer. So the only viable option, unless you want to have 4 dedicated UI teams or so, is to write it in the lowest common denominator of HTML/CSS/JS and use Electron and WebView to distribute it.
To a very good approximation, no one whatsoever cares about platform look and feel. In fact, most people actively prefer if they're using an app for it to have the same look and feel across all systems where they use it, including the web.
This! I hate it when an app's GUI follows Windows principles (if at all) and the macOS version is basically the same with a macOS theme slapped on top of it.
About 10 years ago I experimented with an early version of Appcelerator Titanium for mobile app development. It was JavaScript, but GUI rendering was transcoded to either Android Java or Apple Objective-C, and the respective code was compiled natively (while your main code ran via a JavaScript engine). With a bit of if-then-else you could make one app that compiled for both platforms and looked and felt native on both.
Wasm threads via SharedArrayBuffer have been a thing for like 5 years. In Rust you can use wasm-bindgen-rayon, for instance. DOM access is and has always been easily achieved through imports. It's performant, too: https://youtu.be/4KtotxNAwME
It's __doable__, but I can't yet in earnest recommend it for a serious project (outside of canvas-controlled GUIs).
The SharedArrayBuffer implementation requires spawning the same wasm module multiple times and passing the SAB to each instance. This works, but is no replacement for thread management by the module/process itself.
It also requires restrictive "Cross-Origin-Opener-Policy" and "Cross-Origin-Embedder-Policy" headers, making it largely impossible for applications with multiple subdomains (for instance, apps that have their API on api.myapp.com, or assets on cdn.myapp.com). Not an easy change for an established enterprise application.
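For reference, these are the two headers in question, shown here as Express middleware (Express is an assumption; any server can set the same headers):

    const express = require('express'); // assumption: an Express app
    const app = express();

    // These two headers opt the page into cross-origin isolation,
    // which is what gates SharedArrayBuffer (and thus wasm threads).
    app.use((req, res, next) => {
      res.setHeader('Cross-Origin-Opener-Policy', 'same-origin');
      res.setHeader('Cross-Origin-Embedder-Policy', 'require-corp');
      next();
    });

    app.use(express.static('dist')); // serve the wasm bundle
    app.listen(8080);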
Glue code might be performant but I really just want to go
> Tools like Electron are, essentially, not a choice
yeah, they really are. People seem to forget that you can simply start a local web server and direct the user to visit http://localhost:8080. Here is one in Go, literally 8 lines of code, something like this (with ./ui standing in for wherever your front-end files live):
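    package main

    import "net/http"

    func main() {
        // Serve the app's front-end files on localhost only.
        http.Handle("/", http.FileServer(http.Dir("./ui")))
        http.ListenAndServe("localhost:8080", nil)
    }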
This is serious?
Generally, your app should aspire to a better user experience than a low-end phishing scam. Normal people really don't want to copy/paste a series of random-looking computery numbers from one application into their web browser. At best it feels broken, and more likely it feels like you are about to get hacked.
Looks like it is: they started out with the same color scheme, just without the splashes [0], then changed the colors when someone recognized it. From the thread it sounds like an honest mistake (though one that probably warranted stronger corrective action than just changing the colors). They seem to have assumed that everything on icons8 [1] was up for grabs, when in fact a lot of trademarked material is on there.
Once your application is complex and reactive enough, has a certain amount of JavaScript code, and gets used by enough people, you will inevitably run into compatibility issues. Then you realize you cannot just rely on a random browser version on any platform and assume a minimal wrapper works well. That's why Electron bundles a browser and people create applications targeting specific Electron versions.
I can provide an example. I work on spaceflight mission planning software that runs in Electron. We're using a lot of cutting edge Web APIs like WebGPU, SharedWorker, OffscreenCanvas, and Atomics. There's no way we would be able to ship something that would reliably work cross-platform if we used the OS's webview or whichever browser they prefer.
There are other considerations as well. The biggest advantage Electron provides over other libs that use the OS webview (e.g. Tauri) is that I can use whichever newer features I want (e.g. CSS nesting) as long as they're supported by whichever version of Chrome Electron ships with.
That being said, I get why Electron gets a lot of hate. I've been developing Electron apps professionally for the last 4 years. I can't use VS Code because I'm _acutely_ aware that it's Electron. I think 80% of Electron apps in the wild should probably just run in the browser. If you don't need to do stuff like access the filesystem, spawn processes, or interact with the OS in a way that the browser doesn't provide, you should probably just stick with the web app.
I don't think the difference between Electron and WebUI is the JavaScript thing you are talking about, because Electron uses a bundled Chromium browser and WebUI uses any installed web browser; in that respect both are the same.
True differences:
1 - The big difference is that the Electron backend is ONLY JavaScript, while WebUI can be used with any language: C, C++, JavaScript, Zig, Nim, V, Python, Go...
2 - Electron is big, yes, +100MB, but gives you complete control of the window, while WebUI is 200KB, but all it does is run a web browser for you and send you click events, so you have no control over the window because the browser owns it, not your application. So WebUI is suitable only for quick software development, while "big" projects should use Electron.
> I work on spaceflight mission planning software...
Yeahhhhhh. The thing is, though, I see two classes of desktop GUI applications:
1. The kind that is simple and casual enough to build around a web-based framework. In which case, I'd rather rely on the system's native browser than ship an entire browser in my distributable, because I'm not doing anything complex enough to worry about cross-compatibility issues, and I'm not concerned about people running my executable 10 years in the future with zero updates.
2. The kind that doesn't fit the criteria above, and therefore really just ought to be an honest-to-God native desktop GUI application. I'm rather astounded that spaceflight mission planning software doesn't fall under this category. Do you REALLY care about supporting Windows, Mac, and Linux targets? If so, then... why? I would assume that spaceflight would be dramatically more locked-down and spec-targeted. If you do not care about cross-plat, then what's the point of eschewing native frameworks, since that's the main reason people go with web-based?
> The kind that doesn't fit the criteria above, and therefore really just ought to be an honest-to-God native desktop GUI application. I'm rather astounded that spaceflight mission planning software doesn't fall under this category.
Our team is very small, so we don't have the budget or resources to make a native version for each platform. I think this is why most companies go with Electron. The browser/Web APIs abstract a lot of the platform-specific stuff away (at the cost of missing some functionality), which means you can run pretty lean.
> Do you REALLY care about supporting Windows, Mac, and Linux targets? If so, then... why?
Yes, we really do care. The current version of our software only runs on Windows, just like our main competitor's. We're trying to appeal to a wider audience, such as aerospace grads, most of whom use MacBooks. We really only have one competitor and are pretty close to having feature parity with their product, so the ability to run our software on 3 platforms is going to be a pretty big feather in our cap.
> I would assume that spaceflight would be dramatically more locked-down and spec-targeted
To some extent, yes, but probably not as much as you might think.
> We're trying to appeal to a wider audience, such as aerospace grads, most of whom use MacBooks.
Is this actually the case? macOS is in a distinct minority position in desktop/laptop usage, as well as in areas where people (erroneously) assume it has a significant market share, such as among developers.
I know macOS doesn't have nearly as much market share as Windows, but we've been getting requests for a macOS version for a long time. Considering the current Windows-only version of the software has been around for 20 years, that's a lot of requests.
Our software is very specialized and very expensive. We don't need to sell thousands (or even hundreds) of licenses to justify the effort.
Maybe the aerospace grads weren't the best example, mainly because I didn't go to school for aerospace. The private sector is full of space startups that need mission planning software. I don't know about you, but every dev job I've had in the past 7 years has been full of people using MacBooks or laptops running Linux.
Fair example, although from what you are saying it looks more like it's about using features that might not yet be available in the OS webview.
But the parent post, as well as 'other stories' I see sometimes, talks about using a certain 'frozen' older version to ensure compatibility, essentially stating that new releases break some stuff. So I was more asking about such cases.
> Fair example, although from what you are saying it looks more like it's about using features that might not yet be available in the OS webview.
Right, which is what the parent comment explicitly stated about compatibility issues. The issues are with trying to use newer browser features that might not be available in the OS webview.
> But the parent post, as well as 'other stories' I see sometimes, talks about using a certain 'frozen' older version to ensure compatibility, essentially stating that new releases break some stuff.
I'm a little confused here. Are you talking about an older version of the browser or an older version of your application? TC39 and browser vendors go to great lengths to ensure backwards compatibility. I've never had a web app break on a new version of a browser.
Reading the "How Does it Work?" section of the readme, I interpret WebUI as a way of embedding a web view into your existing program, acting as a sort of proxy to the existing browser on the system.
Technically interesting, but no thanks. Imagine a Firefox user downloading the latest LibreOffice and being greeted with the warning "Better run through Chrome", or writing a dashboard for a small embedded system where the memory footprint goes up 20x because now you need a browser in place of native controls. The idea is interesting, but there are so many caveats, and the industry is famous for choosing the simplest and cheapest development methods rather than the most efficient ones.