For those unaware, the current recommended way to develop "native" apps for Windows is to use WinUI3, distributed with the WindowsAppSDK.
Unlike the regular Windows SDK, which lets you make use of the functionality provided by the OS, the WindowsAppSDK is entirely separate from the OS and requires the installation of a separate runtime on the user's machine. It also requires installing NuGet packages on your machine to use it, so good luck if you'd rather use straight CMake instead of Visual Studio.
As far as I can tell, there's no backwards or forwards compatibility, so the end user has to install the specific version of the SDK your app uses, or you need to bundle all the hundreds of DLLs with your app yourself.
A sane person might ask why not just use Qt (smaller distribution!) or Electron (about the same size) at that point, since they're cross-platform and you can easily get Fluent themes that look the same as WinUI3?
As far as I can tell there's no sane negative answer to this question. It's not like your app's "fluent theme" will be updated alongside the OS, it's no different from Qt or Electron in this regard.
There's no reason to do "native" Windows development anymore, unless you mean using raw Win32 with no dark theme, or a custom UI built on Direct2D/3D/Write. And if you are doing that, there's absolutely zero reason to use this CLI.
WindowsAppSDK (and its direct predecessor) was installed only by the OS in Windows 8 and early Windows 10, and everybody hated the OS updating it and needing to know things like which specific Windows 10 update a user was on to access features. Either way it's installed, it's a mess.
> A sane person might ask why not just use Qt (smaller distribution!) or Electron (about the same size) at that point, since they're cross platform and you can easily get fluent themes that look the same as WinUI3?
The "good news" about this tool is that it is partly for making it easier to use WinUI3 from Electron, so that you can have both large distributions at the same time.
I remember the problems with the WinRT APIs being tied to specific Windows versions (still are, just a smaller surface area, so less of an issue). With the old service pack model it wouldn't be an issue but with constant OS releases it was too much churn.
I thought they had solved the worst problem with WinUI2, a bit like the compatibility library in Android, so you only had to bundle the more "volatile" bits while still delegating most things to the OS.
But then they went and threw all that out the windows (pun intended) with WinUI3 which even has its own separate implementation of DirectWrite for some god forsaken reason.
Unlike the DirectX redistributables of old it's not even backwards compatible so you can't tell people "just download the WinAppSDK runtime", they have to install the specific version you used when developing your app.
You get that download "for free" if you use a .appx, but with a regular installer you're on your own. Even the way apps link to the WindowsAppSDK is a mess with a weird bootstrapping process.
> own separate implementation of DirectWrite for some god forsaken reason.
That reason I sort of understand: the cold war in which DirectX can't be bothered to support WinRT directly is fascinating from the outside, and also just really, really stupid. DirectX is ancient COM. WinRT is essentially modern COM 3.0. But if DirectX supported WinRT's version of COM then "Oh no, those idiot and dirty C# and JS developers could use it directly", and where would the games industry be without excuses to force college graduates to use only the one true C++ like Stroustrup intended? /facepalm
For a library called DirectX they seem to really love being IndirectX. Someone in the Windows Division should have forced them onto the right path sooner, but Windows is too busy being in either the Azure or AI divisions these days to actually care about being a consistent OS and DirectX slipped into being protected by Xbox in ways that are backwards to how things were meant to work. (Xbox was supposed to be the way to encourage more software developed with DirectX not to protect more software developers from using DirectX directly.) That xkcd comic of Microsoft being a bunch of disconnected orgs in a Mexican standoff seems to apply here (directly).
Couldn't this "dark theme" stuff be mapped onto the user-configurable Win32 color schemes that have been there since the beginning? Did Microsoft break it in Windows 11?
If you use classic unstyled Win32 controls (Windows 95-2000 style) then you can do that. If you use uxthemed Win32 controls (Windows XP onwards) then there's no official dark theme support.
They exist but are rare and don't hire often. I know a guy (self-taught programmer) who got his first major job at a company doing native UI (not even using OS frameworks, straight GPU stuff).
The company does highly complex simulation software used by movie studios for explosions and other effects.
He got hired by word-of-mouth recommendation from someone at the company who had met him.
It takes as much luck as it takes skill to get these sorts of jobs, sadly.
Exceptions are cheap on the happy path and super expensive on the error path.
Checked exceptions only make sense for errors that are relatively common (i.e., they aren't really exceptional), which calls for a different implementation entirely where both the happy path and the error path have around the same cost.
This is what modern languages like Rust and Go do (and I think Swift as well, though don't quote me on that): only actually exceptional situations (like accessing an array out of bounds) trigger stack unwinding. Rust and Go call these panics, but they are implemented like exceptions.
Other errors are just values. They have no special treatment besides syntax sugar. They are a return value like any other and have the same cost. As they aren't exceptional (you need to check them for a reason), it makes no sense to use the exception handling mechanism for them which has massively skewed costs.
Result or Error types may just be normal values, but they add overhead to the code as well when they’re ubiquitous.
Once they’re the standard error mechanism, every function has to intertwine branching for error paths vs. normal paths. Often the compiler has to generate unique code or functions for each instantiation of the Result type. Both add to code size, branch count, etc.
> Rust and Go call these panics but they are implemented like exceptions.
I don't know about Rust, but a very important difference between Java exceptions and Go panics is that an unrecovered Go panic kills the entire process, whereas an uncaught Java exception only terminates the thread.
It's a little off-topic, but I wanted to clarify that for passers-by who might not know.
Exceptions are cheap; stack trace collection is the expensive part. If you use checked exceptions for domain errors, you can just turn stack trace collection off if you hit a performance problem.
The only approach I've tried that seems to work reasonably well, and consistently, was the following:
Make a commit.
Give Claude a task that's not particularly open-ended; the closer the task is to pure "monkey work" boilerplate nonsense, the better (which is also the sort of code I don't want to deal with myself).
Preferably it should be something that only touches a file or two in the codebase, unless it is a trivial refactor (like changing the same method call all over the place).
Make sure it is set to planning mode and let it come up with a plan.
Review the plan.
Let it implement the plan.
If it works, great, move on to review. I've seen it one-shot some pretty annoying tasks like porting code from one platform to another.
If there are obvious mistakes (program doesn't build, tests don't pass, etc.) then a few more iterations usually fix the issue.
If there are subtle mistakes, make a branch and have it try again. If it fails, then this is beyond what it can do, abort the branch and solve the issue myself.
Review and clean up the code it wrote; it's usually a lot messier than it needs to be. This also allows me to take ownership of the code. I now know what it does and how it works.
I don't bother giving it guidelines or guardrails or anything of the sort, it can't follow them reliably. Even something as simple as "This project uses CMake, build it like this" was repeatedly ignored as it kept trying to invoke the makefile directly and in the wrong folder.
This doesn't save me all that much time, since the review and cleanup can take a while, but it serves as a great unblocker.
I also use it as a rubber duck that can talk back, and as a documentation source. It's pretty good for that.
This idea of having an army of agents all working together on the codebase is hilarious to me. Replace "agents" with "juniors I hired on fiverr with anterograde amnesia" and it's about how well it goes.
My personal use is very much one function at a time. I know what I need something to do, so I get it to write the function which I then piece together.
It can even come back with alternatives I may not have considered.
I might give it some context, but I'm mainly offloading a bunch of typing. I usually debug and fix its code myself rather than trying to get it to do better.
TBH I think the greatest benefit is on the documentation/analysis side. The "write the code" part is fine when it sits in the envelope of things that are 100% conventional boilerplate. Like, as a frontend to ffmpeg you can get a ton of value out of LLMs. As soon as things go open-ended and design-centric, brace yourself.
I get the sense that the application of armies of agents is actually a scaled-up Lisp curse - Gas Town's entire premise is coding wizardry, the emphasis on abstract goals and values, complete with cute, impenetrable naming schemes. There's some corollary with "programs are for humans to read and computers to incidentally execute" here. Ultimately the program has to be a person addressing another person, or nature, and as such it has to evolve within the whole.
Better can be argued. More performant though? Yes, massively so.
Turns out spending some time understanding what your CPU and GPU are actually doing when running your app, and how to make them do less work, leads to pretty speedy software.
Then it also turns out that this does not seem to impede most of the features of the software it is competing with, meaning that software is by definition wasteful.
It can't even be argued that the other software made better use of human resources since it's a large team vs one guy who is often not even getting paid, and the guy is the one with the fast software.
And here's the ruining of Pixelmator Pro everyone was waiting for. I paid one time 20 euros for it (discounted). And I would gladly pay again even full price for a new major version.
I don't want yet another subscription.
I see that they can still be bought (for now) but I wonder how long that will last.
That's not what they mean. As a developer, the API you used to develop your app was now deprecated with no migration path. That meant your app was deprecated, with no migration path.
For an app platform already a distant third place and struggling to attract developers, pissing off the few devs you do have TWICE was not a smart move.
Even then, that happened at most twice as you say, not three times as the other poster said.
And I disagree with your implicit claim that the WP7 & WP8 Silverlight -> Win10 UWP transition had no migration path. There was >= 90% source code similarity, bolstered if you had already adopted the Win8.1/WP8.1 "universal" project templates. And Microsoft provided tooling to ease the transition. Sometimes it was literally just s/Microsoft.Phone/Windows.UI/g.
Games were a different matter, I'll admit. XNA as an app platform had no direct replacement that was binary compatible with Win10 desktop, but even then, not only was DirectX already available from WP8.0, but Microsoft invested in MonoGame as an XNA replacement precisely because they knew the end of XNA would hit hard. (In fact it was the Windows Phone division that had essentially kept XNA on life support in its final years so that WP7 games would not break.)
"the API you used to develop your app was now deprecated with no migration path."
Seems that's the standard now for .NET desktop dev. Every 2 or 3 years MS cranks out a new XAML-based framework that's not compatible with the previous one and never gets completed before the next framework comes out.
Nobody in their right mind should be touching any Microsoft provided API that isn't already well established (like Win32 and Direct3D).
I'm happy they're at least maintaining (to a limited extent) Windows Forms and WPF and updating their styles to fit with their fancy Fluent design.
But even that is a pretty sad state of affairs, since Windows Forms should be able to get that info from uxtheme (which Microsoft fumbled) and WPF should be able to get that info from the style distributed with the system-installed .NET framework (which Microsoft fumbled and now only exists for backcompat).
For the company with the best track record for backwards compatibility (with Windows), they sure suck at developing and evolving the same API for long.
So what is the right way that Skia uses? Why is there still discussion on how to do vector graphics on the GPU right if Skia's approach is good enough?
The major unsolved problem is real-time high-quality text rendering on GPU. Skia just renders fonts on the CPU with all kinds of hacks ( https://skia.org/docs/dev/design/raster_tragedy/ ). It then renders them as textures.
Ideally, we want to have as much stuff as possible rendered on the GPU, including glyph layout. This is not at all trivial, especially for complex scripts like Devanagari.
In a perfect world, we want to be able to create a 3D cube and just have the renderer put the text on one of its faces, and have it rendered perfectly as you rotate the cube.