Ask HN: Why is building Windows apps so complicated?
382 points by vixalien 44 days ago | 489 comments
So I decided to build a Windows 11 app. I already know some .NET and the C family of languages from school, and having looked through some C# code, it doesn't seem too complicated.

However, building Windows apps is very complicated even when you know C++ or C#.

1. I wanted to build Windows 11 apps and I had to choose between Windows Forms, WinUI3, Win32 and WPF I think. I don't really know the difference between them and no one really explains it (except Win32, which is obvious).

2. Apparently, UWP is no longer recommended. A few years ago they were pushing it to the max. Bummer.

3. I installed Visual Studio overnight (it comes with all the tools needed to build apps). It downloaded 6GB of data and used 20GB of storage. All I want to build is a hello world app.

4. I create a project following a guide from Microsoft.

5. The app is extremely resource-heavy and way too over-featured. I build the boilerplate app and run it, and it takes minutes to build (most of the time is spent downloading dependencies, so not such a big deal).

6. I open the newly created app in Sublime Text (my editor) and I can't find a way to build the app anymore.

7. Because Visual Studio is so bloated, I download Visual Studio Code, which is far simpler, but I still can't figure out how to build my app even with the various extensions VS Code boasts.

8. After hours of googling, I formulate a script that can build the app from the command-line.

9. But I still don't know how to build the app, as the .exe file created does not execute.

10. I'm very disappointed.

The worst of all is that there is not even one good guide on the whole clearnet on how to develop a Hello World Windows app from scratch.




1. Install the .NET 6 SDK: https://dotnet.microsoft.com/en-us/download/dotnet/6.0

2. Open a terminal, pick a folder, `dotnet new wpf`

3. `dotnet run` and you have an (admittedly empty) hello world app

It's telling me it took 1.5s to build the app.

Want to publish the app?

`dotnet publish -o ./publish -r win-x64` which takes a bit longer as it has to download some runtimes (60 seconds max, then cached). Zip up the resulting folder and ta-da you have an application. Just unzip and run the .exe.

If you want a single-file exe (as in completely statically compiled without self-extraction) that's a bit more advanced and requires a few properties in the csproj to configure how to handle native assemblies that expect to exist on disk. If you want it to have no runtime dependencies, add `--self-contained`
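For reference, the publish ends up looking roughly like this (property names as in .NET 6; IncludeNativeLibrariesForSelfExtract is the usual knob for the native-assemblies-on-disk issue mentioned above, so treat this as a starting point rather than the exact recipe):

      # single-file, self-contained publish; with the extra flag the native dlls get
      # bundled too and extracted to a temp dir at run time (omit it to keep them beside the exe)
      dotnet publish -c Release -r win-x64 --self-contained -p:PublishSingleFile=true -p:IncludeNativeLibrariesForSelfExtract=true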


>has to download some runtimes

As someone who had to develop software in an airgapped environment, I'm sending a special "fuck you" to whoever thought this was a good idea. God forbid you have the AUDACITY to not be connected to the internet at all times for any reason whatsoever.


Off topic, but this really struck a chord. I arrive at the trailhead for an afternoon of mountain biking in the mountains. I can't record the ride route because my mountain bike specific trail/ride logging app can't get an internet connection to log me in. I don't want to be logged in. I just want to record GPS over time. No maps needed. No server access needed. I guess they never thought someone might want to mountain bike in the mountains! Sorry, but that is an idiotic design decision. Unfortunately, this approach to "personal" computing has become the norm rather than an exception. People are not allowed to exist on their own.


...then use a different app. There are myriad ways to record rides with no data service; I do it all the time. Most commonly I use Strava or a dedicated Garmin bike computer.

Syncing / uploading then happens once they have signal again. Or in the case of the Garmin I copy the FIT file off by hand when plugging it into my computer.


I think that by the time you're at the top of a mountain without cell service, you're already locked in to an app that won't let you record your position.


So your solution for a software problem is buying more hardware. Companies must love you.


I don't think that was the point at all, the point was that if your app doesn't let you record your track without internet connection, then it's on that specific app (and eventually it's on you if you stick with such an app).

There are many apps that let you track your run/hike/ride without internet connection.


You got it; the point is that it's a one-time problem after which a different app should be chosen. There's a ton of things out there which'll record rides offline.

Personally, I think a dedicated bike computer is best, because then the phone's battery is saved for emergency uses instead of recording a ride. For long rides (8-10 hour) phones won't have enough battery to record the whole ride.


Some people like the interface/extra features of an app. I tried going from google maps to open street maps, and came back in 5 minutes flat.


I don't know if companies love him but I know Strava is an android app.

An android app is software.

Not hardware.


What new hardware are you referring to?


I record GPX with OSMAnd+ running on my older and smaller Android phone. No SIM, no Bluetooth, only GPS. It goes on all day long. Then I send it to my new phone or to my computer over WiFi. If I were in the mountains I'd turn on hot spot mode on the phone to make the transfer work.

No log in, no external services.


It's exactly why I quit using fitbit 4 or 5 years ago when they made a similar change. There should be 0 reason for needing an internet connection to send data from a wristband 2 feet away to my phone using bluetooth in order to tell me how many feet I've traveled. That may have changed since then but I wouldn't know. They lost me as a customer.


You made the mistake of thinking an app's functionality is its purpose. The functionality is just a thin veneer of an excuse the devs use to track your every moment. Why would you think otherwise?

Even if it cost money to install, you are just paying for the privilege of being tracked by those folks instead of others.

#appsnotevenonce


Any good FOSS exercise monitoring apps?


Some great offline GPS (cell-free) map apps for Apple iOS are

- Sygic,

- Topo Map+,

- 2GIS

- Starwalk - Night Sky 2

- and my fav, Genius Map

all work without WiFi, cell coverage, or NFC/Bluetooth


Also maps.me


Maps.me started to implement some monetisation UI bloat a while ago. I swapped to organic maps. Can’t remember if it’s a fork or by one of the old maps.me developers but there was some connection.


Is it possible to migrate saved places from MM to OM?


...and yet, we're hitting the same wall again:

Hardware is cheap, network is reliable. Move fast and break things.

Neither of these assumptions is true, and we're going to have these issues by the truckload until we understand and change our mindset.


Hamburg, Germany: the public transport services released a new app a few months ago which assumes that the request failed when the app has not finished receiving an answer to a route query after a given time span. The problem: It is normal to have a data cap that, when reached, limits the rate to 64 kbit/s. That is too slow for the answer to be fully transmitted in time...

At least the website works, even though it transmits the full site for every request...


Did this happen with Strava? My usual morning MTB lap has no cell service at the trailhead and it’s never been an issue. But I’ve also never found myself logged out before.


The PADI (scuba diving) app barely works without a constant internet connection; especially eLearning is useless with it once you are offline / have a shaky connection.

Scuba diving spots around the world are rarely well covered with internet, and even if they are, many people still don't have a roaming/local SIM solution for it.

Guess where you want to use that damn app the most?


You can’t add a password to Bitwarden while being offline.

Discovered this when I tried to add a WiFi password before connecting to it.


Strava doesn't require an internet connection (at least on Apple watch). For maps, Trailforks supports offline mode.


Strava doesn't need a connection on Android either.


For android users, Strava makes things easy, but OsmAnd has way more features and I like it a lot more.


I use GPSLogger that saves my ride to a GPX file that can then be imported wherever you want.


Yeh I hit that too! If it’s the same app their business model is to charge for offline use.

Not a great way to get new customers though as it requires missing out on logging a ride to become aware of a reason to buy.


The massive 5G infrastructure push should help deal with some of you “occasionally offline” dissidents


No, if anything it will be harder to connect rural areas with 5G. 4G’s coverage is measured in miles. 5G’s coverage is measured in feet.

https://www.verizon.com/about/news/how-far-does-5g-reach

> 5G Ultra Wideband network’s signal can reach up to 1,500 feet without obstructions


You're quoting something about “5G Ultra Wideband”, which seems to be a brand name for mmWave. Yes, mmWave has very short range. But 5G isn't just mmWave. It's in many ways an evolution of LTE/4G, supporting the same frequencies and offering the same range, i.e. multiple km/miles. But it's up to carriers how they allocate their frequencies. To quote Wikipedia:

> 5G can be implemented in low-band, mid-band or high-band millimeter-wave 24 GHz up to 54 GHz. Low-band 5G uses a similar frequency range to 4G cellphones, 600–900 MHz, giving download speeds a little higher than 4G: 30–250 megabits per second (Mbit/s). Low-band cell towers have a range and coverage area similar to 4G towers.

https://en.wikipedia.org/wiki/5G#Overview

5G is _perfect_ for providing coverage in rural areas, except for the problem that 4G devices are incompatible with 5G networks. Starting 5G rollout in urban areas makes more sense because (a) 5G provides most benefit when clients are close together, and (b) because denser cells make it reasonably economical to maintain 4G coverage in parallel to 5G coverage.


That's a fair point -- that the tech is capable of supporting it. I could be wrong, but in the near term I don't recall any US carriers proposing to allocate any low-band spectrum that way.

Either way, if we're talking about "coverage" for low-bandwidth stuff like fitness trackers, it's the spectrum that matters more than anything. We can communicate thousands of miles on 1 or 2 watts of LF spectrum using technology that is nearly a century old. Don't need 5G for that, just need to use the right spectrum.


I'm very excited to hear the plan for locating 5G towers in the ocean, in remote wilderness sites, in underground facilities, the Antarctic, etc. People visit these sites and expect their tech to work fine as long as it doesn't obviously require a network connection. Of course I can't browse HN from those places, but my otherwise self contained apps should continue to run predictably.


The higher frequencies of 5G are even more easily blocked by outdoorsy things like trees and rain.


How so? I thought 5G is mostly coming to densely populated areas, that is, areas that already have decent connectivity. Also, at least currently, I thought 5G is a developed country thing. Lots of folks are still running off 3G.


Huh, almost seems like 5G is marketing bullshit? The primary goal of 5G being to goad and shame consumers into upgrading a perfectly capable older phone to a new phone that is “5G ready”


I'm very excited to have gigabit download speeds so I can hit my hidden "unlimited" undescribed quota within a minute while also permanently having hotspot throttled to 128kbps.


There still likely won't be towers in the mountains or backcountry.


It's really not that big of a deal. You set up a package proxy that is itself behind the airgap, and you're good. Yes, you have to put some extra effort into moving packages across that airgap when you need to add or upgrade one, but then, isn't having to do things like that kind of the whole point of an airgap?

I certainly wouldn't want to ask that the other 99% of the world's developers avoid a feature that's useful to them just to assuage my feelings of envy about the convenience they enjoy.


> isn't having to do things like that kind of the whole point of an airgap?

Nope. The exe files are supposed to do their thing without internet and without extra effort, like it was for a long time.


I'd call it a "yup," if we're talking about the point of an airgap. If I don't want executables contacting the outside world without my knowledge, and one is, then the airgap (or, more likely, firewall or suchlike) preventing that exe from being able to do so is a feature.

This does mean that certain programs just won't work, or won't work without some finagling. That's also a feature. The price of control is having to control things.

Granted, most people don't want to pay that price, and prefer convenience. That's admittedly not to my own taste - cf. log4j for a good example of why - but I think I'm maybe a little weird there. I certainly don't think there's anything audacious about catering to majority tastes. Maybe just vaguely disappointing.


Computers were able to work without internet since I was a kid. That's how me and my friends used them without any problem. By using the word "airgap" you are making it a new special setup that needs some special steps, some programs will not work, etc. It's not a feature.

Saying people prefer one to another hides the fact that they were not given any other option. People will choose whatever default is given and then we may say that everyone prefers it. Or just make the other option (which was normal before) complicated so that nobody wants it now.


Even in the past files wouldn't magically materialize on your harddrive.


In the mini-guide above they already moved the SDK ("Software Development Kit") across the air-gap, yet creating a hello world still requires downloading even more stuff, because the .NET SDK does not actually contain enough stuff to create hello worlds, apparently?

Contrast this with e.g. Zig: the Windows download is a ~60 MB zip file which contains the entire Zig/C/C++ compiler with cross-compilation support for basically everything, and also has complete Win32 headers + libraries in it. There are even DDK headers in there, though I'd expect to do some legwork to build drivers for NT with Zig.


If you're happy to sacrifice the benefits of .net and spend time writing basic Win32 apps, that's totally a choice you can make. Or even just use .net framework 4.6 and not add any extra dependencies.

I'm not really sure what you're complaining about here. .net core is split into tiny packages - if that is hard to handle in your very special environment, you get to use special solutions to make it work.


That's beside the point I was making, which is that there are runtimes/languages which lean towards "internet required for development" or even "internet required for building" while there are also languages/runtimes which are self-contained and independent.

That being said, WinForms is also "just" a Win32 wrapper, I don't see a compelling reason why a similar wrapper wouldn't be possible in pretty much any language. .NET 4.6 is a fine choice too, especially because you're not forced to ship runtime and standard library, as Windows already has both.


I believe that's very much related. The more of the nice wrappers you provide, the more you have to decide if someone needs them all or are you providing them on demand. With .net core doing more splitting than framework and with Java going through a decade of jigsaw, I think we collectively decided we don't want to include everything upfront.

We don't even require the internet for building/development. In .net land you require internet to get the dependency you're using the first time. If you want to get them all and distribute/install before you start development, you can totally do that. It's just not the default behaviour.


Does that 60 MB include something like Qt, comparable to WPF? I'm not sure that comparison is fair.


Files used to materialize without internet since the beginning, it's not magical. We just needed the software installer file copied to the hard drive.


It's gonna blow your mind when you realize that `dotnet publish` produces an artifact that IS machine-independent, and that you would just distribute that. Or if it somehow really bothers you, put it in a self-extracting ZIP or MSI, wow, so much better. And I don't know what golden age of the Internet you grew up on, but there's always been "apps" distributed as zips, or as more than just a single binary.

I get that you have opinions, but you seem to have entirely missed that the runtime is downloaded at build time and included in the bundle. And god forbid if you like doing everything by hand, you don't have to use Nuget and you can manage every last dep by hand, however you like (and you'll likely end up hacking something that is less usable than just setting up a private nuget server, but "opinions").


>It's really not that big of a deal. You set up a package proxy that is itself behind the airgap, and you're good.

Yes, technically easy but if their work environment is strict enough to enforce air gapped development, I imagine the bureaucratic process to accomplish such a thing to be a bit less than easy.


> set up a packaging proxy to [connect with a system outside the airgapped computer]

Do you even know what an airgapped computer is?


It’s not that kind of proxy.


My guitar tab program, which I pay for, refused to show me my library of tabs when I was supposed to play for some kids at a mountain campfire because it couldn't verify my membership because no internet connection. I'm not a good guitar player, and my memorized repertoire is... well, not of interest to 12 year olds. :)

I wouldn't say the campfire was ruined, but my goodwill toward this product certainly was.


> I wouldn't say the campfire was ruined, but my goodwill toward this product certainly was.

Your goodwill deterioration does not matter unless you switch to a new app[1], and a) Make sure that new app can function without internet, and b) Tell your current app developers why you are switching.

So, yeah, your goodwill is irrelevant if you're still giving them money or value. [1] I assume that it's a subscription - most things are nowadays.


I don't know if it's the case anymore, but that's been the state of windows installers for a long time. Usually the easy-to-download one was tiny and just phoned home for the real bits. And that wasn't just Microsoft's own stuff, but even things like Java and whatever.

Usually you had to dig a bit and could find an "offline installer". Sometimes an "alternate downloads" link is on the initial download page, sometimes you have to google to find a deeper link on the vendor's site.

I always did that just to keep from needlessly reaching out to the internet X times when updating X machines.

And of course, make sure you're getting it from the vendor and not some sketchy download site.


The worst example I know of is Microsoft Office. When I run their installer/downloader, it installs the 32-bit version on my 64-bit machine—it doesn’t let you choose and by the time you realize, you’ve already wasted all that time and bandwidth. I had to go to some janky website that hosts the links to the official ISOs and download that instead.


Yes, I hate it when software that I thought was fully installed unexpectedly starts downloading more stuff when I run it.

What if I installed the software with the intention to run it without internet, or 5, 10 or 100 years in the future?


Apparently in those 20GB there was no place for the runtime?


The installer for the sdk is 200MB


I think it started with Android.

You can download all versions of Android if you want, but i doubt that you would want that.


I think the first time I encountered it was in some makefile of Chrome or perhaps V8 that automagically downloaded dependencies. It sounds nice in theory, but then I expected the tarball to contain the entire thing which caused trouble and confusion down the line.


The default is online now for development, so nowadays if you need offline you have to test for it :/


Who gets to decide what is the default?


In this case, Microsoft. It's their private garden, you are invited by their rules.

And yes, this sucks. But if you want freedom, there are other OSes (or even other dev tools) where you can have it.


This is the reason I wrote "bash-drop-network-access" [0]. I use it as part of my package building system so that downloads are only done in the "download" phase of building where I validate and cache all objects. This means I can fully verify that I can build the whole thing air-gapped and far into the future with the set of files identified by the SHA256 in each package's buildinfo.

This is important because I support each release of the distribution for up to 10 years, and have some customers who may need to build it in an air-gapped environment.

[0] https://chiselapp.com/user/rkeene/repository/bash-drop-netwo...


Very interesting. First time I heard about loading bash "builtins" from a shared library. How does this compare to LD_PRELOAD?

Personally, I just run things in network namespaces with "ip netns exec offline|wireguard $COMMAND" to restrict net access.


Using LD_PRELOAD you only affect dynamically linked executables, whereas with kernel enforcement via syscall filtering, every process is affected. Also, things are allowed to unset LD_PRELOAD, but not remove filtering.

I thought about using a network namespace, but that would make things more complicated since I would need to re-call my shell script to pick-up where I left off (because it requires creating a new process). I initially tried to implement this using network namespaces, but you cannot "unshare" the current process, you must spawn a new process.

With dropnet I can do

      download()
      enable -f ./dropnet.so dropnet
      configure()
      build()
      install()
With "unshare" I would need to do more work to get to "configure()" in a new process.


Safari is reporting your TLS certificate has expired


While I strongly sympathize, in this case it specifically addresses one of the OP's main objections: why did they have to download and install many GB of stuff that they'll never need. The three options I can think of are: (1) install everything (what they objected to), (2) ask the user what things to install (they probably already had this option but didn't know what they needed), or (3) install a minimal amount and download on demand. Although it doesn't work well for you, it seems it would work well for them.


Because OP is being disingenuous. 20GB is for the _full_ install of VS 2022. The required components for what he is doing are literally half that.


Is it clear to a novice which components they will or will not need ahead of time?


What is it that you are complaining about really? You need the latest runtime if you want to develop for the latest runtime. If that's your intention, download the latest runtime anyway you like, and then install it on your target machine. If it's not, don't download it and develop for the last runtime available on your machine.


You can bundle the .NET runtime with your app. So the user doesn't need to download a runtime.


Yes, however, it will expand the runtime into C:\temp (or similar). What could go wrong? And then you find yourself in an MS-induced yak shave because you want to run two different executables. Microsoft is a never ending source of accidental complexity.

In this particular scenario, my first thought was "shoulda used golang".

I hear tell that since then (1+ yrs ago) matters have improved in the realm of MS standalone apps (well, maybe just cmd line apps).

oh, and the exe is round about 65MB compared to Go's ~5 or 6MB


Indeed, that's why it's called a redistributable.


Agreed. I was very happy when it was announced in 2019 that Cargo got offline support: https://www.ncameron.org/blog/cargo-offline/

You can even prefetch popular libraries: https://crates.io/crates/cargo-prefetch


Well, it caches the packages... So only your CI system needs internet, or your PC the first time you ever publish.

And you can likely mention it as an explicit dependency in your csproj so that you can download it on first restore.


I feel like you are misunderstanding the rules associated with an air-gapped system.


I developed with dotnet in an airgapped environment. Due to restrictions, you cannot use dozens of nuget packages. So, you create a nuget package repository in your airgapped environment. That's all it is. If you want something else, you use whatever the policy is to get a file from internet to airgapped side. When I wanted a newer version of a Nuget package, it took me 3-4 hours to get it on my workstation. But that's all.

Also, when you write something on those environments, you know users cannot install a runtime. So you get in touch with IT teams to ensure which version of runtime they are deploying. If and only if you have to update it for proper reasons, then they can deploy newer versions to the clients. For all or for a specific user base. This is how it works.

Without an actual business case or a security concern, you don't just go from one runtime to another, let's say 4.8 to 6.0. So yes, development in airgapped environments is a PITA. But it's the same with Java, Python, Perl, etc. That's not the fault of the runtime but of the development environment itself.
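For what it's worth, pointing the tooling at such an internal repository is just a couple of commands (the path and feed name below are made up):

      dotnet nuget add source \\fileserver\nuget-mirror --name airgapped
      dotnet nuget disable source nuget.org

The rest of the workflow stays the same.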


Presumably all development frameworks require you to explicitly list your dependencies, download/restore them with internet, then snapshot and copy that to your air-gapped environment?

That's exactly what you have to do here.


Is there a rule regarding developing for a .NET framework from within such an environment?

I understand OC's issues with the difficulties associated with using M$ tools with limited internet but wonder if the "Air Gapped" example may be a bit extreme.

Being required to work from home while still meeting an employers' secure network policies might be more common.


I would guess because the world doesn't revolve around you? You can download the full installers and bring them over on a USB, it's a trivial operation. You can also build on a networked computer and then bring over the final file(s) to your air-gapped system.


Well, in .NET 6 you have the ability to deploy a self-contained application, as a single file, and even compress the binary [1]

The end result is a Golang-like, single binary experience that runs on many platforms easily and rapidly.
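Concretely, it's one publish command, roughly (flag names as of .NET 6):

      dotnet publish -c Release -r win-x64 --self-contained -p:PublishSingleFile=true -p:EnableCompressionInSingleFile=true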

Though I know a lot of programming languages, I miss C# the most, especially async/await and LINQ. Rust is my second favourite, with a lot of similarities to C#.

[1]: https://docs.microsoft.com/en-us/dotnet/core/deploying/singl...


Pytorch occasionally does this as well w/ model weights and it's a royal PITA


You are missing the point. ...We have imaged Black Holes galaxies away, detected Gravitational waves from the other side of the Universe, landed on the Moon.

But to this day, nobody knows what data your system telemetries to Microsoft. Not the data they talk about in the 5-10 page license. Instead, the data mentioned in the 55 page doc about what you agree to send them, that they refer to from the MS Software License...


What dev system allows you to build things without downloading required components first? None?

Like every other dev system, connect, either download offline installers for everything (they exist), or get your system running, then you can dev offline all you like.

You don't need to "be connected to the internet at all times for any reason whatsoever". You need it once.


Man… the number of times I’ve had to debug a broken Steam game by installing the Nth .net runtime version…


For me it's zero.


Things were far more annoying in the past: in Win98, connecting a printer or any other hardware required inserting the installation CD or having a folder containing all the cab files on your system, and drive space was far less abundant.


Same with CI/CD pipelines. Most developers just choose to download the same runtime each time there is a build, which is not just very inefficient but not at all guaranteed to work for the next 10 years.


It's a split game: you can install everything at once, like 60 gigs, and then you can happily work offline, but for most people it is much easier to work from the on-demand model, to pull what is needed when it's needed.


This is a bit dramatic. You're a software developer, building an app which has dependencies, so of course you have to download those dependencies to build. Where else would they come from? Literally every language with a package manager does the same thing.


Being able to make a portable build of the software you are creating is such a basic feature it's baffling you have to fetch extra data to do that. Also, nowhere in "dotnet publish -o ./publish -r win-x64" did I say "Connect to the internet and fetch megabytes of binaries"

What I miss is the old model for installing software. Give me the yearly ISO, optionally provide a service pack or two if some huge problem went under the radar in testing.


`dotnet publish` performs an implicit `dotnet restore`. So, yes, you did.

If you don't want it to download anything then you use the `dotnet publish --no-restore` flag, which is used a lot in CI/CD pipelines. If you don't have the package dependencies cached it will then simply fail.
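So the typical offline split looks something like this (a sketch; the local package folder path is arbitrary):

      # with network, once: restore packages into a folder you can cache or copy over
      dotnet restore -r win-x64 --packages ./nupkgs
      # later, offline (CI or air-gapped): no network access needed
      dotnet publish --no-restore -o ./publish -r win-x64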


The opposite side of that coin is a required up-front install of every package that might ever be needed for every possible scenario... in which case people would complain (even more) about massive installs.

The internet exists, the industry has evolved, software has dependencies, and yes you have to download them (just like you had to download the SDK ISOs back in the day). But it's just one command, run it and get it over with, and after that up-front one-time pain you'll have a nice offline workflow.


I'm not OP, so interpreting: I don't think OP is asking for an up-front install of every package under the sun that might ever be needed for any kind of development. He's just asking that, out of the box, the build tools can build software with no dependencies into an executable without having to hit the Internet. And, if he has particular dependencies he needs, allow him to download them (ONCE) onto that machine, and again, he can build software into an executable without having to hit the Internet again. This doesn't seem that unreasonable a request. Every other compiler I've ever used has had this feature. It wasn't even a feature. It's just the way software has always worked.

I should be able to take my computer to a remote cabin with no Internet, and use all the software on it. The only software I'd expect to not work is software whose purpose is to access data stored on the Internet, like web browsers. I don't think this is such a crazy user expectation.


You are welcome to the philosophy that says, “the internet exists. Adapt or perish.” It may serve you well.

For many, it is not so black and white. Internet connections are spotty, slow, or expensive. In GP’s case, there is no internet.

Like I said, you are welcome to ignore those users. But your ignorance (I don’t mean that in a derogatory way) doesn’t change their situation.


> make a portable build of the software you are creating is such a basic feature

That is easily doable. However users often don't want a copy of a large runtime for each and every program they use, so it often makes sense to move common things (like DLLs, runtimes, your OS) to libraries that can be shared.

You can easily make dotnet apps in either flavor to your liking. And not every developer is going to make their apps to appeal to your needs.


We seem to have normalised the current situation as an industry, but that doesn't mean the situation is good.

In days gone by we used to have truly standard libraries and runtimes, in the sense that they came with your build tools out of the box and so were available everywhere. Host platforms similarly provided basic services universally. Documentation was often excellent and also available out of the box.

In that environment, writing "Hello, world!" meant writing one line that said do that, maybe with a little boilerplate around it depending on your language. Running a single simple command from a shell then either interpreted your program immediately or compiled it to a single self-contained executable file that you could run immediately. Introducing external dependencies was something you did carefully and rarely (by today's standards) when you had a specific need and the external resource was the best way to meet that need.

Some things about software development were better in those days. Having limited functionality in standard libraries and then relying on package managers and build tools where the norm is transitively installing numerous dependencies just to implement basic and widely useful functionality is not an improvement. The need for frameworks and scaffolding tools because otherwise you can spend several hours just writing the boilerplate and setting up your infrastructure is not an improvement.


There was a time when MS didn't understand the internet and none of their build tools depended on it.


This is my experience as well building and running .NET core stuff on Arch Linux all the time. You just have to know what you're doing, and the Microsoft documentation doesn't make it easy to take the minimalist route.

Microsoft could do a much better job onboarding new developers.


Microsoft wants to sell Visual Studio; this is why Visual Studio Code will never get a GUI designer, or have easy compile options for C#.

Hell, just recently they were trying to take away hot reload to keep it as a paid feature only available in VS



I am aware of the community edition, the most recent versions of which have a VERY restrictive license on acceptable uses

The fact this edition exists does not change my point, or really add anything of value to the conversation


Ah yes, VS' hot reload that insists on popping up a dialog every time a file changes. Garbage.


Visual Studio Code only needs to be good enough for a Cloud IDE kind of scenario for Azure workloads (it started that way as Monaco anyway), anything beyond that is a gift so to speak.


That's true.

I only really started liking .NET when I stopped using visual studio a few years ago.


This is probably the easiest way. The tragedy is that this explicitly rejects all the subsequent developments since WPF - UWP, WinUI3 - because they don't work nearly as well.

If you want an installer BTW, use "Wix".

If you are eligible for VS Community, do give it a go: https://visualstudio.microsoft.com/vs/community/ since that has the WPF designer in (the WinUI3 designer is inexcusably broken)


> the WinUI3 designer is inexcusably broken

I just tried VS2022, and the UI designer for WinUI3 is completely non-functional? What is the recommended approach to lay out the UI pages these days?


Realistically you have three options:

- ignore Winui3 and do it in WPF (fewer controls, deprecated, actually works)

- do it blind in XAML, possibly with the aid of a piece of paper. Or the "live view" in VS if that works. (Live View is the suggestion given on the github issue for "designer doesn't work", fwiw)

- do it in the UWP designer, then s/Windows/Microsoft/ in the control namespaces


That's what I really love about the MS ecosystem. There are actually 3 ways to achieve a task, but none of them works fully end-to-end.


holy shit


Exactly. And then people wonder why everything is electron nowadays. Native UI development on any platform is pure garbage compared to frameworks in Web frontend.

I hope SwiftUI and flutter will be able to make it at least a little bit better.


> If you want a single-file exe that's a bit more advanced and requires a few properties in the csproj.

This. This is what's wrong. Why is single-file exe "a bit more advanced". In early 2000s Delphi could build a single file exe in seconds, and that was the default behaviour.

What changed since early 2000s that having an exe is a) advanced and b) requires manually fiddling with some unspecified properties in the abomination that is a csproj file?


> This. This is what's wrong. Why is single-file exe "a bit more advanced".

Because that's how it works for very every single interpreted and bytecode compiled language?

And the thing that changed in the early 2000s was a massive shift toward using interpreted and bytecode compiled languages.

If we're specifically talking .NET, the thing that changed since the early 2000s is that creating a self-contained executable became possible in the first place. On Windows, .NET *.exe files were still run by an outside runtime, it's just that, since Microsoft owned the whole platform, it was easy for them to hide all that behind a curtain, ensure .NET is preinstalled with Windows, etc. The design constraints changed when .NET became cross-platform. OS X and Linux require a bit more (or at least different) finagling in order to achieve good user experience.


> Because that's how it works for very every single interpreted and bytecode compiled language?

I went ahead and searched for C# executable around 2005-2006. Guess what, this wasn't even a question then. Because, apparently, building an .exe was also the default setting for C# projects in Visual Studio.

So. What changed?

> If we're specifically talking .NET, the thing that changed since the early 2000s is that creating a self-contained executable became possible in the first place.

It was always possible.

> On Windows, .NET .exe files were still run by an outside runtime

1. We're literally in the thread to a question about Windows apps. On Windows

2. If you've ever done anything on Windows, such as played a game, you'd know that you almost always need something external to run: be it msvcrt (the C++ runtime) or the CLR.

> The design constraints changed when .NET became cross-platform.

What you mean is: it's still perfectly fine to create a standalone executable, but for some reason it's now called a "more advanced operation". The thing that's changed is that now it's hidden behind a ton of inconsistently named parameters


But, per the last paragraph of my comment, those .exe files were not really executable files. At least not in the sense of, say, an AOT-compiled C++ application.

They were much more comparable to an "executable" Python or Perl script where the OS knows to look at the hash-bang line to figure out what interpreter to use to run it. If you try to execute one of those .NET .exes on a computer that doesn't have a suitable .NET run-time installed, you'll get more-or-less the same error as you'd get trying to run a Python script on a computer that doesn't have Python installed.

The part that was being criticized a few comments up was about how to create self-contained .NET apps with the runtime bundled in and everything. Specifically, these guys: https://docs.microsoft.com/en-us/dotnet/core/deploying/#publ... That kind of executable simply did not exist in the old Windows-only .NET Framework; it's a feature that was first introduced in .NET Core 3.0.


> If you try to execute one of those .NET .exes on a computer that doesn't have a suitable .NET run-time installed

Just as you would try to execute a program written in C++ (and not statically linked etc.) on a computer that doesn't have a msvcrt runtime installed.

This is not a new thing. Nor is it a "more advanced".


If you're happy with that error then you don't need this new feature.

It feels like you're getting too hung up on "exe". The important part is standalone vs. not standalone.


No application on a modern OS is standalone. They all rely on having many components they need already installed, and then try to bring along others that may not be installed. As the commonly installed base changes, the included pieces also change.

I for one don't want every application to include 100's of MB of standard components that every other such app also brings (such as Electron style apps). I'd much rather have an app tell the OS to fetch missing pieces once, and once only, then future apps share.

And this also mitigates a significant source of security holes. Nothing like linking everything and the kitchen sink so your system is riddled with unknown, hidden vulnerabilities in binaries.

For example, I recently worked on tools to search for such things - they are EVERYWHERE. OpenSCAD, for example, includes an ssh engine, which has known vulnerabilities, but OpenSCAD does not list them. I found thousands and thousands of embedded binary libraries in applications with known and unpatched vulnerabilities.

Too bad all those didn't use a decent package manager, allowing systemwide updates to common functionality. I suspect the future is more components, not less, for these reasons.


"Oops, we couldn't find a shared library that this program depends on" is not exactly the same error as, "Oops, we couldn't find the interpreter / VM that you need to run any programs written in this non-AOT-compiled language."

In other words, compare missing msvcrt.dll more to missing libc.so than to not having a JVM or whatever installed. I guess from end user perspective the end result is the same - the program won't run - but what's actually going on under the hood is very different.


Which is exactly why I don't use any of those. I will stick to Go, or Rust, or Zig. People expect to be able to produce a single EXE. Maybe not as the default, but it should be an option, and an easy option. Any language that can't do that is a failure in my opinion.

Also please don't lump interpreted languages in with C#. At least with interpreted languages, once you have the runtime on your computer, you can have a single script file to work with. With C#, you STILL have to have the runtime on your computer, and then after you "compile", you're left with a folder of one EXE and literally over 100 DLLs.


With Python, if I publish my script with 100 dependencies, and someone `pip install`s it, they will also end up with 100 packages being copied to their computer.

The main difference is that Python installs them globally (or at least thinks it does, if you're using virtual environments), while .NET apps isolate themselves from each other.

Also, let's make a fair comparison. Is that hypothetical Rust application of yours depending on literally 100 crates? If so, what is the size of your binary?


Please don't use Python package management as a baseline. Aim higher.

I love Python, but I cringe whenever someone asks me why a Python program isn't running properly on their machine. Obligatory xkcd: https://xkcd.com/1987/


Take PHP with composer: it works quite fine, but you still need all the dependencies downloaded from somewhere on the Internet. Just a PHP script and the PHP interpreter works, but this is not how 99.9% of software is written.


The php world invented the phar format [1] to deal with the single file tool/app distribution issue.

In fact, composer uses dependencies managed by itself in their sources. Then it gets packaged and distributed as a single file that includes all dependencies (composer.phar). That single file can be run by php as if it was any regular php file (including executing it directly and letting your system detect the required interpreter through the shebang).

https://www.php.net/manual/en/intro.phar.php


Self quote: this is not how most apps are written.


Funny how feelings change. https://xkcd.com/353/


> With C#, you STILL have to have the runtime on your computer, then after you "compile", youre left with a folder of one EXE and literally over 100 DLL.

No you don't. If you'd rather increase the download size for the user then you can turn on self-contained (include runtime) and single-file app.


That single-file app is a zip archive with exe and those dlls


If we're talking modern .NET (6 for example) you have 4 options; let's assume a simple hello world without third-party dependencies:

1. Build it runtime-dependent (which requires the .NET 6 runtime to be pre-installed on the target computer in order to run): you get a single exe file.

2. Build it self-contained: you get a directory with a bunch of dlls and one exe, but no runtime needs to be installed on the target computer.

3. Build it self-contained + single exe: you get a single exe that embeds all those dlls and unpacks them in memory (since .NET 6; in .NET 5 it would copy them to a temp directory).

4. Build it using AOT mode: you get a single, statically linked exe. This is probably the closest to a standard Rust (statically linked) build. However AOT mode is not yet official and requires some fiddling still, but should become stable for .NET 7 next year. And you lose out on some features, obviously, like runtime code generation.
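In CLI terms the first three are just flag variations (AOT left out since, as said, it still needs the experimental toolchain):

      # 1. runtime-dependent (target machine needs the .NET 6 runtime installed)
      dotnet publish -c Release -r win-x64 --no-self-contained
      # 2. self-contained: a folder with the exe plus the runtime dlls
      dotnet publish -c Release -r win-x64 --self-contained
      # 3. self-contained single exe, dlls bundled into the one file
      dotnet publish -c Release -r win-x64 --self-contained -p:PublishSingleFile=true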


> At least with interpreted languages, once you have the runtime on your computer, you can have a single script file to work with.

The Python people stuck in venv hell would probably think otherwise.


The reason it's more complicated is to support reflection. C# allows you to create objects and call methods at runtime based on data not available at compile time, including classes and methods that don't get used by the normal (non-reflection) code.

That means that by default you can't do tree shaking, which means you would end up with enormous exes, which will probably annoy the type of people who want a single exe.

The bit more advanced is to tell the compiler which types you might want to reference by reflection so that it can shake the rest out of the tree.
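In practice that means enabling the trimmer and then rooting whatever is only reached via reflection (a sketch; TrimmerRootAssembly is one of the usual knobs, declared in the csproj):

      # trimming requires a self-contained publish; anything only reached via reflection
      # must be rooted (e.g. a TrimmerRootAssembly item) or it gets shaken out
      dotnet publish -c Release -r win-x64 --self-contained -p:PublishTrimmed=true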


Do you know of any production GUI applications that are literally a single-file EXE and aren't like, small utilities? There's just no reason to try to pack everything into a single file, Go-style. The self-contained publish (which is literally a single flag) is a quite reasonable result - a directory of files that are completely sufficient to run your app on any Windows computer, without any dependencies to install.


> Do you know of any production GUI applications that are literally a single-file EXE and aren't like, small utilities?

The old Delphi-based version of Skype fell into that category. Thinking of that example, I can understand why some people think modern software is decadent.


https://www.joelonsoftware.com/2004/01/28/please-sir-may-i-h...

Short story: developer interests unfortunately don't always align with tool builder interests.


E.g. every .exe has an accompanying .config file.

It's really easy/simple to change an application to adjust for a different client this way.


Embedding native assemblies inside another application is apparently very hard.

I'm not sure I know any frameworks that easily support arbitrary native .dll/.so inside the application executable.


The rest of the world refers to this as "static linking" and it is, in fact, trivial.


It's amazing that most people on this thread seems to take this nonsense as being completely normal and acceptable now. It really shows how much windows dev has devolved over the last decade.


Why does every linux app I download come as a self-extracting installer, run in a container, or download dependencies? Those aren't single-file executables.


The discussion was about Windows, which offers self contained apps and where the support matrix isn't the size of Asia.


Which dotnet supports with a single compile parameter, I'm not sure why people are making it out as if this is some very complicated feature.


> I'm not sure why people are making it out as if this is some very complicated feature.

OP asked how.

The answer was, quote "If you want a single-file exe (as in completely statically compiled without self-extraction) that's a bit more advanced and requires a few properties in the csproj to configure how to handle native assemblies that expect to exist on disk. If you want it to have no runtime dependencies, add `--self-contained`"

Somehow creating an exe is "more advanced", and requires changing of unspecified parameters in the project file. wat.


Huh? `dotnet run` created an .exe.

I'm talking about creating a single-file application, which is just an end-user nice-to-have if you aren't using an installer or deployment system.

Don't most applications have a bunch of ancillary files? I rarely see installed applications on any platform that are a single file.


> Somehow creating an exe is "more advanced"

.NET Core is cross platform, it was created with ASP.Net as the driver and web development is not about exe files. "dotnet run" runs your code, on any platform, that's one of the default intended ways to run code. If you want a platform-specific executable you've done more work and made the code less general. If you also want to package the entire .Net framework into one binary on any platform, why is it unbelievably impossible to understand that this is more effort and desired by fewer people, so isn't as easy to do?


It is trivial if you can embed most of the OS inside of your executable, if there can be only 1 version of the OS, if you do not use any libraries you cannot statically link and so on.


https://www.joelonsoftware.com/2004/01/28/please-sir-may-i-h...

Strategic decision from Sun and Microsoft that has crippled software for 2 decades already.


And what. You’re going to run .exe files on Linux?? Lol.


Sometimes I feel like the Visual Studio team and the C# team don't talk to each other very much.

The C# team says "this is the new best practice" while VS still drives you to The Old Way all the time.


Your gut feeling is very correct.

https://news.ycombinator.com/item?id=28972431

There was a whole fiasco where the dotnet/C# team were forced to remove features from dotnet in order to sell more copies of Visual Studio. Later, Microsoft lied through their teeth and said it was some kind of sprint-planning scoping error, even though the development work was already done.


Worst part is that the person that made the decision to remove that feature was promoted. And now indirectly controls GitHub.


You have absolutely no idea what you are talking about.

Full disclosure: I know what I am talking about


It is quite easy, the ASP.NET folks on the C# team don't care about the remaining use cases, and they are the ones behind the initial .NET Core reboot.

Except, everyone else cares about everything else .NET Framework has been used for the last 20 years.

This was quite telling on the .NET 6 release with its minimal APIs to compete against Python and JavaScript "hello world" apps. As soon as one scales beyond that, it is back to the old MVC model.


dotnet does not work with winui3 apps. I can't link the relevant stackoverflow answer from a microsoft dev. You can only use msbuild.
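For the record, the msbuild equivalent is something along the lines of (solution name and platform are placeholders):

      msbuild MyWinUI3App.sln /restore /p:Configuration=Release /p:Platform=x64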


That's pants on head stupid.


How so exactly?


They're two almost-identical pieces of software that parse almost-identical file formats. They may or may not share a codebase (five minutes on github hasn't clarified this), and in most cases you can substitute one for the other. Except this one. And the developers (of whom there are not many for WinUI!) forgot this use case because they're focused on building inside VS.


The fact that 2 major Microsoft products don't integrate fully.


This is the best response I've seen to the question: "What's the easiest way to get started developing a native Windows app". Better than anything Microsoft has put out.

Isn't WPF getting phased out tho?


This Is The Way. You don't typically need a single-file EXE, `--self-contained` is Fine.


dotnet is a stack for slow servers who love cold start

wpf is a stack for slow bloated apps (200mb for a hello world and 300 dlls)

even electron is lighter than this.. electron..

microsoft needs its swift + swiftui moment


This, .net core is quite a pleasure..


Unpopular opinion: native GUI app development on most platforms is a shitshow. Huge toolkits, copious amounts of boilerplate, and a huge amount of dev effort thrown at the platform just to get it working.

Meanwhile, say what you will about the web ecosystem, I've given interns/jr devs tickets like: see if you can get this webapp running in electron, and maybe figure out how to do push notifications and save data locally. Two days later the project is done. A week later, the app is ready to install.

I think the real reason I quit doing meaningful amount of native GUI development a decade ago was velocity. The developer experience with iOS is probably about the best of the major GUI platforms, but even that is rife with tarpits of tribal knowledge and gets really complicated to release an app due to the whole App Store experience.


I don't think I have ever witnessed a comment on Reddit, prefixed with "Unpopular opinion:", that did not actually contain a VERY popular opinion, and therefore rocket to the top of the thread thanks to reverse psychology upvotes.

It's depressing to watch this trope start to bleed over into HN.


Problems with the prefix aside (I'm not a fan either), Reddit does its best to show you popular comments. If you go looking in the -10 through 10 range on relevant threads you'll have a decent chance of spotting them. Not "controversial", not "top" (or bottom of "top"), not "hot", just the 95% of comments that aren't driving engagement numbers.

After all it's neither going to be popular because it's a popular sentiment nor is it particularly likely to be something interesting/relevant to get upvotes on merit. It's just going to be an unpopular opinion that gets ignored.


Well, it's at least a controversial opinion even if it also maintains popularity. People seem to rag on Electron apps all the time on HN due to their inefficient usage of computing resources. However, they are an optimal use of developer resources, which are far more constrained for the average project.


Also the phrase seems close to breaking the guideline against commenting about the voting on comments, and is a boring meta-comment that just adds noise. It seems like putting up a preemptive umbrella.


Well.. people around here love to hate on Electron so I can understand thinking it's an unpopular take.

But seriously Electron is so many times better than writing native apps .. or even trying to find the documentation for how to do _one single thing_ in a native app the way you want to.


You're talking about wrapping a boilerplate-ridden platform in another boilerplate-ridden platform where someone else wrote either boilerplate. You're talking about a browser without window dressing. That's not making an app nor eliminating any boilerplate.


I'll also add that you lose most of the benefits of a native app and most of the benefits of a web app.


https://en.wikipedia.org/wiki/Survivorship_bias

You are less likely to read the truly unpopular opinions but yes I agree, seems like a cheap way to gain popularity.


Getting started in the web ecosystem involves a lot of similarly impenetrable "magic". There just isn't a community built around creating project templates and creation scripts and such for lower-level programming.

I'd say you're right about the GUI shitshow, and the web ecosystem is just a slightly different kind of shitshow.


Absolutely.

There's a huge amount of tiny chores with web development, that if you hadn't learnt how to do them in 1.5 seconds 5 years ago, would take you 2-3 days to figure out.

There are a hundred small chores that must be done to build and launch software on the web that you have to learn before you get it going.

I strongly doubt desktop doesn't have the exact same volume of inane things as webdev.


I second this. Further, the number of times I’ve tried to get into native development is about ten, and every time I’ve done it the APIs are completely different as the OSes have moved on to new ways of doing things. Meanwhile, the DOM APIs have been more or less the same since I learned them 15 years ago, with the oddball world-changing addition of querySelector or enhancements that people rarely use like Shadow DOM and filesystem API.


The APIs for cross-platform native GUI development haven't changed in any significant ways in 20 years or more.

If you insist on using the new hotness being offered for any platform (web, native, whatever), then sure, you're going to see this. But in the web context, you reference the DOM APIs that are stable ... in the same way that Qt, GTK (or even Cocoa) have been. New APIs for web dev show up monthly, but you apparently manage to not keep switching from one to the other. Why would you do the same thing for native dev?


Qt isn't a native API. It's true that there are stable libraries like GTK or WxWidgets which allow you to write native-looking apps (or for Linux, I suppose that is native, and granted, Linux probably has the best native dev experience), and those exist in the form of React et al. for the web world.

But the number of Windows native app APIs is insane, and every year I try to learn how to create a basic XCode app and it's different. Creating a simple app with HTML and JS, however, is the same as it's always been.


> every year I try to learn how to create an XCode basic app and it’s different.

https://ardour.org was first ported to then-OSX in 2006. Almost nothing in the codebase that relates specifically to now-macOS has changed in that time.

XCode is a tool, not a platform. Cocoa is the platform, and it changes very little and very infrequently.


This is irrelevant in terms of developer experience. I'm sure it's possible to maintain an existing app for years and years, but if every blog post in existence is written for a different version of the development environment (versions which are vastly different from each other), and the OS vendor (Apple in this case) is deleting or never even producing documentation for these APIs, which definitely happens, then the rot is too much to keep up with for people trying to learn the system.


One of the jobs of a software developer is to understand that companies that produce proprietary systems (such as Apple) have their own incentives that do not always align with the desires and needs of those creating software. Put simply, this means that you do not trust those companies to do the right thing for you, and instead you need to identify what level of their technological offerings seems most likely to represent the correct entry point for you/the project.

When we first ported Ardour to OSX/macOS, it was abundantly clear that XCode was a non-starter in terms of technology, partly because it was single platform, and partly because its scope was too large - as Unix-based developers we understood the merits (and occasional pain) of more modular toolsets. So I found a handy guide to writing "nibless applications" that completely removed XCode from the picture (along with some Cocoa boilerplate code that isn't actually required). As a result, we've continued to interact with the stable API core of Cocoa, even as Apple dither around the edges.

Apple (and Microsoft and .... ) are not your friends. They want you to build software for their platforms, but beyond making that possible in some way, their incentives do not align with yours in any substantive way (partly because they make software too, and have their own API goals and needs that may/will conflict with yours). If you don't understand this, you're inevitably going to have the sort of experience you describe above.


I don’t disagree with anything you said. If anything, it just supports my point that native development is needlessly hard. Again, none of this has been true of the browser vendors. They have every incentive to get people to use their stack and make it nice.

By the way, a blog post on how to create Nibless apps would be invaluable.


Not sure about browser vendors and their stack. I almost never hear of anyone who develops on a browser vendor's stack; they use 3rd-party tools in JS and CSS and WebAssembly, etc. After Node wrapped up the DOM and put a bow on it, it seems that people move on (up? down? sideways?) from webdev at that level (except the people developing alternatives to Node).

I'm not sure if this is the source I used, because the date is a little late, but it seems very thorough and excellent:

http://lapcatsoftware.com/blog/2007/05/16/working-without-a-...


I use the term native to mean "an application that runs on the metal and has direct access to system calls, not in a VM of any kind, including a browser".

Qt is native in that sense, just as much as any other toolkit designed for the same purpose.


Cocoa is quite stable, I can vouch for that. So much so that I was able to resurrect an open source Mac app originally written in the early 2000s over the course of a weekend. Some changes were required and there were a bunch of deprecation warnings, but it worked and could’ve been cleaned up pretty easily.


I haven't touched GUI development for many years, but 20 years ago I was able to learn Turbo Vision and create a simple TUI app while having very little programming experience. A few years later I was using Java AWT, and it didn't take too much time to learn either. I cannot create a web app or even an Electron app of similar GUI complexity without spending much more time.

Native GUI is more complicated nowadays too, but I don't expect this to change much: controlling layout using CSS+HTML is harder than in most (if not all) native GUI frameworks.


Cannot agree more. I have used like 20 native UI toolkits (MFC, WxWidgets, Qt, WinForms, WPF, VCL, Android, Jetpack, SwiftUI, GTK, and the list goes on); I have even written one myself (for in-game UIs).

I still cannot comprehend how we ended up with the shitshow that is CSS/HTML. Even JavaScript nowadays is a workable language (or otherwise TypeScript), and WebAssembly is great... but the actual UI layer with the DOM and CSS is just a nightmare.

Every time I have to work on something a bit more complex in the browser, I ponder writing my own UI toolkit based on Canvas/WebGL or WebGPU and just drawing everything myself.
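For a sense of what "draw everything myself" entails, here's a toy sketch (not from any real toolkit) of a single hand-rolled button: you repaint it yourself and do your own hit-testing, and that's before text layout, focus, accessibility, scrolling... It assumes a <canvas> element already exists on the page:

    // one hand-drawn "button" on a canvas, with manual hit-testing
    const canvas = document.querySelector('canvas');
    const ctx = canvas.getContext('2d');
    const btn = { x: 20, y: 20, w: 120, h: 40, label: 'Click me' };

    function draw() {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.fillStyle = '#ddd';
      ctx.fillRect(btn.x, btn.y, btn.w, btn.h);
      ctx.fillStyle = '#000';
      ctx.font = '16px sans-serif';
      ctx.fillText(btn.label, btn.x + 12, btn.y + 26);
    }

    canvas.addEventListener('click', (e) => {
      const r = canvas.getBoundingClientRect();
      const x = e.clientX - r.left, y = e.clientY - r.top;
      if (x >= btn.x && x <= btn.x + btn.w && y >= btn.y && y <= btn.y + btn.h) {
        console.log('button clicked'); // your own event/widget system goes here
      }
    });

    draw();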


Flutter for Web is doing exactly that. For some webapps, this might be a great way to go.


I am not sure; Flutter is still using Dart? That's a pretty big no-go. Last time I looked, Dart was a haphazard clone of Java/C#/Kotlin with no distinguishing features of any value... I remember when it was supposed to be the new JavaScript (transpiling, and big dreams of landing a native Dart VM in browsers).

For me it's a no-go to have a language that's basically a dead end outside of one specific UI framework. Seems like a solution in search of a problem.


The performance of Flutter on the web is unusable; I would be ashamed to ship an app using it.

What other framework doesn't maintain 60 fps in its hello world?!

Flutter works great on mobile and desktop, though; I have shipped multiple apps using it and am currently writing a desktop app.


I don’t have the patience or interest in messing with GUI frameworks anymore.

Any desktop app I make that needs a GUI gets written in Java. Swing is antiquated but doesn’t change every year like everything else keeps doing. I use an XML UI definition language (Swixml) to make it RAD. With a modern look-and-feel package it looks great, runs fast on modern hardware, works the same on Windows/Mac, is easy to deploy, and most importantly: I know it will work basically forever.


> see if you can get this webapp running in electron, and maybe figure out how to do push notifications and save data locally. Two days later the project is done. A week later, the app is ready to install.

So you had an app that was already written, ported it to Electron (designed to make the port as painless as possible), and you note that this is easier than writing it from scratch with non-web APIs? This is not exactly surprising, is it?


So true. I don't think it's a surprise for anyone that developers experienced in web technology find it easier to make apps using the thing they're experienced with.

In a parallel universe, a junior dev who only learned Qt would have the same amount of trouble making websites.


Maybe, but there are a lot more web programmers than Qt programmers, so chances are you're going to be dealing with someone who knows the web and not Qt.


> So you had an app that was already written, ported it to electron

True, and had we tried to port to native, that intern would have given up. Most of the business functionality was in a REST API, so porting to native was doable, but the level of effort would have been extreme. Oh, and had we wanted native on Mac, Windows, Linux, Android and iOS... well... that's 5x the effort right there.


As a general remark, I get confused and maybe even a little irritated when people conflate Android & iOS with macOS/linux/windows. I mean, a device that you interact with almost entirely via (multi)touch and that generally has extremely limited display real-estate is fundamentally a totally different platform than a desktop computer. The interaction model, the display design... you're never going to want the same thing on both (unless what you want to do is extremely constrained).

You can sort of kind of pull this off with a relatively limited web app, where the interactions are limited to tapping (since there's generally touch interaction on the desktop) and the fluid layout possibilities of modern CSS give you some chance of coming up with a general design that can work sort of kind of OK across all browsers on all platforms.

But try this for a more complex app, the sort that has been traditionally desktop only, and I think it's extremely hard to design the GUI in a way that can work on both mobile and desktop platforms, regardless of the actual toolkits /technology you are willing to use.


> As a general remark, I get confused and maybe even a little irritated when people conflate Android & iOS with macOS/linux/windows.

Android and iOS represent the majority of devices your software can run on. They are ubiquitous, and also very dependent on native UI frameworks that offer a somewhat better developer experience than their desktop counterparts, but only because there are fewer years of accumulated cruft. Incidentally, Android actually does have a desktop mode for its GUI. That said, users rarely use the desktop Android GUI.

> You can sort of kind of pull this off with a relatively limited web app

I've had better luck with Electron and PWAs on the desktop than with non-native mobile tooling. This is largely because non-native mobile frameworks often don't have access to the same libraries, or the abstraction layer loses fidelity in trying to be cross-platform. There's really no Qt-grade multiplatform tooling on mobile. There are some promising starts.

> I think it's extremely hard to design the GUI in a way that can work on both mobile and desktop platforms

It's hard to get a decent GUI on both Android and iOS without a lot of rough edges using cross-platform frameworks. They are coming along, but they're still just not quite there yet.


I'll concede that native GUI development is tedious on pretty much every platform in existence. However, there was a golden age of Linux GUI programming that spanned from the mid-2000s to 2019, where you could spin up fantastic GTK interfaces in the span of a weekend afternoon. I still use GTK3 to make simple forms and utilities, and I reckon I will for quite a few more years; the workflow is flexible and straightforward, there's a ton of fantastic desktop widgets that work as well on a touchscreen as they do with a mouse, and it was surprisingly extensible too. GTK4 has seemingly put that behind them to focus on a "write your app in this specific way or don't write it at all" ethos, but frankly I'm happy to stick with the prior toolkits. Maybe it sounds cheesy, but it really was my "ideal" GUI library for a while.


Yep. Getting started on the web platform is honestly still a marvel to me and unmatched after decades. Sure you can use babel/webpack/vite whatever, but you can also still create a text file called app.html, type your little program into it and then double click to run.
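Something like this still works exactly as described: save it as app.html and double-click it (the ids and text are of course arbitrary):

    <!-- app.html -->
    <!doctype html>
    <meta charset="utf-8">
    <title>Hello</title>
    <button id="go">Say hello</button>
    <p id="out"></p>
    <script>
      document.getElementById('go').addEventListener('click', function () {
        document.getElementById('out').textContent = 'Hello, world';
      });
    </script>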


I recently switched to Fedora and discovered that it's straightforward to create native GUI apps for Linux these days: just use GTK+ with your choice of official binding (C, JavaScript, or Rust) and you're done. Things are well documented and generally easy to set up, at least in my limited experience.

Flatpaks make distribution a non-issue too.
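For example, the JavaScript (GJS) route looks roughly like this. This is a sketch from memory, assuming GTK4 and a recent GJS with ES module support, and the application id is a made-up placeholder; run it with `gjs -m hello.js`:

    // hello.js: minimal GTK4 window via the GJS binding
    import Gtk from 'gi://Gtk?version=4.0';

    const app = new Gtk.Application({ application_id: 'org.example.Hello' });

    app.connect('activate', () => {
      const win = new Gtk.ApplicationWindow({ application: app, title: 'Hello' });
      win.set_child(new Gtk.Label({ label: 'Hello from GTK4' }));
      win.present();
    });

    app.run([]);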


Not sure this is a very 'unpopular' opinion ...


The size of node_modules and the compile times beg to differ.


Have you tried the Mac?

