Windows 11 will happily execute a binary compiled 30 years ago (twitter.com/mikko)
318 points by mikkohypponen on Aug 18, 2023 | 408 comments



See also

https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...

one of the developers of the hit game SimCity, who told me that there was a critical bug in his application: it used memory right after freeing it, a major no-no that happened to work OK on DOS but would not work under Windows where memory that is freed is likely to be snatched up by another running application right away. The testers on the Windows team were going through various popular applications, testing them to make sure they worked OK, but SimCity kept crashing. They reported this to the Windows developers, who disassembled SimCity, stepped through it in a debugger, found the bug, and added special code that checked if SimCity was running, and if it did, ran the memory allocator in a special mode in which you could still use memory after freeing it.
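
(For illustration only, and not Microsoft's actual code: a minimal C sketch of the idea described above, where a detected-application "compat mode" makes the allocator quarantine freed blocks for a while instead of recycling them immediately, so a use-after-free like SimCity's keeps reading intact data. The names and the quarantine size are made up.)

    #include <stdlib.h>

    #define QUARANTINE 64

    static void *quarantine[QUARANTINE];
    static int q_head;
    static int compat_mode;            /* set when a known-buggy app is detected */

    void compat_free(void *p) {
        if (!compat_mode) {
            free(p);                   /* normal behavior: release immediately */
            return;
        }
        /* Compat mode: delay the real free by evicting the oldest block instead,
           so memory the app touches right after freeing is still intact. */
        if (quarantine[q_head])
            free(quarantine[q_head]);
        quarantine[q_head] = p;
        q_head = (q_head + 1) % QUARANTINE;
    }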


Counter-example: Soldier of Fortune is broken on modern Windows because of a misapplied compatibility hack. Rename the binary and it works with no problems.

This is an awful way to implement backwards compatibility. Opaque and ad-hoc. They have been using a similar toolset to break competitors' applications.

Choosing which old version of Windows to run the program as typically comes down to trying them one by one.

Linux is no better with no stable ABI. Mac is a mixed bag of excellent Rosetta and breaking apps for no reason. Who did it better? FreeBSD? Some extinct “grown-up” OS like VMS?


> Mac is a mixed bag of excellent Rosetta and breaking apps for no reason.

They will probably retire Rosetta2 in a few years, like they did with Rosetta.

Apple usually seems to care about getting the bulk of applications to transition over, and the rest is just collateral damage/the devs should’ve just updated their software.


> They will probably retire Rosetta2 in a few years, like they did with Rosetta.

Counterpoint: The PPC-to-Intel version of Rosetta was licensed technology (QuickTransit); Apple was undoubtedly paying for it, possibly even per user, so there were financial reasons for them to get users off of it ASAP.

Rosetta 2 was developed in-house by Apple, so there isn't the same hard timeline to get users to stop using it. I wouldn't expect it to survive long beyond support for running the OS on x86, though.


It could last longer if gaming with their game porting toolkit gets big enough to drive more Mac sales. Money talks.


That's a long shot. I wouldn't hold my breath for seeing the day when gaming on Mac is a real choice for big games.


Yes, just like they had valid technical reasons to kill 32-bit iOS apps. The point is that they don’t go above and beyond like Microsoft (although of course even MS has deprecated e.g. 16-bit apps).


On the bright side, the end result is that on Mac there are only apps that have been updated this decade.


Is the lack of an app a good thing?


It can be because it’s an incentive to create a new, up-to-date app.


This is basically the defense of bad deeds: bad deeds inspire others to do good deeds.


No, the absence of a bad thing creates room for a good thing.

Because an old, out of date application is not available, there is a viable market for a new, up-to-date application that serves the same purpose.


Gee isn't it great when modern businesses don't have to compete with decades old software and can sell you their crapware for whatever price they want because there is no alternative.


The general idea behind capitalism is that the market provides alternatives. Why would there only be evil giant corporations whose applications are so bad that they would be trumped by those great ancient, obsolete applications? There is also room for small vendors to provide innovative new alternatives, and for the open source community to try their hand.

The reality is that on the Apple platforms these ancient, obsolete applications are not available and instead there are new, modern, better applications because there is a market for them. While on the Windows platform it’s a big, inconsistent insecure mess because everyone is clinging to obsolete, unsupported software that is barely good enough.

By the way, keep pressing that button you think is the disagree button!


Is incentivizing churn a good thing?


Is it really fair to characterize replacing a 30 year old app as churn?


Not if the developer isn’t around anymore.


In a world with only one developer, yes. In the real world, where other developers can create new apps that do the same thing better, more securely, more easily, or in a more modern way, no.


Also in the real world, where (for any number of reasons) users sometimes prefer older apps regardless, yes.


And sometimes users do not get what they prefer and they have to make do with what they get. In the Microsoft world and in the Apple world. Tough luck.

It’s not worth living in the past because some hypothetical users want to cling to it. It’s worth promoting innovation because innovation is replacing old things with better new things.


> I wouldn't expect it to survive long beyond support for running the OS on x86

Even if support for running x86 Mac GUI apps goes away along with x86 macOS, they might still keep the technology around for Docker, Linux VMs, etc.


And this is why Apple will never be a serious gaming platform for non-exploitative/GaaS games. Personally I think it's good that I can run games that were last updated in the early 2010s on my computer.


Definitely. All Intel Mac apps will be abandoned. Even tiny apps like Spectacle will cause pain.


I've found Rectangle to be a good substitute / in-place replacement.


Thanks, it actually makes sense to switch. And Spectacle is even open source, so the amount of pain is minimal.


> Linux is no better with no stable ABI.

I’m confused. Linus has repeatedly stated that the ABI should be stable, “we don't break user space”. There are exceptions, but any proposal that makes a breaking change to the kernel’s external symbols is very hard to push through.

I don’t remember anything breaking because of a new kernel version except device drivers, which are part of the kernel anyway and should be compiled for a specific kernel version. They are not applications, so they shouldn’t rely on assumptions about the ABI.

Most Linux distros offer mechanisms to compile a version-dependent interface to isolate a version-independent driver or program that messes too closely with the kernel.
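
(Illustrative aside: the stability guarantee being discussed is the kernel's syscall interface. A minimal C sketch, assuming Linux/glibc, of a program talking to that interface directly via syscall(2); this binary-level interface is the layer "we don't break user space" refers to.)

    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        /* SYS_getpid has kept its number and semantics across kernel releases. */
        long pid = syscall(SYS_getpid);
        printf("pid via raw syscall: %ld\n", pid);
        return 0;
    }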

> Some extinct “grown-up” OS like VMS?

I’d say the age of binary compatibility ended with most of those “grown-up” OSs becoming legacy. I usually test (compile and test) my C code on multiple platforms, ranging from Windows to Solaris on SPARC (emulated these days, sadly). I haven’t yet figured out a cost-effective way to test it under IBM z/OS’s USS (which makes z/OS a certified UNIX).


The kernel userspace APIs are pretty stable, the APIs provided by the rest of what constitutes a complete Linux "operating system" are not. I've ended up using a lot of hacks and tricks to get some early Linux games running on modern systems. Some applications designed for X still have broken features in Wayland, and likely won't be fixed without new versions of said apps because making Wayland compatible would break the entire security model.

It's generally not a huge issue on Linux, because most of the software you use day to day is open source and probably maintained. The real problem children, across all operating systems, are proprietary video games. They're closed source, unmaintained, and their cultural value makes them highly desired targets for ongoing compatibility and preservation.


> The kernel userspace APIs are pretty stable, the APIs provided by the rest of what constitutes a complete Linux "operating system" are not.

There are plenty of userspace ABIs that are extremely stable, including whatever you need to run a game (like the C runtime). There are also APIs without stability guarantees (like the C++ standard runtime). A lot of games that no longer work depend on some of the latter libraries. There are also ABI bugs, no compatibility is perfect, but those usually do get fixed when found, unless doing so would break more programs.

> Some applications designed for X still have broken features in Wayland, and likely won't be fixed without new versions of said apps because making Wayland compatible would break the entire security model.

That's not a long-term compatibility problem; it's a problem of using a zero-trust mobile-phone security model on a desktop. That security model should be broken and moved to /dev/null where it belongs.

But really at some point you are going to need compatibility layers anyway. We already have Wine with great support for old Windows apps, there is nothing preventing something similar for legacy Linux on modern Linux emulation - except a lack of interest because there really aren't that many Linux-only legacy applications.


> The real problem children, across all operating systems, is proprietary video games.

They can be distributed packaged with their requirements (Wayland vs. X would still be an issue), but things like snaps or Flatpaks solve that.


> Linus has repeatedly stated that the ABI should be stable, “we don't break user space”.

Linus said that the userspace API to the kernel should be stable, which it mostly is. But a GNU/Linux system contains a lot more APIs (in userspace).


Drivers being kernel specific is really annoying


All they have to do is upstream their drivers into the kernel instead of shipping proprietary blobs. Why is this so hard?


Because the Linux kernel has stringent guidelines for how a driver should be written and how it should work. Companies don't want to put in the work (i.e. pay someone with experience) to upstream their drivers. Whether this makes sense monetarily isn't really relevant to many decision makers. At least that's how I explain why Nvidia and Broadcom don't upstream their drivers.


> Because the Linux kernel has stringent guidelines how a driver should be written and work

Preventing poorly-written software from being added to the kernel is a Good Thing. The system is working as planned.


A major problem with Android: you're stuck with whatever version of Android you have because of binary-blob drivers.


The major problem is Google not enforcing updates via their contracts.

Project Treble has made Android a pseudo microkernel with stable ABI for drivers.

However, Google has decided it is still up to OEMs to decide whether drivers are to be shipped or not.

With no legal enforcement tied to accessing Google services, OEMs would rather sell new hardware.


The obvious problem seems to be proprietary device drivers, no? If they didn't shoot themselves in the foot with their licensing, the drivers would work with any kernel version.


> They have been using similar toolset to break competitors applications

Source(s) ?


This, for example. And they got caught. How many times did they not get caught?

https://www.theregister.com/1999/11/05/how_ms_played_the_inc...


Your evidence is an article from 24 years ago about behavior that happened 32 years ago? And it's not even about them breaking competitors' applications, it's about them refusing to run on a competing OS (in a bit of a sleazy way).

Do you have more evidence of your claimed behavior?

I dislike MSFT, a lot, but that's a _very_ big claim and needs to be backed up with evidence.


My claim is that Microsoft's operating system was silently detecting competitors' software and changing behavior to break compatibility. That is proven. The war on WordPerfect was equally shady.

Did Microsoft clean up its act at some point and stop doing so? They force Edge at every opportunity, so even the behavior that almost got them forcibly broken up is back.

I don’t think we have caught them outright sabotaging e.g. Chrome aside from the default browser shenanigans, but who would bother to check unless it’s a repeatable crash? Aside from Chrome what app do they even have a need to sabotage? Steam?


> My claim is that Microsoft operating system was silently detecting competitors software and changing behavior to break compatibility. That is proven

That's a false claim. That code was in a beta and never shipped. You're just spreading FUD.


The code was shipped, just disabled.


Plus, let's be real, the Register is basically the tech equivalent of the Daily Mail. Often amusing, occasionally accurate.


Sure, but the DR-DOS case is well known. Is ZDNet better? It also lists other cases related to efforts to destroy Novell.

https://www.zdnet.com/article/caldera-unlocks-microsoft-evid...


Never expected someone would bring up the SOF issue on modern Windows. PCGamingWiki FTW.


Apple dumped 32-bit support, which sealed macOS's fate as a gaming platform.


This is actually what most GPU driver updates are. Instead of waiting for devs to fix their games, Nvidia calculated it's in their best interest to just fix the game bugs at the GPU driver level and push updates.


> fix the game bugs at the gpu driver level and push updates

It's because the Nvidia drivers are opaque blobs, rather than a source distribution.

If Nvidia distributed their drivers as open source, I would imagine developers would likely "fix" their games properly because they'd be able to see what is going on underneath the hood, and write more optimal code.

Of course, this removes some "competitive advantage" Nvidia has over their AMD counterpart.


Open source drivers also carry workarounds for buggy OpenGL [0] and Vulkan [1] programs.

[0] https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/src/uti...

[1] https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/src/uti...

Yes, these should be fixed properly in the programs instead of the driver, but more often than not they won't be. Users don't care that it's the program that's buggy when it works with another driver, either because that driver doesn't trigger the bug or because it has its own workaround. The opposite also exists of course - application workarounds for driver bugs. Or for bugs in crappy third-party code that gamers like to inject into the process, e.g. the Steam overlay.

If anything, AMD drivers probably have more workarounds as devs are more likely to test with NVidia due to the unbalanced market share.


source code distribution has literally nothing to do with it.

all of what you say could apply to AMD or Intel but doesn't loo


Those games stopped receiving updates years ago.


See the infamous story about Quack 3 on ATI Radeon cards.


Here's the link, for those interested:

https://techreport.com/review/how-atis-drivers-optimize-quak...

Personally, I don't believe this is a good example, as this was ATI purposefully crippling the experience (downscaling textures being the prime aspect) to cheat benchmarks. OP is more referring to "transparent" optimizations (the same experience, but made to run faster for the GPU's architecture, usually via operation reordering or shader modifications).


Ah, yes, that's true, it's usually not done maliciously like the ATI example. It came to mind because it was a well-documented case of the driver being aware of the application and changing behavior.


Newer APIs even have the program provide more detailed information [0] so workarounds can more easily apply to e.g. multiple games using the same engine. Renaming the executable won't be enough to test for / disable this kind of hack today.

[0] https://registry.khronos.org/vulkan/specs/1.3-extensions/man...
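
(A hedged sketch of what that looks like from the application side, in C with the Vulkan headers: the engine identifies itself through VkApplicationInfo, so a driver can key workarounds on the engine name/version rather than on the executable name. The application and engine names below are made up, and error handling is omitted.)

    #include <vulkan/vulkan.h>

    VkInstance create_instance(void) {
        VkApplicationInfo app = {
            .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
            .pApplicationName = "MyGame",              /* illustrative name */
            .applicationVersion = VK_MAKE_VERSION(1, 4, 2),
            .pEngineName = "MyEngine",                 /* drivers often match on this */
            .engineVersion = VK_MAKE_VERSION(5, 0, 1),
            .apiVersion = VK_API_VERSION_1_3,
        };
        VkInstanceCreateInfo ci = {
            .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
            .pApplicationInfo = &app,
        };
        VkInstance instance = VK_NULL_HANDLE;
        vkCreateInstance(&ci, NULL, &instance);        /* result check omitted */
        return instance;
    }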


Is this done by intercepting the API calls made to the GPU, or by directly modifying the game binary instructions?


Modifying binaries (on disk or in RAM) would be detected by anti cheat mechanisms. Most likely they change how to interpret a certain sequence of API calls or replace shaders with their optimized versions.


Yeah, I guess I shouldn't have said "bug" in the traditional sense. More like "let's rewrite the GPU calls for this game so instead of getting 53 avg fps, they now get 60 avg fps," since we (being Nvidia) know it better.

Most likely when a game is released, the team that built it disbands to work on something else and won't have time for performance hacks like this.


From where I'm sitting this looks like an excellent argument for breaking backwards compatibility. All those bullshit hacks are a maintenance and debugging burden for someone and a tax on the whole rest of the operating system -- and I'd argue that it really shows.


The thing about Microsoft is that they developed a framework for doing that kind of thing in a systematic way so it is not so hackish as it sounds.

That's an innocent example but do recall that circa 1995-2005 or so, Microsoft looked like a dangerous monopolist and could have been in legal jeopardy if a product that competes with office (say WordPerfect) quit working when they went from Win 95 to Win 98 or Win 2k to Win XP.

I'd also add that more than once I've had web pages and applications I've made used as test cases by Microsoft and been contacted by people from Microsoft about compatibility issues (...and I've never been an advocate of Microsoft browsers; often I've been quite the opposite). I haven't heard once from Netscape, Mozilla, Apple, Opera or Google about something like this.


Yes, people used undocumented functions, which were left undocumented as they were subject to change, and accused Microsoft of not understanding software development, or thought that those undocumented APIs were somehow better than the recommended documented ones. When stuff crashed after Microsoft changed an API, people would accuse them of deliberately breaking their applications and whatnot. I remember the hostility in those years 95-2005 towards Microsoft. There is still a cult in some Internet communities where trashing anything developed by Microsoft is seen as some kind of expertise, usually by self-proclaimed experts.


Perhaps the negativity stems from all the illegal, immoral, and damaging acts in this era.

Not least of which was pouring millions into a baseless pump-and-dump scheme laying hallucinated charges of infringement against the entire Linux ecosystem, but there are plenty of other issues.

In short, people hated them for being shitty people who did shitty things, and the present leadership were important folks when these shitty things were done; everyone just moved up a few rungs.

They aren't better people; they just have better-aligned incentives, where illegality and immorality aren't profitable.


> Perhaps the negativity stems from the all the illegal, immoral, and damaging acts in this era.

Perhaps, but hate against Microsoft was not the point of the discussion. You are free to see them in whichever light you want; I just don't understand why you have to down-vote me or bring that into a completely different discussion.

The question here was about the technical issues and their work on making software tick despite people abusing the API and producing buggy software. That has nothing to do with hate against Microsoft. To me that sounds like a whataboutism.

By the way, I personally don't even use Windows. I have been a 100% Linux user for many years and have contributed to a GNU project; I am not the Microsoft fanboy you might take me for. But I see no value in defending a wrong, and I see the same behavior repeated in other communities. People abuse the API, or don't read the documentation, or are just plain idiots, and then accuse the developers of being malicious or stupid or whatever when things break. That behavior is bad for anyone, regardless of whether it is aimed at Microsoft or some GNU project.

> In short people hated them for being shitty people who do shitty things and present leadership were important folks when these shitty things were done everyone just moved up a few rings.

I am sure people have many reasons to hate something or someone, Microsoft included. There are people who passionately hate GNU, Linux, the FSF, you name it. If you justify such behavior, that is your choice, but I am not interested in that discussion. I was talking about people abusing the API and then blaming Microsoft for incompetence or deliberate evil, while the company obviously went to quite long lengths to make things work, even for buggy software. Microsoft may well be evil or good for other reasons, but that wasn't really the point of the discussion.

> They aren't better people they just have better aligned incentives where illegality and immorality aren't profitable.

This is outright dangerous behavior on your part. You are transferring a behavior you have projected onto a company over the entire group of people, all the thousands of people who work for Microsoft. To start with, they are all individuals, and as in every group there are good and bad characters among them. Also, history is against you: Microsoft was, and still is, a very profitable company. In the time we are speaking about, they were probably too profitable for their own good. And second, there are laws and law enforcement officials to decide whether they did illegal business or not.

Not to mention that most of those who worked there back in the 90s have probably retired or changed jobs by now. Also, with thinking and statements like that one, you are denying people the chance to develop as individuals and become better persons.


I didn't downvote you, because downvoting to disagree doesn't lead to optimal discussions; I note with amusement that you don't share this idea.

> You are transferring a behavior you have projected on a company over the entire group of people

I specifically indicted the top leadership, who were, as now, in positions of authority when the immoral actions were taken and continued to work for the same org. This is a pretty clear and defensible position, whereas you said:

> That has nothing to do with hate against Microsoft. To me that sounds like a whataboutism.

and also

> I remember the hostility in those years 95-2005 towards Microsoft. There is still some cult in some Internet communities where trashing anything developed by Microsoft is seen as some kind of expertise or something by usually self-proclaimed experts.

You compared people with a legitimate axe to grind to cultists and denigrated them as "self-proclaimed experts". You are again the party committing the sin you project.

> There are people who passionately hate GNU, Linux,

These things aren't of neutral value. There are people who hate purple and Hitler but nobody thinks these things are the same.

> For the second there are law and law enforcement officials to decide if they did illegal business or not.

They actually were repeatedly found to have engaged in illegal behavior in a court of law, and beyond those multiple losses in court, ample information is available. Nobody with a brain thinks OJ was innocent or feels bound to disregard, for instance, the book where he describes the crime, called "If I Did It".

> Also, history is against you, Microsoft was, and still is very profitable company.

How is history against me? IBM literally helped the Nazis categorize their population so they could exterminate millions of people, and the people who helped make those decisions didn't cease to exist in 1945, or indeed in 1955, and they were and are worthy of critique, even if it would be nonsensical now to impose that judgement on people who were born decades after the war. We can both remember AND be reasonable.

History by definition is the things that happened. The fact that it tends to forget the bad things done by people who later did well for themselves isn't "history"; it's a collective dementia, a mental defect which keeps us making the same mistakes. An actual appreciation for history would suggest a commitment to objective memory, not whitewashing.

> I have been 100% Linux user since many years and have contributed to a GNU project

This is the software equivalent of conspicuously announcing that you have a black friend. You needn't, as nobody is suggesting you have insufficient credibility. What's happened is you wrote something fairly inflammatory next to something mundane that is reasonably considered, and most people have ignored the mundane thing because you set your apple pie adjacent to a flaming bag of poo; and when everyone mentioned the shit, you have followed with a bunch of bad explanations for the poo and ill-considered arguments, so we are still talking about the smell of shit instead of eating pie together.


> recall that circa 1995-2005 or so, Microsoft looked like a dangerous monopolist

As opposed to now?


The issue is that at that time, updates for most shrink-wrapped software were almost nonexistent and distribution was even worse. Windows had an update facility and I bet SimCity for DOS did not. So either Windows yielded and patched its behavior to run SimCity, or SimCity users couldn't run it under Windows, full stop, until Maxis distributed a new release on media.

In today's world it looks like a ludicrous solution to the problem. The game vendor needs to HTFU and distribute a patch via Steam or whatever, and that's a totally reasonable stance for Microsoft or Apple to take, because the infrastructure for patch distribution is pervasive and robust now.


And it's worth emphasizing that Microsoft was/is incentivized to do this because especially with popular software, such bugs have a chance of resulting in users blaming Windows even if in reality it's the gamedev's fault.

Similarly to how GPU vendors are incentivized to patch their drivers to fix bugs in a popular game release because the bugs might be blamed on the vendor instead of the dev.


I don't have the reference for it here, but in the first year Windows Vista was out >50% of crashes were due to buggy nVidia drivers. Microsoft assumed (incorrectly) that their ecosystem would get their shit together automatically. That nVidia would make solid drivers for the new Windows WDDM driver model.

The year before Windows 7 came out I was working at a company (DivX ;-) making Windows software. We were getting contacted by different testing groups at Microsoft constantly. Some weeks three different people might contact me. Somehow they found my phone number? It didn't seem very efficient, but it was clear that they were allocating huge resources to this.

They found unbelievably nuanced bugs (their QA was better than ours...). They wanted to know when we were going to have a fix by. They wanted to make sure they didn't have a Vista repeat. Vista SP1 was actually quite stable, but it was too late for the Vista brand already.

With Windows 7 it seemed clear that the thing they cared about was: the software their users actually use still works after the update. Right or wrong, it was very user centric, because what user likes for their software to break? Nobody cares why.


Microsoft got to burn those kinds of resources on the compatibility problem due to having had a near-monopoly on the desktop back in those days.

That isn't going to be repeatable for pretty much any other software company other than a FAANG, and certainly not for open source projects, not even Linux. People don't pay enough to open source projects for that kind of support.


It is true for the kernel though. Linus has a strict non-regression (of user code) rule. If some user code starts to break because of a kernel change, the kernel team takes the blame, no questions asked.


Amazing story, thanks for sharing. It explains some of the sustained success of Windows perhaps.


Speaking of Windows Vista, a tale of two friends; let's call them Ben and Randy.

Ben has a Compaq laptop that is Vista-capable. It takes upwards of three minutes, I kid you not, of blank screen to get a UAC prompt for an elevated access request. And those were very common with Vista.

Randy has a nice (and expensive) desktop and thinks Windows Vista is just fine and all computers should upgrade to it (college computers still had Windows XP and there was no plan to upgrade existing computers to Vista that I knew of).

They both have wildly different opinions of Windows Vista.

https://arstechnica.com/gadgets/2008/03/the-vista-capable-de...


People forget how every software developer had to spin up their own patch delivery infrastructure.


Back in the day it wasn’t even reasonably possible. You got a floppy/CD from the store and that was the end of the story. I mean, this is “Netscape just came out” times.


In the early 90s, we had a disk replacement policy and would mail people updated floppies on request (and a $10 S&H fee). Few people took advantage of it.

We then started hosting some patches on Compuserve, GEnie and Prodigy.


Yep. At best you had an updated version on the next run, but even that was unlikely.


My AOL updates in the mailbox always lagged behind, week after week...


If the company even existed when a Windows update caused an issue. And given the number of Windows installations, even the most obscure software or setup is likely to affect a lot of people.


> All those bullshit hacks are a maintenance and debugging burden for someone

That’s why that someone is getting paid -it’s their job.

If you want software to power important things in society, like transport and energy, you need to have a certain level of responsibility for reliability.

As this post demonstrates, right now we have immature children, who are used to breaking things for the sake of a new fad, in charge of critical systems.


This really bothers me, and I'm young. I am TIRED of applications FORCING restarts and other garbage as a software culture. You are doing something? Oh sorry, our developer team thought it would be smart to forcibly crash whatever task you are doing to give you a 2-line changelog.

Typing a DM on Discord or talking to people? Boom, random restart to force an update. Using Firefox Nightly? Sorry, you cannot use it as your main browser because they can decide to brick your browser randomly to force you to restart. For no reason whatsoever; what is the downside of just warning people but not FORCE restarting? None.

Doing critical stuff on your PC that requires long-term uptime? Sorry, Windows will decide for you to restart forcefully (at least these can be turned off, for now, via Group Policy).


Re: Firefox, Firefox only demands restarting if you changed the copy of Firefox out from under it. This only happens if you're managing Firefox with an external package manager, or you're building from the repo and installed a new copy.

If you run Firefox from its default mechanism, it will update itself and ask to restart before applying the update, and will continue working forever, never updating until you restart it.


Might be the case now (note, I'm talking about Nightly); this was a few years back. I noticed that it first warned that a new version was waiting and then mysteriously bricked the browser afterwards. I complained and they told me that I "shouldn't be using Nightly if I don't wanna have the latest updates"... which I do, but I don't wanna be forced into a restart unless something actually breaks.


Firefox Nightly is a branch specifically for testing and evaluating the newest updates. If you want reliability and consistency out of Nightly, the problem is you.

You will get what you want out of the regular branch, and even more out of the Extended Support Release branch.

As for Windows rebooting out of nowhere: Get a Professional or Enterprise license and turn autoupdates off via Group Policy and your problems are solved.


And this is exactly the comment I was talking about.

Anyway, for any sensible people: yes, Nightly is a branch specifically for testing and evaluating. That doesn't mean you have to brick people's browsers some time after you push an update (again, back when I used it that was the behavior; according to a different comment it might be different now) to force a restart.

"As for Windows rebooting out of nowhere: Get a Professional or Enterprise license and turn autoupdates off via Group Policy and your problems are solved.", yes, I already addressed this and mentioned this solution, but again, that doesn't change the fact that it's wrong in the first place, maybe actually read the comment you are replying to?


I use Nightly exclusively. Unless you are somehow installing Nightly updates in a way that the binary is owned by a user who isn't you (ie, owned by root), Nightly will never force you to restart, it just forever has the green dot on the hamburger menu.

This is true for me on both Windows (and my user is part of the Administrator group, thus can write to a global c:\Program Files install) and Linux (and Firefox is installed to a directory in $HOME to simplify the process of non-packaged binary management).

Now, I also have Firefox (stable) installed as a .deb (to fulfill the browser dep). If the .deb gets upgraded by apt, that Firefox suddenly bricks itself until I restart. And this is intentional, btw, given how Firefox interacts with itself to do process isolation.

Everything I say here has been true for roughly the past decade.


> using firefox nightly? sorry you cannot use it as your main browser because they can decide to brick your browser randomly to force you to restart.

I've been on Firefox Beta for a year or so and never seen this - it politely shows an update notice and that's all. It updates itself once I close all open windows of FF. I wonder why Nightly does things differently here. On the other hand, running Nightly is expected to not be as smooth as a stable release.


Restarting exists, because it's the simplest way to maintain a program. Do you expect a car mechanic to repair your car while you're still driving? It could be possible, but it would cost so much more, that no one even thinks to do it.

Discord is a great example, because it's very far from being optimal in so many regards. It could be written in C++ and be so much faster, use so much less memory, it could be more reliable in many aspects… But because most people don't care, what Discord creators do is roughly the most meta-efficient (money efficient?) thing to do.


Raymond Chen made a great point in his book “The Old New Thing”. If a hacky program suddenly doesn’t work when you upgrade to a new version of Windows, the user isn’t going to blame the program. They’re going to blame the OS because that’s what changed.


And that’s why you stand where you are and Microsoft stand where it does.

You’re thinking like an engineer. Microsoft is a business. Backwards compatible is its major core competency.


And why you don’t allow developers to make business decisions.


The huge backlash against Reddit's policy change is a good example of that. Sure, devs and programmers were upset, but after a week or two, all went back to normal as if nothing happened. Reddit continued to work, and its alternatives didn't gain much traction.

I have an engineering background and then pivoted into management science, and the difference in perspectives between the two fields is really obvious.


Let's not give Reddit's management too much credit. They've lit a decade's worth of investor and advertiser money on fire and have precious little to show for it. Their recent moves seem less like calculated business decisions and more like desperate scrambles to make the site appealing to public investors on very short notice.


And yet Apple has famously broken compatibility on a regular basis. Remember when Apple went from Motorola 68000 to Intel x86? They did it by providing direct support to key companies to port their software, and providing Rosetta for applications that could take the performance hit.


Either extreme works. There's a sour spot (opposite of sweet spot?) in between, where the platform breaks compatibility, but not so predictably that customers remember how to reach their suppliers, and suppliers remember how to update their products.


Yes, as long as the business communicates and executes well and makes the value proposition of the choice clear to their customers, either works. Being backwards-compatible to the beginning of time is not the only choice. My experience in the industry, however, is that most organizations cling to never breaking compatibility, under the flawed belief that it's always simpler or cheaper than planned migrations.


I think you mean “PowerPC to Intel”. 68K was before PPC.


You're right! It's been a long time. There was a 68K to PowerPC, then later Power to Intel, and now Intel to arm64.


I think both are valid strategies, but it's usually a business decision, not a technology one.


They are, but in my experience businesses will tend towards backwards compatibility without adequately weighing the costs of one versus the other. Cargo cult compatibility, so to speak.


You're looking at this from the point of view of a software developer.

Microsoft's point of view is that the underlying software doesn't matter. The user's software _has_ to run. The Application Compatibility Database (https://learn.microsoft.com/en-us/windows/win32/devnotes/app...) is, overall, a relatively small component, and all it does is apply some shims if your executable is in that list. Performance issues in Windows do not stem from anywhere near the kernel. The kernel team is absolutely top tier. The kernel itself is of much higher quality than what you'd find on Linux, or MacOS.

Now, the upper layers however...
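
(A rough sketch, not Microsoft's implementation: the general shape of an exe-name-keyed shim lookup like the one the Application Compatibility Database performs. The table contents and shim flag names here are invented purely for illustration.)

    #include <windows.h>
    #include <string.h>

    struct compat_entry { const char *exe; unsigned shims; };

    /* Invented flags and entries, purely illustrative. */
    enum { SHIM_VERSION_LIE = 1, SHIM_HEAP_QUIRKS = 2 };

    static const struct compat_entry db[] = {
        { "oldgame.exe", SHIM_VERSION_LIE },
        { "simcity.exe", SHIM_HEAP_QUIRKS },
    };

    unsigned shims_for_current_process(void) {
        char path[MAX_PATH];
        GetModuleFileNameA(NULL, path, MAX_PATH);      /* full path of our own exe */
        const char *name = strrchr(path, '\\');
        name = name ? name + 1 : path;
        for (size_t i = 0; i < sizeof db / sizeof db[0]; i++)
            if (lstrcmpiA(name, db[i].exe) == 0)       /* case-insensitive match */
                return db[i].shims;
        return 0;                                      /* no shims apply */
    }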


> The kernel team is absolutely top tier. The kernel itself is of much higher quality than what you'd find on Linux, or MacOS.

Have recommended sources for this or learning more? My experience with Windows doesn't match this at all, though from my perspective it's hard to tell if it's kernel as opposed to any of the layers above it.


Notably, Microsoft does not document an API to the kernel; the official userspace interface in Windows is the DLLs they provide. In that sense, Wine, which provides DLLs that implement the same interface on top of the Linux kernel, is conforming to the way Windows works, and Cygwin, which provided a POSIX-like libc that ran on the Windows kernel, is a Windows-centric way to implement POSIX.

(That said, the path of WSL 1, which emulated the Linux syscall interface on Windows, takes advantage of the idea Windows NT had from the very beginning of implementing "personalities" that could pretend to be other OSes, such as the original plan for Windows NT to be OS/2 compatible.)


Windows was kind of a wreck till Windows NT; they brought in David Cutler, who did VAX/VMS, to help architect it. The transition from 95/98/ME/Vista to Windows NT/2000/XP was pretty amazing. They put my old operating systems book's appendixes online, which have details about Windows 2000 (Mach and BSD are the other OSes covered).

https://bcs.wiley.com/he-bcs/Books?action=resource&itemId=04...

Check out Appendix C for details on the Windows 2000 architecture, or this, which should link to the PDF.

https://higheredbcs.wiley.com/legacy/college/silberschatz/04...


>the transition from 95/98/ME/Vista to Windows NT/2000/XP.

Windows Vista is part of the Windows NT lineage, specifically NT 6.0.


There's an absurd number of components to the Windows Kernel, so here's a kind of disjointed list of various things, from different time frames.

Windows Research Kernel - https://github.com/HighSchoolSoftwareClub/Windows-Research-K... - More or less Windows XP

I/O Completion ports - https://learn.microsoft.com/en-us/windows/win32/fileio/i-o-c... - io_uring, but mostly better, since NT 3.5 (a minimal usage sketch follows at the end of this comment)

General architecture info: https://en.wikipedia.org/wiki/Architecture_of_Windows_NT

A bunch of things you'll find in Windows Internals, which is pretty much the bible for Windows (https://empyreal96.github.io/nt-info-depot/Windows-Internals..., or buy it online. Mark Russinovich is a treasure trove of Windows knowledge)

The various Windows subsystems - Windows is built from the start to be able to impersonate other OSes. While the most obvious one is WSL (despite WSL2 being just a VM), there's an OS/2 Subsystem, a POSIX Subsystem, a Win32 subsystem...

Very few things actually run in kernel mode. There exists an almost-kernel-but-not-quite mode called executive mode, which is a much better option than Linux's all-or-nothing user-or-kernel split (and, as far as I know, Mach has the same problem).

NT is a hybrid kernel: not quite monolithic, not quite micro. This allows Windows to do things like swapping your video drivers live as it's running, unlike Linux and Mach which would miserably crash. Hell, it can even recover from a video driver crash and restart it safely, and all you'll see is a few seconds of black screen.

The breadth of devices it supports is absolutely breathtaking. (well, in part because they very much have a hand in forcing manufacturers to respect the standards that they write)

All of Sysinternals (Mark Russinovich's work, again) is also an amazing resource: https://learn.microsoft.com/en-us/sysinternals/

Now, mind you, this is purely about technical merits: the NT Kernel is a miracle of technology. The APIs it exposes, and that most Microsoft products expose are downright batshit insane sometimes. But that's also what happens when you support 35 years of software. Also, the HANDLE pattern that most Win32 API uses is the superior alternative to dumb pointers (https://floooh.github.io/2018/06/17/handles-vs-pointers.html)

Oh, and a bunch of The Old New Thing articles, but I can't be arsed to look them up right now, sorry.
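
Re the I/O completion ports entry above, a minimal C sketch of the core calls (error handling trimmed; a real server would associate file or socket handles with the port instead of hand-posting a completion):

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Create a standalone completion port (no file handle associated yet). */
        HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

        /* Normally completions arrive from overlapped I/O; here we post one by hand. */
        PostQueuedCompletionStatus(iocp, 42 /* bytes */, 0xC0FFEE /* key */, NULL);

        DWORD bytes = 0;
        ULONG_PTR key = 0;
        LPOVERLAPPED ov = NULL;
        if (GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
            printf("completion: %lu bytes, key 0x%llx\n",
                   bytes, (unsigned long long)key);

        CloseHandle(iocp);
        return 0;
    }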


That's a great collection of things.

Maybe it will open the eyes of people who for some reason acted as if Windows internals were some outdated tech mess just because Windows made some questionable choices when it comes to UI/UX.


As an historical aside, I’m 99% sure that the handle pattern had its origins in the 68k Mac system software. It is pretty cool to give the OS liberty to move your memory around at will without breaking behavior.
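
(As a toy illustration of that double indirection, in C rather than the original 68k toolbox conventions: code holds a pointer to a master pointer, so a compaction pass can move the block and only has to update the master pointer. The function names are invented, not the real Mac Memory Manager API.)

    #include <stdlib.h>
    #include <string.h>

    typedef void **Handle;             /* pointer to a master pointer */

    static void *master[256];          /* master pointer table */
    static int next_slot;

    Handle NewHandleSketch(size_t size) {
        int slot = next_slot++;
        master[slot] = malloc(size);
        return &master[slot];
    }

    /* A compacting step can relocate the block; existing Handles stay valid
       because they dereference the master pointer at use time. */
    void CompactSketch(Handle h, size_t size) {
        void *moved = malloc(size);
        memcpy(moved, *h, size);
        free(*h);
        *h = moved;                    /* only the master pointer changes */
    }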


> The user's software _has_ to run.

My concern is that a lot of security issues may have come from this. A clever attacker could grab recently freed memory from one of these programs and inject malicious code to enjoy whatever other weird privileges the original program has, because marketing said it can’t crash.


The compatibility hacks are not "revert to old, terribly unsafe behavior". Rather, "use current, good behavior, lie to the application about what happened."


This actually improves security by preventing UAF.


The NT kernel interfaces are lovely. I’m not at all a windows fanboy and haven’t done windows dev in well over a decade, but when I did the APIs were lightyears ahead of the competition. And Jeffrey Richter’s books were a marvelous resource on the documentation side.


Fork and IO performance are major problems when porting *nix software to Windows. It's visibly slower, one of the reasons WSL1 was way too slow to be used for many things while technically being a clever solution.


To my understanding it's not really IO (reading/writing bytes), but overhead on OpenFile/CloseFile being under AV/Defender inspection. I'm basing that understanding on https://www.youtube.com/watch?v=qbKGw8MQ0i8, "NTFS really isn't that bad" - Robert Collins (LCA 2020):

> Why was rustup slow (3m30s to install (not including download time)) in early 2019 on Windows, and why isn't it slow (14s to install) now?

>Early in 2019 I was developing some things in Rustlang on Windows, got frustrated at the performance of rustup (the installer for rust) and looked for bug. I found one which blamed a combination of NTFS, small files and anti-virus software for the problem. This wasn't an entirely satisfactory answer to my mind. Fast forward a bit, and with a combination of changes to rustup and to Windows itself and rustup is now pleasantly usable.... which also improved performance for rustup on NFS mounted home directories.


> The kernel team is absolutely top tier. The kernel itself is of much higher quality than what you'd find on Linux, or MacOS.
>
> Now, the upper layers however...

You're not selling me on the idea that the compatibility layer has no cost by pointing out that the upper layers that reside over it are a mess.

That would actually be my argument.


Would you consider LD_PRELOAD a massive cost to Linux? Because that's basically what it is.

You could give me the best kernel in the world, if I end up reading a file whenever I push a pixel to the screen, my performance will be dogshit. Windows's performance problems are not due to the kernel (or rather, not due to problems/bugs: some performance issues are just a choice. See NTFS's dreadful performance with Git: NTFS simply wasn't thought out for having thousands of very small files all being touched at the same time.)
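
(To make the LD_PRELOAD comparison concrete, a minimal interposer sketch, assuming Linux/glibc; this is the same trick user-space compat shims rely on. Build it as a shared object and set LD_PRELOAD when launching the target program; note that real programs may call open64/openat instead, which this toy ignores.)

    /* gcc -shared -fPIC shim.c -o shim.so -ldl
       LD_PRELOAD=./shim.so ./some-old-binary     (paths are examples) */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    /* Interpose the libc open() wrapper: log the path, then call the real one. */
    int open(const char *path, int flags, ...) {
        static int (*real_open)(const char *, int, ...);
        if (!real_open)
            real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");
        fprintf(stderr, "[shim] open(%s)\n", path);
        return real_open(path, flags, 0666);   /* mode only matters with O_CREAT */
    }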


> Performance issues in Windows do not stem from anywhere near the kernel.

I remember Windows uses an O(N^2) scheduler, so the system slows down when it has a few thousand processes. Would that count as a performance issue in the kernel?


As far as I know, Windows uses a multilevel feedback queue, so O(n²) would be surprising. The one issue that pops up with the Windows scheduler is when you have plenty of processes doing tiny little bits of IO instead of one huge slab of it.

Could you count it as a performance issue in the kernel ? Maybe. But really, you're mostly hitting an issue in what it's built to do. Windows really likes two things:

* don't spawn subprocesses

* please do your I/O in one huge batch and thank you.

The average Windows machine will barely have 100 processes running. I have 184 right now, and I run a lot of crap. This goes directly contrary to the philosophy of many UNIX programs: create a shitload of processes, fork everywhere, and write tiny files all the time.

I wouldn't complain about a hammer not working well if I'm using the handle to nail things. Sure, it would be nice if it also worked there, but it's not exactly made for that either, and for good reason. POSIX fork() being such a can of worms is half the reason that WSL1 was abandoned. Windows does not have the internal machinery to do that, because it wasn't built for that.


If it had been a piece of shovelware I doubt they would have bothered. But there were some applications and games like this (Myst also comes to mind) that were so mainstream MS may have had them on a punch list of apps to test before a release. When a game or app is sort of a poster child for "why should I buy a PC?", they didn't have much choice. For countless other apps, consumers had to fiddle with voodoo HIMEM configs and other such things you might hear rumored on street corners late at night, trying to work some binary magic. I've heard the SimCity example before and I think it gives MS entirely too much credit for being OCD about compatibility. Plenty of things simply broke.


FWIW Safari and Chrome are also doing this with their “quirks” files: https://github.com/WebKit/WebKit/blob/main/Source/WebCore/pa... (I can’t find the chrome one)


Chrome doesn't have one anymore, that's a WebKit thing.


Yeah, this was removed from the Blink engine. They've also done a lot of work removing namespace hacks (--opera-foo or --webkit-bar).

Not that I think the Google/Blink monopoly is a great thing, especially with their recent moves, but they did stick by some of their "sticking to standards" rhetoric.


From a sales point of view though, if your operating system can run one of the most popular games at the time, it means people won't just keep using the existing one.

It had to be a step forward, not a step back. I mean I don't know what you're using at the moment, but if your favorite application didn't work on the next version, would you upgrade?

This is why Apple spent so much on Rosetta, first when going from PowerPC or whatever to x86/64, then from that to ARM / M1, while in the meantime building a developer and application ecosystem that allows for easier transition between CPU architectures and environments.


Breaking backwards compatibility was one of the biggest customer criticisms of Windows Vista.


OTOH, Vista and an interaction with a digitally distributed edition of Jagged Alliance 2 with bolted on DRM somehow resulted in it wanting a new registration key every time I launched the game.

This drove me into looking for a vanilla executable that wouldn't have that after-the-fact DRM, and I discovered the 1.13 mod. So, I had one happy experience with Vista.


>From where I'm sitting this looks like an excellent argument for breaking backwards compatibility.

It's precisely that backwards compatibility, insane in both its workings and its results, that keeps most people using Windows.

People use computers to get stuff done, and Windows lets people use the absolutely massive library of Windows programs whether it was written today or over 30 years ago.


> and I'd argue that it really shows

Yep, the most popular desktop OS in the world and the 2nd biggest company in the world.


When your favourite program stops working after an OS update, who do you blame? If you really want that program to run, what do you do?

There are business reasons to maintain backwards compatibility, and they were very strong before the era of easily updated software.


Raymond Chen said (to the effect of) this in his book the Old New Thing:

You have been playing your game fine. You installed a new Windows OS and now the game no longer runs. As an average customer, who do you blame? Hint: not the game studio.


I think that's why they invented containers/virtual machines


They did this at a point where there was competition for the operating system GUI transition. Having this backward compatibility focus was a major win for them.


The other position is that Microsoft’s customers paid them billions of dollars and maintenance and debugging “burden” is just part of the business.


Search for the bonus chapters to Raymond Chen's "The Old New Thing", which go through MANY examples of how there was an entire team whose job was to go through popular applications and hack Windows to make them work on the new OS.


Meanwhile, the Asahi Linux GPU driver checks to see if the first character of the process name is 'X', and simply nopes out if it is.

Because why are you still running Xorg, asshole? You should have switched to Wayland by now.

https://social.treehouse.systems/@marcan/110904454552941656


How dare the Asahi folks not do a huge amount of free work to support you? Don't they know how important you are?!


Yes, you're right, how arrogant of me not to expect kernel-level active sabotage of still widely used software, in a kernel where "don't break user space" is a guiding law of development.


The driver doesn't work with xorg. What do you think it should do? Pretend it does? Do you actually know how software works?


This seems like an interesting vector for a virus.


These older games will all work in 640kb so the fix would likely just have been to fence off 640kb completely from other apps while SimCity was running.


Sim City 2000 used DOS/4GW though: https://en.wikipedia.org/wiki/DOS/4G :

> It allows DOS programs to eliminate the 640 KB conventional memory limit by addressing up to 64 MB of extended memory on Intel 80386 and above machines.


Yes, Raymond Chen describes such fixes in several blog posts (https://devblogs.microsoft.com/oldnewthing/) and in his book The Old New Thing. Check the old posts, back at the beginning. There are posts about the lengths they went to to ensure buggy applications still worked after an update or a fix.


They would likely not have done this today. That fix was a product of its time.

From what I can tell, SimCity had already been released. Many users likely didn't have an Internet connection, and even if they did, there was no auto-update via Steam.


Nice. OpenBSD does something similar, with the difference being that there are no exemptions for specific software—they expect developers to fix their broken programs.


Clearly missing the point. The OP mentions DOS, so this was the mid-90s. OpenBSD didn't exist (Theo hadn't burned all his bridges with NetBSD yet). There were NO updates. You bought software, games especially, in a shrink-wrapped box at a brick-and-mortar store. Most people outside universities did not have internet access. If you were among the extremely privileged few, you could download small patches from a BBS or online service, but this was extremely rare. You got what you got. The development model was different. It had to work under any circumstances. There was no "we'll fix it in the next sprint!" and push it to the app store. Developers had no way to contact their customers.

Regarding that hobbyist OS OpenBSD, where the developers care about nothing besides security, there were no proprietary application packages available. Most retail applications in those days ran on SunOS/Solaris or HP-UX.


Gaming magazines shipped diskettes with game updates and patches.


> There were NO updates

I distinctly remember patching DOOM from 1.1 through 1.2, 1.4, 1.666 to 1.9


That's not similar at all! Windows takes the "Linux way" of "not breaking userspace" seriously. OpenBSD has other goals - some sense of minimality and clarity, but this is not one.


And unlike Linux, has a stable ABI.


Linux has a stable userland ABI

OpenBSD doesn't even have that: if you don't reboot quickly after installing a major upgrade you're going to have a bad time

whereas you can run ancient userland on more modern Linux kernels (as evidenced by the container ecosystem)


The Linux kernel has a stable userland ABI.

Other core components that make up most Linux distros (e.g. libc, GTK, curl, SSL, etc.), however, do not.


The libc (glibc) absolutely has a stable ABI. Stop spreading FUD.
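
(For what it's worth, a hedged sketch of how that stability is commonly leaned on: glibc keeps old symbol versions around, and a binary can pin one explicitly so it also runs against older glibc builds. The version string below assumes x86-64 glibc, where GLIBC_2.2.5 is the oldest symbol version; other targets differ. Compile with -fno-builtin-memcpy so an actual call is emitted.)

    #include <stdio.h>
    #include <string.h>

    /* Bind our memcpy references to the old versioned symbol rather than the
       newest one provided by the glibc we build against. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[16];
        memcpy(dst, "hello", 6);
        printf("%s\n", dst);
        return 0;
    }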


> OpenBSD doesn't even have that: if you don't reboot quickly after installing a major upgrade you're going to have a bad time

That's interesting, can you share a bit more details/links?


> That's not similar at all!

That was the joke, friend.


If it was a joke, my bad !


you had me


Raymond Chen has been providing an inside perspective on this for decades: https://devblogs.microsoft.com/oldnewthing/


A fun side-effect of the "general" stability of Windows APIs is that Win32/DX has become a very stable and reliable "universal" API for Linux (and other OSes) via the massive amount of work put into Wine/Proton. I keep seeing games drop their Linux-native releases in favor of just shipping for proton.


The Proton-aware releases are often easier to set up and run better than the Linux-native releases.


That's been my experience as well. EU4's Linux-native release has issues with scaling and cursors on ultrawide monitors. Proton version is flawless.

There have been quite a few cases for me, like Star Citizen for example, where games perform significantly better under Proton with DXVK than they do on Windows!


Yes. Someone blogged about this last year and it generated a lot of discussion here as well.

https://sporks.space/2022/02/27/win32-is-the-stable-linux-us...


It's not insane, it's my expectation for a tool. My hammer still works perfectly well with nails I bought 30 years ago.

It's impossible to build on shifting foundations that are constantly breaking backward compatibility. You eventually spend all your time maintaining instead of creating.

Then you have to go reinvent your wheel, and in my experience as a user your shiny new one isn't necessarily better.

Most of the software I use is more than 10 years old. Some is still updated, some is not (or went cloud and left me happily behind).


Milwaukee still manufactures NiCAD batteries for their tools that have long since been outclassed.


I see you haven't bought the Apple Hammer.


On the other hand, you’re stuck with stupid limitations in perpetuity: https://news.ycombinator.com/item?id=14286383


Microsoft could let new applications opt out of that limit if they cared. They have done this for plenty of other limits, e.g. absurdly low path length restrictions.


I used to believe this, but no longer.

Any Steam game that used the "Games for Windows – Live" service, and wasn't updated since the service shut down in 2014, would fail to launch on Windows 10 & later, because the DLLs for the service were removed. For a time, folks were able to download the DLL from third-party sites, but that doesn't work now.


Older games without DRM or other networked services also may not work due to graphics incompatibility. (cnc-ddraw salvages a lot of them though: https://github.com/FunkyFr3sh/cnc-ddraw)


Ah, but I bet the pirate versions of those games still work ;)


Even more “insanity”:

z/OS (aka OS360 aka MVS) supports programs going back to the 60s and I just talked with a DE at IBM who is still using a program compiled circa Apollo 11 mission.


That's common in the mainframe world. Unisys (ex Univac) still has its Dorado mainframes binary compatible with the Univac 1100 released in 1962.


I think I remember reading a while back that System/360 binaries can still run on modern Z/architecture mainframes.


Yup, that’s the example I cited above.


Oops, missed that, I came in from the comments link.


I thought the Unisys mainframes have been running emulated on X86/X86-64 for a while? I assume they have some sort of binary translator.


Yes, you got it right. The Dorados now run a binary emulator on top of a microcomputer (x86_64) architecture, while IBM Z (itself essentially a 64-bit S390 arch) kept a mainframe configuration.


What's a DE? Also did they tell you what the program did?

Other systems that will run or automatically translate > 30 yr old binaries:

- I believe IBM i on POWER {i5/AS400} will run stuff from System 38 (1980).

- HPE Nonstop (aka Tandem Guardian ) on X86-64 will run or translate binaries from the original proprietary TNS systems (late 1970s) and MIPS systems (1991).


Distinguished Engineer?


Bingo.


It's famously determined, but I feel that a DOS CLI app isn't much of a challenge since the DOS subsystem is essentially ossified. What would be the result if, say, you tried to run something DOS-y that was demanding or an early Win16 app? Say, Zortech C++ from 1986 with the Pharlap DOS extender or Minesweeper from Windows 3.1. Would they work?


That’s not a DOS app, it’s a Win32 console app. DOS apps (16-bit or 32-bit) or Win16 apps would not run natively.


That depends on whether they are using 64-bit or not. The 16-bit VDM was deprecated in the move from 32 to 64-bit, which is what the majority of installs are these days.

What is kind of neat is that every Windows application/DLL is a valid DOS application. The first part of all of them is a valid MZ DOS 16-bit executable. Windows just treats it as a skippable header and reads the real header (found via an offset stored in the DOS header, usually around 100 bytes in), then decides which subsystem to fire up (Win3x, Win32, OS/2, etc). But if you take an exe compiled today with current tools and put it on a DOS 3.3 box, it would run the exe and print out that it cannot run (the exe has that stub in there).

Also, from that era not all DOS applications were exclusively 16-bit. Many were hybrid. Having flat control over the memory space instead of using segmentation was usually worth it for the speed boost (as well as the bigger registers). Windows from that era usually had an extra PIF file where you could basically tag the executable as 'hey, you are about to run a 32-bit app, get out of the way Windows, oh and support DPMI while you are at it'.
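
For the curious, a minimal sketch (plain C, little-endian host assumed, most error handling skipped) of that layout: check for the 'MZ' stub, follow the 4-byte offset stored at 0x3C, and see whether a 'PE\0\0' or 'NE' header is there:

    #include <stdio.h>
    #include <stdint.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) return 2;
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 2; }

        unsigned char mz[2], sig[4];
        uint32_t e_lfanew = 0;   /* offset of the "new" header, stored at 0x3C */

        if (fread(mz, 1, 2, f) != 2 || mz[0] != 'M' || mz[1] != 'Z')
            puts("no MZ stub: not a DOS-style executable at all");
        else if (fseek(f, 0x3C, SEEK_SET) != 0 || fread(&e_lfanew, 4, 1, f) != 1 ||
                 fseek(f, (long)e_lfanew, SEEK_SET) != 0 || fread(sig, 1, 4, f) != 4)
            puts("MZ only: plain DOS executable");
        else if (sig[0] == 'P' && sig[1] == 'E' && sig[2] == 0 && sig[3] == 0)
            puts("PE: Win32/Win64 image (DOS itself only ever sees the stub)");
        else if (sig[0] == 'N' && sig[1] == 'E')
            puts("NE: Win16 image");
        else
            puts("MZ with some other extended header (LE/LX/...)");

        fclose(f);
        return 0;
    }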


If that was a Windows 10 screenshot then yes there would have been the possibility of it being the 32-bit edition running a DOS app through NTVDM. But the poster says Windows 11, which does not have a 32-bit edition.

I’m not aware of 64-bit Windows being able to run 32-bit DPMI DOS apps natively, I think those still required NTVDM.


> I think those still required NTVDM

Pretty sure you are right. As I think that is what set up the interrupts for it. Win9x did it very differently and would basically just put command.com back in charge of stuff to sort of make it work with a sys file. NT with DPMI programs was usually very hit or miss (more miss). If they did not play just right with Windows the thing would just crash out.

Think there might be a win11 32 bit out there. But nothing that MS sells to normal end customers. But my brain may be playing tricks on me and I am confusing different articles I have read. But that would probably be some sort of weird kiosk ODM build. Not what most normal people would have (like in that post).


OTVDM will allow running 16-bit windows programs on modern 64-bit windows.


OTVDM is great, but it's just using Wine. Windows really ripped out the whole 16-bit compatibility layer, which is a little sad.


It’s not that Microsoft chose to rip it out. NTVDM relied on Virtual 8086 mode. X86 processors cannot transition to this mode from 64-bit mode.

https://en.m.wikipedia.org/wiki/Virtual_8086_mode

Intel is proposing to remove 16-bit support entirely:

https://www.intel.com/content/www/us/en/developer/articles/t...


They were kind of forced to. WoW (classic, not WoW64) relied on the CPU being able to switch to a 16-bit context, in much the way modern WoW uses amd64's compatibility mode. OTVDM's project page directly mentions this as a core component:

> CPU Emulator

> 64-bit Windows cannot modify LDT(NtSetInformationProcess(,ProcessLdtInformation,,) always returns error)

They would have had to replace/extend WoW with an architecture emulator. Raising the development/support complexity quite a bit for little gain (few people that use Windows 11 are running DOS or Win16 applications today, beyond retro gamers who use DOSBox anyways).


Win16 got dropped at some point, probably because keeping it didn't add up financially. I have little doubt they could have kept support for much older stuff.


There's a "pirate" port of NTVDM to 64-bit Windows (from leaked NT 4.0 source) so it's certainly not technically infeasible: https://github.com/leecher1337/ntvdmx64


There's also winevdm from the Wine project.


Something that confuses me is that this states it's running a binary compiled 30 years ago. How is this not 16-bit?

I'm aware of Win32s, I used to run it, but still it seems unlikely this is a Win32 console app unless there's an incredibly unlikely set of circumstances behind this.

Or perhaps it was simply recompiled after all despite what the Twitter post states?


9x wasn't the only Windows OS.

The path contains the phrase "ntbin". It was compiled for NT.


My guess is this is a PE format Windows application. Though I suppose you could get an NE format file to work correctly if the binary was compiled as 32-bit. My memory is a bit fuzzy on this but I think you did have the option to compile either way.


Correct, it's a PE file.


Windows NT 3.1 was released on July 27, 1993.


In fact, looking into this further, the only thing Windows 11 lacks is NTVDM, which allows some DOS API calls. If your binary is straightforward and not tied to MS-DOS, which this one is, it's fine. So I think the idea that this is a 32-bit Windows console application is completely untrue. It's also intuitively untrue when you consider the age of the app being run here.


The fact that there is no Windows 11 version with NTVDM is why this must be a Win32 console app, assuming the poster is truthful. Windows NT 3.1 came out in 1993. This being in a directory called “ntbin” gives another hint.


I see that makes sense. I feel it's also a little misleading from the original post. An exceptionally specific binary from 1993 works but the implication here is that the compatibility is more than this.


Author here. I've been running Windows since 3.0 and I copy my old tool folders with me whenever I change machines or upgrade the OS. This GZIP.EXE is the oldest EXE I have in my \ntbin tool folder that still works. The folder has 579 EXE files.


> that still works.

I think that is the point being made. This exe is cherry picked as the oldest working exe, it isn't like every 30 year old exe in that folder still works.


Zortech C++ was my rig for a good while - great memories. Pharlap is, I'd guess, way too intrusive to run on current Windows, but it would be an interesting experiment. Probably any extended/expanded memory thing doesn't work anymore.


That shouldn't be considered remotely impressive. It should be seen as routine and expected, and if it doesn't work, that should be considered a hugely humiliating and unacceptable fail.

To be clear, I am not saying that it's not impressive in the shitshow that is 2023. I am saying what norms we should work towards.


Agreed. There is absolutely no reason most statically compiled binaries should stop working.


There are many reasons. Notably, binaries don't tend to compile the OS into themselves. There's always some kind of interface boundary where something depends on OS-defined behavior, like windows and menu bars and printing and finding the user's personal folder and so on and so forth. Microsoft decided to do all of the work to preserve old interfaces long after making newer ones, but it does come at a heavy maintenance and management cost which only makes sense in a market with extremely lucrative legacy forces.


They won't if you use the old version.


Someone’s gonna come and say that linux has that too, and while technically true it’s quite hard in practice.

The kernel abi is stable, everything else is pure chaos, and this is mostly due to how applications are usually packaged in linux: your app could load (as long as it's not in a.out format) but then would fail at loading most libraries. So effectively you need a whole chroot with the reference linux distro (or other runtime in general) and I'm not so sure you could find archives of 30-year-old distros.

And I'm assuming that the kernel abi hasn't actually changed a single bit and that no other interfaces changed either (stuff like /proc or /sys - /sys wasn't even there 30 years ago, I think).

And if you’re running an Xorg app, I wouldn’t bet my lunch on that level of protocol-level compatibility.


It works in Linux the same way it works on Windows: If you don't have the dynamic libraries and configuration you need, it won't work.

Why this is a mark against Linux and not Windows is beyond me.


Microsoft keeps maintaining all those libraries, and keeps them in the OS by default. That may not be the case with Linux.


This is exactly it. If you call native Windows things, they're there (ignoring deprecation of Win16, etc).

But if you call gnome libraries, they’re probably long gone.


I've found a lot of package archives for various distros.


Eh, kinda. On Linux you can only assume kernel ABI compatibility, whereas on Windows you can assume the basic runtime (win32 or whatever) to be present and available.


In the timeframes this post is talking about, you can practically do the same in any operating system. What "basic runtime" could such an aged Linux binary use, other than glibc and Xlib/motif, which are precisely the ones which have been actually trying to preserve ABI? These 30 year old binaries have a higher chance of working than week old ones...

This is similar to the fact that Office 95 will work on recent Windows, but Office 2k won't.


I've always been sad that my old Mac software just won't run. It's one thing for Apple to move to new architectures. Maybe that was necessary. But when the emulators break after a few years, well, that's the part that bugs me.

Microsoft's devotion to its customers shouldn't be so amazing-- it's the way that every company should behave.


Apple had a time where they would allow even 1st gen iMacs (300MHz?) to get the latest MacOS X. You might have to max out ram but that was it, and it was very usable.


yes it can run things that don't use much of the API surface (just using libc? probably fine)

however try running a game from the Windows 95/98 days and you've got a maybe 50/50 chance of it working

e.g. they changed the return code from BitBlt from 95/98 -> XP, they used to return the number of scanlines but switched it to a boolean

same with the heap management functions, directory traversal functions, etc


Try running something from only ONE year ago on Linux and it very often won't work, unless it's an AppImage or Flatpak, or you're on Nix.

They might not break userland but Qt and GTK do the breaking for you. Python joined the party recently. Random DBus daemons might be missing, etc.


You can run binaries from decades ago on Linux too. This is about a DOS binary. Well, a command line program that just uses system calls is fine too.


Linux is absolutely terrible with this, it sits on the "extremely unlikely" end whereas Windows sits on the other in terms of backwards compatibility. It's not surprising either because the entire ecosystem runs on the myth that shared libraries are desirable when any serious look into them shows that almost no space is saved, almost none of them are actually shared by a meaningful amount of apps, almost no actual security issues are avoided by using them.

Anyone who has ever been interested in having someone else be able to run their program has figured out a long time ago that you have to ship your dependencies for them to be able to do so.


I have never had any issue running old programs.

Shared libraries have nothing to do with Linux. They are an entirely userspace concept.


That would be a great point were it not for the fact that virtually all Linux distros run everything on shared libraries. I like Linux but this is one aspect of it that has never done it any favors. It was probably a decent choice at one point but it has ceased being one. I reckon it has held back the Linux application ecosystem for over a decade for no reason at this point.


There is nothing wrong with shared libraries. If you want to run an old program, you need to also run old libraries, and old network services it communicates with, and maybe old hardware too. Libraries are no different.

In practice it just isn't an issue, because competent application developers don't tie themselves to particular versions of libraries, and competent library developers don't make gratuitous backwards-incompatible changes.

Nobody is forcing you to depend on incompetently-written malware like GTK+.


Only if it is statically linked with everything it needs. Otherwise good luck resolving dependencies.


Even then it might not work because it could rely on a DBus daemon being there. Even a brand new binary might fail because it needs some external program they forgot to add to the dependencies list in the package, so you have to sift through to find the not at all obvious package that provides it.

Or something was compiled without some option for unknown reasons so you're just SOL unless you want to compile stuff yourself.


How is that any different from any other program on any system failing because of a daemon not running? Inter-process communication exists on every operating system. IPC means your program's behaviour can be different depending on what other processes are running. This is not specific to Linux.

Your other point is equally as inane. Anything might fail because it requires some external program, or external data files, or any other external resource. A program might fail because it requires a particular hardware device to be plugged into your computer. None of that has anything to do with the operating system.

>Or something was compiled without some option for unknown reasons so you're just SOL unless you want to compile stuff yourself.

Oh no! The terribly, impossibly difficult task of running a program! How could you ever subject me to such a fate as having to compile stuff myself. You cruel beast!


Compiling stuff yourself is generally much harder than one might want it to be. The instructions might or might not actually work unless you're on a source based distro. It also might take hours. Or days, but the stuff that's big enough to take days generally works out of the box.

Windows has IPC, but doesn't have as heavy of an influence from the Unix philosophy, and Android seems to have even less. More stuff is just built into the OS, it's always going to be there, probably for 20 years.


Any software that has build instructions that don't work is probably written by morons. I wouldn't want to run it. I certainly have no interest in running software that takes days to compile. Overengineered crap.



just don't try anything that used safedisc


Those issues only apply to the Windows 9x lineage.

Current Windows versions trace back to the Windows NT 3.51/2000 lineage.

Naturally 9x => XP doesn't work flawlessly; they are two different OS stacks.


well yes

however it sort of undermines the "insane compatibility" / "stable API" point if mass-market Windows software produced before the 2001 release of XP mostly doesn't work on modern Windows

NT effectively forked Win32 (introduced with Windows 3.1) into something incompatible

(meanwhile it all runs on Wine perfectly fine)


What? Windows NT never forked anything.

Windows 3.1 introduced Win16 protected mode with segmented memory.

Win32s was a backport from a Win32 subset from Windows NT 4.0.

The Windows NT lineage has existed since 1993.


I remember Win32s, I think Netscape Navigator needed it. I was thinking Lode Runner, but that was WinG.

I always thought it was backported from Windows 95, thanks for the info.


Iirc Win32s existed before Win95.


Windows 95 - August 24th, 1995

Win32s - October 1992.

And I was off by one Windows NT version, it was already based on 3.51, not 4.0.


I wish there was a backwards compatibility option to give applications a “virtual display” which runs in a window, for old programs which only know how to run fullscreen at 1024x768.


I once made a program (Intended for Games only, doesn't support standard windows controls) that allows you to stretch an otherwise fixed-size window. Uses D3D9 and Pixel Shaders to draw the upscaled window.

https://github.com/Dwedit/GameStretcher


Oh, where was this when I really needed it a few years ago...

I was playing an old MMORPG called The Realm. It's been live since 1995 and ran until just a couple months ago. It only knew how to run in 640x480.

I tried to write a program that would create a scaled up version, but it didn't work well, especially since the game would create child windows for certain UI elements. I was writing it in C, which isn't my strongest language, simply so I could call Win32 APIs more easily.


Does the built-in "compatibility mode" not work? https://support.microsoft.com/en-us/windows/make-older-apps-...


This is why I ran a virtual machine for windows even when I was running windows. Virtual machines almost always support having a different virtual resolution vs actual.


Working on https://github.com/evmar/retrowin32, I disassembled one old demo that wouldn't run on my native Windows machine -- turns out it was requesting 320x200 fullscreen resolution and aborting if it couldn't get it. (Not sure why the Windows machine wouldn't do it...)
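
For reference, the demo almost certainly went through DirectDraw, but the GDI-level version of that "request 320x200 or abort" pattern looks roughly like this (a sketch, not the demo's actual code):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DEVMODE dm;
        ZeroMemory(&dm, sizeof dm);
        dm.dmSize = sizeof dm;
        dm.dmPelsWidth = 320;
        dm.dmPelsHeight = 200;
        dm.dmFields = DM_PELSWIDTH | DM_PELSHEIGHT;

        /* modern drivers rarely expose 320x200 at all, so this tends to fail,
           and a program that insists on the exact mode just bails out */
        if (ChangeDisplaySettings(&dm, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL) {
            puts("couldn't get 320x200 - this is where the old demo aborts");
            return 1;
        }
        ChangeDisplaySettings(NULL, 0);   /* restore the default display mode */
        return 0;
    }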


Wine does this, it's a lifesaver for older games like Diablo 2.


Proton (Valve's Wine version) does it even better - it runs Windows games without an emulated desktop but transparently scales fullscreen applications to your native resolution so that there are no mode changes needed.


I can't find it right now, and no idea if it'd work on older things but there's some 3rd party app for Windows that will put a full screen game/app into a stretchable windowed mode.



FWIW, Beavis and Butthead in Virtual Stupidity (1995) runs perfectly fine in Windows 10/11 with compatibility mode enabled. No need to test anything else as that's arguably the apex of software (and humanity's collective output).

https://www.myabandonware.com/game/mtv-s-beavis-and-butt-hea...


That sounds like a scene from a post-apocalyptic story. Cobbling a working computer together, discovering it will only run Beavis and Butthead in Virtual Stupidity.


A plot point for a third movie: Beavis and Butt-head Do the Apocalypse.


I have always believed this has been one of the primary reasons for which Windows won decisively against MacOS.

Apple has never had a problem throwing away their customers’ investment in their ecosystem. I was at a company with several hundred Macs when the transition away from PowerPC happened. It was just brutal. And costly. Not just hardware, software too.

And, what for? From a business perspective, you can do the same fundamental work with both systems. The difference is: my investment is protected in one case and not the other. We have a bunch of Macs here. Only where absolutely necessary and for multi-platform testing.

As much as MS is maligned by purists, the truth of the matter is they have always protected their customers by having a remarkable degree of backwards compatibility, which isn't easy to achieve and maintain.


My windows 11 upgrade experience was one of the smoothest ever. I was playing AOE, got a notification that my machine is eligible for upgrade, paused and saved the game, did the update and resumed.


Microsoft's commitment to backwards compatibility is definitely one of the strengths of the Windows platform. On the flipside, it's also at the root of the insanity that is the forbidden word list in MS Teams channel names.

https://learn.microsoft.com/en-us/microsoftteams/limits-spec...


I wonder if they could have mitigated that by leaning more heavily on the ability to have multiple personalities running on top of the kernel. So for instance, you could run a program from 1996 on the "Windows 95" subsystem and get better compatibility, but Teams could run on the "Windows 11" subsystem that treats CON as just another file name. Of course, there might be a problem if programs at different compatibility levels interact with the same files, but there's actually precedent for dealing with that kind of thing quasi-transparently, or perhaps I should say QUASI-~1.
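
As an aside, the "CON is just another file name" behavior is already reachable today if you bypass the Win32 path parser with the \\?\ prefix - a sketch (assumes C:\temp exists):

    #include <windows.h>

    int main(void)
    {
        /* the \\?\ prefix skips Win32 path canonicalization, including the
           reserved-device-name check, so this creates a real file named CON;
           you'll need the same prefix (or NT-level tools) to delete it again */
        HANDLE h = CreateFileW(L"\\\\?\\C:\\temp\\CON", GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;
        CloseHandle(h);
        return 0;
    }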


So much so that even on Linux, the most stable API arguably is Win32 through Wine. At least for desktop-related tasks.


Are the compatibility issues on Linux really an API issue or having incompatible, newer versions of libraries? The effect is the same, your old program doesn’t run, but the cause is very different.


The latter. Old versions of libraries aren't maintained and you're on your own to compile, effectively from scratch, an entire distro's worth of libraries. Ironically, it should be easier to have backwards compatibility as strong on Linux as on Windows, since all .so files are versioned and almost never break the API within the same major version, so you're spared the DLL hell of a program linking with blah.dll expecting it to be version 1, but you have version 7 installed. On Linux the program would link with libblah.so.1, not the libblah.so.7 you have installed.
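
A sketch of what that soname pinning looks like from a program's point of view (libblah is the hypothetical library from above; normally the pinning is recorded automatically at link time as the DT_NEEDED entry, dlopen just makes it visible):

    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        /* ask for the major version we were written against; a newer
           libblah.so.7 on the system won't be picked up by accident */
        void *h = dlopen("libblah.so.1", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "%s\n", dlerror());
            return 1;
        }
        dlclose(h);
        return 0;
    }

    /* build with: cc prog.c    (add -ldl on older glibc) */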


Incompatible, newer versions of libraries is an API issue? An OS is more than a kernel, and on Windows, if you link to the standard set of libraries provided with Windows, Microsoft tries not to break them.


Lots of Microsoft app compat success stories out there, but plenty of breakage too. One example: I had to create IndirectInput [1][2] to fix Myst on Windows. Microsoft refused (via private email thread with leadership) to take responsibility for what is clearly an appcompat bug and were borderline jerks to me about it too. Oh well.

[1] https://github.com/riverar/IndirectInput

[2] https://support.gog.com/hc/en-us/articles/360019256854-Myst-...


I’m seriously impressed that you fixed this with a little drop-in DLL, and also that it was fixable that way.


Does Linux have this backwards compatibility? What about macOS?

I know instruction sets changed from PowerPC to Intel to ARM, so probably not macOS at least. But this is a CLI and I doubt old system calls changed


MacOS is actively hostile ( I still remember when they dropped cocoa or carbon or whatever it was, many apps died that day ).

Linux isn’t hostile and if you ONLY use the kernel ABI or statically compile everything, it will work quite a long time.

But if you use dynamic libraries, you gonna have a hard time unless you have source.


> MacOS is actively hostile ( I still remember when they dropped cocoa or carbon or whatever it was, many apps died that day ).

These changes are not like they announce something and the next day the libraries/APIs are gone. There is always a transition period that is long enough for the apps to be updated.


Linux will just compile 30 yo binaries.


30 yo source code?


Better bring a 30-year-old compiler too. The amount of UB that was assumed to work in code from that era will turn into bugs due to more liberal optimizations, or fail with compiler warnings thanks to stricter checking.


There's always a flag to override that, usually the compiler will warn you on how to fix it.
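
For example, here's a minimal sketch of the kind of early-90s C that trips modern defaults - current GCC/Clang reject implicit function declarations outright unless you drop back to something like -std=gnu89:

    /* 1990s-style C: implicit int, no prototypes, no includes */
    main()
    {
        printf("hello from 1993\n");   /* implicitly declared */
        return 0;
    }

    /* cc -std=gnu89 old.c   -> accepted (with warnings)
       cc old.c              -> hard error on current GCC/Clang defaults */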


I have a command line Windows binary compiled over 30 years ago (June 1996) that won't run on Windows 10 (64 bit). Windows complains, "ANAGRAMS.EXE is not compatible with the version of Windows you're running. Check your computer's system information and then contact the software publisher."

Is this issue specific to Windows 10, and would it work on Windows 11?


> is not compatible with the version of Windows you're running. Check your computer's system information and then contact the software publisher

You would receive such a response on every 64-bit Windows starting from Win2003; there is no 16-bit NTVDM in 64-bit systems.

If you need to run it:

a) use 32-bit Windows 10 (Win11 does not have a 32-bit variant)

b) use DOSBox

c) ... VirtualBox/PCem/whatever with 'real' DOS.

d) there are some NTVDM ports for 64-bit systems, use at your own risk. Ctrl+F in this post.


have you tried various compatibility options? right click -> properties -> compatibility

there's a troubleshooter there, too


I just tried the troubleshooter, and it said it fixed the issue but it actually didn't: when I run the program, Windows pops up a modal (before it just displayed an error on the console) with a title bar that says "Unsupported 16-Bit Application" and a message reading, "The program or feature [...] cannot start or run due to incompatibility with 64-bit versions of Windows. Please contact the software vendor to ask if a 64-bit Windows compatible version is available."


Is it a 16-bit executable?


Yes


64-bit Windows can't use 16-bit executables. 32-bit Windows can.


Almost 30 years ago*


I'm ashamed to admit I really like windows 11.

Telemetry and other questionable things aside, I've loathed and detested every UI change that Microsoft has done since Windows 7.

I unequivocally believe that windows 7 was peak windows UX. Every subsequent version, I've limped by using classicshell and then openshell.

Windows 11 is the first windows release where I didn't feel the need to install something to bring me back to the late 2000s.

My only pet peeve is not allowing me to create accounts that don't tie into Outlook. Yes, I know there are tricks to bypass this but I shouldn't have to do that.


Meanwhile my new Android Pixel phone will run literally none of the Android apks I bought 5 years ago on humble bundle because it's 64-bit only and they're all 32-bit.


That would have to be a Win32 Console App made for Windows NT 3.1?


I fail to see why 30 years should be considered extraordinary when in the physical world we have standards that ensure compatibility for much much longer than that.

If anything, software has it easier: you can layer emulation layer on emulation layer and then only have to adapt the outer layer for whatever pointless changes you are making to the current system.


Ha, the version of ZIP that I use was built in 1996, and I was using a version of 'ls' that I wrote for NT 3.1 -- maybe 1993 or so -- until a couple of years ago.

These programs don't use DLLs, and frankly there's little reason for them to stop working.


I'm going to go ahead and say operating systems that don't work this way are the exception. Running a 30 year old binary isn't all that big a deal. Pretty much every mainstream system does so every day in its day to day operation.


iOS and Android aren't 30 years old, so that's not possible. Assuming you get by signing/appstore/etc, I don't think you can take a compiled app from early iOS and run it on an iPhone now; Apple removes old apis and there's the whole 64-bit transition too.

Google isn't as aggressive and most? apps are bytecode only, but I kind of don't expect an apk for Android 1.0 to function properly if run on an Android today. At least if it does anything advanced with networking or needs permissions that changed a lot.

Linux should work if it was statically compiled, and it probably helps if it doesn't use audio, because Linux audio has changed a lot in 30 years. A dynamically linked Linux binary from 30 years ago is nearly hopeless, because it would have been linked against a different libc than the one in common use today, and I doubt that will be on your system. If you had the full filesystem, it should run in a chroot.

MacOS was System 7 in 1993, on 68k, not power pc. Those applications aren't running on your M2 without emulation. Dropping 32-bit support doesn't help either, of course.

FreeBSD 1.0 was released in November 1993, so it's not quite 30 years old, but I suspect a statically compiled app may work, though libraries will be hard. FreeBSD makes compat packages to get libraries to run older software, but I don't see one for 1.x or 2.x; the package for 3.x was marked expired in 2010, but compat4x seems alive and well; that gets you 20 years of probable compatibility.


macOS won't let me run binaries from 5 years ago, let alone 30.


try running a plain c linux userspace program from the 90s without recompiling it on any modern distro of your choice


Specifically you can't run a.out binaries anymore and ELF came about in 98.

So that's a pretty hard limit. Linux can run binaries that were compiled to the latest binary standard in 98.


ELF came out in 1994, kernel 1.0.9.

Slackware 2.0 was one of the first distributions to ship it.


how about the oldest elf on the latest x86 distro available? please don't ruin my contrived abi changes example!


SunOS 4.x binaries, if statically compiled, run fine on modern day SPARC systems also...


This is fun and related "chain of fools 2017" https://www.youtube.com/watch?v=PH1BKPSGcxQ


Surely there's some kind of legacy Windows containerization / subsystem / emulation technology that would automagically handle such things by now?


Well, there's the WoW64 layer, but nothing particularly sophisticated beyond that.

For the most part the old DLL interface (kernel32 et al) just sticks around indefinitely even as parts are officially deprecated. Microsoft is careful not to break their public interfaces.


This works for win32 apps too. It's what gtk should have strived to be. New features should have never been prioritized over API stability. What a shame.


I remember not being able to play Diablo 2 on my iMac running mac os X and I was so sad about it. I've used Windows ever since.


I remember hearing that Windows 7 (XP?) had some code specifically to support the original SimCity game in it.


XP games don't work on Win7. Doubt they fixed it for Win10 or whatever the current version is.


That is very hit or miss. Out of the 1200 or so I own I have maybe 20 or so that do not run because the game did something weird with the APIs (or starforce).


I don't have nearly as large sample size, but all the games I used to play on XP work fine on W10. And I don't think I've used the compatibility mode, ever.

Actually yes, C&C Red Alert 2 was running slow, but the community came up with patches that make it play nice, including the multiplayer which now works better than it did back in 2000.


But it is also this compatibility that makes it slower for certain programs.


Isn't this the compatibility layer that's built into Windows?


It's sad that the windows kernel and API aren't even that bad, but Microsoft insists on shipping as much bloatware and spyware as they can. Do I really have to use a debloater to have a usable OS?


I mean, Windows 10 comes baked-in with 30 y.o interfaces all over the damn place, this is hardly surprising. They've been poorly applying thin coats of paint for ages.


All the people on that Twitter thread getting butthurt about it and complaining that Windows' backwards compatibility is a bad thing... WTF koolaid have they been drinking?

I don't often sing Microsoft's praises but backwards compatibility is something they get absolutely right: something they've always got right. Everything doesn't have to be changing and breaking all the time and, to me, it's a mark of maturity when an organisation can maintain compatibility so as not to inconvenience - and introduce unbudgeted (and sometimes very high) costs to - users, integrators, and consumers. Top marks, Microsoft.


I agree.

As an example: The Xbox Series X, their newest flagship model, is fully backwards compatible with all of the Xbox physical CD games from all Xbox systems. Just pop the CD in and you are good to go.


It's actually not backwards compatible with all previous Xbox games. There's a compatibility list...

https://www.xbox.com/en-US/games/backward-compatibility

However, that is an impressive amount of backwards compatibility that other game consoles don't have.


Apparently it does not run Outrun 2!

(I only have the JP version, which I assume will never work with my UK Xbox Series X, even if the UK version would actually be compatible. Maybe I need to dig it out of my garage and try it anyway.)


The Linux world would benefit strongly from having a much bigger commitment to backwards compatibility in foundational libraries. Obviously there is nobody who can force e.g. OpenSSL to keep supporting old versions or to provide them as wrappers over the latest version, or to force GTK to keep around working versions of libraries exposing all interfaces dating from GTK 1.0 onwards. But if we could have a project to do just that, that would be great.

We all want people to move with the times and adopt new versions of their dependencies so security exploits can be patched, but right now if you're writing software for Linux, either you package all your dependencies with it (forever calcifying all security exploits), or you keep on top of all updates of everything you use, or your software simply won't run within a couple of years.

The open source community lives and dies by the ability of users to run existing software. Effectively continually deleting open source software from existence, while windows keeps supporting every single program written since 30 years ago, means we're always falling behind in usefulness.

There ought to exist an organisation dedicated to maintaining old versions of widely used system libraries either in their original form, or as wrappers over the new versions (to the extent possible), so that people can rely on their existing software. Since we're talking about open source, that would mean maintaining the development kits as well, so old source can be built on new systems. It makes no sense to just let all this work rot, it is our greatest treasure.


So many people dismiss Tcl as near garbage, but Tcl/Tk applications written more than 20 years ago still run as intended including GUI, and they run on Linux, Windows and Mac. You just install a new --or old, whatever-- version of Tcl/Tk and the scripts will run, agnostically. I have a few I made myself 15 to 20 years ago and at least with those I never have to worry about distro upgrades, repositories or libc version.


Is that because Tcl/Tk never added modern features?


That's pretty much exactly what Red Hat does with their Enterprise Linux.


> The Linux world would benefit strongly from having a much bigger commitment to backwards compatibility in foundational libraries

No point when this problem has been solved with containers (docker, flatpak etc).


What I find so frustrating is that Windows, under the hood, is so solid.

It's just the UI with Bing/Ads/telemetrics/etc integration is so crap, like they've ruined a solid OS with crappy surface level stuff.


It's as if it's a product made by a giant corporation with over a dozen different teams of skilled people, with different managers and different visions on how their own team's work should impact the final product, for their own career advancement purposes.


Sure. Apple is also giant and (probably?) fits those descriptors as well. Why is Mac so solid under similar circumstances?


I've only used MacOS for a year and only recently (maybe that matters), and I get all the same crappy inconsistent behaviour I do on Linux and did on Windows ~20 years ago when I last used it.

Apple has the benefit of controlling both hw and sw and still manage to mess it up.

Random crashes, slowdowns for long running sessions, crappy UI (eg those labels not checking their checkboxes in Settings), network weirdness (both USB ethernet dongles/hubs and internal WiFi), my USB audio interface picking up garbled audio which requires reselecting audio interface for it to fix itself...

Maybe I am doing something different, but it's even worse than Linux for the most part.


You haven't used Windows in quite a while, have you? Past Windows 7 it's been sliding downhill into bullshit UX, crapware, backend migration to Linux almost exclusively, etc.

Random things off the top of my head:

MacOS doesn't come with Candy Crush, Instagram, TikTok, Spotify etc. prepopulating your start menu. Phone-home telemetry and ads in the OS? Yummy.

Dealing with Windows dev environment is always a PITA eventually - unless you're doing stuff where Windows is first class citizen (like games). For backend stuff it's almost implicit that you're running on Linux in prod and macos is well supported because it's fairly similar. On Windows it's always some path issues, stuff randomly breaking between updates, missing/incompatible CLI, etc.

Brew is pretty good. Chocolatey is garbage.

MacOS is fairly visually consistent. Windows regularly has me in Windows XP era screens, reached through 3 inconsistent UX steps developed along the way. Even Linux is better in this regard.

I like Linux when it works. Mac works more often. Windows is just a dumpster fire at this point.


>Phone home telemetry and ads in OS ?

MacOS also has telemetry.

>For backend stuff it's almost implicit that you're running on Linux in prod and macos is well supported because it's fairly similar.

Then Windows would be better than MacOS in this regard because WSL2 is exactly Linux, not just "fairly similar" to Linux.

>stuff randomly breaking between updates

What stuff broke for you between updates? Our entire DS team develops in windows + WSL2 and nothing broke for them in ~5 years. Maybe they know how to use a computer.

Ultimately just use what you like and what makes you productive, no need to crusade for some big corporation. The OS is just a tool for your job, like a hammer.


> Right click -> Remove. Done. 2 seconds.

Surely you aren't making the argument that Microsoft is generous enough to let you remove their bloatware?


That's a nice strawman you got there, give him a spin so we can admire it in all its glory.


My dude I literally quoted what you said, at least before you edited your comment to remove it.


>My dude I literally quoted what you said

You quoted what I said then turned it into this strawman for a snarky jab: "Surely you aren't making the argument that Microsoft is generous enough to let you remove their bloatware?"

>at least before you edited your comment to remove it.

I removed more points because the comment was getting huge.


You posted a comment that seems to have a thesis of “Windows isn’t actually as bad as you think it is”. Is that correct? If not, what were you trying to convey? If you had posted your statement in isolation or in a different context it might have a different meaning but I’m not seeing many interpretations here besides “yeah it’s actually super easy to remove this junk so it’s not a big deal”. The question is why it’s there in the first place?


> MacOS also has telemetry.

And who do I trust more - a company that bundles third-party crapware and openly talks about OS ads as a monetization avenue, or a company selling me a xxxx$ machine and wanting to sell me the next one with the OS in the future? As much as I dislike the Apple walled garden - their incentive structure is way more aligned with me than Microsoft desktop's.

> Then Windows would be better than MacOS in this regard because WSL2 is exactly Linux, not just "fairly similar" to Linux.

Haven't seen that be better than a VM.

Doing anything WSL on Windows FS is dog slow and vice versa, split toolsets conflicts (different git config between host and guest, different SSH). Just SSH into a VM - it's a way better experience - the boundaries are clear and the editors know the implications of working on a remote machine.

Not saying you can't use Windows, but you can also eat from a trough - I just prefer not to.


>And who do I trust more

You throw a lot of stones considering how fragile your glass house is. Apple is the same company that wanted to scan your phone, that you bought and own, for child porn and report you to the authorities if something was found.

How you or anyone can blindly trust them after that is beyond me. Trust no major corporation regardless of how shiny their products are is my life motto.

>Haven't seen that be better than a VM.

It is, that's why people use it. Look up tutorials online.


> Maybe they know how to use a computer.

Seems uncalled for.


Why? If your only feedback is "stuff randomly breaks" and you're unable to provide more on-point technical specifics, maybe you're not qualified enough. No shame in that. If I tell my mechanic "something randomly breaks on my car" he'll also know I'm clueless, and that's alright, not everyone's a car mechanic.


Windows 11's main sin is Edge IMO. The start menu junk is shitty, but can be dealt with in a matter of minutes. On the dev part, WSL2 basically makes the point moot; an actual Debian is way easier to deal with than brew.

Now have you tried uninstalling Apple Music? Or found a way to disable it from launching every time you press the play button on your headset with no media player running?

In the last few years I've looked at every macOS update with more and more dread of things that will stop working and generic enshittification. Windows stays more "in your face" with the cheap marketing stuff, but it also brought in a lot more improvements than macOS did in the last 10 years, so I don't see as much difference in experience as in the past.


Linux works more often for me than Mac, but I also have two decades of experience using it, so it may just be that I am more comfortable.

With so many webtech-apps (Slack, Google calendar...) and non-native UI browsers (Firefox/Chrome), visual consistency is lost anyway, so I stopped caring (the best experience I had was with GNOME in 2.* early HIG/a11y days when I used Epiphany as the web browser) — oh yeah, I use Emacs too, so there's that :)

Still, most common Mac-as-Linux approach with Docker Desktop is an incompatible emulation layer (eg. local UIDs are transformed into root UID on Mac, whereas they are not on Linux, so you get weird permission errors if you develop on Mac and rebuild/redeploy on Linux).


> MacOS is fairly visually consistent. Windows regularly has me in Windows XP era screens, reached through 3 inconsistent UX steps developed along the way.

I take that as a feature, and it is the whole crux of this discussion. I don't need ODBC or many such archaic features, but if someday I need to use it, I trust it will be working.


> Dealing with Windows dev environment is always a PITA eventually - unless you're doing stuff where Windows is first class citizen (like games). For backend stuff it's almost implicit that you're running on Linux in prod and macos is well supported because it's fairly similar. On Windows it's always some path issues, stuff randomly breaking between updates, missing/incompatible CLI, etc.

YMMV but my experience is the opposite. Windows is a perfectly usable dev environment. The only time I face issues is when developers don’t choose to use cross platform tools.

> Chocolatey is garbage.

I’ll give you this one, but that’s why anyone serious on Windows is using scoop.


Windows Pro (and K/N versions), as well as Enterprise, should minimize the consumer-oriented software.


> Random crashes, slowdowns for long running sessions

I've been using Macs for a decade, and the only time I had this happen was on corporate laptops with antivirus software installed. Antivirus software are poorly written and they used to have constantly crashing kernel extensions. Apple has been deprecating kernel extensions in recent years, so the situation is improving. But the performance hit caused by antivirus crapware is unfortunately still a thing.


>Why is Mac so solid under similar circumstances?

Where do you see MacOS ruining 30 year old binaries?


There's B2B Windows and B2C Windows. B2C is the license sold to OEMs, who will fill up the OS image with bloatware anyway to make a few extra bucks. Microsoft is just getting in on that now.

B2B Windows is the stuff you would see for enterprise buyers with strict IT policies. Your experience will be mostly unchanged from "classic Windows".


I suggest listening to the ATP podcast or Upgrade rants regarding how "solid" macOS happens to be.


Apple employs “Release Managers”, where a single person is ultimately responsible for deciding which features ship in new projects.

Apple also, due to the hardware business, adheres to a release schedule where features must all be consolidated onto single branches (“convergence”), rather than letting individual teams ship incrementally.


Not to mention Apple only has to support a limited number of hardware and they regularly drop support for older hardware with each new major release.


That may be so, but all the annoyances I have with windows don't seem hardware-support related. The laggy menus, the clock in the taskbar that slides to the right outside of view, etc. This can't possibly be related to the fact I have a shiny, brand-new Wi-Fi card.


>the clock in the taskbar that slides to the right outside of view, etc.

What? I've never seen the taskbar clock ever move.


How this usually happens is that there's a notification indication. I'd click on the clock to show the notification center, dismiss the notification, and the clock would slide "too much" to the right, so that almost half of it is outside the screen.

A quick google search doesn't bring my issue up (I'll try to take a screenshot next time it happens, but I'm usually too annoyed of having to use windows to think about it). But it did bring up a separate issue, where the right-hand side icons area (system tray?) and the clock are moved down so that only the top of the date is showing. I've never had that one.

I think these are lag-related, as in things move when something opens. But if the thing before didn't complete or something, the new thing happening doesn't get to remove the old one as expected.

The other day, on a PC that was doing whatever it is that windows does when the CPU fan goes full tilt while pretending to be asleep (complete with the blinking power light), after waking it up, I managed to have both the notification center and the quick settings displayed. I mistakenly clicked on the notification, then immediately on the settings. The notification panel took forever to show up, and it showed while the settings panel was still showing.


These issues from the original comment are not related to Microsoft supporting a wide variety of hardware:

> ...the UI with Bing/Ads/telemetrics/etc integration is so crap...


Those issues are because some exec at Microsoft decided that they can monetize user data: since users already don't care about their data being monetized by Google, not monetizing it themselves would mean leaving money on the table, since users don't care anyway.

That's the logic. Using Windows web components is similar to using Google products.


Apple is not even on the same "giant" shelf as Windows, so the circumstances are not similar either.


I would like to refer you to Conway's law for business which states that, "Organizations, who design systems, are constrained to produce designs which are copies of the communication structures of these organizations."


My point.


Really? Filesystem performance on Windows is absolutely horrible compared to Linux. It blows me away how, even using SSDs, the performance of git over large repos is at least 10x slower than a Linux box with the same CPU. It is the bane of my existence. It's not just git. rsync of a large directory takes >10x as long. This is with Server 2016 (and 2012 r2). My 2012 r2 machine had spinning disks and when I tested on 2016 with SSDs I was shocked the situation did not improve.


This is mostly caused by the anti-malware scanning. Exclude your sources folder from real-time scanning.


Windows 11 is soon getting a new feature called Dev Drive, which uses an entirely different FS (ReFS) and reduced malware scanning, specifically designed for source code.


90% of the speed up is from not scanning the files. ReFS isn’t much faster than NTFS.

The “drive” part of dev drive is a clever hack to bypass corporate anti malware policy settings.


It's not "not scanning" completely, it has less aggressive scanning by default. You can turn it off fully if you want, but then, you can do that with folder based exclusions on NTFS folders too.

So, I'm not sure the perf increase can just be attributed to malware scanning. ReFS has some features that NTFS doesn't have like copy-on-write which might help in read-only I/O perf.


This is true. I'm not sure what the obstacle is, and I know a lot of smart people have worked on it, but opening a ton of files and traversing a bunch of them in directories is very slow. Once the file is open, IO is just as fast though.


Filesystems are a database and have to deal with CAP Theorem trade-offs like everything else. Windows and NTFS both took a heavy focus on Consistency/(lack of) Partitions over Availability. Most POSIX operating systems and their filesystems took a heavy focus on Availability at the expense of Eventual Consistency and Sometimes Partitions.

Neither approach is wrong, they are just very different approaches with very different performance trade-offs.

Also Windows' filesystem supports an entire plugin stack including user-space plugins, to support things like anti-virus scanners and virtual filesystems and all sorts of other things. Not all of Windows' "slow" filesystem is first-party problems, a lot of it can be third-party drivers and tools that are installed.


a local filesystem that can only be mounted once and of which state is 100% controlled by one entity (the local kernel) is not a distributed system

the CAP theorem simply does not apply

Windows IO subsystem was simply designed for extensibility over performance


A) It's a useful analogy whether or not you think it technically applies or is a perfect analogy.

B) The Windows filesystem (and to an extent the POSIX) isn't just "local", it also includes transparent and semi-transparent network file storage.

C) Windows and POSIX are both multi-user and multi-process. They operate over multiple cores and multiple I/O buses.

Even if it just one system API centralized in charge of all that, it still needs to be built on top of complex distributed dance of mutexes/locks/semaphores/other distributed control structures. Because of the nature of I/O control there are complex caches involved and transaction semantics of when data is actually pulled from/flushed to low level data stores. The transaction model in turn is reflected in what the files look like to other users or processes running at the same time.

Windows and NTFS combined have strong transaction guarantees that other users and processes must see a highly consistent view of the same files. It makes heavy uses of locks by default even for cached data. POSIX favors the "inode" approach that favors high availability and fewer locks at the cost of eventual consistency and the occasional partition (the "same" file can and will sometimes have multiple "inodes" between different processes/users, many common Linux tools rely heavily on that).

Those two different transaction models are most easily explained in analogy to CAP Theorem. They are very different transaction models with different trade-offs. Whether or not you see a single "local machine" or you see a distributed system of cores, processes, users, diverse I/O buses is partly a matter of perspective and will reflect how well you personally think CAP Theorem is a "perfect" analogy for a filesystem.


an NTFS volume can only be mounted by one entity at a time, the same as a zfs volume, ext4 volume, btrfs volume with 100% of the state managed by a single entity

it is almost the definition of "not a distributed system"

the fact there might be nfs/smb/ceph/... volumes bolted into the same namespace that happens to include it does not make it one (and neither does requiring transactions)

> Windows and NTFS combined have strong transaction guarantees that other users and processes must see a highly consistent view of the same files. It makes heavy uses of locks by default even for cached data. POSIX favors the "inode" approach that favors high availability and fewer locks at the cost of eventual consistency and the occasional partition (the "same" file can and will sometimes have multiple "inodes" between different processes/users, many common Linux tools rely heavily on that).

this literally is not true, they are different abstractions

and I suggest you observe the size of a large file being copied if you want to see how "strong" NTFS "highly consistent views" are

https://devblogs.microsoft.com/oldnewthing/20111226-00/?p=88...


I remember hearing that allocating space on NTFS on Windows takes a long time for a large amount of files and/or disk space compared to ext4 and the like on linux. Presumably they are built to write out 0's or something when they do that whereas in linux you're just updating inodes or whatever. I remember this was in reference to Steam allocating file system space for game files before downloading them.

I don't know if it was the views of the designers of NTFS taking a different set of priorities or if it's more that NTFS wasn't designed as well as some linux file systems were.


That's my understanding: Windows doesn't want the security problems of people allocating disk space and trying to read back garbage deleted files left behind by other programs in the hopes of finding user secrets, so if you explicitly ask it to allocate a file of a certain size, it (slowly) fills it with 0s first.

But also, Steam's allocation step seems an interesting relic of Windows 9X design patterns, smaller hard drives, worse cache/temp folder options, and (much) slower download speeds. It probably isn't necessary and they could maybe design something simpler and better from scratch today. (But it's probably a "not broke don't fix it" thing at this point.)
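
For what it's worth, the Win32 side of that trade-off looks roughly like this (a sketch; SetFileValidData is the privileged "skip the zero-fill" path, gated for exactly the reason above):

    #include <windows.h>

    int main(void)
    {
        HANDLE h = CreateFileW(L"big.bin", GENERIC_READ | GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        LARGE_INTEGER size;
        size.QuadPart = 4LL << 30;                  /* reserve 4 GiB */
        SetFilePointerEx(h, size, NULL, FILE_BEGIN);
        SetEndOfFile(h);                            /* space allocated, but NTFS will
                                                       zero-fill up to any later write */
        /* SetFileValidData(h, size.QuadPart); */   /* skips the zero-fill, but needs
                                                       SeManageVolumePrivilege because it
                                                       could expose old on-disk data */
        CloseHandle(h);
        return 0;
    }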


My guess is ACLs, which are enabled by default on NTFS but not (afaik) on most linux distros using ext4.


Extended attributes are definitely turned on in my ext4 filesystems and to my knowledge I didn't do anything to turn them on.


Hmm that's true but I don't think much makes use of them? As in they are not set with anything. On NTFS they are used for everything.


Unless you have antivirus software and god knows what analyzing the files, then IO is actually slower...


Sadly a consequence of how NTFS plugin architecture works.

Using Windows 11 the answer is ReFS.


Do you still have to pay for a Workstation license to get that on your local machine?


It is coming to regular Windows 11 as well.

https://learn.microsoft.com/en-us/windows/dev-drive/


Did they not try to fix that "user attention gets resources" bug called the scheduler?


This. I use Tiny11 for this reason. The UI is still super inconsistent, but otherwise it’s fine. Mac bothered me by likewise becoming inconsistent, but also soldering everything so the machine is bricked when an SSD or battery dies. In the Linux world the UI has never been consistent, but what really bothers me is that everything is constantly changing unless one uses enterprise Linux, but then I don’t get hardware support so… Winders it is.


In my experience linux, specifically Gtk Apps, have the best consistency. It's the closest you can get to a fully styled system.


You don't get hardware support? I have had all kinds of hardware support (Dell, mostly Latitude and XPS) over the years and Linux was never an issue. We even run Gentoo and Void on some boxes (for the reason you state -- to keep things consistent over time) and they've never said anything about it.


Imagine if a Linux user said this.

The bar is so vastly different for Windows and Linux users, and by and large I try to keep my mouth shut because I dual boot and I know DAMN WELL which OS needs more handholding and has required fuck-it-start-over handling. God forbid you ever try to log in to Win11 with low disk space: you're totally FUBAR, and there are countless reports online of this EXACT FKING SCENARIO. Imagine for one second if Linux became COMPLETELY BROKEN when you ran out of disk space; it would be A JOKE.

If manufacturers would consistently publish to LVFS, I would NEVER TOUCH WINDOWS AGAIN. And I play Halo every single day of my life. Sorry, gaming on Linux is less of a complete pain in the ass than using Windows. You paying attention, MSFT? Your stock price is 80% of my life, so I sure hope so.

"Everything is constantly changing"?! Are you FUCKING kidding me? What has changed in Linux in the past 15 years other than Wayland, which I've been running for 4+ years and know what is BS and what isn't (DM me and I'll screen-share my desktop at arbitrary scale factors at 240Hz)? I can run GNOME 2, I can run KDE 3 for god's sake. Meanwhile there's crapware discussed here weekly just trying to get a basic Start Menu back in Win11, or constant complaints about ads, or regressions, ON and ON; do y'all have ANY self-respect?

Stockholm syndrome, laziness, take your pick, it's exhausting.


It's a bit surprising that there isn't a project that takes the old Windows "shell replacement" (LiteStep, etc) idea a bit further and replaces the majority of the Windows userland. That's probably more challenging now than it was in the XP-Vista-7 days but should still be doable.


An "alternative OS" built on top of the NT kernel would certainly be interesting. It would take a decade to a thousand-person team, but it would be interesting


> It would take a decade for a thousand-person team

By the standard conversion rate, I think that means 4 people could knock it out in 6 weeks.


You forgot the mandatory Microsoft account sign-in, with no way around it last I checked (a few months ago).


Use the email no@thankyou.com with any password. It will let you install on Win 11 with a local account.


The developer who put that in needs a raise.


IIRC it's something to do with the account being permanently locked due to too many failed password attempts, so Windows basically throws up its hands and lets you log in anyway.

(This is just a vague recollection from last time I installed Windows, might be wrong)


Last time I checked you could log in locally if the install process never detected an internet connection. A terrible workaround, but (at the time) a functional one.


I’ve also heard that the following trick still works: enter the email address no@thankyou.com with any password; that account has had so many failed attempts that it's been locked out, so the installer will let you continue with a local account.


Afaik this doesn't work anymore on Windows 11 unless you change some configuration in the shell and reboot before doing the setup.



Is this Windows 11? On 10 Pro you were just able to click past it. You need it for Office though, I think.


Windows 11 Home requires you to get a working internet connection to continue. Windows 11 Pro doesn't as of the 22H2 version but it still has dark patterns. For Office you can still use other licensing methods, even for O365, but as a typical consumer getting it legitimately you'll de facto need a Microsoft account.

Using "no@thankyou.com" for your Microsoft account (and any made up value for the password) allows you to skip this requirement in any version of 10/11 as someone got that account banned and the Microsoft workflow bails out since they naturally don't want to force onboard a banned user to a new account. This still requires internet during install though, it just works around needing to make an actual account.

It's a shame how much of a dance the install process has become.


Windows LTSC (no ads / bloatware) + SimpleClassicTheme + RetroBar is about as perfect as it gets.

https://youtube.com/watch?v=1_7pqwf-gZM&feature=sharea


And that's why Microsoft will dominate corporate PC market for the foreseeable future.


Eh. More and more native software is being replaced by web versions (or versions that run inside a webview). I’d argue that is going to be the more important compatibility test in the coming years.


Even so, a lot of those corporate web applications rely on .NET frameworks for integrating with SharePoint, AD, and many other established enterprise services and applications, so they're still stuck on Windows. For example, all my partner's medical software is accessed through a browser but requires multiple system frameworks to function. Also, they're all using Remote Desktop to Amazon WorkSpaces instances that are running Windows.


Not really, the average user doesn't give a shit about backwards compatibility, they care about being able to create slideshows and send emails. Windows will dominate for the foreseeable future because of deals with manufacturers.


>the average user doesn't give a shit about backwards compatibility, they care about being able to create slideshows and send emails.

They may not care about backwards compatibility as an abstract concept, but they would like their computer to continue functioning the same way today as it did yesterday.

Although MS is getting somewhat bad at that from a general OS perspective with their updates.


Corporate IT cares about managing all of the devices inside an organization. If a sizable portion of the workforce needs Windows for access to platform-specific programs, then having non-Windows systems for other users that achieve what Windows can do is just overhead for IT, and that can get pushback.


I dunno, based on my anecdata of my last two jobs, Mac is starting to take over.

Shit, at my current job we're developing software that runs solely on x86-64 Linux (and our output is an entire VM image, not just an executable), yet we're running on M1 Macs. TBH, it's quite painful and I wish I understood what the hell the engineering department is thinking, since you can't run x86-64 VMs natively on M1 hardware.

But for gamers, Windows's backwards compatibility makes it king. I can usually easily run 15+ year old games without a hitch.


I bet you're located in North America.

Across the rest of the world macOS numbers aren't that high, still better than "Desktop Linux" though.


Yeah, only beaten by IBM and Unisys compatibility on their UNIX, mainframe and micro systems.


That's cute and all, but it's fucking gzip. It doesn't have complex dependencies.

Pretty sure Linux could run a 30 year old gzip binary too. I've never needed to do that with gzip but I have definitely run binaries of a similar vintage without issue.

Windows backwards-compatibility fails miserably on non-trivial programs; you're generally pretty lucky if you can get something from the XP-era or older to work out of the box.


> you're generally pretty lucky if you can get something from the XP-era or older to work out of the box.

That's actually pretty good. XP was released over 20 years ago.


Off the top of my head I'm running Winamp (gotta whip that llama's ass!), Paint Shop Pro 5 (predates XP), RPG Maker 2000, and plenty more software from almost or over 20 years ago with no problems.

So yeah, I'll take my chances with that luck.


> Pretty sure Linux could run a 30 year old gzip binary too

Not since Linux dropped a.out binary support completely in 5.18; no newer kernel can run it.
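
You can see which loader a binary would hit just from its magic number. A rough C sketch (the OMAGIC/NMAGIC/ZMAGIC/QMAGIC constants are the classic a.out octal magics; little-endian x86 assumed, and the file path is whatever you pass in):

  #include <stdio.h>
  #include <stdint.h>

  int main(int argc, char **argv)
  {
      if (argc < 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
      FILE *f = fopen(argv[1], "rb");
      if (!f) { perror("fopen"); return 1; }
      unsigned char hdr[4] = {0};
      fread(hdr, 1, sizeof hdr, f);
      fclose(f);

      if (hdr[0] == 0x7f && hdr[1] == 'E' && hdr[2] == 'L' && hdr[3] == 'F') {
          puts("ELF: current kernels will still load this");
      } else {
          /* old a.out images carry an octal magic in the low 16 bits of a_info */
          uint16_t magic = (uint16_t)(hdr[0] | (hdr[1] << 8));
          if (magic == 0407 || magic == 0410 || magic == 0413 || magic == 0314)
              puts("a.out: rejected once the kernel dropped CONFIG_BINFMT_AOUT");
          else
              puts("something else (script, PE, ...)");
      }
      return 0;
  }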


Someone ran the gzip binary on Linux just fine: https://twitter.com/stderrdk/status/1692652013711221045


They used an older Debian. Wouldn't work on the latest.


I clicked on it thinking something else :(


It's not really insane, what's insane is Apple refusing to run 32-bit executables.


Is there a link someone could share (other than the one to the post)?



Nitter hasn't been working for me since a few days ago. Is it just me or did X break it?



Is it? Maybe insane in a bad way.

Right-clicking on the Windows desktop now has what, three different menus that might show up depending on what you want to do. Oh, and let's not talk about how much of what you see just covers up the stuff from 1998: the old menu is still rendered, only to have a slightly larger menu drawn right on top of it.


Agreed. I would say backwards compatibility at this point hinders development, and it shows in Windows when you're looking at 5 different UIs. I have a 4K monitor, and the installers all look blurry because they can't scale up. We can't even get decent-looking icons that were created in the last decade.

Honestly, I think a barebones, absolutely minimalist Windows OS would be something that interests a lot of technical users. You can only stack shit so high.
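
The blur specifically comes from old binaries never declaring DPI awareness, so Windows renders them at 96 DPI and bitmap-stretches the result. A hedged C sketch of how a modern app opts out of that (assumes a Windows 10 1703+ SDK and linking against user32; normally you'd declare this in the application manifest instead of calling the API):

  #include <windows.h>
  #include <stdio.h>

  int main(void)
  {
      /* Processes that never call this (or ship a dpiAware manifest entry)
         are assumed to be 96-DPI-only and get scaled up as a bitmap,
         which is exactly the blurry-installer effect on a 4K display. */
      SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2);
      printf("effective system DPI: %u\n", GetDpiForSystem());
      return 0;
  }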



  "it allows only apps from Microsoft Store"
That's a showstopper... but cool, ty for sharing


> Still rendering the old stuff only to have a slightly larger menu render right on top of it

Hahah no way? Is this documented somewhere? And the new UIs are slooow!


You raise a valid point. Still, I'd rather be able to run software I need than not.


We have containers now. You can still run it.


The Windows 11 UI experience is opening up new markets for Microsoft. For example, Windows 11 is sold to medical schools as Microsoft Stroke Simulator 2023.



