Any sufficiently advanced uninstaller is indistinguishable from malware (microsoft.com)
887 points by mycall 10 months ago | 510 comments

Here's the codeproject link the code came from.


> Whether they follow the licensing terms for that code I do not know.

I'm guessing they didn't ship the binary with a link pointing back to this page?

There's also another CodeProject example that uses a .bat file, which is fairly similar to the recommendation in the post. I guess that's the better example.


At least the author seems to agree with Raymond Chan on the similarities between his approach and malware...

> shellcode is the technical term (in security circles) for binary machine code that is typically used in exploits as the payload. Here's a quick and dirty way of generating the shellcode from the obj file generated when you compile your source files. In our case, we are interested in whipping the shellcode up for the remote_thread routine. Here's what you've got to do:

The whole article has the vibes of some questionable DIY blog along the lines of "Your house is infested by vermin? Here is an easy way to get rid of them using a small, homebuilt neutron bomb!"

Unfortunately, CodeProject is full of code like this.

Nit: Raymond Chen, not Chan

Ah, I'm sorry. That happens when you write messages on the go... seems too late to edit the message though unfortunately.

Funnily enough, real malware does this correctly. Usually by just ShellExec-ing "ping 127.0.0.1 -n 3 >nul" followed by "del" - no temporary file needed.
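For the curious, that trick fits in a few lines. Here is a hedged Python stand-in, with POSIX `sleep`/`rm` doing duty for `ping`/`del`; the function name and details are made up for illustration:

```python
import subprocess
import sys

def schedule_self_delete(path):
    """Spawn a detached child that outlives this process, waits a
    moment, then deletes `path`. Malware abuses `ping` as a poor man's
    sleep on Windows; `sleep` and `rm` stand in on POSIX. Illustrative
    sketch only."""
    if sys.platform == "win32":
        # DETACHED_PROCESS keeps the child alive after we exit.
        subprocess.Popen(f'ping 127.0.0.1 -n 3 >nul & del "{path}"',
                         shell=True,
                         creationflags=subprocess.DETACHED_PROCESS)
    else:
        # start_new_session detaches the child from our process group.
        subprocess.Popen(["sh", "-c", f'sleep 1; rm -f "{path}"'],
                         start_new_session=True)
```

The detaching step is the whole point: the parent must be able to exit (releasing the lock on its own .exe) while the delayed delete is still pending.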

The author says the binary looks like malware because it self-deletes, sleeps, and touches this uninstaller thing. But the script he proposes, which would be triggered by the same thing, does the same. I am ignoring the injection part since he guesses at it (likely correctly), and also because lots of things inject into processes without being malware (monitoring stuff like AV, etc.). Additionally, a binary that terminates by running some script via a script host... couldn't this just as well be some malware? (Stage-1 malware downloads a script, runs it via a script host?)

My question(s): How is the proposed solution better than the original thing? Isn't this a case of using bad heuristics to determine maliciousness?

In the end, he goes a bit further and sees it's non-malicious. So, with a more elaborate rule or heuristic, wouldn't it be clear it's not malicious?

The .js script isn't injecting code into another program in order to delete itself; it is deleting itself directly.

It can do that because, I'm guessing, the file isn't open; the run-time isn't executing instructions from that file. The file was read, the content compiled into memory, and closed.

The script is deleting its source code, not itself. What actually deletes the script itself is the garbage collector in the run-time. Once the fso.DeleteFile call and the for loop have executed, they are no longer reachable, and so they are garbage. If there is a way for the "var fso" and "var path" variables to go out of scope, they become garbage also.

A binary executable is mapped into memory while the process is running it. In Windows, an open file cannot be deleted.

But, even on Windows, a prog.exe could delete the prog.c source code it was compiled from, right? Same thing, sort of.

Being pedantic, you can delete open files on Windows if you open them with FILE_SHARE_DELETE.

The script doesn't inject. But a lot of malware downloads a script and runs it, so you'd hit another rule.

The way I understand it, the uninstaller program that wants to delete itself doesn't have to download the script from anywhere; it generates the script out to a file.

This is true; a good heuristic would perhaps see the difference. But malicious scripts can also be generated rather than downloaded (or, more commonly, decrypted from some seemingly random data), so it can be hard to tell, especially given that threat actors have easy access to security products while the opposite is not always true.

Is the behavior that a running .js script is fully loaded into memory and the file doesn't need to exist documented, supported behavior?

What if, hypothetically, the system was suspended in the middle of script execution, and the resume function was designed to reload the script from disk?

It just feels like a different hack to me.

Also - trying 20 times and pausing 500 ms seems wasteful. What are the chances that it's going to succeed a subsequent try if it fails the first try? Why not catch the error message and only retry for errors that have a plausible chance of succeeding if you retry?
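That suggestion can be sketched in a few lines of Python, using Python's exception classes as stand-ins for the Win32 error codes (PermissionError roughly corresponds to a sharing violation); the function and its parameters are illustrative, not the original script:

```python
import os
import time

def retry_delete(path, attempts=20, delay=0.5):
    """Delete `path`, retrying only errors with a plausible chance of
    clearing up. On Windows, a still-exiting process holds its own .exe
    open, which surfaces here as PermissionError - worth retrying. A
    missing file already counts as success, and any other error
    propagates immediately instead of burning all 20 tries."""
    for _ in range(attempts):
        try:
            os.remove(path)
            return True
        except FileNotFoundError:
            return True        # already gone: nothing left to do
        except PermissionError:
            time.sleep(delay)  # still held open; a retry is plausible
    return False
```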

There is never a good reason to inject code into another process - particularly a system process. At the point at which you believe this is necessary you are several layers of hackiness deep and should go find a beverage and think over what your actual goal is.

As a metaphor: you find the instructions to sweep your floor cumbersome so you reprogram your neighbor's Roomba to come clean your floor. Sure, it may well go back to their house and no harm done, but it's hacky, socially unacceptable, and no matter how hard your broom is to use it's not OK.

Security products have valid reasons, though maybe not _good_ ones. Forcing PLT and GOT entries to be bound eagerly rather than lazily, forcing certain segments to be read-only, and hooking a bunch of things is necessary for them, and that can only be done by suspending processes at startup and then injecting into and modifying them. It's a band-aid for a bad system, hence a valid but maybe not good reason (better to prevent than to need this cure of theirs...)

> Is the behavior that a running .js script is fully loaded into memory and the file doesn't need to exist documented, supported behavior?

If Raymond Chen says to rely on it then yes.

Even though I admit this guy is genius-level, it's not really good to rely on one person's judgement for anything. That's a bad practice in general. Zero trust and all :)

Even if it doesn't, I think you could do something similar by just spawning a shell (command prompt) that executes a small script trying to delete the file. You just have to take care to make sure the process is detached from the original one and then let the spawning process terminate to release the lock on the executable. PowerShell could also work, but I know it is pretty restricted in a lot of environments. These completely avoid any intermediate file.

I think the retry is necessary because if you launch "wscript cleanup.js" from the process that wants to be cleaned up, you then need to wait for the spawning process to finish executing. I agree that if it fails after 20 tries, you should probably raise an alert or something letting the user know that the uninstall failed. There are also many random processes that might take a reference on the file, like antivirus on Windows, so just retrying helps wait that stuff out. (This problem largely doesn't exist on Linux, where an unlinked file lives on until the last open handle is closed; the downside is that specific paths aren't kept around, just inodes.)

Agree, I'm shocked at how ugly the recommended alternative is. This does not make MS look good.

There are plenty of other solutions; that was merely one straightforward option that could be shown in a few lines of code and which doesn't require *injecting code into a binary you don't own*.

Sure, but there isn't even an offhand remark about how hacky this kind of polling is. It's presented as if it's a completely normal way to do things in reliable software.

No worse than the 10 minute delay rule in DllCanUnloadNow.

DllCanUnloadNow returns an indication that a DLL can be unloaded. A DLL cannot be unloaded if any threads are executing its code. But a DLL can only change to the unloadable state by executing some code, and that code has to return after it has set the indication. Only after it returns is the DLL actually unloadable! So a delay is needed for that thread to vacate the DLL.

https://groups.google.com/g/microsoft.public.vc.atl/c/AQvHCW... [2001]

So in the present example from Raymond Chen you need the loop for a similar reason.
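The race can be illustrated with a toy Python analogue (everything here is made up for illustration; it is not how COM actually unloads DLLs):

```python
import threading

# The code that flips the "safe to unload" flag is itself still running
# "inside the DLL" until it returns, so an unloader that frees the
# module the instant it sees the flag can yank code out from under a
# live thread. All names here are illustrative.
flag_set = threading.Event()   # "DllCanUnloadNow would now return S_OK"
vacated = threading.Event()    # the thread has actually left the "DLL"

def final_release():
    flag_set.set()   # last reference dropped: flag says "unloadable"...
    # ...yet these very instructions still live in the "DLL"; freeing
    # the module right now would crash. Only returning makes it safe.
    vacated.set()

t = threading.Thread(target=final_release)
t.start()
flag_set.wait()
# A correct unloader waits (Windows uses a generous delay) for the
# flag-setting thread to vacate the module before freeing it:
vacated.wait()
t.join()
```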

The binary .exe program which the script is trying to delete is the one which created the script and launched it. So that means the .exe is still running at that point and cannot yet be deleted. Launching the script indicates "I'm about to die", not "I'm already dead". The script cannot delete the .exe until the .exe terminates. Without some event to indicate that, you poll.

The script knows it can delete itself, so it tries that only once.

If a handle could be obtained for the process, then the script could do a WaitForSingleObject on it; that would be the prim and proper way.

It doesn't seem worth doing; the chances are low that the process cannot terminate within 20 seconds of launching the reaper script.
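A rough POSIX stand-in for that idea, polling for process exit rather than doing a real WaitForSingleObject (on Windows you would OpenProcess and wait on the handle instead); all names and parameters are illustrative:

```python
import os
import time

def wait_then_delete(pid, path, timeout=20.0, poll=0.1):
    """Wait for the launching process (`pid`) to exit, then delete its
    file once. Illustrative sketch of the idea above, not the script
    from the article."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)       # signal 0: existence check only
        except ProcessLookupError:
            break                 # process has exited (and been reaped)
        time.sleep(poll)
    os.remove(path)
```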

I am not shocked, but it's hacky, like the comments suggest. It's ultimately done because of the lack of a better alternative. I'd still follow Chen's advice even though it's hacky.

Once you move past comparing hashes against known malware (by definition useless against novel malware), and the slightly more complex matching of specific binary strings, detecting malware with "shitty heuristics" is basically all we've got.

Companies that buy AV/EDR products expect them to detect unwanted behaviour while allowing any sort of weird, hacky, abuses of the system that they rely upon for their business.

It's never been entirely clear to me why Windows provides such a rich interface for one process to inject and start executing code in the address space of another, but IMHO I want to know when this is happening, even when it's done by a "legitimate" uninstaller.

The first point is true, I admit. There are very complex and good ways to identify stuff, but those perform so badly they cannot be used in practice.

AV/EDR products do try to prevent a lot of stuff. They can 'generically' block things like injection by 'hooking all the things' and injecting into everything (yes, yuck :D - and still kind of heuristic-based, I admit!) to make certain sections read-only or remove executable mappings etc. (GOT/PLT/stack/...)

Linux, or more specifically ELF files, also has an easy vector to allow injection: a dynamic table entry for debugging purposes which can be trivially overwritten, for example. "ibc.so" :(). I'm not sure anyone uses that entry validly... especially since there are better, less awkward debugging interfaces than injecting a debugger DLL into something :') at least in x86/64 Linux land. (ELFShell sure was fun tho!)

If you're talking about LD_PRELOAD, I used it for an integration test suite of low level system components.

I'm not sure how LD_PRELOAD is implemented, but it's handled by the dynamic linker if I'm correct. The thing I'm on about is the DT_DEBUG dynamic table entry. It's an entry meant to allow a debugging DLL to be loaded. You can overwrite it and point it at a malicious DLL (with a bit of difficulty) to get injection going; it's then hardcoded into the binary by your modification. Admittedly, maybe LD_PRELOAD is easier, if that's allowed on a system.

LD_PRELOAD is handled entirely by ld.so, whose path is embedded inside your ELF by the linker. ld.so not only handles that but also all dynamic linking inside your program.

By writing a custom linker you can easily intercept all dynamic linking done at runtime and provide whatever you want to the program.

gcc -Wl,-dynamic-linker,/path/to/my/linker myprogram.c -o myprogram

>How is the proposed solution better than the original thing?

I'm only assuming here, but maybe because it won't crash explorer and it's just a few lines of self-documenting code?

Haha, well, fair enough - the crash is bad indeed, good point! This isn't intended behavior though, and presumably it doesn't crash in most cases of this technique being implemented in uninstallers. (A bit of a guess, I admit!)

The fact it injects into another process means they can't know if it'll crash or not. You're just one Explorer update away from things changing enough for the hack to crash it.

I guess they could do this more robustly. I.e., pause the entire Explorer process, save all its state, remotely allocate new memory to inject their code, remotely create a new thread, run only that thread using the injected code, restore all the process's state, and finally start it running where it left off. A script would be easier, though.

The problem isn’t that the injected thread is racing explorer - indeed, pausing the entirety of explorer to run your uninstaller would probably be strictly more dangerous than what they’re doing - the problem is that the injected thread is using function pointers that do not exist in explorer.exe. Most likely, the reason is that the uninstaller itself has been “detoured” by yet another program to patch calls to certain functions, and it’s copying the detoured addresses instead of the addresses to the real functions.

Both detouring and remote thread injection are supported on Windows, but fall into the category of gray-hat techniques; there are some legitimate uses but quite a lot of illegitimate uses, and using these techniques correctly (without crashing anything!) can be a real challenge.

Agree completely.

I would have assumed (naively?) that they could just copy their uninstaller into a temp folder and run it from there, and just rely on the OS to nuke it in due time, but as a consumer I appreciate the thoroughness of an uninstaller that leaves no trace.

Honestly, I think your idea about the tmp dir is better than the no-trace option, especially given the hacky nature of self-deleting stuff. If you have so many uninstallers around that the disk fills up, cleaning the tmp dir would fix it - a common first thing to clean when tidying the filesystem. I like your idea :)


The technique could have been made a little more robust by calling GetProcAddress to get the function pointers in Explorer's context, assuming GetProcAddress wasn't itself detoured.

Cybersecurity is pretty much all bad heuristics with the belief that if you use enough of them they average out to an ok determination of maliciousness. It works alright, sometimes.

It's more about how it does it: injecting executable code directly into the stack so that some other code unwittingly transfers control to it. Stack-smashing is a lot more malware-ISH than a few lines of shell script.

They inject something into Explorer. I would assume that to be some DLL that is injected?

> Neither code injection nor detouring is officially supported. I can’t tell who did the detouring. Maybe somebody added a detour to the uninstaller, unaware that the uninstaller is going to inject a call to the detour into Explorer. Or maybe the detour was injected by anti-malware software. Or maybe the detour was injected by Windows’ own application compatibility layer. Whatever the reason, the result was a crash in Explorer.

I'd think the anti-malware guess here would be correct, and that (DLL) injection was stopped, and thus some crash happened. Thanks for your reply.

The DLL would execute something from its stack so it can be somewhat dynamic (perhaps some path or something is generated before the injection or so - really little to go on here...) and not need to make a heap allocation within Explorer.exe. (this is perhaps a bit too much to assume idk.)

Thanks for your insights!

>I can’t tell who did the detouring

I was thinking: hmm, and so the game of whack-a-mole continues.

If a new OS could be designed from scratch, there must be a way to prevent this sort of stuff.

I am making an OS from scratch. But admittedly, I am so far from this stuff that I will never, ever have anything even closely related to such a problem, hah. It's so much work :'( (fun, though!)

Maybe it’s less likely to be flagged or interfered with by antivirus? Antivirus uses all kinds of shitty heuristics, seeing that my Go executables built with -ldflags="-H windowsgui" are flagged as malware by Windows Defender and co. all the time. It’s maddening.

That's fair enough. Perhaps in AV land they can more easily make some kind of signature that whitelists this self-deletion JavaScript method, because the script is more readable. Though I'd expect a good AV to be at least worried if some executable on my system runs a script file via a script host.

Why do Windows programs need special installers/uninstallers? Why isn't this handled by Windows itself?

Windows has had an installer as an OS component since the late 90s (called Windows Installer). As a sysadmin I'd prefer apps use it. Many application developers do not. It's maddening. (Doubly so when Microsoft themselves don't use it-- newer versions of Office, Teams, etc. Microsoft suffers from too much NIH.)

I get unattended installs and uninstalls "for free" when well-behaved applications use Windows Installer. Patching is included, too. Customizing installations is fairly straightforward.

On the developer side it has historically used a very quirky proprietary file format (MSI) with a fairly steep learning curve and a ton of "tribal knowledge" required to make it work for all but the most trivial cases. (Though, to be fair, most installs are the trivial case-- copy some files, throw some stuff into the registry, make some shortcuts.)

Worse, it allows for arbitrary code execution ("Custom Actions"), at which point all bets are off re: unattended installs, removal, etc. Some Windows Installer packages are just "wrapped" EXEs (Google Chrome, for example).

I've packaged a ton of 3rd party software as Windows Installer packages. It's an ugly system with lots of warts and legacy crap, but if you need to load an application on a large number of Windows PCs reliably unattended it's decently fit for purpose.

There is reasonable free and libre tooling to generate MSI packages from plain text source (the WiX toolkit) and it can be used in a CI pipeline.
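For a sense of scale, the trivial copy-some-files case mentioned above comes out to roughly this much WiX v3 source. This is a sketch; the GUIDs, names, and paths are placeholders:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Product Id="*" Name="HelloApp" Language="1033" Version="1.0.0.0"
           Manufacturer="Example Co" UpgradeCode="PUT-GUID-HERE">
    <Package InstallerVersion="200" Compressed="yes" />
    <MediaTemplate EmbedCab="yes" />
    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="ProgramFilesFolder">
        <Directory Id="INSTALLFOLDER" Name="HelloApp">
          <!-- one component per file is the usual convention -->
          <Component Id="MainExe" Guid="*">
            <File Source="hello.exe" />
          </Component>
        </Directory>
      </Directory>
    </Directory>
    <Feature Id="Main" Level="1">
      <ComponentRef Id="MainExe" />
    </Feature>
  </Product>
</Wix>
```

Registry entries and shortcuts go in additional elements inside a Component; the learning curve mostly starts once you step outside this trivial shape.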

Can confirm. I would be considered by most to have been a Windows Installer expert at one point. Installshield / Wix / Whatever else.

It is intentionally obtuse at times (MSIFileHash table uses rearranged MD5 hashes for example), and also many features made sense for the late 90's/Early 2000's era where bandwidth was low and connectivity limited, and lots of stuff was distributed on CD's. The look on people's faces when you explain advertisement to them the first time... How their unrelated app can get stuck in a loop of repair for a piece of unrelated software...

It was deprecated by the newer AppX/MSIx/AppV format which uses sandboxes, binary chunks/streaming and no executable code to install stuff.

For my own desktop computing, I prefer MSI packages because I prefer having control post-install to tinker with things if I feel like it. Also, I have the skillset to modify the installer to my whims if I so choose.

> It was deprecated by the newer AppX/MSIx/AppV format which uses sandboxes, binary chunks/streaming and no executable code to install stuff.

I can offer a little perspective on MSIX, having devoted months of my life to it in a past job.

MSIX is nearly unusable outside the Store. It will work in a tightly controlled environment, but when you try to deploy it to a wide variety of users you will run into 1) unhelpful errors that basically can't be diagnosed, 2) enterprise environments that cannot/will not allow MSIX installs. I get the impression that the MSIX team is uninterested in solving either of those issues.

It's not a coincidence that virtually no first-party teams use MSIX to install their product outside the Store. Office tried for a while and eventually gave up.

Despite all that, there are still a few people at MS who will tell you that MSIX is the future. I don't really understand it and I assume there's a weird political reason for this.

MSIX can be made to work in that context. We've done it, although it required writing our own installer EXE stub that invokes the package management API rather than using Microsoft's own "App Installer" app, and doing lots of remote diagnosis to solve the mysterious bugs you were hitting. I would indeed not recommend anyone try to use it with Microsoft's own tooling.

Still, when you finally make it work you get a lot of benefits. MSI is clearly dead end tech which is why so many MSIs are just installer EXEs wrapped in the file format. It doesn't have any clear path to modern essentials like online updates, keeping the OS clean, sandboxing and so on. If you were on the Windows team, what would you say the future was?

For enterprise environments it's actually somewhat the opposite: MSIX packages can be installed without admin privileges due to their declarative nature, and it's very easy for admins to provision MSIXs to Active Directory groups because they don't have to do any repackaging work. Yes, some admins have hacked Windows to stop it working because when MS launched the Store they didn't provide any way for admins to opt out, but these days they have the knobs they need. Also, because they're just zips you can always just unzip them into your home directory to get at the app inside. It won't auto update that way, but as long as EXEs can run from the home dir it can work.

Products like Office and Visual Studio have entire teams devoted to nothing but their installers, which is clearly going too far in the opposite direction. Most products won't want to do that.

Orca ftw

If you go way down the rabbit hole, you end up at modifying OpenMCDF.

> with a fairly steep learning curve and a ton of "tribal knowledge"

Yes, people prefer to debug their own code rather than spend a shitload of time understanding WiX/MSI.

Microsoft deciding early on not to produce low-cost tools for Windows Installer also didn't help with adoption.

The joke is, Microsoft devs even now use NSIS for things like VSCode rather than deal with MSIs lol

But there is the modern implementation of AppX Bundles which was later extended to create MSIX which allows app distribution without the windows store. There are still drawbacks to using MSIX usually because you want to touch Windows in ways you can't inside the sandbox.

To my understanding, MSIX today supports the full MSI catalog and will do entirely unsandboxed (unattended [0]) installs if you want it to. But you need to understand all the same complexity of MSI to build installers in it. The biggest remaining difference between MSIX and MSI is that an MSI is a strange nesting doll of ancient binary database formats that is tough to build (which is why WiX exists, and part of why WiX is so complex), whereas MSIX is "just" an ordinary ZIP bundle of your files plus XML manifests. With the final punchline being that those XML manifests use yet another dialect than WiX's ancient MSI-database-influenced XML, and of course it also isn't as simple as deleting the WiX compiler and just zipping a WiX project.

In my experience, you can pretty easily write the nice sandboxed MSIX manifests by hand - it's not too bad - but for general MSIX doing weird MSI things you still want better, more expensive tools to build them (and of course Microsoft itself still doesn't exactly provide those, and will point you to plenty of expensive third-party installer studios for options, many of which are the exact same ones people have been overpaying for decades).

[0] The one complaint I'm aware of is that you can't do custom installer UI and "attended" installs with user choices. There's one MSIX UI and it is barebones but acceptable. That's all you get.
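To illustrate the hand-written case, here is a manifest sketched from memory for the unsandboxed (runFullTrust) scenario discussed above; identity values, schema versions, and asset paths are placeholders and may need adjusting:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Package xmlns="http://schemas.microsoft.com/appx/manifest/foundation/windows10"
         xmlns:uap="http://schemas.microsoft.com/appx/manifest/uap/windows10"
         xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities">
  <Identity Name="ExampleCo.HelloApp" Publisher="CN=Example Co" Version="1.0.0.0" />
  <Properties>
    <DisplayName>HelloApp</DisplayName>
    <PublisherDisplayName>Example Co</PublisherDisplayName>
    <Logo>Assets\StoreLogo.png</Logo>
  </Properties>
  <Dependencies>
    <TargetDeviceFamily Name="Windows.Desktop" MinVersion="10.0.17763.0"
                        MaxVersionTested="10.0.19041.0" />
  </Dependencies>
  <Capabilities>
    <!-- runFullTrust is what lets a packaged Win32 app run unsandboxed -->
    <rescap:Capability Name="runFullTrust" />
  </Capabilities>
  <Applications>
    <Application Id="HelloApp" Executable="hello.exe"
                 EntryPoint="Windows.FullTrustApplication">
      <uap:VisualElements DisplayName="HelloApp" Description="Example app"
          BackgroundColor="transparent"
          Square150x150Logo="Assets\Square150.png"
          Square44x44Logo="Assets\Square44.png" />
    </Application>
  </Applications>
</Package>
```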

> Doubly so when Microsoft themselves don't use it

Often, as you mentioned, Windows Installer packages are wrapped by an executable (in WiX this is called a "bundle" because you may also choose to add redistributables like the C++ runtime).

However, what you see in installations like SQL Server, Office and Visual Studio is that the installers are bundles as well - of a large amount of MSIs that need to be installed and uninstalled in a specific order. A single Microsoft Installer package is transactional and can be rolled back on failure, but bundles are not as well defined and left open to the implementation of the bundle. Windows Installer does not reach beyond the single msi package.

As soon as Office 2007 didn't use MSI the format was doomed.

I assume the Here in NIH refers to an individual team, not MS as a whole.

Teams is entirely NIH https://github.com/Squirrel/Squirrel.Windows for updates to the Electron app.

I would use winget, but MS made it weirdly hard to run as a script on multiple computers; it installs per user, because... who knows.

So still using chocolatey

To be fair, Squirrel came from GitHub and early Electron before Microsoft bought GitHub, so it wasn't Microsoft's NIH that built Squirrel originally.

True, I accidentally used NIH with the opposite meaning - I meant it was not invented at Microsoft.

I had a friend who worked for a company that specialized in Web browser bars and MSIs. In other words, they were a shop to put all kinds of malware into these things. It was a viable business model for a company of something like 50 people.

The whole story and ideas put into Windows installing programs are a stupid joke. It's designed by project managers who have no idea what they are doing and no imagination and is executed by the cheapest least motivated programmers from South Asia Microsoft can possibly find.

A lot of people who do useful stuff for Windows try to stay away from as much of Microsoft's stuff as possible, and integrate it as little as possible with anything else coming from the system because it's just maddening how horrible everything there is.

A lot of weird things in Windows are reflections of the gestalt of the '90s and early 2000s. People went all-in on all sorts of OOP-derived weirdness, like CORBA and COM.

"Plain-text files for configuration? What do you think we are, savages? No, we need a hierarchical registry! Every serious program must use some opaque binary format to store stuff!" seems to have been the general animus at the time. Nowadays, even if you really hated the idea of text files for configuration in your home directory, people would probably do something more straightforward, like using a SQLite db.

Agreed re: some of the Windows "strangeness". I think there was some amount of needlessly "Enterprise" architecting going on at MSFT back in the day.

There were also very practical solutions incorporated to accommodate the constraints of the hardware of the time that come off looking like needless complexity today, too. (There's also, arguably, some laziness in the old binary file formats that were nothing more than dumps of in-memory structures. That's a common story across a ton of old software-- not just MSFT.)

Rob Mensching, who worked at Microsoft on Windows Installer pre-release, has a nice blog post about internals of MSI.[0] He goes into some of the overwrought architecture in MSI, as well as quirks to overcome performance limitations in a world of floppy disk-based distribution and much smaller memory capacities. It's a good read.

[0] https://robmensching.com/blog/posts/2003/11/25/inside-the-ms...

Wix is part of the problem. It's basically making money for the developers who offer consultancy for it.

Therefore the documentation is poor - like, the absolute worst I've ever seen. Opening issues for doc problems never results in anything. Pointing out UX issues is usually shot down. Finally, until this year you needed .NET 2 installed to build it, which does not play well with Windows Docker.

I don't think any major desktop OS handles this well.

I suspect the final form for software installation is probably where iOS and Android are going in the EU, where there's a single means of installing software to the device so that everything can be sandboxed properly, but the acquisition/update process can be pointed to a URL/Store that the user has pre-approved.

macOS comes pretty close to what I'd ideally want in an OS with regards to installation - independent packages that are certified/notarised, but I'd like to see the OS allow for user-specified authorities beyond just Apple. That being said, I'm not sure I'd ever use them as it's part of what I'm paying Apple for, I'm really thinking more of Linux there.

A kind of flatpak/snap approach, but that has signing of the package and centralised management of the permissions for the sandbox at an OS level would be ideal in my view. That way it's still free-as-in-speech as the user can specify which notarisation authority to use (or none at all).

I really don't understand why separate programs are handling the removal of their mother program in 2023; that's registry-spaghetti messy.

Everyone is pointing at Windows, but there is still installer software on macOS. Normally crusty old corpoware like Citrix that needs to extend its tentacles into the whole system.

On Unix/Linux land the prevalence of pipe curl to bash type installers is not much different.

I normally keep both types away from my computers.

> On Unix/Linux land the prevalence of pipe curl to bash type installers is not much different.

This is a problem, but only if you install software on Linux by manually going to the project page and copy-pasting whatever curl command they have there. I think the difference is that you're mostly encouraged to go the package-manager route, whereas on Windows downloading .exes directly (a la the curl example) is the norm.

It seems to be increasingly the case that package managers just don't have some software - or have a version that's years out of date. Perhaps the number of different ones available has become self-defeating.

Directly sudoing a curl-ed script is like running a binary on Windows with admin permissions and with Defender turned off, which makes it somewhat more scary to me.

On Windows I use Chocolatey when I can, and if I can't (or it looks dodgy anyway) I'll either just not install it or try it in a sandbox. Things that aren't choco-able are generally commercial software obtained from the vendor's download page, we theoretically trust those things somewhat. YMMV.

> Directly sudoing a curl-ed script is like running a binary on Windows with admin permissions and with Defender turned off,

Most people would just say yes to any prompt they get, those wise enough not to aren't running random curl scripts either.

As for Defender being any kind of protection, I have my doubts.

> it seems to be increasingly the case that package managers just don't have some software - or have a version that's years out of date.

This is entirely distro-dependent; some are very up to date and have most things you'd want, especially if you include the likes of the AUR. But then there's usually a Flatpak or an AppImage that you can use in the odd case that they don't.

Actually, no: the problem with curl | bash is that it can be detected on the server, so if the server is compromised, it can serve you malware and you will never know about it. It is safe(r) to curl > file, inspect the file, then execute it under bash.

The result of inspecting such a file is usually a series of disgusted shudders: "this will do WHAT to my machine?"

Sometimes a smile at the clarity and simplicity of the author's shell code. Sometimes.

A rare delight but it does happen

Only installers I’ve seen are the .installer bundles, which leave behind a manifest for automated uninstalling.

> On Unix/Linux land the prevalence of pipe curl to bash type installers is not much different.

True, but saying so will likely earn you downvotes from those committed to this unhygienic practice...

You are basically describing what Windows has as appx/msix. The decentralized notarization authorities are the code signing certificate providers.

I had not seen this, but it absolutely does (on the surface) seem like a solution to this problem. Thanks!

I’d need to educate myself a bit more in terms of whether there are third-party authorities beyond Microsoft for the packages.

Found this introductory video for anyone else interested:


Note: I didn’t intend the Surface pun above, but it happened and we can all be glad that it did.

Yes there are a few certificate authorities. For example DigiCert, SSL.com and others. You can also create your own e.g. for enterprise deployments. Or you could even set up a public CA if you wanted to, the process is standardized.

So whilst Microsoft will sign for you if you distribute via their store, otherwise you pay per year for certificates and can distribute outside the store.

There are problems with the system (cost, bugs, usability problems) but it is decentralized.

> macOS comes pretty close to what I'd ideally want in an OS with regards to installation - independent packages that are certified/notarised, but I'd like to see the OS allow for user-specified authorities beyond just Apple.

It's easy to run unsigned binaries/app packages on macOS: right click on the .app, hold down Option, then click Open and confirm the warning.

That is not a user-specified authority.

I would also like this option. I see why Apple finds it undesirable though. Software installation safeguards are a game of whack-a-mole with (e.g.) support scammers who ask grandma/Lee-in-accounting/Cindy-next-door to naively click through all the warnings.

The closest Apple comes to this capability is achieved via device Supervision and MDM, which might be comfortable for some of us here in this forum but obviously isn’t practical beyond more technical circles.

Baddies keep ruining all the fun for the rest of us.

And being the only authority also happens to be conveniently aligned to their financial incentives.

> Baddies keep ruining all the fun for the rest of us.

IMHO the blame rather lies with our politicians who are unwilling to take the steps necessary to cut the baddies off from the Internet. Let's see just how fast India, Pakistan, Turkey and other scammer hotspots clean up their act when the US+EU threaten to cut them off from the Internet and SS7 unless the scam callcenters are closed down for good... the amount of corruption regularly exposed by scambaiters on Youtube is insane. Billions of dollars of damages each year [1] from that bullshit and our politicians don't. fucking. care.

[1] https://www.vibesofindia.com/fraudsters-in-india-cost-americ...

I’m more than a little skeptical that scams would be less of a problem if specific countries cracked down on large operations. For one thing it’s not clear how you’d ever get the whole world on board. Pressuring India is hard enough, try Myanmar, a place that doesn’t get along with the West at all and is already a hotspot for phone scams targeting Chinese speakers. And if centralized, relatively open operations overseas were no longer possible, it would likely become more like other types of fraud run by local gangs. So I’m all for pressuring India to crack down on scammers, but I don’t see how that would reduce the desire to tighten software controls on PCs.

> For one thing it’s not clear how you’d ever get the whole world on board.

You don't need the whole world. The Western world is enough - no Internet and phone service (both easily enforced by requiring providers to reject ASNs / phone country codes) means a lot of lost business for an affected country.

> Pressuring India is hard enough, try Myanmar, a place that doesn’t get along with the West at all and is already a hotspot for phone scams targeting Chinese speakers.

Honestly, that's China's problem to solve.

> So I’m all for pressuring India to crack down on scammers, but I don’t see how that would reduce the desire to tighten software controls on PCs.

When software vendors don't have to gate more and more features behind more and more obnoxious bullshit simply to whack-a-mole scammers, they won't.

they probably don't do it because it's a bad solution.

Is it? I prefer to tackle problems at the source, and its crystal clear that overseas scammers are exploiting corrupt local law enforcement in conjunction with easy access to targets via the Internet and shady telephone providers.

There is no Pareto optimal unicorn that provides both a democratized marketplace of software with low barriers to entry and an ironclad guarantee of security against compromise of personal user information. These two are fundamentally at odds. If anyone can produce and distribute software easily on a given platform, then so can people with malicious intent.

Or just run `sudo spctl --master-disable` one time; and it will change the allowed app sources to the invisible "Anywhere" option.

> I suspect the final form for software installation is probably where iOS and Android are going in the EU, where there's a single means of installing software to the device so that everything can be sandboxed properly, but the acquisition/update process can be pointed to a URL/Store that the user has pre-approved.

Basically how Linux distributions have worked since the beginning. Though at the start the installation source was not remote but a CD-ROM, things didn't change much.

You have a repository of packages (that can be on a local source such as a CD or a remote source such as an HTTP/FTP server), each carrying some sort of signature (on Linux the package is usually signed with GPG) made with keys that the user trusts (the defaults are installed on the system), and a piece of software that allows you to install, uninstall and update the packages.

Android/iOS arrived later, but they didn't invent anything.

Android/iOS didn't invent this, no, however you're missing the sandbox part. Most Linux package managers don't sandbox anything.

iOS is the gold standard IMO. Apps are sandboxed, can only interact with the outside world via APIs (that the user needs to approve), one click uninstall and it’s all gone without a trace (at least in theory). Love it.

I think Android does it better with third-party store and sideload support. It seems that iOS delegates some of its security to their own App Store (example: disallowing dynamic code generation like JIT).

How could Windows handle it by itself?

If it provides a framework for installers/uninstallers, it'll be fighting the inertia of decades of legacy software, programmer habits, and old tutorials.

If it tracks file ownership by program, it might accidentally delete user files. How would it differentiate between a VSCode extension that should be uninstalled, and a binary compiled with VSCode for a user project? A false positive could be catastrophic.

If it restricts what programs can do to accurately track file ownership, you end up with Android. Which is fantastic for security, but is a royal pain in the ass for all parties:

- The app developers have to jump through hoops for the simplest actions, and rewrite most of their code in the new style.

- The operating system has to implement a ton of scaffolding, like permissions-restricted file pickers and ways to hand off "intents" between applications.

- The user is faced with confusing dialogs, and software with seemingly arbitrary limitations.

In the age of shared runtimes, auto-updaters, extension marketplaces, and JIT compilers, managing installed applications is harder than ever.

Edit: the answer above applies only to Windows, because of its baggage. Linux'es are in a much better position, for example, though their solution is still not perfect.

The same way any linux distro does?

Define a separate directory for program installations, that user processes cannot write to. Only program that can do so is the package manager, which other programs can call to install packages. Uninstall removes everything related to a program from this directory.

> In the age of shared runtimes, auto-updaters, extension marketplaces, and JIT compilers, managing installed applications is harder than ever.

The only reason these make things hard is that windows lacks any facility to deal with them. Solutions going forward: Outright ban having your own auto-updater, to auto-update you register your program and where to update it from with the package manager. Shared runtimes are trivial for package managers to handle, it's just a package that many other ones depend on. Extensions can be handled as packages.

I agree with you, now for completeness I should mention that Linux package formats usually allow packagers to provide arbitrary pre- and post- install shell scripts ran as root.

(which means that if you don't trust a provider, not only it's not safe to run the program, but it's also unsafe to install it)
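For illustration, this is roughly what such a hook looks like in a Debian package ("mypkg" and its contents are hypothetical; the build and install commands are left commented out since they require dpkg-deb and root):

```shell
#!/bin/sh
# Skeleton of a .deb with a post-install hook ("mypkg" is hypothetical).
mkdir -p mypkg/DEBIAN

# The control file describes the package...
cat > mypkg/DEBIAN/control <<'EOF'
Package: mypkg
Version: 1.0
Architecture: all
Maintainer: Example <example@example.com>
Description: demo of maintainer scripts
EOF

# ...and postinst is an arbitrary shell script that dpkg runs as root
# right after unpacking the files:
cat > mypkg/DEBIAN/postinst <<'EOF'
#!/bin/sh
set -e
echo "this runs as $(id -un) at install time"
EOF
chmod 755 mypkg/DEBIAN/postinst

# dpkg-deb --build mypkg        # build mypkg.deb
# sudo dpkg -i mypkg.deb        # installing would execute postinst as root
```

There is nothing constraining what goes into postinst, which is exactly the point being made above.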

>if you don't trust a provider, not only it's not safe to run the program, but it's also unsafe to install it

Isn't it same for windows right now? `.msi` and `.exe` can execute arbitrary code right?

The only difference is that you usually trust the repo in Linux, but that’s a pretty significant “only thing,” in the sense that the repo is already the source of your whole system, so it better be trustworthy!

The "elegant" way of distributing 3rd party software for Linux is to ask the user to add your APT/RPM/[...] repo to their system. And most Linux distro maintainers anyway don't vouch for software in the main repos, beyond basic install-ability. The Debian project for example definitely doesn't do in-depth security analysis of every package in the repos: they just check the license, re-package it, and keep an eye on security updates in upstream.

Yes, absolutely.

Right. You should generally never install a proprietary software package provided by the vendor in RPM, DEB, or similar. What keeps the use of those hooks safe is purely social convention and review internal to the Linux distribution, and vendors routinely use those hooks to do unacceptable things.

If you must install proprietary software on your Linux system, either package it yourself or use something like Flatpak or Snap (or even AppImage).

Hopefully in the future vendors will increasingly move to providing well-sandboxed Flatpak packages by default.

The packages are cryptographically signed, you have the option to abort the install of an untrusted package before it does something malicious.

> packages are cryptographically signed

packages are cryptographically signed by the packager; by the way, on Debian you add the key when you install a new repository. The signature tells you "This package has been built by X and has not been tampered with in the meantime", not "X and this package are not malicious, I promise".

> you have the option to abort the install of an untrusted package before it does something malicious

How do you do this in practice?

If I run apt install p or dpkg -i p.deb, the thing is installed. APT asks you for confirmation if it has to install additional dependencies, but that's it.

I have no guarantee, for any package, that I can install it without worrying that something bad will happen during its installation.

Of course you should not install untrusted packages, but still. The same could not be said if the package format didn't have anything to specify arbitrary install scripts.

> The same way any linux distro does?

I'm going to assume you are talking about rpm and deb packages since they are still currently the dominant installation packages on Linux.

> Define a separate directory for program installations, that user processes cannot write to. Only program that can do so is the package manager, which other programs can call to install packages.

Windows does this. Programs are installed in directories under "C:\Program Files", which are only writable with elevated system rights.

> Uninstall removes everything related to a program from this directory.

rpm and debs don't install all the files needed for a program in a single directory. They are scattered all over the file system, and in many of these directories they are commingled with files from other programs. Windows comes closer than Linux in this regard since it does create the directory under "C:\Program Files" which, while it unfortunately doesn't always contain all the required files, usually contains the vast majority.

This is exactly how AppX/MSIX packages work, with C:\Program Files\WindowsApps (by default) being pretty substantially locked down. They even use filesystem/registry virtualization by default to isolate packages even further from each other. They also have solutions for framework packages and extensions though I haven't tried those out and suspect they have annoying practical limitations around edge cases.

Of course, a decade later almost nobody uses those because they botched the rollout by limiting AppX to the Microsoft Store and an entirely new poorly documented and very restrictive set of windows APIs and app frameworks. They've made huge progress on all of those problems with MSIX to the point that it's a reasonably good and easy to use choice for most apps with some neat benefits like updates only downloading the changes between versions. Of course if your app pushes the boundaries of the sandbox or capabilities or runs into a bug it becomes a huge pain.

I don't think MSIX is a good choice for most apps. With a decent-sized user base, you will have a lot of people who run into undiagnosable errors with MSIX or can't use it because they're in locked-down enterprise environments.

I think Affinity Photo's experience with MSIX is instructive; hundreds of negative results on their forum, eventually they had to back down and provide a non-MSIX installer (and at that point do ya really want to maintain 2 separate Windows installers?)

https://forum.affinity.serif.com/index.php?/topic/170529-ext... https://forum.affinity.serif.com/index.php?/search/&q=msix&t...

That was the first option, "provides a framework for installers/uninstallers".

But what would you do with the millions of existing programs, most unmaintained? And what about programs with strong opinions on update schedules, or built-in extension marketplaces?

It's easy to solve this problem if your first step is "replace every program".

If you care about this enough to abandon old software, they built that and called it Windows S and few wanted it.

Windows without backwards compatibility is a dead end because the only reason why Windows exists is backwards compatibility and the existing user base. As an OS it is decades behind all its competitors, with a 30yo filesystem, file locking ridiculousness (which is why uninstallers and updates end up being so complex and require reboots), an antiquated central registry for settings that ends up slowing the system down over time, and a security framework so broken that you need anti-malware software running and inspecting every little thing happening on your system or you're easily compromised (everything is executable by default).

The security situation is so bad at this point that you can't trust any Windows benchmarks anymore. The benchmark suite will run on a "bare" Windows system; probably with updates and Windows Defender disabled and many other system services stopped to maximize performance and prevent background services from slowing everything down. The reality though is that on a regular user desktop all these things and a whole lot more will be enabled, resulting in vastly degraded performance compared to the benchmarks. The end user experience sucks.

Now they're forcing ads down your throat and pestering you at every turn to use more Microsoft software (e.g. trying to get you to use Edge). They've also recently included UI changes in "essential" system updates that can't easily be reverted or undone, breaking people's workflows. It's anti-user insanity and it's all because Microsoft can't actually go back to the drawing board with Windows anymore because the alternatives are just too good.

After using a Linux desktop full-time for a while, going back to Windows feels like going from having modern plumbing to pooping in the woods.

You could provide the framework that well-behaved, maintained programs will use while still allowing the old installers to run.

By the way that's what we have on Linux, some programs come as a shell script that you run to install them. Most Java IDEs for instance.

(which can't be arsed to provide proper packages -- darn, what did I just write? :-))

> The same way any linux distro does?

> Define a separate directory for program installations, that user processes cannot write to.

What about /usr/local/bin? Isn't that specifically for putting non package manager binaries into?

That's more for binaries and scripts manually installed by the administrator because they weren't available in the package manager or are custom.

> How could Windows handle it by itself?

In the same way 'Linux' (in the widest sense of the term, i.e. Linux distributions like Debian) handles this. User data is not touched by the (un)installer, configuration files are checked for changes from the package default and left alone unless explicitly purged. Files which do not belong to any package are left alone as well so that binary compiled with VSCode for a user project will not be touched:

   warning: while removing directory /splurge/blargle/buzz not empty so not removed
This has worked fine for decades and is one of the areas where those Linux distributions were and are ahead of the competition. It works fine because the package manager has clearly delineated responsibilities and does (or rather should) not go outside of those. Do not expect the package manager to clean up your home directory for you, that is not part of its duty.

> In the age of shared runtimes, auto-updaters, extension marketplaces, and JIT compilers, managing installed applications is harder than ever.

Most auto-updaters should be disabled on systems with functioning package management - thanks Firefox, but I'll update my browser through the package manager, as I prefer my executables to be read-only for users.

Some packages - the whole Javascript rats' nest being a good example - move too fast to be usefully packaged by volunteer maintainers so those are relegated to a different area which is not touched by the package manager. Other packages - anything Python fits here - are such a tangled mess of mutually incompatible versioned spaghetti that they are hard to fit inside the idiom of package managers so they get their own treatment - python -m venv ... etc. These are the exceptions to the rule that package management can be made to work well. By keeping those exceptions in areas where the package manager does not go - e.g. your home directory - the two can and do coexist without problems.
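For reference, the Python isolation idiom mentioned above looks like this ("somepackage" is a placeholder name; the install line is commented out since it needs network access):

```shell
#!/bin/sh
# Per-project isolation for Python: everything is installed into a local
# .venv directory, outside the package manager's domain.
python3 -m venv .venv                        # create an isolated environment
. .venv/bin/activate                         # use it for this shell session
python -c 'import sys; print(sys.prefix)'    # now points inside .venv
# pip install somepackage                    # would install into .venv, not the system
```

Deleting the .venv directory removes everything, which is the same "exception kept out of the package manager's area" coexistence described above.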

It's called MSI, and it's been in Windows for 20 years.

The issue is that MSI is very buggy when handling explorer extensions. If you're not careful, when you uninstall it'll prompt you to close explorer.

(I know because I shipped a product that installed via MSI and had an explorer plugin. The installer issues were more complicated than the plugin.)

In this case, the issue is that when explorer loads a plugin, it keeps an open file handle to the dll. This gives the installer two options: Restart explorer.exe, or somehow finish uninstalling when explorer.exe terminates.

The product that I shipped just restarted explorer.exe.

Oh. I thought MSI and WinGet (sorry, AppGet in fact) were designed to solve these problems.

A VSCode extension would be installed and managed by the OS package manager. User created content would be not.

You, you want Microsoft to lose its total control over the VSCode extension "marketplace", don't you?

Really? Do you install Firefox extensions from apt-get?

It's not unusual to do this in the Nix world.

There are a ton of VSCode extensions in Nixpkgs: https://search.nixos.org/packages?channel=23.05&from=0&size=...

You can use them in combination with the vscode-with-extensions function to create a VSCode package that bundles in whatever extensions you declare: https://nixos.wiki/wiki/Visual_Studio_Code

I haven't used Linux in a while, but I do remember seeing browser extensions in the package manager.

Yes, there are a few that can be installed from the Debian packages.

It allows you to install applications from any source, not only the official store.

It allows for a variety of installers to exist with different features for different use cases.

It allows you to install the application in any location you choose.

It allows for portable installations and to run software just copied from other sources.

What is "it"?

Special installers / uninstallers and also the ability to install and run things outside the official OS store.

Many programs can run as a standalone .exe, or just unzip as a folder.

The points you list do not need _special_ installers/uninstallers.

Yes, that is what I mean by my last point: "It allows for portable installations and to run software just copied from other sources." You can think of decompressing from an archive as running a very simple installation program.

If the only installer available was one provided by the OS, how long do you think it would take for that to become the only way to install and run software? These things are being done right now on many platforms in the name of safety, security, and to a lesser extent convenience.

The more phone-like a platform is the fewer ways you have to install and run software on it. So far general purpose computers still allow you to install software in other ways than the built-in method (i.e. just unzip and place in a directory), but it's getting increasingly common to require executables be signed, and things are always moving to be more and more locked down.

Now the use of "Special" installers/uninstallers is from the original comment, I would just refer to them as "regular" installers/uninstallers. I do like the ability and freedom to have an ecosystem of these things, as I don't want the one OS method to be the only way to install applications.

>If the only installer available was one provided by the OS

There's the non-sequitur. OP never said that this is what should happen. It is strange to leap to this assumption while also wanting to define portable programs and archives as 'installers'.

In the context of Windows, 'special' installers means the programs you run to be able to use a different program that don't appear on other OSes.

I did not define portable programs and archive extractors as installers, just suggested the act of decompressing to a directory or copying to a directory would be considered as installing the program.

I guess "special installers/uninstallers"

In principle I have no objection with those options as I've had to use all of them given the nature of the Windows ecosystem.

The trouble is that MS never paid much attention to tracking and cleaning up after installations or after uninstallers have finished. Often this doesn't matter, but when something seriously goes wrong, untangling the mess can be almost impossible; it's often easier to reinstall Windows, and usually much quicker (that's if one has a simple installation).

Unfortunately, my installations aren't simple so I take snapshots at various stages of the installation: stage-1 raw install with all drivers, stage-2 essential utilities, and so on. By stage-4, I have a basic working system with most of my programs. Come the inevitable Windows stuff-up I reinstall from a backup image, it's much quicker than starting from scratch.

Between those major backups, I use the registry backup utility ERUNT, it not only takes registry snapshots on demand but also automatically backs up the registry on a daily basis. This, I'd venture, is the most important utility I've ever put on a Windows computer, I cannot recall how many times it's gotten me out of trouble.

Just several days ago I had a problem reinstalling an update to a corrupted Java JRE/runtime. Nothing I did would make the installer run, as the earlier installation was not fully uninstalled, and the log files etc. weren't any help.

In the end I had to delete the program dir and other Java files I could find, same with registry entries. As expected, this didn't work, as I hadn't found everything.

Knowing the previous version number of Java I did a string search across the OS and found another half dozen or so Java files. Retried the install again and it still failed. I then ran ERUNT which replaced the registry with an earlier pre-Java one and the install now worked. This still meant that some programs that were added later, LibreOffice for example, had to be reinstalled to update the registry.

If I hadn't had ERUNT installed I'd have had to go back to reinstalling an earlier partition backup. And if I'd not had those then I'd have been in real trouble.

That's the short version. Fact is, Windows is an unmitigated mess when it comes to installations. Why can't I force an installer to complete even with faults? Why doesn't Windows remember exactly what happens during an installation so it can be easily undone?


Edit: if you've never used ERUNT and decide to do so, always ensure you shut Windows down and restart it after installing a backup registry before you do anything else—that's in addition to the mandatory reboot required to install the backup.

You may have multiple registry backups and decide the version you've just loaded wasn't the one you want. Loading another without this additional reboot [refresh] will blue-screen the O/S. You'll then have to install the backup manually and that's very messy.

It is, these days. Windows 10 onwards has a native package format called MSIX that somewhat resembles packages on Linux. They're special zips containing an XML file that declares how the software should be integrated into the OS (start menu, commands on the PATH, file associations etc). Windows takes care of installation, update and uninstallation.

The system is great, in theory. In practice adoption has been held back by the fact that it was originally only for UWP apps which almost nobody writes, and also only for the MS Store. These days you can use it for Win32 apps outside the store but then you will hit bugs in Windows. And packages must be signed.

Still, the feature set is pretty great if you can make it work. For example you can get Chrome-style updates where Windows will keep the app fresh in the background even if it's not running. And it will share files on disk between apps if they're the same, avoid downloading them, do delta updates and more. It also tracks all the files your app writes to disk outside of the user's home directory so they can be cleanly uninstalled, without needing any custom uninstaller logic.

One interesting aspect of the format is that because it's a "special" (read: weird) kind of zip, you can make them on non-Windows platforms. Not using any normal zip tool of course, oh no, that would be too easy. You can only extract them using normal zip tools. But if you write your own zip library you can create them.

A couple of years ago I sat down to write a tool that would let anyone ship apps to Win/Mac/Linux in one command from whatever OS they liked, no harder than copying a website to a server. I learned about MSIX and decided to make this package format. It took us a while to work around all the weird bugs in Windows that only show up on some machines and not others for no explicable reason, but it's stable now and it works pretty well. For example you can take some HTML and JS files, write a 5 line config file pointing at those files, run one command and now you have a download page pointing to fully signed (or self signed) self-updating Windows, Mac and Linux Electron app. Or a JVM app. Or a Flutter app. Or any kind of app, really! Also IT departments love it because, well, it's a real package format and not an installer.

Writing more about this tech has been on my todo list for a while, but I have now published something about the delta update scheme it uses which is based on block maps, it's somewhat unusual (a bit Flatpak like):


The tool is free to download, and free for open source projects if anyone is wanting to ship stuff to Windows without installers:


> For example you can get Chrome-style updates where Windows will keep the app fresh in the background even if it's not running

Considering the ability to update itself is a requirement of the Cyber Resilience Act in the EU, I foresee a big uptick in usage (and app store usage, of course).

that's a cool project, will definitely try it out later

Besides the "special uninstaller" thing. One of the things I hate the most with Windows filesystem management compared to Unix-like OSes.

On Windows, opening a file locks it. So you can't delete a program that is running, you will get an error. It means of course that an executable can't delete itself without resorting to ugly tricks like the one mentioned in the article. That's also why you get all these annoying "in use" popups.

On Unix, files (more precisely: directory entries) are just reference-counted pointers to the actual data (inode on Linux), removing a file is never a problem: remove the pointer, decrement the reference counter. If the file wasn't in use or referenced from elsewhere, the counter will go to zero and the actual data will be deleted. If the file is in use, for example because it is an executable that is running, the file will disappear from the directory, but the data will only be deleted when it stops being in use, in this case, when the executable process terminates. So you can write your uninstaller in the most straightforward way possible and it will do as expected.
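That behaviour is easy to demonstrate with a script that removes its own file and keeps running (the filename is arbitrary):

```shell
#!/bin/sh
# A script can unlink its own file and keep executing: the shell holds an
# open file descriptor, so the inode survives until the process exits.
cat > selfdelete.sh <<'EOF'
#!/bin/sh
rm -- "$0"                          # remove our own directory entry
if [ -e "$0" ]; then
    echo "file still present"
else
    echo "file gone, process still running"
fi
EOF
chmod +x selfdelete.sh
./selfdelete.sh
```

On Windows the equivalent `del` of a running executable fails with a sharing violation, hence the tricks described in the article.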

I feel like this is a stupid question, but aren't executables and their libraries loaded into RAM? If yes, then why can't a program just delete itself (from disk)?

I don't know the details but I think executable files are mapped into memory, and needed sections are loaded on demand. In case the system is low on RAM, little used sections can be evicted, to be reloaded the next time they are needed. This requires the file to be present on disk.

One thing I like about Linux package managers is that you can query any file to see which package owns it. How does Windows not track this?
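The query is a one-liner on every major family; this sketch just picks whichever tool the current system has (output format differs per distro):

```shell
#!/bin/sh
# Ask the package manager which package owns a given file.
# Each family has its own query command:
#   dpkg -S /bin/ls      # Debian/Ubuntu
#   rpm -qf /bin/ls      # Fedora/RHEL/openSUSE
#   pacman -Qo /bin/ls   # Arch
# Run whichever one this system has:
if command -v dpkg >/dev/null 2>&1; then
    dpkg -S /bin/ls
elif command -v rpm >/dev/null 2>&1; then
    rpm -qf /bin/ls
elif command -v pacman >/dev/null 2>&1; then
    pacman -Qo /bin/ls
else
    echo "no known package manager found"
fi
```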

Except they all leave files everywhere in ~, ~/.cache, ~/.config, ~/.whatevertheyfeellike

The ~/.whatevertheyfeellike is an antipattern (that is annoying) but the others are well defined in the XDG Base Directory spec[0].

Personally I appreciate knowing where the config/cache for each application is. (Though it does annoy me when programs don't follow this as in your third example)

[0] https://specifications.freedesktop.org/basedir-spec/basedir-...
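As a sketch, the per-user defaults from that spec resolve like this ("myapp" is a hypothetical application name):

```shell
#!/bin/sh
# XDG Base Directory defaults: use $XDG_*_HOME when set, otherwise
# fall back to the spec's fixed location under $HOME.
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/myapp"
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/myapp"
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/myapp"
state_dir="${XDG_STATE_HOME:-$HOME/.local/state}/myapp"

echo "config: $config_dir"
echo "cache:  $cache_dir"
echo "data:   $data_dir"
echo "state:  $state_dir"
```

Programs that follow this at least keep their droppings in predictable places, even if they never clean them up.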

Why does the XDG spec have authority over software?

It usually doesn't, and it's mostly a good standards recommendation that even the most GPL of GPL codebases doesn't always follow (looking at you, emacs).

Emacs has respected $XDG_CONFIG_HOME for a while now. There are worse offenders (e.g. not likely to see the end of .mozilla any time soon).

GNU Emacs was created in 1984. The XDG Base Directory spec was started around 2003..

Also, Emacs will respect files being placed in XDG directories, it just doesn't put them there...

Software specifications are usually adopted by convention and implemented to minimize surprise and make things interoperable. They are not authorities and cannot make anyone do anything. One of the most common software failure modes is to implement a specification too tightly or in a way that nobody wants although the reverse is a problem as well.

They don't. XDG specifications are recommendations. Their only power is that your software will integrate poorly with other software (specially desktop software) if you ignore their guidelines.

Those files are user data, not part of the software package.

I would disagree, files that the user cannot edit or should not edit should not be going into their home directory. Things like cache files should go into a system wide cache directory instead.

Cache files might contain user's sensitive data. Makes sense to keep in them in the user's home directory in those cases.

File permissions?

There's no other path that the user is guaranteed to have write permissions to (except maybe /tmp, I guess).

Isn't that very anti-linux though, to have a directory owned by root but populated with subfolders owned by other users? /home is the only exception I can think of that does this.

Anti-linux I don't know, but it was not uncommon in unices to have home directories in /usr/home.

And there is no written or unwritten rule about that. In fact, /home is a subdirectory of / which is owned by root.

True, but /usr/home is no longer a common place to store home directories. It used to be, particularly in Bell Labs Unix. (Does FreeBSD still do this?)

The Linux Foundation’s File Hierarchy Standard puts user homes in /home, but it’s by no means mandatory.

/home being the *nix home folder directory isn’t written in stone, but plenty of software expects it. Of course you shouldn’t hard code things like that, but that has never stopped anyone from doing it. (Not that we should reward that with de facto standards necessarily.)

I understand the various reasons why a root file system hierarchy isn’t part of the Single UNIX Specification, but it might have been nice.


And /run/user

also mail and cron

If I uninstall ssh I still want to have my authorized hosts. If I uninstall some Firefox version I want to keep my profiles. XDG defines a thumbnailing hierarchy followed by multiple libraries; uninstalling any of those shouldn't clear thumbnail caches.

Persistent user-specific state needs to live in a persistent user-specific location. You could choose not to use the concept of a home directory, but you would be doomed to reinvent it.

Why would you want that?

If you have separate partitions, would you really want user data to go to the system partition? Or a third partition?

Do you find having more places that user programs can write a benefit?

I would favor a /var/user/something directory.

The fact that nobody does that is pretty much a consequence of the difficulty of coordinating multiple projects that do not have a common authority, not because it is a bad idea.

Again, what do you prefer about that?

Maybe the reason no one does that is simply that no one shares your preference.

I'd want a clear separation between actual user data files / documents and stuff like caches, for several reasons:

- easier to cleanup/wipe without risking deleting works/personnal files

- backup solutions don't have to have a ton of entries in an ignore/exclude file

- same as above for syncing software

- tier storage separation possibility

- disk space allocation separation depending on data vs volatile stuff

Should that count towards user disk quota?

I agree cache files should not go into the home directory; however, I don't agree that they aren't user data or that they are part of the software installation.

That is not part of the software itself so it is still correctly installed/uninstalled.

Now I believe all software should have a manpage, dialog and a cli argument that describes where all the files[1] generated by default go but that is another subject.

[1] cache, config and even default save

That's a feature so that users can keep configuration files and even move them across systems.

Try opening C:\Users\%USERNAME%\Documents\My Games

MSIX packaged apps do support this, Windows redirects file writes outside of home dirs and other user locations to a package-specific directory that's overlayed back onto the system so the app thinks it's writing to wherever, but it's actually a package-private location.

> you can query any file to see which package owns it

Presumably you mean something like using dpkg/apt for a Debian-style system?

I think that only works if a file is actually installed from within the framework. As soon as you've installed a file via npm, flatpak, pip, snap, specialist plug-in, standalone binary, that ancient program you had to install by hand, or one of the other squillions of ways of getting an executable program, you're out of luck and have to figure it out manually.

Ok, I see what you're saying here. Still, Linux's way is better: I'd rather have my system cluttered with useless files from deleted programs than be exploited because of something that was solved decades ago.

> Why do Windows programs need special installers/uninstallers?

This is supposed to happen using MSI-based installers. It's a windows component.

> Why isn't this handled by Windows itself?

Now, here's where things get tricky.

In the article, the issue is an explorer plugin. MSI is notoriously buggy with installing and uninstalling explorer plugins. If you don't jump through hoops, your installer will have a bug where it prompts the user to close Explorer.exe.

I know because I shipped a product with an explorer plugin. The installer was always a thorn in our side, and the workarounds, etc., that we had to do to install / uninstall / delete our plugin were more complicated than the plugin itself.

When the subject is Windows, and the question includes a “why,” the answer is always “for historical reasons.”

It's hardly specific to Windows. All the major Linux distros have excellent package management systems, and yet many, many packages and applications choose to ignore these in favour of third-party solutions, scripts, or even curl https://not-malware.trustme.lol | sudo bash style hodgepodge.

I had never heard of Detours before, but I guess it isn’t any different than a good old fashioned LD_PRELOAD

it's a little more general, I think, since one common use case for it is to use it on your own process in order to intercept calls to stdlib/OS code from libraries you don't control.

For example, in the bad old days I used detours to virtualize the windows registry so that I could do "fake installs" of COM components inside of a VB6 app, allowing it to run without an install and without administrator permissions. This worked by detouring all the Win32 registry APIs, which was sufficient to intercept registry accesses performed by the COM infrastructure.

> it's a little more general, I think, since one common use case for it is to use it on your own process in order to intercept calls to stdlib/OS code from libraries you don't control.

This capability is intrinsic to how ELF linking works. The main application or even any library can interpose a libc function just by defining and exporting a function with the same name[1], and that definition will be preferentially linked in both the main application and all subsequently loaded dynamic libraries and modules. Your definition can then use dlsym(RTLD_NEXT, "foo") to obtain a function pointer to the next definition, which would normally be libc itself but may be from another library. A running application could actually have several implementations of a function, all proxying calls onward until the terminal (usually libc) implementation.

Basically, the way ELF linking works by default is that the first definition loaded is the preferred global symbol used to satisfy any symbol dependency with that name. It follows that there's normally a singular global symbol table. Though there are features and extensions that can be explicitly used to get different behaviors.

There's nothing magical about LD_PRELOAD within the context of ELF linking. LD_PRELOAD support in the linker (which is the first bit of code the kernel loads on exec(2)) is very simple; all the linker does is load the specified libraries first, even before the main application, so symbols exported therein become the initial and therefore default definition for satisfying subsequent symbol dependencies, including in the main application binary, and even if the main application binary also defines and exports those symbols.

All of this is basically the exact opposite behavior of how PE linking works on Windows, for better and worse--depending on your disposition and problems at hand.

Also note that all of this is different than so-called "weak" symbols, which is a mechanism for achieving one of the same behaviors--overriding another definition--when statically linking. Otherwise, when statically linking, multiple definitions are either an error or it's difficult (i.e. confusing, especially in complex builds) to control when and where one definition is chosen over another.

[1] Though main application symbols aren't usually exported by default, so you need to explicitly mark a definition for export or build the entire main binary with a compiler flag like `-rdynamic`, which is the main binary analog to the `-shared` flag used for building shared libraries. The Python and Perl interpreters, for example, are built with -rdynamic as the interpreter binary itself exports all the implementation symbols required by binary modules, rather than defining them in a separate shared library against which modules explicitly link themselves against at compile time. (This is also why when building Perl, Python, and similar language modules you have to tell the compile-time linker to ignore unresolved symbols.)

For those of us who don't Windows, can you explain what a detour is?

You essentially replace a function with your own. The project is at https://github.com/microsoft/Detours.

I’ve created a PowerShell module that wraps this library to make it easier to hook functions on the fly for testing https://github.com/jborean93/PSDetour. For example I used it to capture TLS session data for decryption https://gist.github.com/jborean93/6c1f1b3130f2675f1618da5663... as well as create an strace like functionality for various Win32 APIs (still expanding as I find more use cases) https://github.com/jborean93/PSDetour-Hooks

> as well as create an strace like functionality for various Win32 APIs

Yes please. Thank you for this

Detours is a library for instrumenting arbitrary Win32 functions on Windows-compatible processors. Detours intercepts Win32 functions by rewriting the in-memory code for target functions. The Detours package also contains utilities to attach arbitrary DLLs and data segments (called payloads) to any Win32 binary.

Detours preserves the un-instrumented target function (callable through a trampoline) as a subroutine for use by the instrumentation. Our trampoline design enables a large class of innovative extensions to existing binary software.
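The intercept-and-trampoline pattern described above can be sketched by analogy in plain JavaScript. Detours does this by rewriting machine code in memory; a dynamic language only needs to swap a reference, but the shape is the same: replace the target with an instrumented wrapper, keep the original callable (the "trampoline") so the wrapper can fall through to it.

```javascript
const api = {
  // stand-in for a target function we don't control
  deleteFile(name) { return "deleted " + name; },
};

const calls = [];
const trampoline = api.deleteFile;      // preserve the un-instrumented original
api.deleteFile = function (name) {      // the "detour"
  calls.push(name);                     // instrumentation: record the call
  return trampoline.call(this, name);   // fall through to the original
};

console.log(api.deleteFile("a.txt"));   // behaves exactly as before
console.log(calls);                     // but the call was observed
```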


And my more sophisticated library, https://github.com/stevemk14ebr/PolyHook_2_0

Interesting. Has anyone done the same thing on Linux?

I use and recommend subhook[0].

[0] https://github.com/Zeex/subhook

Imagine if Windows just allowed DeleteFile() even if the file was open, like unlink() on almost any other OS...

It does. But this issue arises because of file locks. Running an executable holds a lock that prevents deletions (but not renames).

Many OSes have file locks, though they often don't use them as liberally as Windows.

Imagine all the "WinBLOWZ is bullshit, I deleted 200 gigs of shit and my C: still has no more free space" posts if Windows started doing soft deletes

Dropping WScript files is a good way to get profiled as malware too, and there's no way to code sign one or verify its integrity before executing.

If your program creates the script and executes it, is verification necessary? This would be like verifying your 1st party scripts in a webpage that you wrote. It won't really hurt anything, but I'm not sure there's a point.

Reminds me of a simple app I made for Windows 95/98 to add every directory, including System32, to the uninstaller list. No AV, neither Norton nor McAfee, saw that timebomb coming. Good times.

Well, maybe if Windows applications were packaged similarly to macOS (one of the few things I like about it), with the application data and the user data for the application in two separate folders, then it wouldn't be such an issue.

Most Windows apps sit under Program Files, some sit directly on the drive root. But they all spray configuration/user data files all over the damn place, requiring unique uninstallers.

MS, build app install/uninstall into Windows directly...

This tracks. I've flagged the Nvidia uninstaller after hours of work, because it injected code and exhibited behavior consistent with malware.

And today I learned that Windows supports running JavaScript as a shell script. Huh.

Malware delivered as an email with a link to a zip file containing a .js file is one of the most common methods of delivery, right behind word macros. The "map the .js extension to notepad.exe" is a common security trick with a measurable, immediate drop in malware in large orgs. You can deploy it via GPO or InTune.

Personal promotion, I built this as a better alternative:


Note that the built-in .js parser has basically never been updated; if you're writing for this, you're writing like you're targeting IE5.

> It creates the file "example.com" in the same directory containing the EICAR test string. This should set off appropriate alarms

Huh, neat!

It is very common for malware to contain JavaScript payloads that try to obfuscate themselves like this:


The seemingly_random_code decompresses/decodes whatever is in the seemingly_random_string and hands over control to it. Interestingly the decoded code is another version of the same with different code and string. This goes on for ~100 layers deep then at the end it just downloads and executes some file from the net.
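A toy model of those layers: real samples wrap a fresh decoder plus an encoded blob at every layer and eval() each stage; here each "layer" is just a size-preserving byte shift so that 100 layers stay cheap to build, but the unwrap loop has the same shape an analyzer faces.

```javascript
// Apply a byte-wise Caesar shift of d to every character of s.
const shift = (s, d) =>
  s.split("").map(c => String.fromCharCode((c.charCodeAt(0) + d + 256) % 256)).join("");

const payload = "download and run stage2";  // stand-in for the final payload
let blob = payload;
for (let i = 0; i < 100; i++) blob = shift(blob, 1);  // bury it 100 layers deep

// What the malware (or a scanner) must do to reach the payload; a scanner
// that gives up after, say, 10 layers never sees it.
let layers = 0;
while (blob !== payload) { blob = shift(blob, -1); layers++; }
console.log(layers); // 100
```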

It’s amazing how much we haven’t moved on since iloveyou.txt.vbs

> This goes on for ~100 layers deep then at the end it just downloads and executes some file from the net.

I understand doing one layer. I guess I could maybe see two layers. But why would it bother with 100 layers? Either the antivirus or reverse-engineering tool can grab the final product or it can't.

Typically scanning tools have some limit to how much they probe complex formats, to avoid stalling the entire system while they're scanning. It's very much conceivable that a scan tool will try to resolve code like this for 10 layers, and then if the result is not found to be malicious, consider it safe.

This is similar to how compilers will often have recursion limits for things like generics, though in that case it's easier to reject the program if the recursion limit is reached.

Because of potential false positives, and the speed at which files need to be analyzed at runtime (suspend the process executing the file, then analyze it), files which take a long time to unpack and identify can end up being allowed to run. They get offloaded to a sandbox or other systems to be analyzed while the file is already executing. The sandboxes are too slow to return a verdict before the main logic of the file runs. If those dynamic systems cannot identify a file, an engineer will need to look at it manually.

In very strict environments or on certain systems it might be practical to block all unknown files, but this is uncommon for user systems, where users are actively using JavaScript or macro documents etc. (developers, HR, finance, and so on). The FP rates are too high and productivity can take a big hit. If all users do 20% less work that's a big loss in revenue (the productivity hit can be even more severe!). Perhaps this impact / loss of revenue ends up being bigger than a malware being executed, depending on the rest of the security posture/measures.

Technically it's possible to identify (nearly?) all malware by tracking p-states/symbolic execution/very clever sandboxing etc., but this simply takes much too long. Especially if the malware authors are aware of sandboxing techniques and symbolic execution, as they can make those processes take extra long or sometimes even evade them entirely with further techniques.

I wish it _was_ possible to do all of the cleverness that malware researchers have invented to detect things, but unfortunately, in practice this cannot happen on like 90+% of environments.

If you run like a DNS server or such things, it's possible to do it as such a system would not be expected (ever?) to have unknown files. (gotta test each update and analyze new versions before deploying to prod). As you can imagine, this is also kind of a bummer process but imho for such 'static' systems its worth it.

With enough conditional evals() with dynamic inputs you can make the search space unsearchable big.

The search space is linear as the algorithm is linear.

This stuff is mostly done to make static analysis harder.

Been using this for years. Mostly really useful. Sometimes tricky to get right since the available APIs are semi-well documented and it's JScript, which is some sort of old Internet Explorer-ish version of JavaScript.

By the way, there are also HTAs, which are Microsoft HTML Applications. You can create a simple double-clickable GUI with these using only HTML and JScript.

Pretty crazy how Microsoft basically invented the Electron app with HTAs all the way back in 1999. Of course browsers weren't as capable as they are today, but "I just want an HTML+CSS GUI" had been a solved problem for over ten years when Electron first came out.

Yes, and XULRunner allowed this too, using Gecko, Firefox's web engine, to render HTML-like markup specifically designed to build native-like GUIs.

Apparently XULRunner was first released in 2006, but Thunderbird, which uses (used?) the same technology, was released as early as 2003, and maybe this existed in the Mozilla Suite even before.

Thunderbird never quite used XULRunner, I think; they always built their own binary (though at some point quite a lot of the shared stuff moved into the XRE stuff). Think of it as they had a fork of Firefox (much like Firefox had a stripped down fork of the SeaMonkey stuff).

Also, I think one of the Start Menus (might have been XP‽) was kind of HTA-ish? Not sure about that part, though.

> they always built their own binary

> Think of it as they had a fork of Firefox

Yep indeed, you are right.

Notable projects using actual XULRunner included Songbird (a music player)[1] and BlueGriffon[2], a WYSIWYG HTML editor (a successor of Nvu and KompoZer, themselves succeeding Netscape Composer). Both released after 2006 indeed.

I liked XUL, and I strongly believe Mozilla could have dominated the market taken by Electron had they pushed XULRunner more, and perhaps transitioned it to pure HTML, like they did with Firefox's core, because that's what people know and because XUL was a maintenance burden. I think XUL tags made more sense than HTML for building UIs, though, and thanks to XUL, Gecko has had a CSS flex-like mechanism for a long time, by the way.

[1] https://en.wikipedia.org/wiki/Songbird_%28software%29

[2] https://en.wikipedia.org/wiki/BlueGriffon

There was an experiment back in the hazy past around that time called Entity that did something similar. It was never complete enough to be a competitor to XULRunner, but it was fascinating for two reasons:

1) You could write event handlers in multiple languages, including C. If you wrote them in C, it spawned gcc and compiled it into a library, and dynamically loaded it... The overall idea of a polyglot runtime like that was fun.

2) #1 is only really weird because this could be done at runtime. One of the demo apps was an editor for the GUI itself, where you could add buttons to the editor, then write that event handler in C, and have it compiled and loaded into the editor itself...

It was a fascinating starting point, though full of heavy duty foot guns, and I'm still sad nobody took it further.

> The overall idea of a polyglot runtime like that was fun.

Active Scripting, which powers scripts in both WSH and old-school IE including HTAs, is polyglot and extensible. It’s why Active{Perl,Python,Tcl} are called that—the original headline feature (IIUC) was that they integrated with it. It’s also why you could write VBS in IE: IE just passed the text of the script along with the language attribute to AS and let it sort things out.

Nobody ever did a C interpreter, though, I think—perhaps because you basically have to speak COM from Active Scripting, and while speaking COM from C is certainly possible it’s nobody’s idea of fun. (An ObjC-like preprocessor/superset could definitely be made and I’ve heard that Microsoft had even entertained the idea at the dawn of time, but instead they went with C++, and I haven’t been able to find any traces of that project.)

That’s not to say AS is perfect or even good—the impossibility of caching DISPIDs[1], in particular, seems like a design-sinking goof. And the AS boundary was also why DOM manipulation in IE was so slow.

[1] https://ericlippert.com/2003/09/16/why-do-the-script-engines...

the best epub reader for desktop I've ever encountered, was epubreader (pre-WebExtension version), I used to launch it as a standalone app with XULRunner.

Well, you can use IE 9 in HTAs - that browser is plenty capable. :) Been using this as a Windows-only Electron alternative for years.

For the curious: Here's a completely unfinished guide to how you might start developing such an application: https://marksweb.site/hta/ From HTAs, you have access to the file system, the network, the registry, the shell - everything. It might be a bit different than normal web dev, but it's not too bad either.

Wow, that's so cool! I played around with making HTAs as a kid and never thought those could be that powerful. (I quickly moved on to topics more exciting to a teenage hacker, like making WinForms apps with some PHP RAD IDE.)

Wondering what would it take to port mshta (with all the ActiveX goodies) to other platforms. Maybe it's a little bit late for that, but sounds like it might be a fun project to me.

You're brave, putting "ActiveX" and "fun" in the same sentence.

Wine Gecko supports ActiveX, supposedly, so if someone implements all the common ActiveX components, that could be a cross-platform method of running HTAs outside of Windows.

That said, I'm afraid the Electron API is the closest thing we have to a cross platform HTML application these days. On Manjaro, several packages are already implemented by installing Electron next to the application specific code, so that would be the closest thing to a modern HTA alternative that I know of.

PWAs work fine if you don't need integration with the system itself other than file prompts, for chat apps for example. They're not really alternatives to HTAs to be honest.

It should be noted that HTAs are a common way to infect computers (because they're executables that aren't usually recognised as such) and they're disabled in many security conscious environments.

To be honest, in my ideal world, mshta, Electron and the like would be discontinued and, instead, there'd be a cross-desktop-platform HTML/CSS/JS app-runtime (_not a browser!_). This runtime should support a sensible, large subset of modern Web APIs plus a set of cross-OS and OS-specific APIs so it's easy to work with for developers. To be easy to use for users, it should be installed by default on all major consumer-facing OSes. So yeah, it's probably not gonna happen anytime soon...

how do you feel about PWA?


This feature has existed for more than 25 years.

My concern is more that Raymond Chen suggests using it is still the recommended way. So much malware came through WScript.

Scripting is normal functionality for an OS to support. I don't know why people pretend JScript/WScript are evil but Bash is fine.

Well, he did warn you it would be indistinguishable from malware…

Yes, the same way one could write VBS (Visual Basic Script).

I think Windows 98 already had this ability. Possibly Windows 95 as well. It's a variant of the language called JScript, which is what was used in old versions of IE too.

It was around Windows 98 that Windows Scripting Host became prominent.

WSH btw allowed you to run any language you had an interpreter for - it had to support the necessary COM interfaces (and, to be truly usable, allow you to call COM objects) and register its interpreter class with the ActiveScripting (WSH internal) engine.

Then you could use them not just for desktop automation, but also for scripts inside Internet Explorer (classic IE essentially used WSH engines to implement scripting, IIRC).

I've seen WSH (including HTAs) used with Perl, Python, Tcl, Rexx...so long as you install the interpreter with compatible COM service, you could use it.

It's technically JScript.

As a sibling mentioned, it's 'JScript', not JavaScript: the infamous Microsoft EEE (the second E, "extend"). It has been there for decades.

What could possibly go wrong?

And it's even funnier that the solution the author gives is "hey execute this javascript code that uninstalls a program and deletes itself afterwards"

like, really? can't you write that in C? I don't think most Win32 apps use JavaScript for their installers.

> can't you write [a self-deleting executable] in C?

The point of the exercise is that, on Windows, you can’t, because Windows won’t let anyone delete executables that are currently in use (try it, you won’t be able to delete one either). Upgrading shared DLLs in the face of this fact is why installers for Windows programs often have to have you reboot the system (and in more civilized times asked you to close other programs before installation to reduce the probability of hitting a locked DLL). It’s also why there’s a registry key[1] containing a list of rename and delete actions to be performed on next reboot (usually accessed via the MOVEFILE_DELAY_UNTIL_REBOOT flag to MoveFileEx).

You can’t (straightforwardly[2]) make a self-deleting batch script, either, because the command interpreter parses a command at a time and so wants the batch file to exist. The Windows Scripting Host, on the other hand, will parse the whole file at once, close it, and then forget about it, so you can write self-deleting WSH scripts.

The workaround used by the uninstaller under discussion is instead for the executable to inject some code into the Windows Explorer (on the assumption that it’s always running and the user has to have access permissions for it) that accomplishes the deletion through return-oriented programming, so that the stack it’s executing from can then disappear into the wind (apparently? I’m not seeing how they plan to clean that up).

On a POSIX system you are explicitly allowed to delete any open file—including an executing one—making it languish in a kind of system-managed limbo (and take up disk space, invisibly) until it’s closed. The tradeoff is then that it’s impossible to ensure you’ve opened the same file as somebody else when all you have is its name. (I think you can at least check for success, provided you also have the device and inode numbers for it.)

[1] https://superuser.com/questions/58479/is-there-a-registry-ke...

[2] https://stackoverflow.com/questions/20329355/how-to-make-a-b...

I wonder why Raymond Chen suggests a WSH solution. Isn't PowerShell the official scripting language for Windows nowadays?

PowerShell has weird restrictions where it'll refuse to run scripts unless they're signed and stuff.

If the sysadmin chooses to, otherwise PowerShell can be run arbitrarily

The key is that unsigned scripts are opt-in, not opt-out. Chen is not going to suggest a solution that requires all users of the software to configure their computer to be less secure.

It's not really a security measure in that sense. It's a "safety feature" that prevents accidentally running such a script. Anything can trivially disable the protection using a bat script (or anything else) to bootstrap.

E.g. `powershell.exe -ExecutionPolicy Unrestricted`

I still long for the approach much software used on AmigaOS - the app is a folder, the folder has the main executable and any assets it needs (libraries, images, etc.) and documentation and... that's it.

Install? Copy the directory to where you like. Uninstall? Delete the directory.

And if you wish you could keep any files used/generated with such an app in the same folder, making it 100% self-contained.

I remember being rather grossed out when I learnt Windows has "a registry" (that was a long time ago). "Why would you have a global registry? Whatever preferences a piece of software has they should live where the exe is".

(and yes, I am aware AmigaOS had an installer and dir structure not that unlike of Unix, with `sys:`, `devs:` and so on)

To be fair, Windows applications can be designed to be installable this way: a single executable, with everything it needs sitting next to it in the folder. Even better, a single executable with no other dependent files at all! Lots of little utilities used to be distributed this way. But many developers deliberately choose to structure their monster such that it needs to spread its tentacles all over the filesystem for it to work.

And for legacy/backward compatibility reasons, once MS allowed this behavior to go unchecked, there was no way to put the genie back in the bottle and stop it, without giving up backward compatibility. It didn't help that Microsoft software tended to be the "tentacle" kind as well.

It sounds great but there are simple use cases where the "portable" app isn't enough. For example, if you want multiple users to be able to use the program and have their own settings, you need something to be saved to the user folder. Or, if you want any basic interaction with the system (run on startup, run from a browser address, etc), you need to start messing with the registry.

So in theory apps could be distributed as portable .exes, but in practice Windows doesn't offer ways of interacting with the rest of the system that are that nice.

I still love most aspects of the Amiga user experience, but a lot of Amiga applications would need libraries installed to Libs: and deleting the application's "drawer" would leave those libraries behind. (Having said that, by default libs: is assigned to sys:libs but you could assign extra targets, so that libraries would be sought from application-specific directories.)

Also, it suffers from the same problem as Windows here, in that you can't delete a file or directory which is currently open. The executable itself wouldn't be open after launch is complete (with the possible exception of overlay executables, but they were pretty rare) but the directory would be locked for as long as it was the program's current directory. If a subdirectory with app-specific libraries was assigned to Libs: that would also prevent deletion.

This is how a lot of apps on MacOS still work.

Sort of. They still leave garbage behind in ~/Library though.

Does that means the corollary: "Any sufficiently advanced malware is indistinguishable from an uninstaller" would be true as well?

I mean, can you write a simulation of an uninstaller to create havoc on a target's system and still remain in a "the good guy, the OS is at fault" type of situation when you write malware?

I've heard this before, about cryptolockers. It's hard for the OS to know if you're encrypting all of your files on purpose, because you might actually want to do that.

I had an old hacky community program basically ruin my Windows install, so I agree. It was a BLP viewer that automatically added previews in Windows Explorer, but if you removed a file while previewing a BLP it'd crash Explorer and all your open tabs would close. Really annoying.

I've seen Malwarebytes flag uninstallers a few times.

Any time I see a Microsoft link with a cheeky title, I assume it’s a great Raymond Chen deep dive. Haven’t been wrong yet!

As an aside every time I come across a Raymond Chen article I remember this post from Joel Spolsky - https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...

I remember very distinctly this quote about him:

> The only person in the world who leapt to my defense was, of course, Raymond Chen, who is, by the way, the best programmer in the world, so that has to say something, right?

So in my mind I've made the connection that Raymond Chen = best programmer in the world since then haha.

This is an interesting article, because it's a product of its time: modern languages solve a lot of these exact problems. I think it's a resounding success that they correctly identified a genuine problem people used to struggle with (safe types, exceptions) and made the solution standard, correct, and ergonomic.

Hah, I had the same experience. Saw microsoft.com and thought "it's gonna be a Raymond, I can feel it"

It is a very provocative title. I guessed Raymond Chen as well. Of course he delivers an interesting deep dive behind the title.

You can't rely on JScript being present unfortunately. It can be disabled.

It probably should be disabled on most machines. The last time I heard about it was @swiftonsecurity complaining about it being an easily overlooked malware vector.

I'd be surprised if this capability is only available from jscript though. (and sad, I don't think jscript has been updated in years)

Can't spell unfortunately without fortunately.

What can you rely on then?

Uhm, for uninstallers? How about Windows Installer?

If you mean in other contexts... I think the point is you're not intended to be able to do this? Outside of uninstallers, running code that only exists in RAM is... the type of thing malware typically wants to do more than anything else.

But in terms of what's physically possible, I suppose there's the command prompt, PowerShell, and scheduled tasks? I'm not sure if all of those can be disabled.

Edit: I forgot about this, but there's also the official solution of MOVEFILE_DELAY_UNTIL_REBOOT. But (as with scheduled tasks) the delay can cause problems: https://marc.durdin.net/2011/09/why-you-should-not-use-movef...
