But nowadays, lots of applications seem to think it's OK to put normal files in my home folder without asking first. This is just outright barbarism. In particular, Ubuntu now seems like it's all-in on snaps, and if you use snaps, they create a non-hidden "snap" folder in the user's home folder. Madness! And when people complain, the devs have the gall to suggest that this is all fine:
I like the idea of snap/flatpak in the abstract, but until they get their act together, and stop creating non-hidden files in my home folder without asking first, I am never going to use snaps.
And of course, all this is on top of the new fashion for the OS to "helpfully" create Documents, Music, Video, etc folders in my home folder, and set the XDG environment variables to point to them. Noooooope.
But at least in that case the user can change the environment variables and delete the stupid folders. No such luck with ~/snap.
I have huge amounts of cache files in both .cache and .config. It's supposed to make my life easier by letting me back up .config and dump everything else, but in reality that doesn't happen.
Backing up .config for just the user-defined configs is a massive PITA. You have to add so many gitignore exceptions, and I'm not even talking about the gigabytes of data all the Chromium derivatives put into that folder.
Yeah, but Chrome devs half-assing the trivial XDG spec doesn't mean that XDG sucks; it's just not implemented properly.
Browsers tend to work fine. Some apps just fail badly, like rofi (command history stored there, oof).
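For reference, the base-dir lookup the XDG spec asks for is tiny, which is what makes the half-assing so annoying. A minimal sketch (the fallbacks are the spec's documented defaults; helper name is mine):

```python
import os

def xdg_dir(var, fallback):
    """Resolve an XDG base directory: use the environment variable if it
    is set to an absolute path, otherwise the spec's documented default."""
    value = os.environ.get(var, "")
    if value and os.path.isabs(value):
        return value
    return os.path.expanduser(fallback)

config_home = xdg_dir("XDG_CONFIG_HOME", "~/.config")
cache_home = xdg_dir("XDG_CACHE_HOME", "~/.cache")
data_home = xdg_dir("XDG_DATA_HOME", "~/.local/share")
```

That's the whole contract: config in one place, cache in another, data in a third, with the user able to redirect each via one environment variable.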
For what it's worth, I also tmpfs+asd .nvm, .npm, and */node_modules without any major headaches. I did have to fix a couple of minor pathing dramas, but nothing too drastic.
It also has an option to ignore entries listed in your .gitignore.
Here you are referring to separating the config and data?
Is this actually a problem? I can see how, for things like games, it would be valuable to have a special location for things like saved games, like how you can move the Minecraft roaming save from one computer to another; but I never understood the net gain from splitting application files into config/data.
Other "new" software that fails to follow best practices by putting top-level files into ~/, just on my laptop: npm, Ansible, pylint, Docker, Steam, rustup+cargo, VS Code, the IDEA IDE, Zoom, and the AWS CLI. These applications have no excuse.
Also I agree on the Documents, Music, Video, etc. I hate upper case letters in my folders, so when I do a reinstall I delete those and restore my backups from the lower-case versions.
The Arch wiki has more instructional docs:
I just looked at my Win10 profile dir, and it has 36 dot-prefixed folders, and 7 more files on top level. Offenders include .gitconfig, .rustup, and .vscode, just to name a few.
15 year Mac user (re)exploring Windows due to bad hardware...
Nevertheless, there are alternatives. Ones liked by many are Norton Commander-style file managers like Total Commander or even Far. I'm not too much in favor of them, though.
I like to use Directory Opus. It is powerful and flexible. Costs money though.
Much better than storing them in C:\Windows or C:\Program Files, as was the case a decade ago, before Windows introduced UAC and blocked write access to those directories.
All this kind of stuff is what AppData\Roaming and AppData\Local are supposed to be for.
Users often need to back them up. Backup systems often include Documents by default but not AppData.
Files in AppData are hidden, and you would not expect users to find them.
Further, many save games can be opened with a text editor just fine :)
My Witcher saves were 10 GB until I realized that problem and deleted them; no wonder backups were slow. Skyrim is not that far off.
Pretty sure backup is already integrated in steam for these two games.
In The Witcher 1 there is also an uncompressed BMP thumbnail with each save that is about a megabyte. It's PNG-compressed in The Witcher 2. They learned from their mistake.
Chrome's cache is in AppData/Local/Google/Chrome/User Data and Firefox's cache is in AppData/Local/Mozilla/Firefox/Profiles/. The only big offender on my system is npm, with a cache in AppData/Roaming/npm-cache.
Finding the directory requires new APIs introduced in Vista and Server 2008. See SHGetKnownFolderPath
I just tried to use the API. I literally couldn't find a working example on the internet in two hours, so I guess no one has figured it out in a decade.
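For what it's worth, here's a best-effort sketch of calling it from Python via ctypes. The FOLDERID_RoamingAppData GUID and the CoTaskMemFree cleanup reflect my reading of the documented contract, so treat this as a starting point, not gospel; the non-Windows branch is just a stub so the sketch runs anywhere:

```python
import ctypes
import os
import sys

def roaming_appdata():
    """Best effort: SHGetKnownFolderPath(FOLDERID_RoamingAppData) on
    Windows (Vista+); a plain home-directory fallback elsewhere, purely
    so this illustration is runnable cross-platform."""
    if sys.platform != "win32":
        return os.path.expanduser("~")

    class GUID(ctypes.Structure):
        _fields_ = [("Data1", ctypes.c_uint32),
                    ("Data2", ctypes.c_uint16),
                    ("Data3", ctypes.c_uint16),
                    ("Data4", ctypes.c_ubyte * 8)]

    # FOLDERID_RoamingAppData = {3EB685DB-65F9-4CF6-A03A-E3EF65729F3D}
    rfid = GUID(0x3EB685DB, 0x65F9, 0x4CF6,
                (ctypes.c_ubyte * 8)(0xA0, 0x3A, 0xE3, 0xEF, 0x65, 0x72, 0x9F, 0x3D))
    path_ptr = ctypes.c_wchar_p()
    hr = ctypes.windll.shell32.SHGetKnownFolderPath(
        ctypes.byref(rfid), 0, None, ctypes.byref(path_ptr))
    if hr != 0:
        raise OSError("SHGetKnownFolderPath failed: HRESULT %#x" % (hr & 0xFFFFFFFF))
    try:
        return path_ptr.value
    finally:
        # The docs say the caller must free the returned buffer.
        ctypes.windll.ole32.CoTaskMemFree(path_ptr)
```

The same pattern should work for any other KNOWNFOLDERID; in C you'd call SHGetKnownFolderPath directly and free the returned PWSTR with CoTaskMemFree.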
I'm surprised that these new IsWindows...() functions claim compatibility down to Windows 2000. I doubt Microsoft knew how they would name their next release 20 years ago. A Google search indicates that they are in fact broken.
If IsWindows... doesn't work then you can use the VerifyVersionInfo function directly.
I don't even get what you want to say.
The user folder exists for a reason. Drop your crap in there.
But just dropping stuff in my home directory--look, if I'm in a hurry and my defenses are down, you basically just encouraged me to clutter things up.
The last software I found that does this is a "download helper"... even though I have a Downloads folder that my distro already set up, that Chrome and Firefox know to use, this download helper thinks I want those downloads dumped in my home folder. I hope some generous person changes this behavior; meanwhile, I forgot to change it manually for the last batch and I have a bunch of cleanup to do.
Let me know what I'm missing.
It looks like maybe the "official" documentation is here:
But I haven't gone through that carefully.
I strongly prefer apps default to the previously used directory, and the home directory otherwise. IME apps are wrong 99% of the time when they guess their own save location.
If it’s hard to change, you should’ve either fixed your architecture from the start, or thought a bit harder about the implications of magically creating a new directory in $HOME.
If you do deeply nested paths (as npm used to do) then you can run up against the old MAX_PATH limit, which caps paths at 260 characters, but the correct solution is either to choose a flatter directory structure or to use the newer APIs that allow paths of up to 32,767 characters.
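To illustrate the second option: the Unicode file APIs accept the \\?\ extended-length prefix, which bypasses the MAX_PATH limit. A hypothetical helper (the UNC handling follows my reading of the docs):

```python
def extended_length(path):
    r"""Prefix an absolute Windows path with \\?\ so the Unicode file
    APIs accept up to ~32,767 characters instead of MAX_PATH (260).
    UNC paths need the \\?\UNC\server\share form instead."""
    if path.startswith("\\\\?\\"):          # already prefixed
        return path
    if path.startswith("\\\\"):             # UNC: \\server\share
        return "\\\\?\\UNC\\" + path[2:]
    return "\\\\?\\" + path

print(extended_length("C:\\Users\\me\\very\\deep\\node_modules"))
```

Note the prefixed form disables normalization (no relative components, no forward slashes), so it's not a drop-in fix everywhere; a flatter layout is still the saner default.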
Or even better to work with localization
Steam should have worked to figure out a better approach to UAC years ago. At this point it is one of several big tech debt issues with Steam that has my confidence in Valve at an all time low as a consumer.
I also have C:\CMD for my own batch files.
Every single program should be fully contained in its own directory, including all the settings, logging, configuration, etc.
No touching of any common resource or repository.
It should be enough to delete a single tree to fully uninstall a program along with all the garbage it created along the way.
But it seems because of the idea that some people may want to back up data and config separately, we end up with stuff scattered everywhere, with some even hidden away in the Windows Registry. Makes backups ridiculously complex.
That said, for something like source code, it makes sense to go into $HOME, so I don't understand this particular complaint the other poster made.
It's weird that the Go developers were working in such isolation that they didn't pick up on this. Every programming language I know of with modern tooling (Haskell, Rust, Node.js, even C#!) adopted Ruby-style dependency management when it turned out to be the right approach years ago.
I can't remember the last time I had a version management problem in Ruby, though the ecosystem is so stable right now not many backwards incompatible things happen any more.
From my impression the answer was more "we set up things in a hurry because of deadlines and now we are stuck with this until we implement the epoch system"
Yuuuup. It baffles me how Linux users can simultaneously complain that 'it's never the year of the Linux desktop' and at the same time rail against basic usability tenets. The average user wants/needs Documents, Pictures, Videos, etc. folders, and wants the OS to be aware of them and use them as such.
Now I'm okay with being able to set a specific flag that doesn't make these folders during install, but by default they absolutely should be there, unless you're doing a server install.
And I guess I would call it a "basic usability tenet" that the computer does what I tell it to. Documents, Music, and Videos folders violate this, because if I do the seemingly-obvious thing of just deleting them, the applications that want to use them will just recreate them unless I edit ~/.config/user-dirs.dirs. So, if developers really think they need to 'help' their 'average users' by creating these folders by default, they should finish the job and honor my desire to not use them, which I've expressed by deleting the damn things.
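For anyone who wants to do this: the special folders are declared in ~/.config/user-dirs.dirs as lines like XDG_MUSIC_DIR="$HOME/music". A rough sketch of how a well-behaved app could read that file (hand-rolled parser, names mine):

```python
import os
import re

def parse_user_dirs(text, home=None):
    """Parse user-dirs.dirs content (lines of XDG_XXX_DIR="$HOME/..." or
    an absolute path) into a dict; '#' lines are comments."""
    home = home or os.path.expanduser("~")
    dirs = {}
    for line in text.splitlines():
        m = re.match(r'\s*(XDG_[A-Z]+_DIR)\s*=\s*"(.*)"\s*$', line)
        if m:
            name, value = m.groups()
            dirs[name] = value.replace("$HOME", home)
    return dirs

sample = '''
# Point a "special" folder wherever you actually want it:
XDG_MUSIC_DIR="$HOME/music"
XDG_DOCUMENTS_DIR="$HOME"
'''
print(parse_user_dirs(sample, home="/home/me"))
```

An app that consults this file (instead of hardcoding ~/Music) would honor both renames and deletions, which is exactly the complaint above.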
Also, if I want to rename them, the system should honor that. E.g. if I rename "Music" to "music", applications should not then go and re-create a "Music" folder.
Or, you know, they could just simplify my life and theirs by not creating these 'special' folders in the first place.
At this point, Guix and Nix seem the most promising for cross-distro packages.
I don't know how I did this.
I actually tried the Snap thing too, I don't remember how it went but I would say it was bad. I think I couldn't find a way to search for packages from the CLI.
If every desktop application that needed to store cross-invocation state asked for permission, we'd probably say it was an invasion of bad-design Huns.
You're complaining on a really high level and nobody suggested "this is fine". They suggested "wishlist" is fine according to their definition of "if it's not immediately broken and a fix is not trivial"
If you have an issue with default folders that are quite normal in a modern desktop environment I suggest you move back to Arch or wherever you came from and install i3 again?
Also, the bug was first reported in 2016, and Mark Shuttleworth himself has commented on it. And yet it's still not fixed. That gives me the impression that the devs don't consider it super-important. Which is odd to me, since I consider not doing this kind of thing to be 'table stakes' for any application that wants me to take it seriously.
> If you have an issue with default folders that are quite normal in a modern desktop environment I suggest you move back to Arch or wherever you came from and install i3 again?
Ummm, I came from Linux before they started having default Documents, Videos, etc folders. And, like I said, the nice thing is that, in Linux, I can change the XDG environment variables and delete the (IMHO) foolish Documents, Music, Videos, etc folders.
I'll drop "/etc/papersize" in return. If you are not familiar with this and do not print to letter paper then you will find it very, very useful.
My life at home is now worth living every time the wife hits print. Such a simple idea - brilliant.
A video game can read my tax forms. WTF.
Mobile & tablet OSs solve this. Everything has to follow.
Until then, basically every app should run in a docker container. Then it can do what it wants.
No no no NO!!!!
Think of all the things you can't do on a mobile device that you can do on a PC, or which require ridiculous workarounds like uploading from one app to some server on the Internet and then downloading again into another. Effortless data interchange is what makes computing great, and stops proprietary walled-garden data-silos from becoming the norm.
Implementation really, really is key when it comes to the UX here, though. Windows actually already has this feature (Controlled Folder Access), but it's done so badly you'd almost think it's a joke. Take it as a warning of how bad things can get if you just sandbox everything without thinking. Enabling the feature accidentally is deceptively simple, with no warning about how difficult it is to use: "Protect files from unauthorized changes by unfriendly apps" is all it says, and who wouldn't want that? After that, havoc breaks loose. You can only have ONE group of folders protected, and you must have Documents protected. Whenever a new program tries to write to a protected folder, the program just fails; then 15 seconds later you get a confusing prompt saying excel_32_broker_confusing_name.exe tried to write to a protected area. Clicking the prompt takes you to a page where you can't simply press yes to allow: you have to manually browse to the exact program location. Good luck finding c:\program files(x86)\office\bin or whatever they decided to call the exe.
If every normal app has to ask for permission to use your disk, you’re just going to click “Okay” when malware asks. Then the whole system was for naught.
Indeed the capability exists (unix/linux pioneered it?), but it relies on manual containment by the user. It should be automated, with prompts at install or execution time, like on mobile OSs.
An idea is each file should have its own user, basically, with both relative and absolute file creation/modification permissions. If you run an arbitrary executable as a user there's great security risk since it can do anything you can (without sudo of course).
It seems like the natural progression from
Run everything as root ->
Separate root and day-to-day user ->
Don't use root and achieve things with sudo ->
Each file has its own permissions (i.e. essentially its own user) set by the parent user
Permissions should be both for individual actions and permanent classes of actions, on the user's discretion (i.e. allow it to create this file or always allow it to create files)
This approach is even naturally hierarchical, executables could create other executables as long as they have permission to do so, and set the child permissions to at most its own. In this context an executable asking for permissions can be akin to sudo: it really is a file-user asking for its parent file-user for expanded permissions.
To give a real-life allegory, consider a Technician in a company who wants to make a tool purchase, so he asks the Engineer. The Engineer doesn't have permission, so he denies or asks the Manager. Finally, the Manager either denies or asks the CEO, who has permission over everything.
I don't trust many of the apps on my phone (all of Google, FB, messaging, etc). I trust the open source apps (example K9 email) and I'm replacing proprietary apps with open source ones.
I trust almost all programs I use on my PC, which I install with apt-get from Ubuntu or other open source PPAs (no snap, no flatpack). There are a few proprietary programs I might doubt of, for example NVidia driver, Skype, TeamViewer, Telegram. I understand that one is enough to compromise all the PC.
I'm ok with separating untrusted apps/programs but I don't want to get permissions in the way of programs I trust. Software development would be a nightmare if emacs and vim and all GNU userland and interpreters, compilers, etc would run in different environments.
That's because you have to. I trust all my programs too, because I wouldn't install them if I didn't trust them. We've spent decades telling people only to install applications they trust, and not to click on suspicious executables. The result is that they are very resistant to install new programs, unless the programs are from organizations they already know and trust.
That's partly why web applications are so popular: it requires less trust to use a website than to run a native application. If native applications were less dangerous, people might be willing to use more of them.
A concept I find useful is that there are two kinds of security:
-- Accidental security
-- Malicious security
Permissions help with both. Not only do you not want third parties to invade or disrupt your system, you also don't want users to invade or disrupt your system (or their own) accidentally.
A classic example, I believe, is a user wanting to delete all files in the current directory, meaning to type
$ sudo rm ./*
but fat-fingering it as
$ sudo rm /* # (delete *all* files)
> The command rm requests root access for file deletion at /
> Accept? (y/N)
> User's administrative password:
> The command rm requests access for file deletion at /home/user/cache/
> Accept? (Y/n)
You may not be particularly worried about malicious security for system programs, but even professional administrators could be worried about accidental security and best practices (and if they're not, they can easily grant persistent full permissions). I get the impression the last thing sysadmins want is to mess up production systems, and they wouldn't mind a few more prompts when doing things manually (which should be rare?), or assigning scripts the exact permissions needed for their job.
1. Desktop OS's have awful security
2. Therefore you're VERY careful about what you install
3. Because you're VERY careful about what you install, you don't need the extra security; it would only be a hassle.
But -- speaking from experience -- they are painful to use; most users prefer convenience over heightened security.
Why does it have to be "Docker"? Any other container framework should do as well, no?
Note that systemd can create cgroups and namespaces for processes it manages: http://0pointer.de/blog/projects/security.html
I still see a lot of Mac apps that basically say "disable system protection first."
One should/must/can not build an operating system on trust alone, as especially the mobile software space consistently continues to show with its numerous malware apps (imagine that on the desktop! "oh sure, have root/admin '50 smileys FOR FREE'", "oh sure, be able to read and write all my user's files <insert dubious clicker app>").
Mobile OSs have become vastly better at this than desktop OSs (though still not perfect).
I would like to see desktop Linux's application management adopt a minimum-necessary-permissions policy: allow access to some resource/API/permission only if the app needs it (and the user allows it, of course), and offer mechanisms for permission management and ways to enable inter-application communication. This also modularises applications more, by bundling up their connections to the rest of the platform and their effects in a central location.
For instance sandboxing applications into their own empty folder would make it simpler to operate on files these applications created (backup, migrate, autocreate <- NixOS would benefit from these especially, allowing finer grained declarative control). It would still be okay then for application developers to follow their own conventions within that sandbox.
Nowadays files are all bunched up in arbitrary folders and nobody knows where they come from. Trying to coordinate the many projects that create files in the home folder on Linux by introducing standards is much harder than transparently taking that capability away from programs; the latter should also be much more efficient man-hours-wise.
That said, this is going to be a nontrivial project with how many implicit connections between applications exist. Executing untrusted (or buggy!, I'm looking at you, Steam) software should no longer have to make me afraid my system gets messed up.
For now I think it's best to simply not run untrusted programs and to create regular backups... I wish I had the time/skills to work on something like this.
If you simply must, use a VM or AppArmor or SELinux. Don't inconvenience everybody because you're too lazy to be responsible for your own data.
Besides, untrustworthy apps will bypass the protections anyway. Judging by the popularity of "curl <url> | sudo bash" lately, they'll probably just ask for root directly.
In general, everything in computing will keep going in the direction of "trust everything as little as possible for it to do its job" forever, I think, and probably has to.
Running untrusted programs is the default way we do computing. No one has time to audit every single program they run. The current desktop security model comes from when people mostly ran the programs that came with the OS and programs they wrote themselves. It's vastly inappropriate for modern computer usage.
Even that isn't particularly safe.
So basically no apps at all.
I only ever sold one license, but I was 12, so I was pretty excited.
It's not easy to implement nicely, though.
It would be harder to make it so that all the tag-oblivious programs, like mv or vi, to say nothing of rsync, would preserve the attributes when moving or modifying a file.
f = open("the_file.new", "w")  # write a fresh file, then rename it into place
This has a number of advantages, but it does not play well with any extended info the old file used to have, unless that info is copied explicitly. And in a tagging-oblivious program, it won't be.
f = open("the_file.new.$$");
write(f, new_contents);
close(f);
rename("the_file.new.$$", "the_file.new");
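In Python, the same write-then-rename pattern looks like the sketch below (os.replace is the atomic rename; the helper name is mine). Note the point above: any xattrs/tags on the old file are not carried over:

```python
import os
import tempfile

def atomic_write(path, data):
    """Write data to a temp file in the same directory, then rename it
    over the target. The rename is atomic on POSIX, so readers see either
    the old contents or the new, never a partial file. Extended
    attributes/tags on the old file are NOT preserved."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, prefix=".tmp-")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # make sure the bytes hit the disk first
        os.replace(tmp, path)      # atomic rename over the target
    except BaseException:
        os.unlink(tmp)
        raise
```

Writing the temp file in the same directory matters: rename is only atomic within a single filesystem.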
Backups often take place, too, like you mentioned, depending on editor.
fstat the fd, get st_dev and st_ino.
stat the new name. Compare st_dev and st_ino.
If the value matches, you renamed the right file. If it does not match, you renamed a wrong file. Without holding the fd, it is impossible to know if it is the right file.
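That check is a few lines with os.fstat/os.stat (sketch, names mine):

```python
import os

def same_file(fd, path):
    """True if the open descriptor and the name refer to the same file:
    compare (st_dev, st_ino) from fstat on the fd and stat on the name."""
    a = os.fstat(fd)
    b = os.stat(path)
    return (a.st_dev, a.st_ino) == (b.st_dev, b.st_ino)
```

As the comment above says, without holding the fd there is nothing to compare against, so you can't know whether you renamed the right file.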
I have to deal already with these on file shares, specifically for Apple: .DS_Store, .Trashes and .AppleDouble, or for Windows: Thumbs.db, $RECYCLE.BIN (for some reason Windows sometimes ignores the fact I've disabled the recycle bin on a share and creates this instead) and desktop.ini. Please don't drop crap around directories where there exist a multitude of tidier alternatives.
It gets annoying real quickly because you see the "crap" files of all the OSes you're not using. And when you delete them, you only have to insert the stick again and they're back. This happens in different ways for all of Mac, Windows, Android, and certain Linuxes.
In my research team, we used a tainting tracing mechanism to understand the behavior of malware. Basically, we installed a malware on a clean phone and we then traced all information flow originating from the APK to processes, to files, to sockets, etc. It helped reverse-engineering the malware.
The problem is the file manager. Tracker and Finder are several steps ahead of Explorer. Under Linux it's an even greater mess, even though there are no technical limitations.
It's surprising Explorer is so bad at this, since it's good at getting EXIF data and ID3 tags into the Properties tab.
What's not quite so straightforward is how you query the system. The worst implementations of file tagging only implement retrieving a list of all files that have a given tag. Slightly better than this are tagging systems that will return the intersection between two or more tags. Most tag systems never go beyond this.
Going slightly further, tag exclusions are powerful and are sometimes implemented (given a set of files from some subquery, exclude all files that have a given tag.) What you rarely see are systems that allow you to exclude one subquery from another.
However what you almost never see implemented is a system for preferential tags; given a subquery, reorder the results such that files with preferred tags are raised to the top, ordered by how many of the preferred tags they have. Once you implement this, the system's UX changes dramatically because the user no longer has to make strong assumptions about how well their files have been tagged. Many files might be missing relevant tags and the user may not be sure if the file they're after is one of these incompletely tagged files. When using a system with preferential tag querying, the user will receive files in their result list that don't match all the tags listed, but most of them. This is a bigger deal than it may sound, since the main drawback of using tags for file management is incomplete tagging. By addressing this drawback, you stand to bring file tagging to the next conceptual level, which is rendering hierarchical file management obsolete.
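To make those query tiers concrete, here's a toy sketch (all names mine): required tags are intersected, excluded tags filter, and preferred tags only reorder the results instead of dropping files that lack them:

```python
def query(files, required=(), excluded=(), preferred=()):
    """files: {name: set_of_tags}. Keep files carrying all required tags
    and none of the excluded ones; order by how many preferred tags each
    file carries (incompletely tagged files still show up, just lower)."""
    required, excluded = set(required), set(excluded)
    hits = [name for name, tags in files.items()
            if required <= tags and not (excluded & tags)]
    # Stable sort: more preferred tags first, ties keep insertion order.
    return sorted(hits, key=lambda n: -len(set(preferred) & files[n]))

library = {
    "a.pdf": {"pdf", "novel", "Documents"},
    "b.pdf": {"pdf", "Documents"},
    "c.png": {"image"},
}
print(query(library, required=["pdf"], preferred=["novel"]))
```

Here b.pdf is still returned for the 'novel'-preferring query even though it lacks the tag, which is exactly the forgiveness toward incomplete tagging the paragraph above argues for.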
Consider that hierarchical file management is a strict subset of file tagging. You can model file hierarchies inside a file tagging system. To demonstrate this by example, consider /home/joesixpack/Documents/seventh-novel.pdf. For each level of the hierarchy, we can create a new tag, such that this document has the tags: '/home/', '/home/joesixpack/', '/home/joesixpack/Documents/'. But because we're using file tagging, we can also automatically tag that file with things like 'pdf' or maybe even 'Documents'.
Now before I go on, there is something to be said about the number of tags in the system exploding when you model a tree as a tagset. In practice this probably isn't the approach any real system should take, if only because most of those tags will be useless and because there is a great deal of redundancy in trees that wouldn't exist in a native filetagging system. Consider /home/joesixpack/Documents/ and /home/johnsmith/Documents/. We have two different Documents directories because they exist in different parts of the hierarchy. However in a file tagging system we'd ideally only have a single Documents tag and one tag per user, such that querying the intersection between 'joesixpack' and 'Documents' returns the files that would otherwise be in /home/joesixpack/Documents. Some tags, such as the user tag, could be implicit; such that if joesixpack simply queries 'Documents' the tag 'joesixpack' is automatically intersected with the results presented to him.
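A toy version of that tree-to-tagset modelling (the function name and the extension-derived tag are my own illustration):

```python
def path_to_tags(path):
    """Model a hierarchy inside a tag system: one tag per path prefix,
    plus a derived tag for the file extension."""
    parts = [p for p in path.split("/") if p]
    tags = set()
    prefix = ""
    for part in parts[:-1]:
        prefix += "/" + part
        tags.add(prefix + "/")
    name = parts[-1]
    if "." in name:
        tags.add(name.rsplit(".", 1)[1])   # e.g. 'pdf'
    return tags

print(path_to_tags("/home/joesixpack/Documents/seventh-novel.pdf"))
```

As the paragraph above notes, a real system wouldn't materialize one tag per prefix; this just shows that nothing in a hierarchy is lost by moving to tags.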
With that out of the way, consider how we can go further if we apply straightforward statistics to the tag system. Suppose 99% of joesixpack's 'Documents' that are 'pdf' are also 'novel'. When joesixpack creates a new file that's in the tags 'Documents' and 'pdf', what are the chances that file should also be tagged with 'novel'? We could analyze the contents of the file if the system had a semantic understanding of what 'novel' means, but let's not go there. Tag systems created by the likes of Facebook and Google do this sort of analysis, but it's heavy, tricky, goes wrong in ways that create PR disasters, etc. We can get pretty good results by ignoring the contents of the file and looking instead at merely what that file is already tagged with. If a file is tagged with 'joesixpack', 'Documents', and 'pdf', it _probably_ should be tagged with 'novel' as well. Probably, but not necessarily. So the system can expose to joesixpack the suggestion that he tag that particular file with 'novel'. By presenting suggestions like that to the user, you greatly reduce the UX friction needed to tag files well and therefore increase the usability of the tagging system, while at the same time improving the quality of the tag suggestions in the future. What we create here is a 'virtuous cycle' of sorts that gives back to the user more as it's used more. Such a tag suggester can be implemented as multi-label classification using naive Bayes; it's fairly straightforward.
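A crude stand-in for that suggester, using plain co-occurrence counts rather than an actual naive Bayes classifier (the threshold and all names are arbitrary illustrations):

```python
def suggest(files, tags, threshold=0.5):
    """files: {name: set_of_tags}. For a file currently tagged `tags`,
    suggest extra tags that appear on at least `threshold` of the files
    that already carry all of `tags` (a toy estimate of P(tag | tags))."""
    tags = set(tags)
    peers = [t for t in files.values() if tags <= t]
    if not peers:
        return []
    counts = {}
    for t in peers:
        for tag in t - tags:
            counts[tag] = counts.get(tag, 0) + 1
    return sorted(tag for tag, c in counts.items() if c / len(peers) >= threshold)

library = {
    "a.pdf": {"joesixpack", "Documents", "pdf", "novel"},
    "b.pdf": {"joesixpack", "Documents", "pdf", "novel"},
    "c.pdf": {"joesixpack", "Documents", "pdf"},
}
print(suggest(library, {"joesixpack", "Documents", "pdf"}))  # → ['novel']
```

Two of the three peer files carry 'novel', so it clears the 0.5 threshold and gets suggested for the third, matching the joesixpack example above.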
Going further; if we have such a tag suggester, and an appropriate caching scheme, tag suggestions can be used by queries. If joesixpack queries for 'pdf','novel', the system could return to him the intersection of 'pdf','novel', but also return to him files that are tagged with 'pdf' but not tagged with 'novel', in cases where the tag suggester indicates there is a high likelihood that the file should be tagged with 'novel'.
That last paragraph may or may not play out well, I haven't experimented with it extensively yet. But getting back to my original point: if you're going to implement file tagging in a filesystem, you should think carefully about how the users will query that system, and whether the query system will be extensible in userspace to facilitate new powerful methods of querying it. It would be a total tragedy if the system only supported basic tag intersections and the only way to extend it was to implement a new kernel module.
This would require big changes to how modern desktop and mobile OSes handle files and documents. Maybe some time later a research OS will implement it.
Getting file tagging into the kernel level to replace directory hierarchies would be a huge paradigm shift, a very dramatic departure from Unix. To be honest I'm not sure whether or not getting such a system into a kernel would be appropriate or not. Traditional hierarchical file management seems more than sufficient for "system files". But I'm really interested in replacing hierarchical file management from the users' perspective. More or less, put ~/Documents, ~/Downloads, ~/Desktop, etc under control of the tagging system but leave the rest of the system as-is. At least for the proof of concept.
I think a demonstration system could be implemented as a custom desktop environment running on regular old linux, where the open and save dialogs of GUI applications have been replaced with tagging/querying windows. Instead of the user clicking through a few directories to find a file to open or find a directory to save something, they would instead click or type in tags to add or query. The GUI file manager would likewise be replaced with the GUI frontend to the tagging system.
This doesn't help with installation, obviously, but it helps with cleanup.
Isn't what you describe basically the file extension? Not so much what program created the file, so much as what program is meant to handle/open the file.
dpkg -S .foo
apt-file search .foo
rpm -qf /foo/bar
These people are arrogant shitheads and I’d like to know who they are so I can avoid their software. I assume if they do crap like this their software does other nefarious things.
Wow, this is really assuming the worst in people. It’s much more likely to be ignorance because the other developer doesn’t know where the file came from either (some compiler flag on a library accidentally enabled), or also doesn’t ls -a in their home.
Yes, I am. I'm fed up with programs that spam my system without asking: install plug ins, menu items, directories on the desktop, dock items etc for "convenience" -- it's convenient for them, not me. It's especially painful when I pay for an app and it sprays shrapnel through my filesystem.
> It’s much more likely to be ignorance
Indeed, but if they can't be bothered to learn basic system hygiene how can I be confident the rest of their package is safe to use?
A big driver for sandbox/containers in user systems is not protection against malign actors per se but against the lazy and ignorant.
Have I written bugs into code? Of course I have, I'm human. But at least I try to get things right up front; to be a good citizen.
(there are additional reasons for containers in server systems, hence my qualification of "user systems" above).
Perhaps calling everyone in this chain a "shithead" is extreme, but they all bear responsibility for the disrespect shown to the user.