It's easy to understand and requires only a marginal increase in effort/code.
It's not at all easy to understand, nor easy to implement, nor does it have any tangible advantages.
It's one of these superfluous pseudo-standards that do nothing but add needless clutter. But thankfully nobody seems to be using it anyway; I see only one directory in my ~/.local: vlc.
The first benefit is that it removes clutter from your $HOME.
The second benefit is that you can now manage and backup your settings in a sane way.
* ~/.config contains config files (should not be lost, but if lost you can recreate them);
* ~/.local contains user data files (save them often, never lose them for they are not replaceable);
* ~/.cache contains cached information (can be tmpfs mounted, you can delete it any time you want, no loss in functionality, you just lose some optimization);
* ~/.run is for temporary system files (must be tmpfs mounted, or it must be cleaned at shutdown or power-on).
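Per the spec, each of those locations comes from an environment variable with a documented fallback. A minimal sketch of the resolution (the variable names and the first three fallbacks are the spec's; the runtime dir deliberately has no default):

```shell
#!/bin/sh
# Resolve the XDG base directories, falling back to the spec defaults
# when the variables are unset or empty.
config_home="${XDG_CONFIG_HOME:-$HOME/.config}"   # config files
data_home="${XDG_DATA_HOME:-$HOME/.local/share}"  # user data files
cache_home="${XDG_CACHE_HOME:-$HOME/.cache}"      # expendable cache
# The spec defines no fallback for the runtime dir; apps must handle
# it being unset (login managers typically set it, e.g. /run/user/$UID).
runtime_dir="${XDG_RUNTIME_DIR:-}"
echo "config:  $config_home"
echo "data:    $data_home"
echo "cache:   $cache_home"
echo "runtime: ${runtime_dir:-<unset>}"
```

Note `${VAR:-default}` (with the colon) kicks in when the variable is unset *or* empty, which matches the spec's wording.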
Luckily most of the apps used on Linux systems now use it; you are probably using Mac OS X.
And for the backup-case: Whitelisting is usually a futile idea to begin with. Normally you'd prefer to backup the odd superfluous file rather than miss an important one.
> Luckily most of the apps used on Linux systems now use it
$ find ~ -maxdepth 1 -name ".*" | wc -l
$ find ~/.local | wc -l
$ find ~ -maxdepth 1 -name '.*' | wc -l
$ find ~/.local/ -maxdepth 1 | wc -l
$ find ~/.local/share -maxdepth 1 | wc -l
$ find ~/.config/ -maxdepth 1 | wc -l
$ find ~/.cache/ -maxdepth 1 | wc -l
$ lsb_release -d
Description: Ubuntu 12.04 LTS
My box is not a desktop, so that's probably the difference. I still find that scheme an atrocity.
When going to that length they could at least have settled for one directory (~/.appdata or whatever). Half-baked is the most polite description I can come up with.
.config can be posted online, and shared with others (like the many "dotfile" repos you'll see on github)
.local needs to be backed up, and may have private data.
.cache can be blown away (or tmpfs.)
.run MUST be blown away on restart.
This is simple, sane, and works well.
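A sketch of what that split buys you in practice: backing up only the directories that matter. The `backup_xdg` helper name is mine, and the paths assume the spec defaults:

```shell
#!/bin/sh
# backup_xdg HOME_DIR ARCHIVE
# Archive only .config (shareable settings) and .local (irreplaceable
# user data); .cache and the runtime dir are deliberately left out,
# since losing them costs nothing.
backup_xdg() {
    tar czf "$2" -C "$1" .config .local
}
```

e.g. `backup_xdg "$HOME" /tmp/backup.tgz`, instead of archiving the whole home directory, cache and all.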
When your goal is to "reduce clutter" then 2 layers would be the minimum. You make another 4(?) folders in my home-directory and call that reducing clutter?
And when I delete an app then I have to look in all of them?
That is just utterly backwards for no conceivable reason.
Due to the semantics you now suddenly need a cronjob or similar abomination that traverses all home-directories and picks out stuff ("MUST" be blown away). This will by definition be fragile and have funny corner-cases in the first few iterations. Also, what happens when ".run" is not blown away, like on a system that doesn't implement this nonsense?
The definitions are blurry and complex, many apps will get them wrong (.local vs .config etc.).
Unix already has a location for temp files. It's called /tmp.
And what the heck is going in .local anyways? When the user saves a file then he pretty surely doesn't want it buried under some dot-directory.
I can see a clear and very useful difference between RUNTIME_DIR, CACHE_DIR, and CONFIG_DIR. Consider a scenario where $HOME is on a networked filesystem. RUNTIME_DIR has to be outside that, and local to the machine's namespace. That's because it references things inherently local to the machine. That is, pids and pipes. These wouldn't make sense on any other machine and will just make the application's job harder.
I also set CACHE_DIR to be local (/tmp/$USER.cache.) That's because caching performs terribly when it's flying over the network. Chrome is the main culprit for me. It also fills my file quota within hours of use. However, it's still useful to keep that data in the medium term.
CONFIG_DIR and DATA_DIR, however, don't seem to be very different to me. I can't imagine a scenario where I want one but not the other. I might be using the wrong sort of applications. (For the record I have 8 files in .config, 10 dot files, and just 1 in .local/share.)
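For what it's worth, the networked-$HOME setup described above can be wired up in a login script. A sketch; the exact paths are my choices, not mandated by the spec:

```shell
#!/bin/sh
# Profile snippet for networked home directories: config and data stay
# on the shared $HOME, while cache and runtime state go to machine-local
# storage. XDG_RUNTIME_DIR normally needs the admin (or login manager)
# to create /var/run/$user with the right ownership, so only export it here.
user="${USER:-$(id -un)}"
export XDG_CACHE_HOME="/tmp/$user.cache"
export XDG_RUNTIME_DIR="/var/run/$user"
mkdir -p "$XDG_CACHE_HOME"
chmod 700 "$XDG_CACHE_HOME"   # keep the cache private; it lives in a shared /tmp
```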
> Due to the semantics you now suddenly need a cronjob or similar abomination that traverses all home-directories and picks out stuff ("MUST" be blown away).
Having RUNTIME_DIR on a tmpfs, like what most distributions do with /var/run, solves that problem. I map mine to /var/run/$user, even though I've yet to see an application actually use it. The spec BTW doesn't even specify the default value!
> And what the heck is going in .local anyways? When the user saves a file then he pretty surely doesn't want it buried under some dot-directory.
I agree, the default values are silly. This, like most of Modern Unix, is an ugly hack which makes dealing with the rest of the ugly hacks a bit easier. If you want an elegant solution you'll probably have to throw away most of what was added during last 20 years. May I suggest starting with sockets?
% find ~/.local | wc -l
% find ~ -maxdepth 1 -name ".*" | wc -l
Invisible clutter? That's a strange concept.
But the rest of your point indeed makes sense. It still is easier, and probably more common, to back up the whole $HOME. But those points can be seen as a benefit, though not an obvious one.
A standard that uses environment variables means programs don't have to provide extra options for this customization (I've seen -f,--file , -c,--config and other variants). It allows for common code (libraries that implement the spec).
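As a sketch of that "common code" point: the whole lookup is a few lines any tool or library can share, instead of each program growing its own -c/--config flag. The function name here is mine, not from any spec or library:

```shell
#!/bin/sh
# xdg_config_file APP FILE
# Print the path a spec-following app reads its config from, honoring
# $XDG_CONFIG_HOME with the spec's ~/.config fallback.
xdg_config_file() {
    printf '%s/%s/%s\n' "${XDG_CONFIG_HOME:-$HOME/.config}" "$1" "$2"
}
```

So `xdg_config_file git config` prints `$HOME/.config/git/config` when the variable is unset, and the user can redirect every compliant tool at once by setting one variable.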
If you poke around for feature requests for various open source programs, you'll find XDG basedir compliance come up occasionally (more so for CLI utils). I wouldn't say "nobody seems to be using it"; a quick scan of my folders includes Chromium, uzbl, htop. Git's next release will be compliant too.
Consider a browser cache. /tmp is not guaranteed to be preserved between program invocations. /var/tmp might be a better place. /var/cache is intended for exactly this kind of thing.
Unfortunately, it can be far more useful for an administrator in a multi-user setup to want user-specific caches in home directories. That way, you get all of the infrastructure for managing user data for free, like quotas.
Also note: if you're considering putting data into /tmp or /var/tmp, please honor the TMPDIR environment variable, if it is set.
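Honoring it is a one-liner; a sketch, with the app name made up:

```shell
#!/bin/sh
# Use $TMPDIR when set, falling back to /tmp, instead of hardcoding /tmp.
tmpdir="${TMPDIR:-/tmp}"
scratch=$(mktemp -d "$tmpdir/myapp.XXXXXX")   # "myapp" is illustrative
echo "scratch dir: $scratch"
rm -rf "$scratch"
```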
What fraction of those requests are by XDG advocates?
I ask because every standard comes with folks insisting that it be followed. While those folks claim to represent the interests of users, their requests are not the same thing as requests from users themselves.
It definitely has tangible benefits as well -- it promotes the clear separation of app data and user configuration, and unclutters the home folder.
And as pointed out by other replies, ~/.local is not the only XDG data dir.
I'm curious, how do you find it not easy to implement?
cd: no such file or directory: /home/tammer/.local
% uname -s -r
FreeBSD 9.0-RELEASE - No ~/.local, ~/.config
Linux 3.2.0-23 - No ~/.local, ~/.config
Linux 3.0.0-16 - No ~/.local, ~/.config
Linux 3.0.0-12 - No ~/.local, ~/.config
Honestly, I'd never even heard of this scheme before today.
PS Listing a kernel version does not really provide any information about your modern Linux distro.
How about consistency?
find .local/ | wc -l
[jlgreco@local] ~ % find ~/.config ~/.local | grep ssh | wc -l
Emacs saves them under %USERPROFILE% - I have in there - .alice, .android, .easyhg, .eclipse, .gstreamer-0.10, .lighttable, .m2, .matplotlib, .... .VirtualBox, .zenmap
Also in "Application Data" - .emacs.d, .mc, .subversion
My point is - this system works somehow even under non-unixy systems.
Because it's simple.
What did not, and still does not, is to create them via Windows Explorer.
You can create them just fine from the command line, or via Windows APIs.
But the value not being set for me implies to me that it not being set is pretty common. Thus it seems like supporting this standard is going to involve supporting a fall-back of whatever you would do otherwise.
So I don't see "easy" at all but rather extra BS. Sorry.
Also, for my 2c, you should consider updating your Intrepid install if you at all can. It hasn't been supported for over two years, so it hasn't seen any security updates in that time. The Ubuntu do-release-upgrade system is pretty easy and reliable.
I want my space to be mine. Keep your app's stuff out of there!
Your homedir equivalent on Windows (since Windows XP) is /Documents and Settings/username, or the modern shortened equivalent. That has subdirectories for documents and application settings. Apps that put config data in My Documents are using behaviour dating back to Windows 95.
It's the same on Mac OS - /Users/username has a Documents subdirectory. Mac apps tend to be better-behaved, but a few manage to pollute the Documents directory. On Linux you may roll your own or use a ~/Documents directory created by your desktop environment. The fact is your homedir is the home of pretty much anything that is user-specific, and you either need to collect all the config data into some sort of config directory, or do your own organising in a subdirectory, or both.
"My Documents" is a nightmare... I think only one folder (out of nearly a dozen) and only one file are actually of my making.
The canonical example is the backup. There's a strong case to be made for "tar czf /tmp/backup.tgz ~". Do you really want your backups to become as complicated as they are on OSX and Windows?
Likewise, being able to mount home-directories from remote servers, and being able to easily delete/move/quota users are highly desirable features in multi-user systems.
Point taken that it came into being due to a lazy programmer. But surely early users might have just liked the unintended consequence of some files (dot files) being hidden. Just like most of us, who when we learnt Unix thought that it was by design.
If early users had found the consequence a handicap, it would have been fixed long ago.
It's similar to the use of hashtags on Twitter, or the @ for addressing, which got adopted by users first and so became features (although the paths to them being considered features are different).
It also isn't that easy to roll back once userspace programs start to rely on it - for example, the assumption that the number of files in a directory is equal to st_nlink - 2 is now so widespread that it's part of the UNIX API.
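That st_nlink convention is easy to observe: on traditional Unix filesystems a directory has two links of its own (its name and '.') plus one per subdirectory (each child's '..'). A sketch; note that some modern filesystems (btrfs, for one) report 1 for every directory, which is exactly why relying on the assumption is a problem:

```shell
#!/bin/sh
# Compare a directory's link count with its actual subdirectory count.
d=$(mktemp -d)
mkdir "$d/a" "$d/b" "$d/c"
# GNU stat uses -c %h; BSD stat uses -f %l
nlink=$(stat -c %h "$d" 2>/dev/null || stat -f %l "$d")
subdirs=$(find "$d" -mindepth 1 -maxdepth 1 -type d | wc -l | tr -d ' ')
echo "st_nlink=$nlink subdirs=$subdirs"   # on ext4: st_nlink=5, subdirs=3
rm -rf "$d"
```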
Dotfiles provided a poor, but at least simple way to store program-specific-user-specific configuration, since another standard was missing. After all it's a simple and decentralized system that worked very well with the concept of unix user and ACL: you write something inside your home directory, and this changes the behavior of your program.
Consider that this was invented many decades ago. Now it seems a lot better to have directories with sub directories. Maybe back then it was considered to be a waste of resources, inodes, and so forth.
We can improve it, create a new standard, and have something better than dot files, but dot files are better than many other over-engineered solutions that I can imagine coming out of some kind of design commission to substitute them.
Every time you passed your vim configuration to a friend you just copied a text file and sent it via email: you enjoyed one of the good points about dot files. Every time you did something like cat dotfile | grep option you enjoyed the positive effects of single-file plaintext configuration.
Also it's worth saying that dot files are not just the concept of a hidden file with config inside. A lot of dot files also share a common simple format of multiple "<option> <value>" lines, which is better than XML or other hard-to-type formats (IMHO JSON itself is not good for humans).
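Both points show up in a quick sketch; the file and its options are made up:

```shell
#!/bin/sh
# A made-up dotfile in the common "<option> <value>" format, and the
# kind of one-liner query that plain text makes trivial.
cat > /tmp/example.dotfile <<'EOF'
# lines starting with # are comments
editor vim
color  on
tabs   4
EOF
awk '$1 == "tabs" { print $2 }' /tmp/example.dotfile   # prints: 4
```

Sharing it with a friend is just mailing the file; no parser, schema, or escaping rules involved.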
I would like to hear a good argument for why hidden files and folders are a good thing.
Moreover I personally much prefer them to global configuration directories because they are always local to the thing being configured and you can always override a global configuration by using them.
In fact I think they are a very elegant way to handle "hidden options" - stuff you want to expose to the power users but not bother newbies with.
tl;dr: I am not convinced they are a misfeature.
There is no question there is a need to have a place that "plebeian users" can't access. IMO, dot-files and dot-directories are as good as anywhere.
This just shows to me what a bad idea sandboxing is for those kinds of apps that are supposed to interoperate with the whole system. Is there even a security benefit vs. pure Unix permissions if you sandbox the filesystem but then link in tons of crap that could potentially be attacked?
Isn't the point of sandboxing specifically to prevent apps interoperating with the whole system? (I've not really paid it much attention so far)
There are plenty of examples: video games slowly reveal more skills as you learn and encounter progressively harder enemies. A good app should be usable at first launch (or only require minimal setup). Configuration and advanced features can come later.
Another great filesystem-level example is OS X app bundles: an entire directory hierarchy appears as a single file/application. If you need to look inside (not likely), you have to know about right-click or the action widget, but 99.9% of the time you see only what you need. OS X and Windows also both completely hide "system" folders in Finder/Explorer as well.
Yes, hiding things can be confusing if it's not done right, but the alternative of showing everything always is definitely not the way to go.
They are a natural segregator of novice and advanced users.
Yeah, they let that guy in the computer lab who "really knows Unix" show his stuff.
I remember encountering these dot-whatever files back in the day, how changing all the idiotic terminal settings depended on them and how remembering their names or interpreting their values was nearly impossible, and how the cool geeks of the lab had about six seconds of their time available to explain the situation.
What is actually in them is an entirely different matter.
Putting files and directories like this inside another directory would not win you much, especially if you only have one in a given location (e.g. you only have a .git directory). But you wouldn't want to keep tripping over your .git directories either!
However, having a dedicated directory for configuration files applying to a user or the whole system is a different story. I would much rather just have a ~/config directory than a home folder full of dot files.
They keep users from monkeying with stuff until they are smart enough to find the hidden files.
> I do find it odd that so many apps have this problem. It's trivially easy to get the proper location. Just call getExternalFilesDir(). Deletion at uninstall happens automatically. In fact, it's the ONLY way to make sure those files are deleted when you uninstall, because you can't run code at uninstallation.
There are, no doubt, numerous unintended behaviors of programs. Most of these are simply ignored and certainly not leveraged the way the dot behavior is.
People don't go out of their way to abuse an unintended system behavior; they simply leverage all capabilities of a system ("intended" or not) to meet their needs. Had dotfiles not gained traction, some other solution would have been designed (or "engineered") to meet the needs of program state/configuration/metadata storage.
[Tangent: All of this reminds me of grammar freaks that harp on "correct" usage, completely oblivious to the fact that grammar changes, and "correct" is merely a lightweight pointer to the current norm.]
Is this something intrinsic to G+? Is it a function of the author's popularity? The discussions here are normally much more sensible, but I would expect HN's readership to have a fairly similar demographic to that of Rob Pike's G+ subscribers.
The main advantage is that it's a large community (10m+ users presently) initially seeded by Googlers (e.g.: tech-savvy people), and including a few notable luminaries such as Rob Pike. And if you've got the right circles, eventually good content finds its way into your stream. Sometimes, anyway (content discovery/surfacing is something G+ does surprisingly poorly, and an area at which HN, Reddit, and StackExchange win hugely).
Little did I know.
The opinion that there shouldn't be hidden files comes from a perspective of someone who is a "power user" and who can't step back and see that most users really don't care for some .config file. To them it's clutter that gets in the way of finding what they really care about.
That said, dot files may be the wrong way to do it. I like ~/Library in OSX. That's one good way.
Edit: Note that I'm talking about a general trend in the post's comments and on here. Not the author's opinion.
Try to change any of it though, and a lot of luddites will come out screaming bloody murder. It's just not UNIX if it makes sense.
"There has been controversy over the meaning of the name itself. In early versions of the UNIX Implementation Document from Bell labs, /etc is referred to as the etcetera directory as this directory historically held everything that did not belong elsewhere (however, the FHS restricts /etc to static configuration files and may not contain binaries)."
As someone said - naming things is the hardest!
etc > e.t.c > edit to configure.
the .rc suffix has a nice history btw
Sigh, if only most of us had worked on a software system that has lasted as long as that.
Their content is XML. What's wrong with project.xml and classpath.xml?
Oh well, nothing of value was lost.