Rob Pike: the origin of dotfiles (plus.google.com)
470 points by keyist on Aug 3, 2012 | 153 comments

Instead of putting a dotfile or dot-directory in the user's home directory, follow the XDG Base Directory specification: http://standards.freedesktop.org/basedir-spec/basedir-spec-l... .

It's easy to understand and requires only a marginal increase in effort/code.

I see no benefit.

It's not at all easy to understand, nor easy to implement, nor does it have any tangible advantages.

It's one of those superfluous pseudo-standards that do nothing but add needless clutter. Thankfully, nobody seems to be using it anyway; I see only one directory in my ~/.local: vlc.

> I see no benefit.

The first benefit is that it removes clutter from your $HOME.

The second benefit is that you can now manage and back up your settings in a sane way.

* ~/.config contains config files (should not be lost, but if lost you can recreate them);

* ~/.local contains user data files (save them often, never lose them for they are not replaceable);

* ~/.cache contains cached information (can be tmpfs mounted, you can delete it any time you want, no loss in functionality, you just lose some optimization);

* ~/.run is for temporary system files (must be tmpfs mounted, or it must be cleaned at shutdown or power on).
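A minimal sketch of how a program can resolve these directories per the spec's env-var-plus-fallback scheme (the helper name xdg_dir is my own; note the spec defines no default for the runtime dir):

```python
import os

def xdg_dir(var, default):
    """Return $var if set and non-empty, else the spec's fallback."""
    value = os.environ.get(var, "")
    return value if value else os.path.expanduser(default)

config_home = xdg_dir("XDG_CONFIG_HOME", "~/.config")       # config files
data_home   = xdg_dir("XDG_DATA_HOME",   "~/.local/share")  # user data
cache_home  = xdg_dir("XDG_CACHE_HOME",  "~/.cache")        # disposable
runtime_dir = os.environ.get("XDG_RUNTIME_DIR")  # no fallback in the spec
```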

Luckily, most of the apps used on Linux systems now use it; you are probably using Mac OS X.

All of your points are invalid for a simple reason: Almost no software uses this fancy standard.

And for the backup case: whitelisting is usually a futile idea to begin with. Normally you'd prefer to back up the odd superfluous file rather than miss an important one.

Luckily most of the apps used on Linux systems now use it

Excuse me?

  $ find ~ -maxdepth 1 -name ".*" | wc -l

  $ find ~/.local | wc -l

  $ uname

I counter your anecdote with my anecdote:

  $ find ~ -maxdepth 1 -name '.*' | wc -l
  $ find ~/.local/ -maxdepth 1 | wc -l
  $ find ~/.local/share -maxdepth 1 | wc -l
  $ find ~/.config/ -maxdepth 1 | wc -l
  $ find ~/.cache/ -maxdepth 1 | wc -l
  $ uname
  $ lsb_release -d
  Description:	Ubuntu 12.04 LTS

That's interesting, and sort of disturbing.

My box is not a desktop, so that's probably the difference. I still find that scheme an atrocity.

When going to that length they could at least have settled for one directory (~/.appdata or whatever). Half-baked is the most polite description I can come up with.

But all those folders are different, so a single one would be annoying (or: require two layers.)

.config can be posted online and shared with others (like the many "dotfile" repos you'll see on GitHub).

.local needs to be backed up, and may have private data.

.cache can be blown away (or tmpfs.)

.run MUST be blown away on restart.

This is simple, sane, and works well.

Yes, if you push me like that I'll say it: it's incompetently overdone.

When your goal is to "reduce clutter" then 2 layers would be the minimum. You make another 4(?) folders in my home-directory and call that reducing clutter?

And when I delete an app then I have to look in all of them? That is just utterly backwards for no conceivable reason.

Due to the semantics you now suddenly need a cronjob or similar abomination that traverses all home directories and picks out stuff ("MUST" be blown away). This will by definition be fragile and have funny corner cases in the first few iterations. Also, what happens when ".run" is not blown away, like on a system that doesn't implement this nonsense?

The definitions are blurry and complex, many apps will get them wrong (.local vs .config etc.).

Unix already has a location for temp files. It's called /tmp.

And what the heck is going into .local anyway? When the user saves a file, he pretty surely doesn't want it buried under some dot-directory.

When your goal is to "reduce clutter" then 2 layers would be the minimum. You make another 4(?) folders in my home-directory and call that reducing clutter?

I can see a clear and very useful difference between RUNTIME_DIR, CACHE_DIR, and CONFIG_DIR. Consider a scenario where $HOME is on a networked filesystem. RUNTIME_DIR has to be outside that, and local to the machine's namespace. That's because it references things inherently local to the machine. That is, pids and pipes. These wouldn't make sense on any other machine and will just make the application's job harder.

I also set CACHE_DIR to be local (/tmp/$USER.cache.) That's because caching performs terribly when it's flying over the network. Chrome is the main culprit for me. It also fills my file quota within hours of use. However, it's still useful to keep that data in the medium term.

CONFIG_DIR and DATA_DIR, however, don't seem to be very different to me. I can't imagine a scenario where I want one but not the other. I might be using the wrong sort of applications. (For the record I have 8 files in .config, 10 dot files, and just 1 in .local/share.)

Due to the semantics you now suddenly need a cronjob or similar abomination that traverses all home-directories and picks out stuff ("MUST" be blown away).

Having RUNTIME_DIR on a tmpfs, like what most distributions do with /var/run, solves that problem. I map mine to /var/run/$user, even though I've yet to see an application actually use it. The spec BTW doesn't even specify the default value!

And what the heck is going into .local anyway? When the user saves a file, he pretty surely doesn't want it buried under some dot-directory.

I agree, the default values are silly. This, like most of Modern Unix, is an ugly hack which makes dealing with the rest of the ugly hacks a bit easier. If you want an elegant solution you'll probably have to throw away most of what was added during last 20 years. May I suggest starting with sockets?

   % uname
   % find ~/.local | wc -l
   % find ~ -maxdepth 1 -name ".*" | wc -l

> % find ~/.local | wc -l

Missing something?

> The first benefit is that it removes clutters from your $HOME.

Invisible clutter? That's a strange concept.

But the rest of your points indeed make sense. It is still easier, and probably more common, to back up the whole $HOME. But those points can be seen as a benefit, though not an obvious one.

As Rob mentions in his post, the more dotfiles there are in $HOME the slower path resolution for any subfiles becomes. How do we navigate to ./src? We open the directory and read all the entries until we find the one called "src". What happens if we encounter a morass of dotfiles beforehand? src takes a while to find. The clutter may be invisible to you, but it does gum up the works.

For what it's worth, most modern file systems (JFS, XFS, ext4, reiserfs, btrfs, ...) have logarithmic (or better) directory lookup times. This is achieved using hashes and b-trees (or even hash tables).

Fair point. Though anything using the standard POSIX dirent API would still get the performance hit (even if path resolution doesn't).
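To illustrate, a small sketch (my own, not from the thread) of a readdir-style linear scan, which touches every earlier entry before it finds the one you asked for:

```python
import os

def entries_scanned_before(dirpath, target):
    """Count how many entries a linear readdir-style scan touches
    before finding `target` (readdir order is unspecified)."""
    for i, entry in enumerate(os.scandir(dirpath), start=1):
        if entry.name == target:
            return i
    return None  # not present

# The more dotfiles sit in the directory, the more entries a naive
# scan may have to step over before it reaches "src".
```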

Unless you have many thousands of files, I can't imagine you would ever notice a slowdown.

It's not invisible when you're actually looking for an invisible file.

I consider it good practice to allow users to customize where your program creates files. I don't like $HOME to be the default dumping ground for anything I run.

A standard that uses environment variables means programs don't have to provide extra options for this customization (I've seen -f,--file , -c,--config and other variants). It allows for common code (libraries that implement the spec).

If you poke around for feature requests for various open source programs, you'll find XDG basedir compliance come up occasionally (more so for CLI utils). I wouldn't say "nobody seems to be using it"; a quick scan of my folders turns up Chromium, uzbl, and htop. Git's next release will be compliant too.

I don't particularly like dumping everything in home, but XDG is worse. Now, if I want to start over with a clean profile, there's a whole list of locations I have to zap. Spraying shit in three directories is worse than one.

My problem with that "one" directory is that I have to take care specifically to back it up. Applications that at least use ~/.cache mean I don't have to worry about backing up a bunch of useless stuff.
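For instance, a backup that skips the disposable cache is only a few lines (a sketch; backup_home and the archive layout are my own invention):

```python
import os
import tarfile

def backup_home(home, archive_path):
    """Archive a home directory, skipping the ~/.cache subtree."""
    def keep(tarinfo):
        parts = tarinfo.name.split("/")
        # Entries are named "<arcname>/..."; drop anything under .cache.
        if len(parts) > 1 and parts[1] == ".cache":
            return None
        return tarinfo
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(home, arcname="home", filter=keep)
```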

Isn't that what /tmp is for? Why not put cache files into /tmp/<username> with 0700 permission?

No, that's not necessarily what /tmp is for.

Consider a browser cache. /tmp is not guaranteed to be preserved between program invocations[1]. /var/tmp [2] might be a better place. /var/cache [3] is intended for exactly this kind of thing.

Unfortunately, an administrator in a multi-user setup may find it far more useful to keep user-specific caches in home directories. That way, you get all of the infrastructure for managing user data for free, like quotas.

Also note: if you're considering putting data into /tmp or /var/tmp, please honor the TMPDIR environment variable, if it is set[4].

[1]: http://www.pathname.com/fhs/pub/fhs-2.3.html#TMPTEMPORARYFIL...

[2]: http://www.pathname.com/fhs/pub/fhs-2.3.html#VARTMPTEMPORARY...

[3]: http://www.pathname.com/fhs/pub/fhs-2.3.html#VARCACHEAPPLICA...

[4]: http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_...
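Python's standard tempfile module, for one, already honors TMPDIR (a small illustration; the directory used is arbitrary):

```python
import os
import tempfile

# tempfile consults $TMPDIR (then $TEMP and $TMP, then platform
# defaults) the first time gettempdir() is called, then caches it.
os.environ["TMPDIR"] = tempfile.mkdtemp()  # any writable directory
tempfile.tempdir = None                    # drop the cached value
assert tempfile.gettempdir() == os.environ["TMPDIR"]
```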

> If you poke around for feature requests for various open source programs, you'll find XDG basedir compliance come up occasionally (moreso for CLI utils).

What fraction of those requests are by XDG advocates?

I ask because every standard comes with folks insisting that it be followed. While those folks claim to represent the interests of users, requests from actual users are a different matter.

I have made this specific feature request on an application (Mangler) and I have no affiliation with XDG.

Actually, it is in fairly wide use among desktop Linux applications.

It definitely has tangible benefits as well -- it promotes the clear separation of app data and user configuration, and unclutters the home folder.

And as pointed out by other replies, ~/.local is not the only XDG data dir.

I'm curious, how do you find it not easy to implement?

You are running a modern linux distro with only one file/dir in .local?

% cd ~/.local

cd: no such file or directory: /home/tammer/.local

% uname -s -r

Linux 3.4.7-1-ARCH

FreeBSD 7.1-RELEASE - No ~/.local, ~/.config

FreeBSD 9.0-RELEASE - No ~/.local, ~/.config

Linux 3.2.0-23 - No ~/.local, ~/.config

Linux 3.0.0-16 - No ~/.local, ~/.config

Linux 3.0.0-12 - No ~/.local, ~/.config

Honestly, I'd never even heard of this scheme before today.

I guess I should have said "a modern linux desktop environment." Are these shell boxes or desktops? In my experience the majority of programs that use this spec are gtk/qt programs.

P.S. Listing a kernel version does not really provide any information about your modern linux distro.

Interesting... I have ~/.cache, ~/.local and ~/.config and all 3 of them have quite a lot of subdirs and dotfiles inside.

I see no benefit.

How about consistency?

When there are forty years of existing applications using ~/.appname?

Check out your .config and .cache directories.

~/.local isn't primarily where you should look; check ~/.config

    find .local/  | wc -l
So I guess it depends on your Linux flavor. Mine is Ubuntu 12.04.

It's only used by those few users that use SSH.

? ssh keeps its files in ~/.ssh.

  [jlgreco@local] ~ % find ~/.config ~/.local | grep ssh | wc -l
So.. no.

Somehow .dotfiles work just fine under Windows :). As such, they are somehow more "portable".

Emacs saves them under %USERPROFILE% - in there I have .alice, .android, .easyhg, .eclipse, .gstreamer-0.10, .lighttable, .m2, .matplotlib, .... .VirtualBox, .zenmap

Also in "Application Data" - .emacs.d, .mc, .subversion

My point is - this system works somehow even under non-unixy systems.

Because it's simple.

%APPDATA% is as simple as home but much better on the clutter front.

True. I actually kinda like the Windows directory structure since Vista. If only the application devs would also follow the standard (hence the whole My Documents mess).

Incidentally, Emacs will use %UserProfile%\.emacs.d if you set %HOME% to %UserProfile%, which, as a UNIX user, I personally prefer for consistency. While not Emacs-specific, another trick I've found useful is that, on 64-bit Windows, you can open %SystemRoot%\System32\somefile in 32-bit Emacs iff you refer to it as %SystemRoot%\sysnative\somefile (the original path yields %SystemRoot%\SysWOW64\somefile).

It didn't use to work. I think it was added in XP SP2 or so, probably due to the popularity of many *nix ports like Emacs.

It always did, at least in NTFS.

What did not, and still does not, work is creating them via Windows Explorer.

You can create them just fine from the command line, or via Windows APIs.

You can create them in Windows Explorer with a little trick: type the name with a trailing dot. Entering ".vimrc." will save as ".vimrc", no trailing dot.

Great! I was not aware of it.

I am always pleasantly surprised when programs follow XDG. One notable example is the fish shell. Most shells put dotfiles in $HOME, like .bash_profile or .zshrc. Fish puts everything in .config/fish/.

I would have preferred it if it did not rely on environment variables, since almost nobody sets them, which means there will be untested code in applications implementing the specification.

Where possible, applications should rely on tested libraries to implement the specification, like PyXDG.

I notice the $XDG_DATA_HOME environment variable isn't set in my Intrepid install. Maybe it's set in later Ubuntu versions.

But the value not being set for me implies that it not being set is pretty common. Thus it seems like supporting this standard is going to involve supporting a fallback of whatever you would do otherwise.

So I don't see "easy" at all but rather extra BS. Sorry.

From the spec: If $XDG_DATA_HOME is either not set or empty, a default equal to $HOME/.local/share should be used. Similarly there's a default (.config/) defined for XDG_CONFIG_HOME. So the fallback is well-defined rather than "whatever you would do otherwise".

Also, for my 2c, you should consider updating your Intrepid install if you at all can. It hasn't been supported for over two years, so it hasn't seen any security updates in that time. The Ubuntu do-release-upgrade system is pretty easy and reliable.

$HOME/.local/share doesn't exist on OSX either.

XDG is a freedesktop standard. Meaning: checking that standard on OS X is a bit strange.

I thought we were talking about unix, not the "free desktop standard".

If it does not exist, you've not used any software that uses it. I have.

Not much of a standard then - I have done hundreds of installs.

On my system (Xubuntu), these environment variables are empty. Seems not to be much of a standard.

To be fair, that's how it works: they have default values when they're not set. These are for when you want to override them, e.g. to keep cache files local instead of on your network-mounted home directory.

I hate having dotfiles in ~/ for the same reason why I hate "My Documents" in Windows: because it's supposed to be my space that I organize, not a generic dumping ground for your config files, brand-named folders, or other nonessential garbage.

I want my space to be mine. Keep your app's stuff out of there!

I am using a manually created /data partition as my personal space. Many dotfiles and 'dot-directories' are distro- and release-specific. Having all my personal data in a separate partition makes switching distros (or using multiple ones) a breeze, as I start with all my data intact and a virgin /home, which I can populate with my dotfiles as I see fit.

My Documents was never meant to be a storage space for application settings. Application Data / AppData is the place for this. Of course, whether an application complies with this depends on the developer, and many applications break that rule.

But My Documents is one level below your homedir.

Your homedir equivalent on Windows (since Windows XP) is /Documents and Settings/username, or the modern shortened equivalent. That has subdirectories for documents and application settings. Apps that put config data in My Documents are using behaviour dating back to Windows 95.

It's the same on Mac OS - /Users/username has a Documents subdirectory. Mac apps tend to be better behaved, but a few manage to pollute the Documents directory. On Linux you may roll your own or use a ~/Documents directory created by your desktop environment. The fact is your homedir is the home of pretty much anything that is user-specific, and you either need to collect all the config data into some sort of config directory, or do your own organising in a subdirectory, or both.

On Windows I've retreated to Desktop which remains fairly sane as long as I opt-out or remove shortcuts being created after an install.

"My Documents" is a nightmare... I think only one folder (out of nearly a dozen) and only one file are actually of my making.

Where would you choose to put application config instead?

Literally anywhere but my personal space. Why not have /etc/<username>/? I could even live with a single ~/.config/ but very few apps use that.

It's actually quite useful to have everything that belongs to a user under one directory.

The canonical example is the backup. There's a strong case to be made for "tar czf /tmp/backup.tgz ~". Do you really want your backups to become as complicated as they are on OSX and Windows?

Likewise, being able to mount home-directories from remote servers, and being able to easily delete/move/quota users are highly desirable features in multi-user systems.

Microsoft now encourages the use of AppData/, which is much better.

I think the case of hidden dot files is a good example of 'convention over configuration'[1].

The point is accepted that it came into being thanks to a lazy programmer. But surely early users might have just liked the unintended consequence of some files (dot files) being hidden. Like most of us who, when we learnt Unix, thought it was by design.

If early users had found the consequence a handicap, it would have been fixed long ago.

It's similar to the use of hashtags on Twitter, or @ addressing, which got adopted by users first and so became features (although the paths to them being considered features are different).

[1] http://en.wikipedia.org/wiki/Convention_over_configuration

Edit: Grammar

He also writes why the dd command is so horrible: "dd is horrible on purpose. It's a joke about OS/360 JCL. But today it's an internationally standardized joke. I guess that says it all."


The fact that "." and ".." were allocated actual directory entries and were returned when reading the directory, rather than just being handled by the kernel when parsing pathnames seems like the original sin of expediency here.

Why? Seems like a brilliantly simple quick solution that would be rather easy to roll back in the future and lacks any real downsides besides being a tad weird.

Because it was creating real directory entries that caused "." and ".." to be visible to userspace programs reading a directory, which then led to the hack in "ls" to hide them, which is where the article picks up.

It also isn't that easy to roll back once userspace programs start to rely on it - for example, the assumption that the number of files in a directory is equal to st_nlink - 2 is now so widespread that it's part of the UNIX API.

shrug, it seems to have all worked out pretty well.

But all programs must make exceptions for these entries. Imagine trying to add a third magic filename, '...', for the grandparent for example. You would have a lot of coding to do. Not really possible. It would have been better to mark the dirs as special somehow rather than having each program have the convention programmed in.

...and of course even the kernel had to have exceptions for them anyway - it knew they were special because it had to stop userspace from unlink()ing or rename()ing them.

(Of course I meant "the number of subdirectories in a directory" here rather than "files").

Except, like, in the root.

Even the root has a .. entry, a link to itself.

Plus, you can find out the directory inode and the parent directory inode.

Dotfiles are not perfect, but to take such a negative view of a feature that also helped is a bit of a revisionist attempt, IMHO.

Dotfiles provided a poor but at least simple way to store program-specific, user-specific configuration, since another standard was missing. After all, it's a simple and decentralized system that worked very well with the concept of Unix users and ACLs: you write something inside your home directory, and this changes the behavior of your program.

Consider that this was invented many decades ago. Now it seems a lot better to have directories with sub directories. Maybe back then it was considered to be a waste of resources, inodes, and so forth.

We can improve it, create a new standard, and have something better than dot files, but dot files are better than many other over-engineered solutions that I can imagine coming out of some kind of design commission to substitute them.

Every time you passed your vim configuration to a friend, you just copied a text file and sent it via email: you enjoyed one of the good points about dotfiles. Every time you did something like cat dotfile | grep option, you enjoyed the positive effects of single-file plaintext configuration.

Also, it's worth saying that dotfiles are not just the concept of a hidden file with config inside. A lot of dotfiles also share a common simple format of multiple "<option> <value>" lines, which is better than XML or other hard-to-type formats (IMHO JSON itself is not good for humans).
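A reader for that common format fits in a few lines (a hypothetical helper; real dotfiles vary in their comment and quoting rules):

```python
def parse_dotfile(text):
    """Parse lines of '<option> <value>' into a dict.

    Blank lines and '#' comments are skipped; the value is everything
    after the first whitespace run (empty for bare flag options).
    """
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        parts = line.split(None, 1)
        options[parts[0]] = parts[1].strip() if len(parts) > 1 else ""
    return options
```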

How do any of those advantages apply only to dotfiles and not to plain files in general?

I think there's an aspect of convenience, too. Personally, I prefer that to, say, configuration stored in some unspecified location in ~/Library/Application Support/Application/* (on OS X).

For those who object that dot files serve a purpose, I don't dispute that but counter that it's the files that serve the purpose, not the convention for their names.

I would like to hear a good argument for why hidden files and folders are a good thing.

They are a natural segregator of novice and advanced users. For someone with a limited level of interaction with the unix shell (remember that in the old days everyone in a science academic department used a Unix machine, even the dusty professors you kept in the back of the supplies cabinet), setting up their account and then making sure that they couldn't get into any trouble because they didn't know about 'ls -a' was very much a feature, trust me.

Moreover I personally much prefer them to global configuration directories because they are always local to the thing being configured and you can always override a global configuration by using them.

In fact I think they are a very elegant way to handle "hidden options" - stuff you want to expose to the power users but not bother newbies with.

tl;dr: I am not convinced they are a misfeature.

OS X has a ~/Library directory that houses application settings and much else. It is hidden by default.

There is no question there is a need to have a place that "plebeian users" can't access. IMO, dot-files and dot-directories are as good as anywhere.

OSX, because of its big corporate watchdog, and the separation of the 'GUI' layer and the unix layer, has the cleanest directory structure by far, IMO. Most of the time you don't even need a manpage or Google to find out how to repair a misbehaving application - just delete its plist in ~/Library/Preferences, possibly also its ../ApplicationSupport directory. The OSX defaults system is so well designed that every time I need to do support on other systems I just ask myself why not every OS works this way. I can understand why Linux is the way it is and I like that too (for other reasons), but it at least shows why the Windows Registry is such a bad idea.

Why can't I reply to some comments? That comment about the new sandboxed folder structure made me realize he's right. What's going on with all those symlinks inside the container directory?

This just shows to me what a bad idea sandboxing is for those kind of apps that are supposed to interoperate with the whole system. Is there even a security benefit vs. pure unix permissions if you sandbox the filesystem but then you link in tons of crap that could be potentially attacked?

It's to do with comments that get deeply nested quickly, presumably to prevent back-and-forth flame wars and the like. If you really want to reply to such a comment, click "link" and there is a reply link on that page, or just wait.

> This just shows to me what a bad idea sandboxing is for those kind of apps that are supposed to interoperate with the whole system.

Isn't the point of sandboxing specifically to prevent apps interoperating with the whole system? (I've not really paid it much attention so far)

Yeah but what's the point in making Preview.app sandboxed? It's a damn document viewer.

There have been numerous PDF exploits in the past (including jail breaking iOS) - sounds like it needs its scope limiting.

To be fair, that filesystem structure was already available in NeXTStep.

had the cleanest directory structure until 10.8:

~/Library/Containers/com.apple.Preview/Data/Library/Application Support/Preview


Isn't this change because of app sandboxing?

I wonder if that is a good solution for separating novices and pros though? For me, hidden files and folders only help to reinforce the confusion people have about computers. It's one thing to provide a folder to a user with a lock or shield on it, it's another to completely hide it from them.

Hiding is absolutely good design at every level. Not seeing UI controls, preferences, or anything that is not relevant to the task at hand can make that task simpler, easier to understand, and more productive. If you need to see dotfiles, well, you might need to know about man pages or at least that commandline programs take flags. It's not like they're invisible.

There are plenty of examples: video games slowly reveal more skills as you learn and encounter progressively harder enemies. A good app should be usable at first launch (or only require minimal setup). Configuration and advanced features can come later.

Another great filesystem-level example is OS X app bundles - an entire directory hierarchy appears as a single file/application. If you need to look inside (not likely), you have to know about right-click or the action widget, but 99.9% of the time you see only what you need. OS X and Windows both completely hide "system" folders in Finder/Explorer as well.

Yes, hiding things can be confusing if it's not done right, but the alternative of showing everything always is definitely not the way to go.

Which games reveal skills you have always had, as opposed to unlocking better powers? I would be really annoyed by players-guide-only abilities.

In the Quakes, various movement techniques are only available to those who know they exist. That worked very well, I think; it lent the games a much longer learning curve.

Negative. A lock or shield will convince a user that they're not to touch it, ever... unless they are too dumb to understand the connotations, in which case they'll simply break it anyway. Hidden files are a good way to go - out of sight, out of mind for most people, but not impossible to show them when they do need to touch them.

They aren't hidden, completely or otherwise. Dotfiles are locked, '-a' is the key.


They are a natural segregator of novice and advanced users.

Yeah, they let that guy in the computer lab who "really knows Unix" show his stuff.

I remember encountering these dot-whatever files back in the day, how changing all the idiotic terminal settings depended on them and how remembering their names or interpreting their values was nearly impossible, and how the cool geeks of the lab had about six seconds of their time available to explain the situation.

99% of all dotfiles that I am aware of are named for the programs that they are for. Bash's start with .bash, Zsh's with .zsh, mplayer is .mplayer, Vim's start with .vim, elinks is .elinks, screen's start with .screen, Dropbox is .dropbox... I'm not sure how any of that is hard to remember. The only real barrier to entry here is knowing that you should look for them in the first place (well, that and the new XDG crap..).

What is actually in them is an entirely different matter.

Perhaps the OP prefers to use the registry to store settings. After all, HKEY_XXX1230_APP_123 is so friendly.

Excuse me, but much more typical example is HKEY_CURRENT_USER\Software\MiKTeX.org\MiKTeX\2.9\Core. The registry has rather respected conventions.

Yes, thanks.

They work well with per-directory settings. It's a simple convention and it works really well for things like .git, .gitignore, language-specific settings files and so on.

Putting files and directories like this inside another directory would not win you much, especially if you only have one in a given location (e.g. you only have a .git directory). But you wouldn't want to keep tripping over your .git directories either!

However, having a dedicated directory for configuration files applying to a user or the whole system is a different story. I would much rather just have a ~/config directory than a home folder full of dot files.

I type ls in my home directory quite frequently. Somehow stuff accumulates there and I need to find a better place for it. I never type ls -a, precisely because there are a million files I just don't care about. Some means of saying "this file is here if you want it, but it won't show up in a listing because it's unlikely to be the file you're looking for" is convenient.

I would like to hear a good argument for why hidden files and folders are a good thing.

They keep users from monkeying with stuff until they are smart enough to find the hidden files.

*knowledgeable enough

No, 'smart' is the right word.

Why is that? As a long time Windows user who only recently switched to a Unix system, I was unfamiliar with dot files. Windows uses a different method to hide files. Would you call me stupid because of that? Once I gained the knowledge I needed, these files were no longer a mystery (in fact I learned that there are actually three ways to hide files in Unix systems).

Smart has the right number of syllables for the sentence I typed. Knowledgeable, too many.

The main takeaway, I think, is that the reason we do things the way we do them is that this is how we have always done them, not some kind of genius design. And actually the whole thing can start out as a mistake, and then later on basically become a religion. It's very funny, actually.

Yes, very organic. The original programming shortcut accidentally created a sort of ecological niche, in which all sorts of things started to take root. Now, nobody can bulldoze the lot because there's too much stuff growing there.

If not dot files lazy programmers would have just found another way. Just look at the results of lazy programmers on any Android SD card.

Is there a preferred standard for using the SD/external storage on Android?


> I do find it odd that so many apps have this problem. It's trivially easy to get the proper location. Just call getExternalFilesDir() . Deletion at uninstall happens automatically. In fact, it's the ONLY way to make sure those files are deleted when you uninstall, because you can't run code at uninstallation.

Probably because it came in fairly late (Froyo, I think).

Yay, I am one of them.

The author's gripe seems to be that the hiding of dotfiles was unintended, ergo dotfiles are Bad. Whether they were intended is irrelevant; their wide usage vindicates the practice. After all, traction = value. The problem of program state/configuration/metadata storage is adequately met by dotfiles.

There are, no doubt, numerous unintended behaviors of programs. Most of these are simply ignored and certainly not leveraged the way the dot behavior is.

People don't go out of their way to abuse an unintended system behavior; they simply leverage all capabilities of a system ("intended" or not) to meet their needs. Had dotfiles not gained traction, some other solution would have been designed (or "engineered") to meet the needs of program state/configuration/metadata storage.

[Tangent: All of this reminds me of grammar freaks that harp on "correct" usage, completely oblivious to the fact that grammar changes, and "correct" is merely a lightweight pointer to the current norm.]

I really like it that G+ is becoming a "lightweight blogging" platform. There are too many of these around, and folding them into a "social network" seems like a good idea. I wish they would add more features that would make this easier, though, since in general I think it's in everyone's interest and will pull more traffic to the site.

A bit off topic, but is it just me or is the quality of comments on public G+ posts absolutely terrible? Within a few hours there have been more than 100 replies. Most are inane, rude, of the useless “great post!” type, or just plain old spam. I could only manage to glean about 8 or 9 insightful replies.

Is this something intrinsic to G+? Is it a function of the author's popularity? The discussions here are normally much more sensible, but I would expect HN's readership to have a fairly similar demographic to that of Rob Pike's G+ subscribers.

G+ is fair to middlin' at a bunch of stuff. Comments lack threading. There's poor noise control. Content filtering is limited to "+1" or "flag". There's no "-1" button (though various Chrome extensions have offered this at various points).

The main advantage is that it's a large community (10m+ users presently) initially seeded by Googlers (e.g. tech-savvy people), and including a few notable luminaries such as Rob Pike. And if you've got the right circles, eventually good content finds its way into your stream. Sometimes (content discovery/surfacing is something G+ does surprisingly poorly, and is an area at which HN, Reddit, and StackExchange win hugely).

That was exactly my thought--there were a few thoughtful replies in there, but otherwise just absolute mindless bullshit. It's the same way when Linus makes a post about how he's just updated the code in his preferred emacs editor, spawning 500 posts of "I use Vim!"

Adding a new one is not a good solution to "there are too many around". And I don't think consolidating everything into google is a good idea; these kind of posts belong on a different network to photos of my holidays.

I'd like it if I didn't need a Google account to read anything on it.

Well be happy then, you don't need a Google account, as can be seen by entering the url in an Incognito window.

The problem arises if you are signed into google elsewhere but not on G+.

Yeah, it's pretty lame that if I'm signed into Google but don't want to accept G+'s real name policy, I can't read otherwise public posts.

That's really interesting. I always assumed hiding dotfiles was a deliberate convention, but to semi-quote one of the commenters, Rob's got a point. Or two.

Heh, I remember when I first found out about dotfiles. I thought it was a genius idea...

Little did I know.

If someone thinks hidden files are a misstep, then I don't want them designing an OS. Grandma really doesn't care about some conf file, or anything like that. She cares about the pictures of her grandkids.

The opinion that there shouldn't be hidden files comes from a perspective of someone who is a "power user" and who can't step back and see that most users really don't care for some .config file. To them it's clutter that gets in the way of finding what they really care about.

That said, dot files may be the wrong way to do it. I like ~/Library in OSX. That's one good way.

Edit: Note that I'm talking about a general trend in the post's comments and on here. Not the author's opinion.

Sounds like you agree with him, as he advocates a 'conf' directory at the end of his post.

Did you read his post? He said exactly that. Put them in a config directory of some sort.

Yes. I'm not talking about his post. I'm talking about a few other comments here and on G+.

This is a problem that libetc is supposed to help solve: http://ordiluc.net/fs/libetc/

That’s nice, shame it’s not maintained. On the other hand, one of the goals of XDG is to differentiate between application cache, actual settings, etc. If FooApp stores a 2GB cache file in .foo, redirecting it blindly to .config just makes my backups that much harder.
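The cache/config separation above can be sketched in a few lines of shell. This is the lookup an XDG-aware app could do (the "fooapp" name is a made-up example): honor $XDG_CONFIG_HOME and $XDG_CACHE_HOME if set, and fall back to the spec's defaults of ~/.config and ~/.cache otherwise, so a multi-gigabyte cache never lands in the directory you back up.

```shell
# Resolve per-app directories per the XDG Base Directory spec,
# using the spec's documented fallbacks when the variables are unset.
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/fooapp"
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/fooapp"

echo "config: $config_dir"   # back this up
echo "cache:  $cache_dir"    # safe to delete or tmpfs-mount
```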

It also begs the question, why it was named "etc" :)

You can ask this question for most of unix. Why /etc? Why /bin and /usr/bin? (Answer: At one time hard disks were very small and crashed a lot), why do we presume screens are black and white, etc, etc.

Try to change any of it though, and a lot of luddites will come out screaming bloody murder. It's just not UNIX if it makes sense.

The origins of /etc are lost in history. Wikipedia [1] says that at Bell Labs /etc was pronounced "et caetera," and contained files that didn't belong elsewhere. And it had the advantage over conf or misc that it was only 3 letters.

[1] http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard

Maybe the alternative was "..." :)

I'm guessing it's because /etc contains configuration files and dot-files are just configuration files. Or do you mean, why was /etc named etc?

For the latter, a lazy copy/paste from Wikipedia:

"There has been controversy over the meaning of the name itself. In early versions of the UNIX Implementation Document from Bell labs, /etc is referred to as the etcetera directory as this directory historically held everything that did not belong elsewhere (however, the FHS restricts /etc to static configuration files and may not contain binaries)."

I think they meant the latter. Why is it /etc instead of, perhaps more obvious, /config or /settings?

If it was meant for configuration - probably it would've been - /cfg /ini /set /opt /flg /arg /prm (params)

As someone said - naming things is the hardest!

my de-obfuscation attempt:

etc > e.t.c > edit to configure.

the .rc suffix has a nice history btw

What's the history of that?

A legacy of the runcom facility, which let you record and replay a sequence of commands.


"How many bugs and wasted CPU cycles and instances of human frustration (not to mention bad design) have resulted from that one small shortcut about 40 years ago?"

Sigh, if only most of us had worked on a software system that has lasted as long as that.

Tangent: I'm grumpy that Eclipse IDE uses the file names .project and .classpath. So they're hidden by default. Requiring special treatment.

Their content is XML. What's wrong with project.xml and classpath.xml?


Can't read without signing into Google…

oh well nothing of value was lost.

I'm with you.
