Maybe my brain has been twisted by too many years of using Unix, but that's exactly how it's supposed to work. "rm" is really editing the directory, not the file, so the owner of the directory is what matters. The file may not even actually be deleted: if there's a hard link to it from another directory, the file will still exist there. inodes are awesome.
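A quick sketch of that point, using a throwaway directory (all the names here are made up): rm removes one name for an inode, and the data survives as long as another hard link to it exists.

```shell
# rm edits the directory: it removes one name (directory entry) for an inode.
# The file's data is freed only when the inode's link count drops to zero.
dir=$(mktemp -d)
echo hello > "$dir/original"
ln "$dir/original" "$dir/other-name"   # second name for the same inode
rm "$dir/original"                     # removes one directory entry
cat "$dir/other-name"                  # the data is still reachable here
```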
I agree; this should not be surprising. The permission needed to link (or unlink) a file in a directory is distinct from the permissions needed to change the data in the file itself.
Here's a practical example: I can create ~/public with permissions 777 (fully permissive) and allow my friends to read and write freely in it.
They can create files with permissions 600 (only they can read/write them).
I can't open those files and read their contents, but I can remove the whole directory from my home directory.
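A sketch of that setup, with a scratch directory standing in for ~/public (the cross-user part is only shown in comments, since demonstrating it for real needs a second account):

```shell
dir=$(mktemp -d)           # stands in for ~/public
chmod 777 "$dir"           # anyone may create and delete entries here
# A friend, logged in as another user, could then do:
#   touch "$dir/theirfile" && chmod 600 "$dir/theirfile"
touch "$dir/theirfile"
chmod 600 "$dir/theirfile"
# The directory owner can't read a 600 file belonging to someone else,
# but can still unlink it: rm only needs write+execute on the directory.
rm -f "$dir/theirfile"
```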
Yes, I'm not sure why this is surprising: the write bit on a file governs modification of the file itself, whereas the ability to create (or delete) files in a directory is governed by the write permission on the directory.
EDIT: NelsonMinar explains this better in his comment, but if you think in terms of inodes rather than files, this behavior makes perfect sense.
This attack is actually not relevant here, because tmpwatch (the daemon whose behavior causes the problem) runs as root and calls unlink as root (as the article explicitly notes).
In fact, /tmp generally does not exhibit the behavior cited by this article: it is marked with a special mode bit that only lets users remove or rename directory entries for files they own.
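That flag is the sticky bit, visible as the trailing "t" in the mode string. You can see it on a scratch directory (1777 mirrors the usual mode of /tmp):

```shell
dir=$(mktemp -d)
chmod 1777 "$dir"   # world-writable plus the sticky bit, like /tmp
ls -ld "$dir"       # mode prints as drwxrwxrwt: the 't' is the sticky bit
# With the sticky bit set, only a file's owner, the directory's owner,
# or root may unlink or rename entries in this directory.
```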
True, thanks for the clarification. Actually, I never understood why the "sticky bit" is called that; the Wikipedia article gives an answer:
The sticky bit was introduced in the Fifth Edition of Unix in 1974 for use with pure executable files. When set, it instructed the operating system to retain the text segment of the program in swap space after the process exited.
Yes, at the risk of sounding arrogant: what is this doing on the front page of HN? This isn't a security flaw and it isn't new; it's a well-understood Unix design decision.
If the aim was to educate people about Unix design principles, then it would have been useful to include references to the relevant specifications:
or at least have some meaningful discussion about the rationale for this design and its consequences. But this is just a statement of well-known fact with no context or added value, placed behind a link-bait title.
Your point is well taken. I did not mean to imply that this is a bad design decision on the part of Unix; I was also trying to articulate why the current behavior makes sense by describing the mechanics of inodes. However, in my experience many novice developers, especially those with limited knowledge of Unix, find this behavior quite surprising. Delving into this curiosity is a good jumping-off point for learning more about Unix file systems (and, yes, that is all the more reason to include references). The HN audience surely includes many people, such as yourself, who consider this completely common knowledge, but the fact that at least 9 people found the article useful indicates otherwise.
This is of course intentional. As far as I can tell, though, you can't delete a non-empty subdirectory owned by root containing files that are also owned by root, even if you own the directory containing that subdirectory. To delete the subdirectory you must first delete the files in it, and you can't do that without write permission on the subdirectory itself.
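You can simulate this without root by dropping your own write permission on a subdirectory (run as a regular user; root bypasses permission checks, so for root the rm below would succeed):

```shell
dir=$(mktemp -d)
mkdir "$dir/sub"
touch "$dir/sub/file"
chmod 555 "$dir/sub"                     # read+execute only: no unlinking inside sub
rm -rf "$dir/sub" 2>/dev/null || true    # fails: removing sub/file needs write on sub
ls "$dir"                                # sub is still there, still non-empty
```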