Unix file access rwx permissions are simple but tricky (igoro.pro)
67 points by hyzyla 36 days ago | 74 comments



It's counterintuitive that the owner can have less rights than the others. Honestly, I've never seen it put in practice in any real-world file system.

Incidentally, this is also not very efficient: UNIX permissions as they are today require 9 bits, namely rwx for owner, rwx for group, and rwx for others. But in an alternative universe where owner's rights win over group's rights which win over others' rights, permissions could be coded in just 6 bits: 2 to express who can read, 2 for who can write, and 2 for who can execute. Each set of 2 bits would be interpreted this way: 00=nobody, 01=only owner, 10=group or owner, 11=everybody.
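Roughly, the packing could look like this (purely a sketch of the hypothetical scheme, not anything a real shell or kernel does):

    # 00=nobody, 01=owner, 10=owner+group, 11=everybody
    r=3; w=2; x=1                          # read: everybody, write: owner+group, execute: owner only
    perm=$(( (r << 4) | (w << 2) | x ))    # pack into 6 bits
    echo "encoded: $perm"                  # 57
    echo "read=$(( (perm >> 4) & 3 )) write=$(( (perm >> 2) & 3 )) exec=$(( perm & 3 ))"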


Alas, you're missing some bits. The sticky bit in particular can be associated with each of those too. There are probably others that I don't remember off the top of my head.

> It's counterintuitive that the owner can have less rights than the others

I completely concur. I've also never seen it used in the wild, but I know about it because I stumbled upon it more than once building scripts and not being careful about what flags are set.


> Alas, you're missing some bits.

Why "alas"? I'm comparing apples with apples: UNIX base permissions vs alternative-universe base permissions. If you want to add the 3 flags (sticky, setuid and setgid), you can add 3 bits to both sides of the comparison.


> Sticky bit

Interestingly, I have come across some people who confuse the sticky bit with the setuid bit.


It's pretty common for people to misuse the terminology but not the semantics.


I have had some colleagues who did not get the semantics either, for many features of Unix permission bits, not just the two mentioned above.

For example, some did not know how to use the symbolic form of the chmod command's permission flags (e.g. ug+rx), only the octal ones (like 660), and would typically use 777 indiscriminately, not knowing about the security risks :)
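For anyone who only ever learned the octal form: the symbolic flags just add or remove specific bits without restating the whole mode, e.g. (filenames made up):

    chmod ug+rx script.sh     # add read+execute for user and group, leave others untouched
    chmod o-rwx secrets.txt   # strip all access from others
    chmod 660 shared.db       # octal form of u=rw,g=rw,o=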


...especially when they don't read documentation as often as they should ;)

ahem

Yes, I mean the setuid bit. The bit that makes the groups work for read/write to a directory. The same bit that's dangerous if it's set for a root user or group, and can be iffy if it's executable... yes that bit.


> owner can have less rights than the others.

Indeed it's funny that you can `sudo chown root regularfile` and you'll then be able to read it, since the group permissions now apply rather than the owner's.
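A quick way to see it (a sketch; assumes your user owns the file and is a member of its group):

    echo secret > f
    chmod 060 f          # owner: ---, group: rw-
    cat f                # fails: you are the owner, so only the owner bits apply
    sudo chown root f    # you are no longer the owner...
    cat f                # ...so the group bits apply, and this now prints "secret"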


I'd argue you could drop x as well. It's really an attribute not a permission, since you can copy a file without x then chmod +x it.

That fails for setuid files but that's a setuid thing not an x thing. It also fails I guess for executables that check argv[0] but probably not important.


This would also have the advantage of being cheaper to list out; instead of rw-r----- (0640), it would just be “gu-” (-, u, g, a for the four levels).


I attended a Tanenbaum lecture once where he talked about how silly it is that nothing happens if permissions are reduced for a file while some other user/process has an open handle to it, and that this is something Linux doesn't care to handle while MINIX does (or perhaps just that a kernel/filesystem should handle it, and few do; I don't recall exactly). Surely an edge case (logging? what else? I never keep files open for too long), but I thought it was an interesting one.

You can test this in Bash: userA does cat>/tmp/newfile (assuming a chmod or relaxed umask so /tmp/newfile is created with permissions 0664), userA types in lines of text every few seconds, userB does tail -f /tmp/newfile and watches lines appear, then userA does chmod 600 /tmp/newfile, but userB can continue to tail -f /tmp/newfile and watch lines appear.
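In other words, roughly this in two terminals:

    # terminal 1 (userA)
    umask 002
    cat > /tmp/newfile        # type a line every few seconds

    # terminal 2 (userB)
    tail -f /tmp/newfile      # lines appear as userA types them

    # terminal 1 (userA), later
    chmod 600 /tmp/newfile    # userB's tail keeps showing new lines anyway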


Yes, it's the equivalent of "perimeter security" in networking. Once you are inside accessing a resource (connected to a network node or reading from a file descriptor in the kernel), you don't lose that access.


The existing pattern leads to very useful usecases though: there are resources a server only needs to open once (e.g. during startup), and being able to then remove access while holding onto the one handle you're going to use is a security win.
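A shell sketch of that pattern (path made up):

    exec 3< /etc/myapp/secret.key     # open the secret once and hold the fd
    chmod 000 /etc/myapp/secret.key   # nothing (non-root) can open it any more...
    cat <&3                           # ...but the already-open fd still reads fine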


How would you want memory-mapped files to work, if permission changes affected open files?


You could synchronously invalidate the PTEs corresponding to the mapped pages on chmod and let the soft fault handler re-evaluate pages when next accessed.

That said, Linux gets this right, and Tanenbaum gets it wrong. Preserving access rights requested on open makes the system easier to reason about.


You could but it would induce corruption in programs. Programs expect I/O to fail much more often than they expect memory accesses to fail.


Programs using mmap have to be prepared for SIGBUS on mapped region access anyway --- consider the surprise removal (yanking out USB stick) case.

That few programs, especially native code ones, bother to make themselves robust against this failure mode of mmap doesn't make it any less part of the mmap contract between the kernel and userspace.


Contracts are one thing, reality is something else though.


The nicer things are suid and sgid.

suid is to run things as another user without passwords. Mostly used for root access today and ignored for anything else. I personally think that's a missed opportunity when they added the unshare/namespace/capdrop stuff... would have been so nice if the interface to containers was a freaking simple 'suid as this lowly user' for a userland API. Anyway.

and sgid ON DIRECTORIES is so that users can save files in a group directory that others in the group can then also update. So you can have `/srv/http/htdocs userA webmasters drwxrws---`
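A sketch of that setup, using the names from the example:

    sudo mkdir -p /srv/http/htdocs
    sudo chown userA:webmasters /srv/http/htdocs
    sudo chmod 2770 /srv/http/htdocs   # the leading 2 is setgid: new files inherit the webmasters group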

then there's umask which may help or get in the way. and getfacl et al.

overall it's a mess that covers many usecases after you've been initiated.


Interesting, I was just diving into the permission system today. I was wondering if it was possible to delegate administration of a directory, e.g. give permission to some non-root user to delete files created by others in that directory.

Turns out it doesn't seem possible. Even if you use ACLs, whatever default ACL you set can just be removed from sub-directories by their respective owners. This seems like a big blind spot, unless I just missed something; all those groups, access lists, bits, and I can't even do that?
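Concretely, the kind of thing I mean (a sketch using setfacl; names made up):

    setfacl -R -d -m u:admin2:rwx /shared   # default ACL: admin2 gets rwx on newly created entries
    # ...but the owner of any subdirectory can simply wipe it again:
    setfacl -b /shared/their-subdir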


Yeah, AFAIK you'd have to make a frontend to `rm` and execute it with sudo. I've done this a few times.

Relatedly, and possibly helpful for implementing half of the scheme: you can make a dropbox[0] style directory by removing the read (r) attribute and having some program continuously scan and rename dropped files to some random string.

[0] dropbox in the traditional meaning of course, not the cloud storage
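For reference, the traditional drop-box permissions are roughly (path made up):

    mkdir /srv/dropbox
    chmod 733 /srv/dropbox   # owner: rwx; group/other: -wx, so anyone can drop files in but not list them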


That had actually never occurred to me. It's truly unintuitive.


I've been wondering about this for a while. Do we really need multiple users for desktop unix? I get that you want some division between system and user, to protect the user against themselves. And read-only files are similarly useful, if only because some devices are read-only. But do we really need user/group/other permissions for desktop unix, and all the complexity of groups, euid, etc.?

Edit: not sure why I'm getting downvoted. Is it that offensive to question orthodoxy?


User is useful for isolation, not just between system and user, but also between different bits of the system. This is more useful on a server running multiple different services, but desktop software often has multiple services as well (although I can’t think of an example right now).

Groups are a bit more niche IMO, but without groups there is no real other way to express a constraint like: thing X uses files A and B, thing Y uses files B and C; how can they share B without making it globally accessible or duplicating it? That's probably a less frequent occurrence, but it does come up (again, more on servers than desktops).


Except they all already use the insanity that is SELinux because they found users and groups too limiting for their "guaranteed to break production at runtime" security scheme. People suggesting we need user accounts for isolation are a generation behind the engineers tasked with papering over security issues.


SELinux is definitely over-engineered, but it's not for the same problem as file permissions. File permissions are purely for files, whereas increasingly Linux does not nicely follow the philosophy of everything being a file. Once you have non-file resources and capabilities, it is very tempting to generalise permissions to cover them, and if you bolt that on you get something like SELinux.

I wouldn’t say file permissions are necessarily outdated. You can get pretty far using something like systemd capabilities, which are much more supplementary to the existing Unix file permission model.

The very best approach to this IMO is OpenBSD pledge and unveil. Managing the permissions as part of the binary itself is much less fragile, although it doesn’t have exactly the same threat model. It also has the added benefit of temporal safety, which I’m not sure you can do with SELinux.
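A minimal sketch of the systemd route mentioned above (the directives are real systemd options; the service user and paths are made up):

    [Service]
    User=mydaemon
    NoNewPrivileges=yes
    CapabilityBoundingSet=CAP_NET_BIND_SERVICE
    ProtectHome=yes
    ReadWritePaths=/var/lib/mydaemon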


AppArmor tends to break less because it doesn't revolve around esoteric file metadata for labeling, IME. Keep the policy definitions separate.


Who is “they?” SELinux is not universal.


Those multiple users could be used to implement sandboxing.

And of course if one has a family then one might want accounts for Mom, Dad, Alice and Bob.


Yea, I'd like to see a system where each application runs as its own user, especially third party applications not vetted by the OS vendor, the package manager and so on. Maybe this already exists--I think it would be a good step forward for security and privacy.

You can no longer trust 3rd party applications to stay in their lanes. Running an application with full access to everything that I as a user have access to seems insane in 2024. Ideally, I don't want a third party application to read or write anything outside of its "home directory" without my explicitly giving it permission. That includes files on my filesystem, network shares, hardware devices, everything.


You would either need to statically include everything in the binaries all the time, causing huge files everywhere, or use something like BSD jails. Both are doable. Then it is still possible to access the kernel, though, so it's highly impractical. As the other guy pointed out, Android somewhat works like that, but lots of malicious programs exist for it too.


> Maybe this already exists

That's how Android works.


That was the early days; nowadays not only does it still do that, it also has SELinux and seccomp enabled. Linux drivers are seen as legacy; modern drivers run out of process as well, under their own accounts, talking to the kernel via Android IPC.


Stop running 3rd party proprietary crap. Problem solved. Better: separate your work, gaming/leisure and 'sensitive' (banking) files with 3 different users.


Depressingly, I think sharing computers, at least in the western world, has become a thing of the past. At the very least, sharing your main form of computing.


?? Not all households have money to buy a PC for everyone. For phones maybe (also less common outside of rich countries), but it's definitely not true for the PC platform for most of the world. There are still a lot of shared family PCs.


In poorer countries, the phone/tablet is the only "PC" they have, and I guess that was the point being made.


Using computers in general outside of work is quickly becoming a thing of the past too.


We're sharing a computer as we speak.


Right, in the sense of "the network is the computer".

Tangent: one of the most talented engineers I ever met gave an amazing (tho sadly company-private, unrecorded) talk about how the OS was his IDE. Kind of analogous... anyway, I like this type of re-framing or meta-level-shifting.


he must be getting a bit twitchy lately then


The days of multiple family members using the same computer are long gone. Do you ever log into anyone else's desktop/laptop, or does anyone else ever log into yours? That's what I'm getting at.


Yes, actually. My wife occasionally logs into my desktop and vice versa. But I take your point - it's far less common than it used to be, even if it does still happen.


Yes, my children log into their own accounts to play games and/or do homework on the desktop computer with large monitors and good GPU.


My children all share an unprivileged account on our family desktop computer to play video games, etc.

In addition to this, my wife has an account on the computer which is separate from mine.


I have to clean my grandma's tablet every time my nieces come and screw it up by installing games, moving icons around and changing who knows what settings.


Right, but you're not logging in to a separate account for that.


A separate account for kids would make this cleanup unnecessary. They don’t want to screw it up for her, they just do what they think will help with their activities.


> I get that you want some division between system and user

Which, as others have pointed out, means various system services running as other users (since you don't want them running as your user, and you also don't want them running as root). On most desktop unix machines that only one person uses, that's the main use case for multiple users (and for multiple groups since groups are used to manage access to various functions like printing, usb sticks, cd-roms, etc.).


Users are still useful for isolation, many daemons on your system are likely running with different UIDs (or could be configured to do so) to increase isolation between them and the rest of the system.

Groups are a bit less useful (IMO), but still good for handing out access to things like device files. If a daemon should have permissions to XYZ /dev file then you add them to the group associated with it.
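e.g. something like (the group and user names are examples; many distros use dialout for serial devices):

    sudo usermod -aG dialout mydaemon   # lets the mydaemon user open the serial devices that group owns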


You want every app of yours to be a different "user" so they can't access each other's data without arbitration. The term "user" is an unfortunate Unix inheritance. There's no reason that a single human (as he might on Unixes like Android or iOS) shouldn't have a hundred Unix "user" IDs at his disposal, one for each app.


It's a '70s permission system designed for '70s-style computer usage, i.e. one computer shared by many people, with a relatively high level of trust among all the users.


It seems to me that stuff like iwd, ntpd, udevd, bluetoothd, dhcpcd, etc. etc. each running as a different user is pretty desirable. Every system works like this, including Windows.

The most obvious reason for this is so that a security problem in one of these daemons can't be used to read your Firefox cookies, install a rootkit, and stuff like that.


It took me a shockingly long amount of time before I realized it was silly to have a username on my machines. I am the only person using this, why am I typing unnecessary cruft? Username switched to “a”, which ends up saving space in my home path and terminals.


It took me a shockingly long amount of time before I realized it was silly to have a username on my machines. I am the only person using this, why am I typing unnecessary cruft? Username switched to “root” on all my machines and I remove the username from PS1. /root is shorter than /home/a (:


That requires faith in software not making hard coded assumptions about filesystem structure. Something which I am not prepared to do. How much software still cannot respect XDG?


Not a problem in practice. I've seen a lot of systems having user homes in places other than /home for various (usually historical) reasons. Never broke anything afaik.


It probably is fine. Still makes my spider sense tingle about some unforeseen failure that will crop up one day.


I've seen scripts with hardcoded "/home/$username" so your worries aren't entirely unfounded.

It's understandable too, because Linux doesn't provide a nice way to retrieve that information. You'll have to parse /etc/passwd (which might not contain all users), hope that whatever language you're using supports tilde expansion so that you can resolve ~username, or call a third party tool that does, like a shell.


Of course there is a nice way: 'getent passwd' if you have a shell script, or the getpwent() libc function if you are in a real programming language. That is ages-old POSIX stuff (POSIX.1-2001, and actually older than the standard, since it has been in System V and BSD), and Linux has always supported it.
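e.g. (username made up):

    getent passwd alice | cut -d: -f6              # prints alice's home directory
    # or, from C and friends: getpwnam("alice")->pw_dir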


$HOME.


ln -s /root /home/root


Just change your home directory to /me. Who cares about /home?

As for permissions: on a single human machine, sudo is useful principally to introduce friction to random install scripts doing random things to my system. In day to day use, there's no reason for me to input my password just to install some Firefox snap or something.


Once I had this thought and renamed the root user to my own name. Hilarity ensued.


lightdm brings up my username automatically, ~ is home. Are you saving much? That said, unix originals used three char usernames.


Login from a terminal requires exact username, so I am saving literal seconds per day. <a> <enter> just rolls off the keyboard.


Of course, nowadays even more so, unless users want to expose $HOME to the world.


I feel like macOS had the right idea for desktop security, with a per-binary permissions model when it comes to accessing sensitive areas in $HOME.

I know this can be done in Linux using Flatpaks, snaps, and the like, but I would really appreciate it if sandboxing could be done at a more fine-grained level, without coupling sandboxing and distribution.


That's what SELinux is for. However, you might have to write your own policies; the usual ones that most distributions ship are tailored to the server usecase. Android also uses SELinux for that kind of access restriction, but it works somewhat differently from "normal" desktop Linux.


> Do we really need multiple users for desktop unix?

I do, and not just for system services as mentioned by others.

I have separate user accounts for general desktop use, gaming, software builds, software testing, and a variety of containers.

Isolation is useful.


Yes, we really need users for desktops on all operating systems. Fundamentally limiting a computer to a single user is immeasurably idiotic and I wonder how one comes to that question even.

Somehow it slipped in for phones and that’s a big part of why they suck. E.g. you can’t have work, life, private/second life and tmp/trash accounts on your phone and have to either carry multiple devices or mix lives together.


You can though! Stock Android supports multiple users. Just enable it in settings


> Do we really need multiple users for desktop unix?

I find them valuable. For example, I have a workstation that is used for different projects with different clients, as well as administrative work for my own business. I want 100% separation between assets related to those different contexts.

It’s bad enough that we have package managers allowing package installation scripts to run arbitrary code, or software wanting you to install via:

    curl https://example.com/imnotmalwareipromise.sh | sh
I’ve seen people seriously make the argument that if your entire system gets nuked by malware through these installation methods then this is entirely your fault. That’s obviously an absurd victim-blaming stance, but the fact is that the risk still exists with modern software development systems.

At least if I have separate users for each client or each major project then the worst that is going to be compromised by a vulnerability introduced during the work for that client or project is that same work.

It’s not just about security though. It’s also about convenience and manageability. Those different clients and projects frequently require the use of specific security credentials and configurations, often for remote services that other clients/projects also use. In a perfect world, I’d like all of the software I use to be XDG-friendly, and I’d like each client/project to have its own home directory with its own independent XDG-style directories underneath, so each user has the configurations and credentials required for its own work and has no knowledge of or access to those of any other user. Finished a project? Archive/nuke that entire user and home directory as appropriate, and nothing is left lying around to break anything or leak anywhere later.

I’m currently playing with NixOS, which means I can also have a limited set of system-wide software installed and have specific additional packages installed per-user or even activated on demand when I change into a specific directory. Again, this means my system has only the software I actually need available at any given time, at the exact version I need for that specific work, and if something is no longer needed by anything I’m doing then it will automatically get cleaned up next time I do an update/rebuild.

None of this really works without the concept of separate users running different software in their own isolated little worlds, possibly concurrently on the same workstation and even sharing the same input/output devices (in a safe way where again they can’t unreasonably interfere with each other – something else that is not 100% there yet, but certainly a lot better than on de facto single-human-user operating systems). The only real alternative is to spin up something like a different virtual machine for each client/project where everything from the OS down is isolated, but I don’t really gain anything by doing that and it’s potentially more work to set up and more difficult to share input/output devices.


No, a group called "wheel", "dialout" and users "irc games uucp list gnats mail news" are essential to the Linux desktop. The only cruft facing the Linux desktop today is the unification of bin and sbin.



