How about ptrace()? Depending on how your system is configured, an attacker running under your own user account could do similar damage to any process you run: make your browser display fake login forms, steal saved application passwords, etc.
Once someone has access to the user account of someone with sudo, it's game over; there are just too many ways to elevate, I couldn't even name them all. They could even just straight up "tmux attach -d" and steal your logged-in root shell.
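To make the tmux point concrete, here is a minimal sketch of session injection, assuming tmux is installed; the session name `grdemo` is made up for illustration:

```shell
# Any process running as the same user can script your tmux sessions.
# Start a detached session, inject a command, then read the pane output.
tmux new-session -d -s grdemo
tmux send-keys -t grdemo 'echo owned' Enter
sleep 1
tmux capture-pane -p -t grdemo   # the injected command and its output appear here
tmux kill-session -t grdemo
```

The same works against any session you can name, including one holding a root shell.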
> there's just too many ways to elevate
That's a problem that a lot of different people are trying to tackle, because there are a lot of holes to plug.
* Wayland is plugging the Xorg hole where an app can listen to, or send keystrokes to, other applications.
* sudo plugs this hole (by default) by requiring your password and binding your ticket to a single tty.
* Polkit plugs this hole (with Wayland's help) by having privileged dialogs and authenticating individual applications (i.e. dbus clients) instead of the user as a whole.
* The kernel switch plugs the ptrace hole.
* Flatpak and other namespacing frontends are working on the filesystem/network holes by keeping apps in their own little world unless allowed out.
* SELinux also lives in this space, but it's a step removed: it denies compromised apps the ability to use various escalation tricks in the first place.
We're closer than we've ever been. Now is pretty much the worst time to throw up our hands and say it can't be done.
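For reference, the "kernel switch" in question is Yama's ptrace_scope sysctl; a quick way to check it (a sketch assuming a Linux kernel with the Yama LSM):

```shell
# 0 = classic ptrace, 1 = only a parent may attach (common default),
# 2 = admin-only (CAP_SYS_PTRACE), 3 = no attaching at all
scope=$(cat /proc/sys/kernel/yama/ptrace_scope 2>/dev/null || echo unavailable)
echo "ptrace_scope: $scope"
```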
- Even if you use the kernel switch, you can still ptrace an application at startup. Do you read the code behind every .desktop file composing your launchers?
- You can trivially add, say, a modified web browser or terminal emulator to a user's PATH.
- You can add aliases to a user's shell configuration
- You can change a user's entire shell to a shim designed to fool them into thinking things are normal
- You can replace that desktop update notifier with a tweaked one that logs that root password first
- Wayland too will fall when you replace a user's compositor with a customized one; most of Wayland's security improvements have really just moved things from the display server into the compositor. Simply re-configuring the user's session choice will likely be enough to do this, just like changing their shell.
- The PolKit "hole plugging" you suggest only works if there's no way to recreate a dialog that looks exactly the same, so I'm not sure how it has any relevance here.
- SELinux mostly prevents problems between user accounts, where the isolation is designed to work; it doesn't help within an account, and not against authorized escalation tools like sudo.
- In the case of desktop systems it's also important to remember that most if not all of the important data will be in the user's home directory. https://xkcd.com/1200/
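The PATH item in the list above is trivially demonstrable; this sketch shadows a command with a fake one from a throwaway directory (the command and its output are illustrative):

```shell
# Drop a fake `whoami` into a directory and put it first in PATH;
# anything the user launches by bare name can be hijacked this way.
evil=$(mktemp -d)
printf '#!/bin/sh\necho totally-root\n' > "$evil/whoami"
chmod +x "$evil/whoami"
PATH="$evil:$PATH" whoami   # prints "totally-root", not your real user
rm -r "$evil"
```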
This will not be possible as long as the same user controls their own configuration for core things like their shell and launchers. User accounts weren't designed for this level of isolation; make a separate user if that's what you want. This is why most distros run all servers under different user accounts out of the box. If you're concerned about the security of, say, your browser, you should probably consider adding user-account isolation to it too; on Android every single app gets its own account.
If your threat model involves giving untrusted users access to a sudoer's account, you're not going to have a good time. User accounts are still the primary form of isolation on Linux; it's not a sandboxed mobile OS. Maybe Flatpak will change that someday, but for now we have barely even started on this sort of switch to an entirely different security model.
You can only ptrace an application when you are its parent. Modifying system-wide desktop files is a privileged operation and flatpak's filesystem namespacing prevents applications from messing with the launchers in your home directory.
> Messing with environment variables and shell files.
Not when you don't have access to their home directory and all your file accesses are via XDG portals.
> You can change a user's entire shell
There are a number of access controls preventing this. First, modifying a user's passwd entry is a privileged operation, and using `chsh` requires the user's password.
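For context, the login shell lives in the passwd entry; reading it is unprivileged, while changing it goes through a setuid helper like `chsh` that authenticates you first. A sketch, assuming `getent` is available:

```shell
# Field 7 of the passwd entry is the login shell; only root, or chsh
# after checking your password, can rewrite it.
getent passwd "$(id -un)" | cut -d: -f7
```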
> replace a user's compositor with a customized one
The compositor is the security boundary. Of course the model is broken if you replace it, but if you have a PoC where a program running as the user can swap the user's compositor for a malicious one, I would love to see it.
GDM launches your session and is outside your control as an unprivileged user and if the compositor dies so does your session. You can't 'swap' it and your malicious compositor can never be shown as a choice in GDM since those preferences require root to modify.
SELinux isn't designed to stop you from calling sudo, but it does make sure the scope of what a compromised application running as your user can do is limited. It's not directly related, but MAC is a piece of the puzzle.
> if not all of the important data will be in the user's home directory
Yep, which is why apps won't have access to it except for the files the user chooses to give the application via a system dialog.
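For what it's worth, the Flatpak side of this is configurable per app today; a sketch of tightening one app's filesystem access (the app ID `org.example.App` is made up):

```shell
# Deny the app ambient access to $HOME; afterwards it only sees files the
# user explicitly hands it through the XDG document portal dialog.
flatpak override --user --nofilesystem=home org.example.App
```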
> Modifying system-wide desktop files is a privileged operation and flatpak's filesystem namespacing prevents applications from messing with the launchers in your home directory.
Okay, but let's jump over to reality for a second: no one uses Flatpak, and your desktop shortcuts are on your desktop, not in the system location, as are your other commonly invoked applications. Given the unnecessary complexity Flatpak introduces, I feel it's unlikely to see widespread adoption in the near future.
> GDM launches your session and is outside your control as an unprivileged user and if the compositor dies so does your session. You can't 'swap' it and your malicious compositor can never be shown as a choice in GDM since those preferences require root to modify.
Did they change it so GDM no longer prefers your ~/.xinitrc if available?
You may also be able to LD_PRELOAD the compositor from your ~/.xprofile, but I haven't done any experiments there.
> Yep, which is why apps won't have access to it except for the files the user chooses to give the application via a system dialog.
This entire concept is nothing more than an annoying joke, not widely deployed on any desktop system today.
Consider also that such an extreme level of sandboxing could prevent malicious access to your tmux session and so prevent this issue entirely, so why would this even be considered a problem if you're talking about that level of sandboxing?
Perhaps I was a bit too dismissive of it, but in any case it's a complete change of security model, and it's not clear that tmux is what would be at fault under such a model, or whether something else would be expected to protect its session from access. Maybe some magic Flatpak garbage should be blocking access or something. Just slap more layers on it; I'm sure that'll fix it and not annoy the hell out of anyone.
This isn't tmux's fault; this is fundamentally the sort of thing that's possible under the security model of modern Linux desktops.
Not with most default sudo configurations. Your sudo ticket exists outside your control as a regular user and, by default, is bound to your tty. An attacker controlling another terminal can't convince sudo to execute commands with your ticket.
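The behavior described comes from sudo's tty_tickets option, which recent sudo enables by default; in /etc/sudoers it would look like this (values shown for illustration):

```
Defaults tty_tickets            # bind the ticket to the terminal it was created on
Defaults timestamp_timeout=5    # minutes before the ticket expires
```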
> manipulate the memory of your terminal emulator
On some distros this might work, but you can absolutely flip a switch to disallow processes running as the same user from accessing each other's memory. On secure systems this causes devs a lot of annoyance since they can't attach a debugger.
You can still attach a debugger to a newly created process; if you want to attach to an already running process, you just need sudo. It's not really annoying.
How about controlling not another, but the same root terminal via send-keys, without tmux, from another Xorg terminal window?
You do appear to be correct that it's exploitable via other, also trivial, means. That does not make the situation any less bad.
That should be absolutely no one.
The memory modification one sounds genuinely impossible. On Windows it is possible via OpenProcess and WriteProcessMemory to modify another process's memory under some circumstances, but I do not think the same thing is generally possible under Linux (because in most distros ptrace has been mostly locked down for a few years now).
Tmux is part of the OpenBSD base system.
tmux is not officially developed by the OpenBSD community.
It was imported June 1 2009.
Here is Theo de Raadt's post:
> By Theo de Raadt () on 2009-07-07 04:37
> The most impressive thing about tmux, in my view, is how frustrating the code audit was. In 2 hours, I found only one or two nits that had very minor security consequences.
> It was not accepted into the tree based on license alone. It is high quality code.
In any event, tmux on OpenBSD also uses libevent as libevent is, naturally, part of the base system. libevent as most people know it was originally a portability fork of OpenBSD's version, similar to the portable versions of tmux, OpenSSH, etc, though unlike those projects core libevent development eventually switched to the portable version and OpenBSD stopped (AFAICT) backporting changes wholesale.
The tmux GitHub repo is the reference implementation. As an example, OpenSSH is developed internally to OpenBSD and the project creates a separate portable version. tmux is the opposite: it is developed independently of OpenBSD, and the OpenBSD project maintains its own implementation.
Edit: because there is occasionally confusion on this point I recently documented it here: https://github.com/tmux/tmux/wiki/Contributing
The permission model used in UNIX is just that weak. This is why there's so much going on around capability-based operating systems (mostly built around 3rd generation microkernels such as seL4), like Genode.
Unix does have a decently strong, capability-based permission model. On the one hand, you have file descriptors, which are effectively anonymous handles to ad hoc system objects. Capsicum only required small tweaks to the Unix API to round it out.
On the other hand you have UIDs and GIDs, along with the SUID and SGID bits on executables. A good example of how this can be used is BSD Auth, the BSD answer to PAM. Unlike PAM, which heavily relies on root permissions (typically in the authenticator itself), in BSD Auth authentication methods are implemented by binaries under /usr/libexec/auth/, most of which have either the SUID or (more typically) SGID bit set, so the module runs with the permissions needed to access the credential database without giving those permissions to the process initiating the authentication. Whether you use SUID or SGID depends on whether you want to restrict who can authenticate: if so, you limit execute permissions by GID and use SUID to switch roles; otherwise it's better to just use SGID.
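As a concrete refresher on the mechanism BSD Auth leans on, here's what setting the SGID bit looks like, using a throwaway file (GNU stat assumed):

```shell
# Mode 2755 sets the SGID bit: the program runs with the *file's* group,
# which is how an auth helper reaches the credential database without
# handing that group to the caller.
f=$(mktemp)
chmod 2755 "$f"
stat -c '%A' "$f"   # -> -rwxr-sr-x
rm "$f"
```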
BSD Auth shows the power of the Unix UID/GID model, but it's woefully underutilized, which hints at a larger problem.
Our software sucks because we're all bad programmers. We keep rehashing things without understanding and making use of the tools at our disposal, and of course even when we use the correct architectures our software is buggy.
To my mind, seL4 and, to a lesser extent, Genode aren't really about permission models. They're about being able to separate correctly written programs from broken programs. Until you can _absolutely_ trust the core software, obsessing over models and architectures is pointless. The reason Administrator and root permissions are so commonly used, and why privilege separation is not more commonly applied, is that software is so buggy and broken that if you took away the ability for people (directly, or indirectly through another layer of software) to introspect/supplement/hack _privileged_ software, people would flock elsewhere because of the usability nightmare. I mean, this is how we ended up with VMs and containers, which are at _best_ a totally lateral move in terms of architectural and practical security.
seL4 and formal verification are about breaking that cycle so we can begin laying down a layer of trusted software upon which we can consistently and meaningfully make use of better security models. The capability models employed by seL4 are principally directed toward that end, not toward making writing secure software easier for your typical C++ or Node.js developer. The capability and ACL models best suited for those developers will look and operate differently, and in all likelihood look very much more like the Unix-based models for various reasons: path dependency, practicality, and the fact that they're quite capable, especially if made more consistent (see, e.g., Capsicum wrt capabilities, Plan 9 wrt namespace visibility, both of which are heavily based on file descriptors and the Unix UID/GID model). From a usability standpoint the key hurdle is figuring out the best semantics for the capability _broker_. L4 doesn't really address the broker problem directly; Unix and Plan 9 and various other systems do, which is simultaneously the source of their convenience and flaws.
If send-keys didn't exist, a script could attach tmux in its own pty instead.
Detach/attach is a fundamental part of how tmux works; everyone must understand that it is fully available to anyone with access to your user account.
GNU screen also has a similar feature to send-keys (they call it "stuff").
It has always been good practice to detach long-running root programs by starting a new tmux as root rather than running them in a non-root tmux, and to use sudo rather than su to run root commands inside tmux the same way as you would outside.
For completeness, the same can be done with screen:
screen -S session_name -p window_number -X stuff "whoami^M"
Or better, if you wanna run malware for fun, run a VM with an isolated network connection that only routes out to the world via a separate VM that pipes all traffic over a crappy commercial VPN service or Tor. That way if you piss off any script kiddies they can't DDoS you.