Heap-based buffer overflow in Sudo (qualys.com)
400 points by ptype 6 months ago | 317 comments



All you need to know about sudo and frankly most other pieces of the Linux userspace is that it is undertested. The commit that added this flaw to sudo claims to fix a parser bug but includes no tests. There is no reason for the author, the reviewer (if there even was such a person), or anyone else to believe that the bug existed or was fixed by this change. The pull request that supposedly fixes this CVE also includes no tests. There is no reason anyone should believe this fix is effective or complete, or that it does not introduce new defects. This is the result of people who stubbornly refuse to practice even the most basic good engineering practices, like testing and code review, while at the same time using the industry's most dangerous high-level language. As long as this type of thing continues, our tools will remain at a very low level of safety, reliability, and correctness.


"All you need to know about sudo and frankly most other pieces of the Linux userspace is that it is undertested"

Fair enough but what do you recommend?

Me, I try to keep people I don't trust out of my systems. This particular snag needs local access, but I will grant you that my web server or other exposed service might provide a local interface.

Instead of throwing your hands up and screaming "crap" you do your risk assessment and attempt to mitigate as best you can. I read a lot of blogs and have a fair amount of logging and analytics lying around the place (and that's just at home).

Fairly recently I found that my wife's car had loose nuts on the front nearside wheel. The wheel had been changed to replace a worn tyre for obvious safety reasons, but for whatever reason the fixings were not done up properly. I think they were done up finger-tight, but a distraction caused the mechanic to forget to use a spanner (wrench) to finish the job to spec. The wheel seemed to work fine, but you would get a low rumble on corners. It was not a trivial fault to diagnose because you had to notice it before failure - I'm a (non-chartered) Civ Eng and IT bod, but not a mechanic. There is a minimally screwed-on plastic cover that stopped the bolts from flying out - not much.

A car wheel is a thing we can all look at and see that the four bolts are not working properly, once you remove the plastic cover and see them wobble.

Now that is what you can do to protect yourself (risk assess, mitigate etc.) However there should also be something that protects "civilians" and I think that is what is missing. I'm not too sure how we do that.


In short: There's nothing that you can do

Longer: One of two things -

1. Choose the most boring software possible, trust that the process will work as expected and that you're no worse off than anyone else. Update your software regularly

2. Choose the simplest, most robust software possible (Alpine, OpenBSD, etc). In this case, doas instead of sudo. Pray that works better than everyone else or that you get some benefits through obscurity. Still get surprised every so often. Update regularly

Either way, modern software has gotten complex enough that there's few options for the average person


> Fair enough but what do you recommend?

why do you need to have sudo? I'm perfectly fine without it. sudo is maybe useful in the case where others on the system don't need to know a password to run as someone else (including root) but need to be able to do that anyway. sudo seems to have gotten a big installation base through Ubuntu and everybody thinks it's normal now, but really for me it's not.


I want to start and stop a couple of systemd services remotely. Currently I expose the command with sudo and run it under a remotely connected user. What are other options?


You don't actually have to be root at all to manage systemd services. If you give the user `org.freedesktop.systemd1.manage-units` in Polkit then they can just run systemctl as their user and it will work.

If you only want to allow specific units the authorizer is passed the unit under `action.lookup("unit")`.
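
As a sketch (service and group names here are invented), a Polkit rules file granting one group control over one specific unit could look something like:

```javascript
// /etc/polkit-1/rules.d/10-manage-myservice.rules -- hypothetical example
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        action.lookup("unit") == "myservice.service" &&
        subject.isInGroup("svc-operators")) {
        return polkit.Result.YES;
    }
});
```

With something like that in place, a member of `svc-operators` should be able to run `systemctl restart myservice.service` as their own user, no sudo involved.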


In your case either use plain /sbin/su: https://man7.org/linux/man-pages/man1/su.1.html or login via root. Your use case sounds very sudo-like, though. Probably stick to sudo.


It's a batch job. Ah, I see it mentions runuser, will check it.


sudo doesn't have to grant full root access to everyone in the group. It can be set up such that certain users only have the ability to run specified commands as root, which is handy for orgs where you might have a group of tier 1 techs that you want to be able to run certain scripts (written by tier 2 or 3, of course) that require root, but you don't trust the engineers enough to have root access to everything.
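
A hypothetical sudoers fragment to that effect (the group name and script paths are invented for illustration):

```
# /etc/sudoers.d/tier1 -- hypothetical: tier 1 techs may run only these
# two scripts as root, and nothing else
%tier1 ALL=(root) /usr/local/sbin/restart-app.sh, /usr/local/sbin/rotate-logs.sh
```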


It is almost always the case that a sufficiently malicious user can find a way to turn that into full blown root access.


So that means they should get full blown root from the beginning?


IMHO yes because then you treat the access with the gravity that is required.


> Fair enough but what do you recommend?

MirageOS unikernels like were mentioned yesterday? Get away from running network services on Linux entirely?


Use Actually Portable Executable which enables you to compile textbook C programs as unikernels that boot on bare metal, as well as execute natively on all the existing operating systems too (without needing a runtime or interpreter). We've been working hard to democratize ring0 privileges since spectre has made the performance costs of having an operating system too high: https://github.com/jart/cosmopolitan/issues/20#issuecomment-...


Much of the point of suggesting Mirage was to get away from C - my position is that the correctness costs of C's undefined behaviour (as implemented by real-world C compilers), limited testing support, poor dependency management and so on are too high.


One of my goals with Cosmopolitan is to attract developers who build high-level languages. MirageOS depends on OCaml which depends on C. We need a sturdy foundation at the lowest levels that can enable these visions by helping high-level environments be successful. Cosmopolitan can be that foundation. For example, the codebase has 192 test programs. Much focus has been placed on using the Undefined Behavior Sanitizer and Address Sanitizer to vet everything. My past experience was working on projects like TensorFlow and I started security initiatives like Operation Rosehub which together helped us have the highest performance software infrastructure that's safer too.


> OCaml which depends on C

How so? The OCaml compiler is written in OCaml and bootstraps itself.


The libc is a runtime.


By runtime I meant to say external runtimes. Your cosmo binaries are statically linked and don't depend on anything except stable kernel interfaces when you run on Mac/NT/Linux/BSD. That means they don't need to link any .so files like the glibc runtime. Therefore you won't be impacted by things like Linux distro versioning and incompatibilities.


Does it have full support for all the kinds of stacks people are running on Linux today? Is there a robust ecosystem of MirageOS users online able to help troubleshoot issues from the simple to the arcane? Is it supported by VPS providers the world over?

Personally, while I doubt these are the case, having never heard of MirageOS before, I'm willing to be proven wrong. However, if the answer to any of these is "no", I have a hard time seeing how "just abandon Linux wholesale for NewShinyThing" is a viable option for more than a tiny subset of users. (Even if they're all "yes", it's still a wildly unrealistic expectation...)


> Does it have full support for all the kinds of stacks people are running on Linux today?

No, of course not; the only thing that supports everything that Linux supports is Linux. But most use cases don't use every part of Linux.

> Is there a robust ecosystem of MirageOS users online able to help troubleshoot issues from the simple to the arcane?

I doubt it, but how do we get to there from here except by more users starting to use it?

> Is it supported by VPS providers the world over?

Yes, since what you build is just a VM image.

> However, if the answer to any of these is "no", I have a hard time seeing how "just abandon Linux wholesale for NewShinyThing" is a viable option for more than a tiny subset of users. (Even if they're all "yes", it's still a wildly unrealistic expectation...)

I don't disagree as such, but I do think that at this point building anything on these insecure foundations is throwing good money after bad / building castles on sand. Probably 95% of the time you build something that serves your present business purposes, accept a certain amount of insecurity, and get on with your life. But it's worth putting a bit of effort into looking for better ways to do things.


> The commit that added this flaw to sudo claims to fix a parser bug but includes no tests. There is no reason for the author, the reviewer (if there even was such a person), or anyone else to believe that the bug existed or was fixed by this change.

"The PR does not include tests" is not the same as "nobody performed any tests" is not the same as "nobody actually noticed a bug".

And of course, it's perfectly reasonable to form beliefs about code from reading it.


Tests you perform locally should probably be described, if not encoded into something that other people can run as well.


"You can't prove no one tested that!" is not really a good basis for robust software design.


In addition to the value of repeatability, it’s just being a good human to show and share your work.


> And of course, it's perfectly reasonable to form beliefs about code from reading it.

Broadly yes, but it would be hubris to claim you can tell the correctness of all code from merely looking at it.


For some code you should be able to do this, and if you can't then I don't think you should be writing code.

Coding is not brute forcing. I feel like this is taking an extreme position at the complete opposite end from not testing anything.

EDIT: Misread the comment I replied to. I agree that it is not likely that anyone can tell the correctness of all code from merely looking at it.


For some code, yes. What about a complete refactor of the core functionality of said program, with parallelism and the like?


I misread the comment I think, but either way, I am not arguing against automated tests, I was just trying to point out that for some fragments of a code base you should absolutely be able to tell correctness by looking at it.


Even if there were basic unit & regression tests, this bug might not have been caught. This bug should have gone through detailed security review and should probably also undergo fuzzing.


You can bet your bottom banana that the GRU, the NSA, Chinese state security, and the mob have all thoroughly fuzzed sudo and are sitting on the results.

It just seems SO EASY to add a test for this problem: literally the relevant test input is one backslash by itself, or any string ending in a backslash! So simple! If I sent a change like this at work, no matter how trivial, that said it fixed this bug but didn't include any tests, the reviewer would reject it out of hand or, probably, just silently ignore my change. But that's the real problem here: this program is a monograph. There are no reviewers and there are, consequently, no standards.


It sounds like you'd find great pleasure in writing harnesses for oss-fuzz. Google has found and reported over 25,000 bugs in 375 projects, but they need people like you to help wire up new projects.

You can get started here: https://google.github.io/oss-fuzz/


Maybe expecting projects that mostly consist of a single guy working on it in their spare time to be "NSA proof" isn't really realistic?

Folk love to bring up "responsibility" and all of that, but you can't really expect people to bear the responsibility of the world on their shoulders for their spare time projects. It's neither realistic nor fair.


No, you can’t. I think the blame shouldn’t be on this author, but on those choosing to install it or, worse, choosing to ship it on their OS.

OpenBSD replaced sudo with doas (http://man.openbsd.org/doas) in 2016 (https://www.infoworld.com/article/3099038/openbsd-60-tighten...). It’s a safe bet it’s more secure than sudo.


Sudo was (maybe still is?) an OpenBSD project. I don't believe it originated there, but their code forms/formed the basis for sudo pretty much everywhere, much like OpenSSH.


doas is a really good replacement.


Maybe something that’s sole purpose is privilege escalation, like sudo, should come with some expectation of responsibility.


No, but having tests is an acceptable baseline.


There are tests. Are there enough tests? Maybe not. But people can do in their spare time whatever they want, including writing code without tests.


The thing is, "do what thou wilt" is clearly indefensible as an engineering standard, and I doubt you would disagree. So yes, I can't force anyone to write tests.

However, I do want to see programming as a culture adopt a higher standard when it comes to checking their work, and I think the continued prevalence of bugs like this are an indicator that we actually need to do so. I'm not asking for NSA-proof because that's not reasonable. But memory safety is a solved problem, and we need to be putting in the legwork to make more of our stack memory safe.


People can do what they want in their spare time, true, but that it is their spare time does not make the action responsible or irresponsible, nor does it shelter them from responsibility.

Not wearing a seatbelt when at work or in your spare time is irresponsible.

Writing code, without tests, that others use (and for security at that) is irresponsible.


Y'know, I don't agree with you in general, and you did put this in general terms.

But this is frakkin sudo we're talking about.

It's a wonder that anything works, ever.


Yeah, this goes back to the whole "Open Source and corporate funding" story. As far as I can find, sudo doesn't get any direct funding at the moment, although various companies have undoubtedly contributed patches.


It’s nice when those with differing viewpoints can find common ground over their profound disappointment with the world (or at least their profession).

Reading the code of important open source projects is not for the faint hearted!


From https://www.sudo.ws/license.html

> THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

If you don't like this license, you're free to use other software.


What’s that got to do with what I wrote?

Nothing, as far as I can see. If someone writes some crappy security software and hides behind a licence that only means those relying on it have joined them in being irresponsible. Responsibility multiplies, it’s not zero-sum.


Parent's comment has got everything to do with yours: you are feeling like the software somehow owes it to you to be merchantable and fit, while the software's license explicitly rejects ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS.

There is a large disconnect here.


> Writing code, without tests, that others use (and for security at that) is irresponsible.

You can choose to run this code, or you can choose not to run this code. It's really up to you.

This is very different from a seatbelt, as I can't choose to not have an accident with you, potentially causing a needless fatality.


> This is very different from a seatbelt

You can choose to wear this seatbelt, or you can choose not to wear this seatbelt. It's really up to you.


This code is advertised as a security tool, is it not? The only reason anyone runs sudo is because it (supposedly) improves their security. I think some responsibility comes with that.


I don't use sudo to improve my security; I use sudo because it's what I've become familiar with.

I don't want to come across as pedantic - the point I mean to make is that I think a lot of people use sudo without thinking about it much. Sudo's just "the way to use linux" for a lot of people I know.

I don't think the sudo contributors should be labelled as irresponsible, because everything they've added to the project is available for the public to see and scrutinise. I don't think they've ever misled people; rather, people have assumed things.

Maybe people who care about security will notice now that sudo doesn't have comprehensive testing, and will make their own alternative.


So people should be obliged to spend more of their free time?

I know this is not exactly what you're trying to say, but it is what it comes down to.


If they don’t like using their free time to write code they don’t have to, it’s free time not work.

So, what it actually comes down to is that they didn’t bother to write tests. There was no time pressure, there was no urgency or requirement, they just couldn’t be bothered to do that prior to release. If there’s a note somewhere saying “I know it’s not quite done...” then I’ll let it slide.

Have you seen something along those lines?


> So, what it actually comes down to is that they didn’t bother to write tests. There was no time pressure, there was no urgency or requirement, they just couldn’t be bothered to do that prior to release.

Kindly go to the source repository.

> If there’s a note somewhere saying “I know it’s not quite done...” then I’ll let it slide. Have you seen something along those lines?

That's what a TODO file is for. There is one you can browse to here: https://www.sudo.ws/repos/sudo/file/tip/TODO

There are also literally a dozen lines in the LICENSE file saying that the software is not quite production ready (THE SOFTWARE IS PROVIDED "AS IS") and should not be relied upon (THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS).

It's all spelled black on white, not sure what you would want more from the author?


From the comment I originally replied to:

> people can do in their spare time whatever they want, including writing code without tests

I did not refer to the Sudo project at all, so you might want to redirect your post to someone who did.


Hmm. Maybe we shouldn't let unlicensed hobbyists expose their software to the internet - plenty of other things are too dangerous for unlicensed hobbyists to do. Though frankly the standard of commercial code is no better at the moment.


That's fine: existing software comes first, decent software comes second, and sudo is the former. That said, it's a good candidate for RiiR ("rewrite it in Rust" - safety, right?), but RiiR happened to grep instead.


> It just seems SO EASY

Then do it! On your own time, rather than complaining that someone else didn't do it on theirs!


Does the project have a testing framework in place? How is the project built and distributed? Can testing be integrated into its CI/CD pipeline? Is there a CI/CD pipeline? Adding frameworks and infrastructure to a project is one of those things that is a lot more work than it appears to be. They should do it, for sure, but I wouldn’t dismiss it as trivial on a decades-old system that was never built for it.


Single individuals often hold superhuman standards. It is easy to see that most of the best works in math or art were produced by an individual.

Code review can work, but often it doesn't. There are countless examples of projects with "strict" review requirements that have similar issues (whereas qmail only had one).

Writing tests is the important thing, it keeps you honest.


31 CVEs later I think it's safe to say that this particular author does not possess superhuman standards. How much more evidence would we need?


We’d all like that, true, but look here:

https://github.com/sudo-project/sudo/graphs/contributors

That’s one maintainer, not even full time according to his résumé. What you just described is multiple specialists and some supporting tools, so another way of looking at this is to ask how much value the IT world has gotten from sudo but not contributed back in support.

When something is this widely used, it’s easy to forget that the answer to who should do more isn’t the one person who actually steps up to keep it alive.


Wow he changed two million lines of the sudo codebase over the project history and made 10,548 commits. That's bonkers. Sudo is clearly doing a lot more under the hood than I thought it did. A simple security critical command shouldn't have that much churn. It should arc towards immutability like TeX, which has had like twelve changes in the last 40 years.


> Sudo is clearly doing a lot more under the hood than I thought it did.

There’s a number of reasons openbsd dropped it, and all of them are fundamentally rooted in size and complexity: https://flak.tedunangst.com/post/doas


OpenBSD is a fabulous project. I've been working on tool called Cosmopolitan which helps Mac/Linux/Windows/FreeBSD developers write software that's compatible with OpenBSD: https://github.com/jart/cosmopolitan/blob/master/libc/sysv/s... As you can see, I've studied these systems a lot and I've got to say that OpenBSD is the closest to the Bell Labs roots I've seen from community distros. It takes a certain degree of judiciousness to maintain that authenticity and the clairvoyance w.r.t sudo should be all the proof we need that OpenBSD is up to something good.


Sidebar: Wonderful post, but what an awful fake loading bar. Every time I switch from the tab / window to something else and switch back to continue reading I'm interrupted by it for no reason.


You're supposed to disable javascript in your browser.


doas is a wonderful alternative to sudo.

For one, the config file is actually easy enough to read properly.
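
For a sense of scale, a whole doas policy can be a couple of lines (user names and the command here are invented):

```
# /etc/doas.conf -- hypothetical example
permit persist alice as root
permit nopass bob as root cmd systemctl args restart nginx.service
```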


Does he moonlight for the NSA?


This response is the “must be aliens” of security. If you remotely think this is the case, ask whether one of the top intelligence agencies in the world would be more likely to attract attention to a deep cover operation many, many years in the making or would invest in making sure that the bug was well concealed so nobody else would be able to use it on, say, .gov servers. If you remember Dual_EC_DRBG I know which one I’d bet on…


It was meant as a joke. Lighten up.

Having worked with the IC for decades, I'm well aware of what's going on.


I figured it was a joke but it's about as original as a standup routine complaining about airline food.


A Turing Award winner once said, “Program testing can be used to show the presence of bugs, but never to show their absence!"


Sounds like a very smart person. Much of the value in the test is showing that the old revision fails it. If I were fixing this bug, first thing I would do is write the test and make sure the existing code fails. If it doesn't fail, that means I don't understand the problem yet. Then I fix the problem and commit both the fix and the test. I don't just discard the test, because it is valuable to know that in the future nobody re-introduces the exact same flaw (this regularly happens in open source projects).


I agree. Keeping the test around provides a safety net (a bug repellent of sorts) for the next person that comes along. The test also serves as a documented history of past bugs. The problem is that the test does not guard against the injection of future bugs of a different type - I don't think any amount of testing will. That's not to say we shouldn't write tests; we should just be aware of the limits of testing.


A journey of a thousand releases begins with the first unit test.

Everyone has decided to stay home this year, though.


> people who stubbornly refuse to practice even the most basic good engineering practices, like testing and code review

That is not fair. This software is maintained by Todd Miller, the well-known OpenBSD developer, and has both tests and, certainly, discussion of code proposals. Not everyone has to use GitHub or the language du jour to be useful.


> This software is maintained by Todd Miller, the well known OpenBSD developer

Ironic that OpenBSD dropped sudo in favor of a much simpler utility.


I think the design of the relevant code is a worse problem than the lack of test coverage. The problem the code solves insecurely seems like an obvious target for lexical and syntactic analysis (and has been since the sixties, I think).


Yes, please at least use re2c when parsing anything more complicated in C. The result is much more readable, and integration costs are pretty low, and you still keep a lot of flexibility in how you structure your code.
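
A sketch of what that looks like (the `/*!re2c ... */` block is turned into plain C by the re2c generator, e.g. `re2c -o scan.c scan.re`; names and grammar here are illustrative):

```c
/* scan.re -- hypothetical re2c input file */

/* Returns 1 for an identifier, 2 for a number, 0 otherwise. */
static int classify(const char *YYCURSOR)
{
    const char *YYMARKER;
    /*!re2c
        re2c:define:YYCTYPE = char;
        re2c:yyfill:enable  = 0;

        ident = [a-zA-Z_][a-zA-Z0-9_]* "\x00";
        num   = [0-9]+ "\x00";

        ident { return 1; }
        num   { return 2; }
        *     { return 0; }
    */
}
```

The grammar lives in one declarative block instead of being smeared across hand-written pointer arithmetic, which is where bugs like this one tend to hide.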


Did you just call C a high level language?


Indeed,

"C Is Not a Low-level Language, Your computer is not a fast PDP-11."

https://queue.acm.org/detail.cfm?id=3212479


Technically, it is. But usage of the term has shifted a lot over the last 50 years.


Stopped reading your comment after the first sentence because that’s all I needed to know.


Another question is: who wants to maintain a four-decades-old GNU C soup? It was written at a different time, with different best practices.

At some point someone will rewrite all the GNU/UNIX userland in modern Rust or similar and save the day. Until that happens, these kinds of incidents will happen yearly.


Rewriting sudo is a weekend project*. Getting people to adopt it is a many-year political campaign.


You can bet your last dollar that if the "right" RedHat or Debian developer rewrote sudo with feature parity, it would be adopted by all major distros in a couple of months. It's the sort of thing nobody really cares about except OpenBSD (which wrote their own).

The problem is feature parity. Most rewrites cannot guarantee that off-the-bat, so they end up struggling to persuade people to switch - why break stuff that works just fine and lose features, in the name of some engineering purity?


Poettering to save the day with systemd-sudo?


Polkit-exec (pkexec) already exists, has much better security, and integrates with your DE instead of expecting passwords on the terminal.


This thing?

https://gitlab.freedesktop.org/polkit/polkit/-/blob/master/s...

Which also seems to not have tests. In fact, the only tests I'm seeing are from 2 years ago.

https://gitlab.freedesktop.org/polkit/polkit/-/tree/master/t...

> has much better security

It's using D-bus. My faith in D-bus security is close to my faith in seeing a fresh Linux install with zero D-bus error messages from apps. Which is to say, nonexistent.

> integrates with your DE instead of expecting passwords on the terminal.

There's nothing inherently insecure about passwords on the terminal, and certainly nothing a DE can do better. I have yet to see a display manager or lock screen app that knows what the hell PAM is doing. Try doing even the simplest things with PAM, such as getting a fingerprint reader or Yubikey working, and every single display manager simply chokes.

I'm not sure which is more of a Byzantine mess: Linux authentication and authorization, or Linux audio.


PAM is fun. https://github.com/systemd/systemd/issues/16813

With all the .so modules loading into some process, etc. Some questionable design in sshd makes it lock up completely for all incoming connections when used with PAM and a PAM module ends up in an infinite loop.

Nevermind that systemd pam modules pull in a shitton of stuff, including dbus, into any process that tries to use PAM for auth, these days.

I guess it all runs as root, too. sshd at least tries to fork a child for all this and waits for it and kills it, so that the parent process can't be polluted. It just has no timeout when waiting for result, and doesn't accept further connections when waiting.

Sometimes it's better not to look too deep under the covers.


How else would you implement PAM? The whole point is that you can write arbitrary code to implement whatever custom authentication policy you have, so it needs to be able to dlopen things. And if you don't like the PAM policy written by your distro, you're free to change it by implementing your own authentication policies, or dropping things you don't care about (like systemd, you can configure PAM to just use /etc/passwd and /etc/shadow if you like).

sshd needs to run as root (obviously) because it grants login shells to people, so it needs to run in a privileged context. And the PAM modules it executes also need to be run as root, because PAM modules need to do things like read /etc/shadow.
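
Concretely, dropping everything but classic Unix auth is just a stack like this (the service name is hypothetical):

```
# /etc/pam.d/myservice -- hypothetical minimal stack: only pam_unix,
# i.e. /etc/passwd and /etc/shadow, nothing else dlopen'd
auth     required  pam_unix.so
account  required  pam_unix.so
password required  pam_unix.so
session  required  pam_unix.so
```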


https://man.bsd.lv/login.conf.5#AUTHENTICATION documents OpenBSD's system, which revolves around running /usr/libexec/auth/login_<foo>. OpenBSD's system doesn't let you do the stack-of-libraries thing that PAM does, but having one binary is a lot simpler.

(In the past, OpenBSD had a login-locally-or-via-Kerberos binary there, which does show the downside of that approach over PAM's more flexible approach.)


Does it work without GUI?



That would indeed be a huge feat of programming skill! I refer you to the configuration file documentation to treat as a spec: https://www.sudo.ws/man/1.8.3/sudoers.man.html

Implementing a subset of sudo is a weekend project, for some values of useful.


That's part of the dark outcome of so many untested features: it makes it easy to cast FUD upon any potential replacement. We have no black-box test suite that shows `sudo` implements all of these features, so we also cannot have faith that any replacement would. Ideally it should be possible to run sudo's black-box tests against any potential replacement. To start with, we need those tests.


There are actually a bunch of tests for the sudoers file format.


> Implementing a subset of sudo is a weekend project, for some values of useful.

openbsd dropped sudo for "weekend project" `doas`, in no small part because it does not support most of that stuff.

Here's the configuration file documentation: https://man.openbsd.org/doas.conf.5


This is why OpenBSD has `doas`.


> It was written at a different time, with different best practices.

Not that I have any faith in modern "best practices". I'm imagining `grep` written in a modern language like JS and I shudder when I think of the hundreds of micro-dependencies like "left-pad" that would need to be downloaded to make it work.

Maybe it's fair to say that systems programming languages are in a better state in terms of modern best practices than scripting languages.


Try https://github.com/BurntSushi/ripgrep – it’s across the board better, and one of the best personal productivity boosts you can make if you use grep frequently.


> Not that I have any faith in modern "best practices". I'm imagining `grep` written in a modern language like JS and I shudder when I think of the hundreds of micro-dependencies like "left-pad" that would need to be downloaded to make it work.

You don't have to imagine: `ripgrep` exists and has all of 10 direct dependencies, 4 of which are by the same author, because features/systems were extracted from ripgrep (or developed separately from the start) as they were useful on their own, e.g.:

* regex, for obvious reasons

* ignore to match and apply ignorefiles (gitignore and friends which ripgrep takes in account by default)

* bstr for conventional utf8 strings rather than guaranteed

* grep intends to be essentially "ripgrep as a library"


I actually like some modern grep alternatives, like `ag`.


> someone will rewrite all GNU/UNIX user land in modern Rust or similar

Or everyone just switches to musl + busybox.


Or doas. Much simpler.


[flagged]


Three decades of free software that generated billions upon billions in revenue, and that a big part of existing infrastructure is built upon, is far from "wall to wall toxic waste".

I'm not a fanboy for the sake of it, but you're in a little over your head here.


Online Linux services are so powerful that they can even ban the president of the United States of America.


Even in the literal sense revenue does not imply the absence of toxic waste. Indeed, historically speaking, revenue has been strongly associated with toxic waste.

I don't mind expressing this opinion. There is nothing anywhere in the GNU project which should be emulated, except the license.


How is it possible that GNU’s `sudo` is not GPL?

Unless it’s BSD, I guess?


It's not GNU's `sudo`, it's under an ISC-style license, and is maintained by an OpenBSD developer, so it's closer to "BSD's sudo" (though OpenBSD has been shipping `doas` instead for a little while now).


sudo even predates GNU by several years.


wow, I never realized sudo is from 1980[0]! I thought it was a product of the late '90s. It's actually incredible it has managed to stay around this long.

[0] https://www.sudo.ws/history.html


Pffft, `sed` is from 1974: https://en.wikipedia.org/wiki/Sed

And `ed` dates from 1969: https://en.wikipedia.org/wiki/Ed_(text_editor)


ah, yes, but those _felt_ as old as dirt.

On the other side, I remember howtos/manuals/READMEs in the early '00s referring to su(1) and only sometimes saying "..these days you should be using sudo(8)", so it always felt like something "new" :)


There is no such thing as GNU sudo.

The homepage for sudo is https://www.sudo.ws/

It is licensed under an ISC style license.


Just ten days ago on Hacker News, we had a C programmer claiming that “buffer over-runs are a rare class of bugs, and a class of bugs that are (at least on the heap, and often on the stack) trivial to find and fix” [1].

As a bonus, the person who wrote that turned out to have published C code containing multiple exploitable buffer overflows.

[1]: https://news.ycombinator.com/item?id=25806533


Of course not as secure as Firefox, which enables WASM by default. Or Chromium, which runs chrome-sandbox under suid.


Every now and then we all get a glimpse, for a flash of a moment, that the house of cards has already collapsed. Too invested in our current systems to face this truth, we just update and forget about it until the next time.


Yet another vindication for one of my long-standing practices. I try to avoid installing sudo at all costs on my systems because all it does is increase the attack surface.

Despite this, the wisdom of the crowd is that you should never su to root, for ... reasons? Fat fingering is a thing, but if you can accidentally be in a root terminal without realizing it you have done something horribly wrong.

Heck, from a certain point of view if you have someone in the habit of repeatedly typing sudo over and over again then all sudo has really done is open up every single terminal to be a gateway to the nether realm of super user privs. And in this case, more attack surface.


> Despite this, the wisdom of the crowd is that you should never su to root, for ... reasons?

`su` takes the password of the user you're becoming, while `sudo` takes the password (or not) of the user you already are. So using `su` to become root implies that there's a root password that multiple people (well, assuming there's multiple admins on the box) know.


I firmly agree that sharing passwords is bad, however I can't imagine a threat model where password sharing among multiple parties is more risky than giving multiple people sudo. As zests points out, practically there is no difference because anyone with sudo can change the root password; they are already root. So the auditing use case is moot if the logging system can't distinguish between two users who sudo su'd into root at the same time.

Having multiple admins that need to be able to administrate a system might seem like another case where sudo simplifies things, but isn't that what the wheel group is for? Yes there is an M:N issue between administrators and root passwords and we all know that reusing root passwords across boxes is just asking to get pwnd, but if admins can already ssh into a user that has wheel on a system then that implies that there is another existing authentication system that surely could be used to provide the password in a centralized manner. There is complexity in such a service, but that is only required for the M:N case and if it has issues then it can be fixed once. Busted sudo? You have to push emergency updates to (checks notes) ... literally every EC2 image on amazon. If admin:server is 1:N then password manager, copy, paste, and cut sudo out of the loop entirely.


> I can't imagine a threat model where password sharing among multiple parties is more risky than giving multiple people sudo

An individual leaves the company and the security / compliance people say all their access must be revoked, which is kind of a headache with shared logins.

The security / compliance people want audit logs of who does what, which is harder to do with shared logins.

I think both of these things are encouraged (maybe required?) by various certifications that companies doing certain things might need.


sudo can be restricted to specific commands, so you can restrict non-root users to do VERY specific actions (ie, the webdev can reload apache2 but not restart or stop it or take any other action).

This means the webdev has the least amount of access necessary for their work without giving them straight up root or using setuid on a script, which can lead to easy bugs (did you check PATH?)
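For illustration, a sudoers entry of that shape might look like this (the user, host pattern, and service path here are hypothetical):

```
# webdev may reload apache2 as root, and run nothing else via sudo
webdev ALL = (root) /usr/sbin/service apache2 reload
```

The command is matched arguments and all, so `... apache2 restart` would be refused under this rule.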


There are huge differences between running with user as sudo and running as root itself. Much more of an audit trail with sudo. You can trace which account ran which command. Sure you can cover your tracks with sudo but it's far harder than with bare root, even more so with shared accounts.


If a system is configured as you describe, that just means that it then effectively has an additional root/admin account, except under a possibly unpredictable name. So security through obscurity, really (if the privileged user name is secret). That might make sense for some small number of setups, but instead it seems to be enormously popular, and often times it is even used in the security-defeating manner of having the "usual" user be privileged.


sudo su

My favorite command.


This is so silly that it's absolutely ludicrous, but I've never known about or used that before... I can think of all kinds of other permutations of the command that I've used and know, but not this one...


su pulls in random set of currently configured PAM modules, due to requiring authentication.

Anyone checked what buggy horrors await in all those modules? And it's not a static set of old trusty modules either. The fresh new complicated stuff is being added, like systemd-homed, and so on.

pam_systemd and pam_systemd_home are by themselves the size of all 46 other PAM modules combined (on my Arch system).


Is there any part of the Linux system that systemd hasn't penetrated yet? It controls boot, processes, authentication, time, dns, proxies, etc. The GNOME Foundation unilaterally required all distros adopt Systemd ten years ago without any community support. Yocto makes it exceedingly hard to create embedded devices that don't have Systemd, so it's probably lurking on all your IoT devices.


I can't possibly see how PAM was ever considered a good idea.


I mean, it's literally in the name: Pluggable Authentication Module. It's a very good idea to expose a common interface for user authentication to hide the vagaries of the underlying authentication mechanisms.

It's far more useful to explain why or how PAM is bad, because no one (sane) will agree that the idea of PAM is bad.


Modules should be implemented as separate processes running under unprivileged users, with communication done via pipes. That's the UNIX way. If I understand it correctly, a module is currently implemented as a shared library executing under root, sharing all its memory with other modules and the main program. This exposes way too many opportunities to exploit any vulnerability.


A PAM-using app can fork a process for this, so it's not too horrible, but it increases complexity.


How do you measure the sizes of these modules? I counted lines of source code: systemd's PAM code is 1k-2k lines of C total, and PAM itself is 30k lines of C.


ls -l in /lib/security

I mean, last time I had sshd hanging and randomly losing access to remote machines due to systemd's newest PAM improvements (homed), when I put the process under gdb it showed dbus stuff, so it's certainly more than 1-2 kloc.


Here is a non .txt format with a great video explanation as well: https://blog.qualys.com/vulnerabilities-research/2021/01/26/...


Memory safety strikes again, it seems (overflow in a C string due to complex parameter parsing rules).


No, complexity strikes again. I haven't used sudo in years, preferring to use doas now. Its essential code is less than 500 lines and it does everything I've ever used sudo for, and that includes much more than `sudo <runupdates>`.

    $ man doas | wc -l
    58
    $ man doas.conf | wc -l
    101
    $ man sudo | wc -l
    741
    $ man sudoers | wc -l
    3254
And a bonus:

    $ man sudoers | grep -C1 despair
    The sudoers file grammar will be described below in Extended Backus-Naur
    Form (EBNF).  Don't despair if you are unfamiliar with EBNF; it is fairly
    simple, and the definitions below are annotated.
That only accounts for a small subset of sudo's complexity. It's easily 100x more complex than it needs to be to solve this problem. Now compare the two CVE lists:

http://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/security/doa...

http://ftp.netbsd.org/pub/pkgsrc/current/pkgsrc/security/sud...

My reaction to this vulnerability was mild amusement, then later wondering if I should go discredit the inevitable Rust brigade. We don't have to rewrite everything in Rust to get better security. We just have to use simpler tools.
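For reference, a doas.conf that covers the common interactive uses of sudo fits in a couple of lines (the group, user, and command names below are illustrative):

```
# members of wheel may run any command as root, with credential caching
permit persist :wheel
# one user may restart httpd without a password
permit nopass webdev as root cmd rcctl args restart httpd
```

That is the entire policy language in action: permit/deny, an identity, an optional target user, and an optional command with fixed arguments.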


But if sudo were written in Rust, it could have the same level of complexity and not be vulnerable.

Yes, it would still be vulnerable to logic errors, like the last famous sudo bug where you pass -1 as the UID. But it wouldn't be vulnerable to this. (And this isn't the first memory safety bug to be found in sudo.)

Yes, sudo's complexity is useless for 99.99% of its users. But wouldn't it be nice if the result were merely a gross feeling rather than a security hole?


> But if sudo were written in Rust, it could have the same level of complexity and not be vulnerable

I'm puzzled that we don't have a memory-safe ABI (e.g. amd64-safe) and runtime for C so we could just compile things with

    clang -safe sudo.c
to avoid memory errors. I'm fine with sudo (or whatever) taking a 60% performance hit to be more reliable - processors are thousands of times faster now than they were in 1980 when sudo was written. If we had a memory-safe ABI for C/C++ in common use its performance overhead could probably be reduced significantly over time due to implementation improvements, and we might see hardware acceleration for it as well.

There are a number of proof-of-concept memory-safe compilers for C using fat pointers, etc., but memory safety hasn't made it into gcc or clang. 64-bit CPUs can help because you can repurpose address bits. Even Pascal (which is largely isomorphic to C) supported a degree of memory safety via array bounds checking. I believe Ada compilers also support memory safety. PL/I was actually memory safe and is why Multics never had any buffer overflows. Obviously Rust is memory safe, but for a lot of legacy C code it is impractical to rewrite everything in Rust but eminently practical to recompile it with memory safety turned on.


Encoding array bounds into fat pointers doesn't always work without changing code (e.g. code that uses funky casts, code that makes assumptions about data layout).

Also to ship this in a Linux distro you'd need two builds of many packages. Tons of tools would need to be updated to work with the new ABI. It would be a nightmare.

Furthermore, a new fat-pointer ABI would not address lifetime errors like use-after-free, so what would your plan be there? Boehm GC? More complexity, more overhead, more compatibility issues.

So in practice this is not that appealing, which is why we don't do it.


I think the practical issues you describe like rebuilds of packages and so on are very real if we're talking about general adoption. But if we're talking about recompiling a handful of SUID programs which make up a TCB then I think a proposition like that has a lot of merit and can't be easily dismissed.

Any C code that needs changing to deal with fat pointers is probably already invoking UB in C (or at best, relying on implementation-defined behaviour).

That's because the representation of pointers themselves is undefined (so you can't get a valid result by looking at those). Pointer/integer casts (either direction) are implementation defined. And accesses via pointers to anything beyond their bounds is already UB.

There's some good and interesting discussion of what's involved in all of this on: https://www.ralfj.de/blog/2020/12/14/provenance.html

And there's already bodies of work within the Rust (and C/C++) communities around the concepts/technologies that would need to be developed to achieve something like a memory-safe UNIX TCB.


If we're talking about recompiling a handful of SUID programs, why not just manually translate them into Rust or something similar?


Yes, that's probably a good idea.

But it comes with costs. Someone has to learn Rust and then convert all of these programs. And it also has the issue that Rust programs are only memory safe if the unsafe keyword is not used anywhere in the program (correct me if wrong?). So it looks like the effort to do such thing, while noble, and valiant, is essentially an experiment with an uncertain pay-off that could turn out to be small or large.

Much more interesting (to my mind, anyway) is something like Miri, the Rust interpreter, which uses fat pointers to make things (more? completely? someone more informed can correct me..) memory-safe by inserting some relatively lightweight run-time checks.

And, then again, if such a thing could be compiled rather than interpreted (some things similar to this already exist, like C with fat-pointers). And if the source language was C (or something like it) or C++ (or some future C++) then the human aspect of re-training a generation of programmers goes from being a very big hurdle, to a much lower one.

At that point the benefits go up quite a bit, and the costs come down quite a bit. And I think that might be a promising path to overcoming the sort of human/political hurdles/inertia involved in rewriting the world :)


You can definitely implement sudo and similar tools without writing any new unsafe code.

You will use existing libraries that contain unsafe code, but you should be able to stick to popular well-tested libraries, which means it will be very difficult for an attacker to find a new exploitable bug in those libraries to attack your tools.


> But it comes with costs. Someone has to learn Rust and then convert all of these programs.

Someone has to learn compiler engineering and then design and implement a 'safe' ABI. Unlike learning Rust, this is probably worthy of a research paper.

> Rust programs are only memory safe if the unsafe keyword is not used anywhere in the program

If you use unsafe, then you take some of the responsibility for maintaining memory safety. However, you can audit the unsafe parts of the code, and it will compose with the compiler-provided guarantees for the rest of the code. Besides, one can easily avoid unsafe code for safety-critical tools like these.

> Much more interesting (to my mind, anyway) is something like Miri. The rust interpreter, which uses fat-pointers to make things (more? completely? someone more informed can correct me..) memory-safe by inserting some relatively lightweight run-time checks.

Miri does not support most interaction with the outside world [1]. It is focused more on detecting UB in unsafe code when it is exercised by tests, than on having your code running in production through Miri. Moreover, I wouldn't call a thousand-fold slowdown [2] "relatively lightweight".

[1]: https://github.com/rust-lang/miri#miri [2]: https://www.reddit.com/r/rust/comments/hosvqu/will_the_miri_...


> Someone has to learn compiler engineering and then design and implement a 'safe' ABI. Unlike learning Rust, this is probably worthy of a research paper.

Yes, all good points. What I'm getting at is that it seems like nobody has yet re-written sudo in this safe way. And it's not just a matter of re-writing it. If (when) someone comes along with this re-write there's "if this person goes away, will we find someone else to maintain it?" and all those other very conservative social forces at play.

I think any new programming language community has these sorts of adoption hurdles to face. And I'm sure the rust community is working hard to build up that pool of developers and I think that's all really positive so I don't want to sound like I'm subtracting from it at all. I'm just also an interested spectator of PL and systems programming research/new directions :)

> If you use unsafe, then you take some of the responsibility for maintaining memory safety. However, you can audit the unsafe parts of the code, and it will compose with the compiler-provided guarantees for the rest of the code. Besides, one can easily avoid unsafe code for safety-critical tools like these.

Thanks, yep. That's why I think that generally Rust is a good idea, and rewriting the TCB in it is a worthwhile project. In regards to safety it looks like a step in the right direction. We're just quibbling about the cost/benefit analysis of how big of a step it is compared to above-mentioned issues that all new programming languages face. Personally, I've no doubt that even with all that factored in, it's still a net positive.

> Miri does not support most interaction with the outside world [1]. It is focused more on detecting UB in unsafe code when it is exercised by tests, than on having your code running in production through Miri. Moreover, I wouldn't call a thousand-fold slowdown [2] "relatively lightweight"

Thanks for the clarifications, you're definitely more up to speed on that project than I am! But yeah, what I meant there was not that the implementation of miri was something to use as-is, more that it's an interesting direction in PL/systems programming research (imo). And some of the ideas there, especially where runtime cost _in principle_ can be made to be relatively lightweight are interesting. I've seen some other research where C implementations with bounds-checking have been implemented part-statically and where the remaining checks are done at run-time with fat-pointers.

OK, bounds checking isn't memory safety, but the paper was a while ago. Maybe it was this one https://www.comp.nus.edu.sg/~ryap/Projects/LowFat/ ?

So I mean, it sounds like you might be able to get to a place where you can use some bits of unsafe in rust, but maybe the program overall could still be safe because the compiler can have a mode where run-time checks (which can be statically eliminated in a lot of cases) are included.

But hey, I'm just a relatively amateur outside-observer of all this, maybe that's a totally impossible pipe dream? :)


> e.g. code that uses funky casts, code that makes assumptions about data layout

Maybe it shouldn't be doing that? Isn't that the whole point of something like a -safe flag? Increase security as you go along. Yes it is going to take time. The best time to plant a tree is 20 years ago, next best is today.


Maybe it shouldn't be doing that, but "recompiling C code in safe mode" only makes sense if you don't have to change the code much or at all.

If you have to make all kinds of changes to the code you might as well just translate it into a different language.


It's called AddressSanitizer. You enable it with the compiler flag -fsanitize=address. It's supported by clang, gcc and lately MSVC.


Address Sanitizer is not perfect, nor is it suitable to ship in production.


Asan binaries should never be shipped in security sensitive environments. It's not designed for that. It's unsafe.


You shouldn't use ASan in release builds since it may have exploitable vulnerabilities.


And used by around 36% of developers that bother to answer surveys.


Compile it for wasm and use a wasm runtime built in a memory-safe language? I believe some wasm runtimes allow for making raw syscalls.


It would not stop this sort of exploit if I'm not mistaken (OK, it could help, since rewriting return addresses is not possible I think), but memory errors that cause logic bugs are still possible.


Yes, that's exactly right.


Rust would have prevented the -1 as a UUID too, because you would have used a sum type (Rust enums) instead of a sentinel there. It's easier, it's idiomatic, it's clearer, and the compiler knows how to optimize the overhead away a lot of the time.


Well, in this particular case, the special behavior of -1 is baked into the setresuid system call, while sudo thought it was just an ordinary UID. So one of the Rust operating system projects designed from scratch might not have this kind of pitfall. But if you literally just reimplement sudo for existing OSes in Rust – which I think would be a neat project for someone to take on – you’d be at risk of running into it.


It is a uid (as in user id), not uuid. I don’t think you can use a sum type for that.


It is a user id, but that bug happened because a -1 was being returned as an error code in one place, and then being accidentally passed in another place. The sum type would be used as the “this possibly errors” return type in the first function, making the bug effectively impossible to happen by accident.


If sudo were written in Perl, it would also not be vulnerable.


I thought about that. But when using a higher-level language (that comes with a runtime), you need to give privileges to the whole runtime, which arguably has a much bigger attack surface, unless I am mistaken?

The kernel won't allow you to setuid scripts, there is a reason for this: it's very easy to leave glaring security holes while doing so.


Perl had a fix for this called suidperl, which was a wrapper that enabled Taint mode and other strict checks (https://perldoc.perl.org/perlsec). I don't know of any other language or interpreter that goes this far for security by default, so Perl might be the most secure language in this regard. However, suidperl was dropped in 5.12.

My main point was that you could rewrite sudo in all sorts of languages, but saying "just rewrite it in Perl" (assuming it worked) isn't enough justification to make it happen. Nobody is going to re-create their own project in Perl, Rust, etc. just to eliminate buffer overflows. If somebody wants sudo in Rust, they'll have to do it themselves, and it still might never replace the original.


Thank you for pointing that out :)

If someone would get serious about security, auditing every setuid binary would certainly be something on their list (if they use any). If they really want the functionality, rewriting it to cover just enough of the required functionality wouldn't be unheard of.


I agree. But really the security best practice is to remove setuid from all binaries, and rely on RBAC rules (such as with SELinux). That would solve more security issues than anything else, but this is way more effort than most people are willing to invest in security.


>But if sudo were written in Rust, it could have the same level of complexity and not be vulnerable.

This is not true. Complexity breeds bugs, including security bugs, and memory safety doesn't change that. Your example is a good one - here's another: doas once failed to limit the environment variables which are passed to the child process, which could be used to nefariously influence the program running (e.g. with LD_PRELOAD). How would Rust prevent that oversight? It wouldn't.

A simpler program will generally be more secure than a complicated one, no matter what language either is written in. Furthermore, rewriting an established program from one language to another will always introduce more bugs than it fixes, and more severely the more complex the program is. The single best way to improve security is to reduce the attack surface, and the single best way to do that is to reduce the complexity of your system.


If you go a little further with the quote:

"Yes, it would still be vulnerable to logic errors... But it wouldn't be vulnerable to this. "

I think you'll find in disagreeing with the comment on logic errors you just said the same thing the comment did about logic errors.

Also, I think the claim that rewriting an established bit of code in a more secure language always introduces more bugs than it fixes is too general. Clearly Firefox not only set out to create Rust for this purpose, it has not had an explosion in vulnerabilities in the modules it has replaced. Quite the opposite, actually. Nor has every tool or app rewritten become a security failure compared to the original. I do think it's something that can easily be screwed up, though, especially if someone rushes through by focusing on functionality duplication instead of building a more secure version of something.

Regardless, both "using a memory-safe language results in a safer program" and "having a minimal attack surface results in a safer program" can be true. There is no need to make it a choice of A or B.


>I think you'll find in disagreeing with the comment on logic errors you just said the same thing the comment did about logic errors.

I think you'll find that my comment explicitly acknowledges this and expands on it with another example. Are we done telling each other to read the things we're writing?

>Firefox not only set out to make Rust for this purpose but it's not had an explosion in vulnerabilities with the modules it has replaced.

You're setting the bar pretty high with an "explosion" of vulnerabilities here. Rust programs have vulnerabilities, including rewrites. They also have other kinds of bugs, often ones which were not present in the code that they're replacing. You need only browse your nearest convenient RiiR bug tracker to find evidence of this.

Let me restate my thesis in mathematical terms. If we presume that 1 in 100 lines of production code has a bug in it, regardless of language (generous, I know), and that 1 in 10 bugs in C programs are memory corruption related, then saving 10% of those bugs by rewriting it in Rust would take a 10,000 line codebase from 100 bugs to 90 bugs. A 1,000 line codebase, still written in C and without the advantage of memory safety, would have only 10.

In today's example, sudo is a caricature of runaway complexity. Rust is often touted as a panacea, but C has very little to do with why sudo is insecure. Sudo is comically overengineered and that level of overengineering has no place in a security context. This is the larger issue that needs to be addressed, not Rust.


I agree Rust is not a panacea and that rewrites create their own set of problems, the only issue with this analysis is assuming 1/10 bugs are memory corruption related.

Both Chrome & Microsoft found about 70% of their security bugs to be memory-safety related. I've heard similar numbers out of FB as well. The math looks a little different with that data.

https://www.chromium.org/Home/chromium-security/memory-safet...

https://www.zdnet.com/article/microsoft-70-percent-of-all-se...


Even if we run the same math with 7 out of 10 bugs being memory safety related, and assuming that Rust prevents all of them, those same example programs end up with 30 bugs in Rust and 10 bugs in C.

There's another argument I could make, too. Look at the bug tracker for the program you want to rewrite in Rust, examining the historical bugs. You'll find that there are often hundreds or thousands of mistakes that they made and already fixed in the original codebase. If you're rewriting it from scratch, can you be sure you won't make just as many? A stable, maintained codebase with a low throughput of changes tends to have fewer bugs over time, as the lack of churn avoids introducing new bugs and the application of time susses out all of the existing bugs. Rewriting the whole thing from scratch has a very high rate of churn, introducing a whole new slew of bugs on its own.

Now, a small codebase, focused on delivering its key value-adds without distractions, kept stable and at a low-churn rate over a long period of time: no matter what language you use, this is the best recipe for reliability and security.


So again why does it have to be "rewrite at 1/10th the complexity in <Language A>" (10%) vs "rewrite in <Language B> at full complexity" (30%)? What's preventing using Language B for the complexity rewrite and getting 0.1 * (1 - 0.7) = 3%?

Rewrites do bring the chance to Royally Screw it Up™ so it's certainly not simply a product of "it is now written in <Language X> therefore safe" but as it said not only have projects shown the security didn't fall apart but they have shown the opposite.

I agree you don't get there by a bunch of yolo rewrites to whatever is hip though, it has to be a planned effort that isn't rushed. Much in the same way quickly writing a small replacement utility does not inherently make it more secure or reliable than an existing significantly more complex utility. Even just trying to shave some functionality off the existing code is rife with "but how does removing this piece affect the app remaining logic" and takes time and effort to do right.

Both methods do have to be done right and both do greatly help security but there is nothing about picking a memory safe language or making a significantly narrower focused utility that preclude each other.


You can do both! But because simplicity has a substantially greater impact than the language choice, I think it's better to focus on that. Right now, the ecosystem is focusing more on the language choice, and hardly talking about simplicity at all. And particularly in the case of Rust, I think it fails simplicity a lot in its own ways - in the stdlib, the compiler and toolchain, the language design - and the trade-offs don't really make sense for a lot of use cases that people are pining for it over anyway.


helps that a 10kloc c program getting riir'd probably won't be a 10kloc rust program, because c doesn't have libraries and rust does.

it is literally impossible to write "a small codebase focused on its key value-adds without distractions" in a language that doesn't have strings and requires you to build a dictionary from scratch


>helps that a 10kloc c program getting riir'd probably won't be a 10kloc rust program, because c doesn't have libraries and rust does.

What? Rust has so few libraries of significance that it still depends on C for security-critical areas like SSL.

>it is literally impossible to write "a small codebase focused on its key value-adds without distractions" in a language that doesn't have strings and requires you to build a dictionary from scratch

Strings are misunderstood, I'm not going to get into it here. My dictionaries in C usually clock in at about two dozen lines of code. The complexity doesn't go away because your language does it for you.


Having written dictionary implementations in C, I would be very interested in seeing your implementation that fits in two dozen lines of code.


Threw together an example (untested, with obvious errors) to give you an idea of what it could look like:

https://paste.sr.ht/~sircmpwn/3122d4a27a8e5312462e2329bf7ed6...

Actually managed to get it to exactly 2 dozen lines of code, not including the header, which isn't bad for an off-the-cuff remark.

You'd naturally expand or shrink this with whatever subset of map functions you require, like key/value enumeration, object deletion, resizing, whatever. It depends on your use-case. I don't believe in generic code.


Ok, that makes more sense. I was considering a slightly more fully-featured table and including the header (see: https://gist.github.com/saagarjha/00faa1963023206a8ccd987798...) and I was a couple times larger than your number, so I was trying to figure out what you were doing that I was unable to replicate…


that's not true these days, rustls is a great TLS lib that has been through at least one serious external security audit.

https://cure53.de/pentest-report_rustls.pdf


For what it's worth, rustls relies on ring which has primitives written in C and ASM because getting constant-time operation guarantees from Rust is Very Hard. Though progress is being made on this area.


Except that Rust is also a much, much more expressive language. Even ignoring things like solid module support and libraries, you'll find your Rust programs to be much fewer LoC (assuming bugs/LoC is the right metric) for equivalent functionality.

I agree that rewrites have the serious potential to introduce new bugs, and the cost is rarely worth it if the codebase is actually that stable and low-throughput, but the reality is that most aren't. A one-time high cost in exchange for introducing 70% fewer bugs over a period of N years starts to look like a good trade-off.

Yes, complexity is the root of all evil. I can get on board with the whole statement except the "no matter what language you use". If we have the ability to use a language that enforces memory safety, we should use it.


Lines of code is a poor approximation for complexity. Rust programs are shorter, but they are not less complex. The AST is similar and the graph of relationships between different parts of the code is much more complex than in C. Overall I'd say it balances at best, if not that Rust is more complex.


The sudo code in question is typical C: string processing with pointers and hand-rolled byte manipulation, size calculations, manual buffer allocation and freeing, and so on. The Rust equivalents of all this are far simpler.


Lines of code is a great approximation for complexity, or at least for how many bugs you're writing: https://softwareengineering.stackexchange.com/questions/1856...


Perhaps indeed! But a crucial distinction is that I consider the complexity in the language, compiler, and standard library to all be influences on your program's total complexity as well. Using std::List (or whatever you call it) has the same total complexity as writing your own little growable array.


From the point of view of bugginess, complexity in the implementations of massively popular libraries is far less of an issue than code you just wrote yourself, because the code in those libraries will have received much more testing than the code you just wrote yourself. So it doesn't really make sense to just add the complexity of components up like that.


sudo is quite a popular utility, by the same logic it might be expected to be well tested...


Yes. And I shudder to think just how many bugs there would be in a home-grown sudo replacement.


> "Even if we run the same math with 7 out of 10 bugs being memory safety related, and assuming that Rust prevents all of them, those same example programs end up with 30 bugs in Rust and 10 bugs in C."

Maybe, but not necessarily; it's reasonable to assume that Microsoft and Facebook put non-zero effort into designing around, programming around, testing, looking for, and fixing memory-safety-related issues in their C code. It could be the case that not having to care so much about those frees up some non-trivial amount of attention and time which could be spent on the other classes of problems.


Similarly, it is possible that they would use the time to add new features with new bugs. I'd personally suspect that to be the more common outcome.


> "Furthermore, rewriting an established program from one language to another will always introduce more bugs than it fixes"

Here[1] is a link to a slideshow of a talk on the F# language, with a case study from EON PowerGen company rewriting the core of an application evaluating revenue due from their balancing services contracts nationwide in the UK. It was originally 350,000 lines of C# developed by 8 people in 5 years and incomplete. It was redeveloped by 3 people (2 had never used F# before) in 30,000 lines, complete in 1 year.

They claim zero bugs in the F# redeveloped system (page 29). This example also gets a mention in a Don Syme (F# language designer) talk in 2018[2] with the PowerGen employee in the audience.

The PDF cites a testimonial from Kaggle saying they're moving more and more of their application into F# which is "shorter, easier to read, easier to refactor, and because of strong typing, contains far fewer bugs".

[1] https://www.microsoft.com/en-us/research/wp-content/uploads/...

[2] https://www.youtube.com/watch?v=kU13g_noAQM


Incidentally, after inspecting doas for a few minutes, I found two near-vulnerability bugs in it.

The first bug lets any user cause doas to read out of bounds of an array, though not in a way that's exploitable.

Well, it's arguably a bug in libc. If you run doas with a completely empty argv (argc = 0, so not even an executable name; the two systems I tried, Linux and macOS, both let you do this), getopt will exit with optind = 1. Then when doas does:

    argv += optind;
    argc -= optind;
`argc` will become negative, and `argv` will advance past the null terminator. On most OSes, the `argv` array is immediately followed in memory by `environ`, so argv will now point to the list of environment variables.

doas will then dereference argv, and generally act as if you tried to execute a command consisting of the environment variables. However, the environment variables are not secret, and doas doesn't behave any differently than if you just passed the environment variables as normal command-line arguments, so this is not exploitable.

On an OS where argv is not followed by environ or a similar array of character pointers, doas might crash instead, although since it only reads from those pointers rather than writing to them, this still probably wouldn't be exploitable.

The second bug would compromise memory safety if things were slightly different. The bug is in configuration file parsing. Even if it did compromise memory safety, it would not actually be exploitable, because doas normally only parses the trusted systemwide configuration file. It can be asked to parse a configuration file passed on the command line, but it drops privileges before doing so. This is a good example of layered defense, so kudos to doas for that! Still, I thought the bug was worth mentioning.

The bug is a traditional sort of integer overflow. parse.y grows the array of rules with

    maxrules *= 2;
but maxrules is an int, so this will eventually overflow if the configuration file is large enough.

However, because maxrules happens to be signed, before doubling produces a smaller-than-expected positive value, it will first produce a negative value. This will then get sign-extended when converting to size_t (assuming 32-bit int and 64-bit size_t), and reallocarray's overflow check will trigger, causing reallocarray to return NULL. doas interprets that as out-of-memory and handles it cleanly.

(On a system where sizeof(int) == sizeof(size_t), things are a bit different, but it will just run out of memory before maxrules gets that high.)

Moral of the story? Well, as I see it:

Simplicity and layered defense, both featured in doas, are both effective ways to avoid vulnerabilities. But guaranteed memory safety, which would require a different implementation language, is also an effective way to avoid vulnerabilities. You aren't forced to pick and choose. Why not demand all three?


The `argv += optind;` idiom is a standard pattern. I had never considered the argc = 0 case to be possible. I need to read some more on this.

As for your second find. It already got fixed: https://marc.info/?l=openbsd-cvs&m=161176698927944&w=2


Nice finds. I would agree that that's more arguably a bug in libc than in doas, but also note that the startup code for any language has to consider this case. As far as theoretical operating systems are concerned, this is a consequence of the System-V ABI, so any OS compatible with it would have the same issue.

As for the integer overflow case, it's also highly unlikely to be exploitable, even if it were unsigned - the system would have to, as I'm sure you can infer, have tens of millions of rules before this was an issue. It's quite within the realm of reason, in my opinion, to declare this an acceptable trade-off. The rest of your explanation shows that even if this weren't the case, the bug wouldn't be exploitable.

Anyway, I like your comment, but I'd recommend a different moral to this story: in the space of 47 minutes you were able to conduct a reasonably thorough audit on the doas codebase. Wanna give that a shot for sudo now?


> I would agree that that's more arguably a bug in libc than in doas, but also note that the startup code for any language has to consider this case.

This is true, but for a language where dynamically sized arrays are a standard data type, the most natural thing to do is to start by collecting the arguments into an array (maybe copying the strings at this point, maybe not). All further argument parsing is done with the array and is thus bounds-checked. I checked Rust's standard library and sure enough, it follows this pattern. Though, I could imagine some hypothetical startup code messing up the argc=0 case if it tried to separate argv[0] from the rest of the arguments while constructing the array.

> Anyway, I like your comment, but I'd recommend a different moral to this story: in the space of 47 minutes you were able to conduct a reasonably thorough audit on the doas codebase. Wanna give that a shot for sudo now?

Fair point. (And I didn't downvote you.) But in my opinion, that just confirms my view: ideally you want both simplicity and memory safety.


Aye, I agree. But if we consider that case, a similar mistake could be made: hard-coding argv[0]. The result is different, in that the program just aborts, but it's still the Wrong Thing To Do, and in both cases it never leads to anything exploitable. Bugs are bugs, no matter what language. We could come up with examples all day. Just head to your nearest Rust program's bug tracker :)


Aborting when argv[0] doesn't exist... is a perfectly reasonable thing to do? Someone called the program with arguments severely out of spec, crashing is fine.


It's actually within spec, in this case. Still reasonable?


It's within the C and System V ABI specs, but it's not within the implicit contract of how you call command-line programs. I'm fine with it.


Right, but if it was within the specs, possible to craft a scenario for, and leads to a security vulnerability, then does it suddenly matter? A bug is a bug. If it doesn't matter for Rust then it doesn't matter for C.


> and leads to a security vulnerability

I have trouble imagining how aborting leads to a security vulnerability? That's literally running no code, the opposite of running arbitrary code.

Aborting is fine in any language. Criticisms of C here would come about because C doesn't abort when it should (null pointer deref, array out of bounds, etc), not the inverse.


A lot of your statements are pretty strong, and imo totally incorrect.

> Complexity breeds bugs, including security bugs, and memory safety doesn't change that.

Yes, memory safety changes that radically.

> A simpler program will generally be more secure than a complicated one, no matter what language either is written in.

Disagree, but the statement is really weak anyways, especially since 'complexity' is an ill-defined term. More features? Cyclomatic?

> Furthermore, rewriting an established program from one language to another will always introduce more bugs than it fixes, and more severely the more complex the program is.

Should be obvious to anyone that this isn't true.

> The single best way to improve security is to reduce the attack surface,

Not true, but it's a great way to start.


>Disagree, but the statement is really weak anyways, especially since 'complexity' is an ill-defined term. More features? Cyclomatic?

I'm not sure of any definition of complexity you could appeal to which makes my argument weak.

>>rewriting an established program from one language to another will always introduce more bugs than it fixes, and more severely the more complex the program is.

>Should be obvious to anyone that this isn't true.

The opposite is painfully obvious: (1) Writing code causes bugs. (2) Rewriting an established project involves writing more code than leaving it would. (3) Writing all of that new code will introduce new bugs which were not present in the original.


Yeah I think that's an absurd reduction. Rewriting code means that you can solve fundamental architectural issues, that you can start fresh with better tooling, that you have the lessons learned without the technical debt, etc.


Yeah, but why the assumption that those things are an issue? We're talking about mature codebases. Rewriting it again in C would also give you a chance to start fresh with better tooling, lessons learned, paying back tech debt, etc. Even still, you're going to introduce new bugs in the process. You might fix a few hard-to-address architectural issues, but all of the other bugs would be easier to fix in the original codebase than by rewriting the whole thing.

I'm not saying that a rewrite is never justified, but rather that the argument that we should rewrite in Rust simply to avoid bugs has little weight.


> We don't have to rewrite everything in Rust to get better security. We just have to use simpler tools.

People don't add these features for the fun of it; they're present because they solve a specific use case.

I use doas as well; it's a neat little program that covers many of the common use cases, but it doesn't cover all of them. Usually you should use the simplest tool that solves your problem, but sometimes your problem is complex and thus your tool will be complex.

That is not to say that sudo can't perhaps be simpler; the first version was released in 1985 and there are probably things that can be improved, but sudo really isn't written by idiots who just add features because they have nothing better to do.


Adding features because they solve a specific use-case is grossly irresponsible. Solving a specific use-case is only one of many criteria that needs to be met for a feature to be justified. Others include "is it in scope?", "is it a maintenance burden?", "does it make existing features more unreliable?", "will its bugs affect people who don't need it?", "can it be done in a separate tool?" Anyone can come up with a use-case. It's often the maintainer's job to say "no".

If doas doesn't cover your edge case, then the responsible thing is to make a new tool which covers just that, and not to shove your complexity into a critical security component that the other 99% of the userbase doesn't need. Remember Heartbleed? The entire internet shat its pants because of a vulnerability in a feature that no one uses.

Failing to uphold that principle over and over again leads to broken, unreliable, insecure programs. This is why everything is on fire. Not C.


sudoedit is used by many people, and setting a different shell with -s seems like something that would cover a number of edge cases, yes, but writing a new tool just to add "-s" is obviously silly. Nothing in this particular CVE touches on anything that seems particularly obscure to me.

The last major sudo bug was in the PAM code (which led to the creation of doas), which is something many people don't need, but also something that many others do need.

And writing separate tools would be the equal (or more!) lines of code and an equal amount of bugs in total (or probably more, since people will be reinventing stuff and there will be fewer reviewers per line of code). This isn't reducing complexity, it's just spreading it out.


For what it's worth, PAM is also not invited to my parties, for all of the same reasons as sudo was shown the door. And what people want PAM for is mostly solved with SSH certificates.

>writing separate tools would be the equal (or more!) lines of code and an equal amount of bugs in total

In total, yes, but crucially, not all on your system at the same time.


I agree with your point on complexity; that was my second thought. Especially for such critical pieces of infrastructure. Thank you for mentioning doas, I will have a look.

But the two are pieces of the same puzzle (defence in depth). Ideally, SUID binaries should be formally specified/verified (ada+spark?), though you could still have bugs in the specification.

And I'd argue that if you need more specific features than just `sudo`'s core functionality, you should probably just make your own setuid binary. That still exposes you to making the same mistakes, so better keep the complexity low, and still rely on a memory-safe language. Using proved, lightweight libraries helps getting an implementation correct.

As much as I like C (which is the language I'm most proficient with), it just gives you too many ways to shoot yourself in the foot, and IMO isn't really the best suited for something where:

- performance doesn't really matter

- memory safety, typechecking are critical.

You could always get away with a transpiler for a DSL, or a compiler that injects more checks, but better suited tools are available anyway.


Hi Drew, I agree with everything you said re complexity and rust. What we really need is a modernized C language, tools that help us catch bugs like this, and a better culture of testing and accountability.

I'm curious whether you run OpenBSD, since you mentioned you use doas. Do you have any thoughts on OpenBSD?


Rust is modernized C. You are looking for something that already exists. If C programmers would be looking for tools to help catch bugs like this and a better culture of testing and accountability they would be using Rust.


Yeah, Rust is about the simplest language that guarantees both memory safety + low-level control. Almost all of its complexity comes from having to satisfy both.


Rust is modernized C++.


I really don't understand why people downvoted this when it's obvious. I don't align with you on anything else, because I don't hate C++ and Rust, but I have to admit Rust and C++ suffer from the same problems.


Modernized C = Zig.

But I don’t see a point in using systems languages for the usual UNIX tools; they could be rewritten in a more secure language. In the rare case where performance is important, there is FFI, but these tools are usually IO-bound, so there is not much point.


I agree about secure languages. Last week I was daydreaming about reimplementing all non-RT parts in Lua.


I don't run OpenBSD, for reasons that have little to do with security. In my opinion, OpenBSD is not aggressive enough at complexity reduction (doas being an outlier; I think doas is quite a good size).


I see. What do you run, if you don't mind me asking?


Alpine Linux.


Fair point re: complexity but how are CVEs in one codebase evidence of absence of bugs in another?

Or said another way, is the lack of CVEs for doas an indication it is more secure or just less (ab?)used?


More CVEs can also sometimes be a function of exposure; Joe Random's program probably has no CVEs, but that doesn't mean it's more secure than Jane Popular's tool. In this case, however, both sudo and doas have sufficient exposure to estimate their relative security using CVEs. We can also use the CVEs to characterize the kinds of vulnerabilities each has internally, without comparing them to each other. In general, the CVEs that are discovered for sudo are more severe and damning than those discovered for doas, without respect to their relative occurrence.


Sure, that makes sense. Would you also consider the re-occurrence of the same class of CVE? Like the environment variable parsing you mentioned before. If there were another CVE on that for doas would you consider it more damning than the first?


Yeah, I would.


Don’t know why this comment is getting downvoted. The code in question is complex and outdated. Improving the code and testing or switching to simpler tools can lead to a pragmatic solution without the political baggage.


I'm curious, is this one implementation of sudo really used everywhere?

I was under the impression that different Linux userspaces sometimes implement these common commands differently. Like "ls" sometimes actually being aliased to a bash script, or maybe BSD having one implementation and Ubuntu another. Is that not the case? Is "sudo" not maintained by an entity like gnu, bsd, etc?

edit - in other words, I always assumed "sudo" was a highly-dependent system-level tool, not just some useful helper binary that is maintained by one independent person.


Nope... sudo is just this sudo in 99.99% of cases. There are some alternatives, such as *BSD's doas, and others, but all but doas and su are so unpopular and outdated that I would not recommend using them, as they probably have way more security issues.


doas is just OpenBSD. You can install doas from ports on NetBSD or FreeBSD, just like you can install doas on Linux.

OpenBSD dropped sudo from the base OS several years ago. sudo just became too complex, tailored to the feature creep demanded and required (PAM, ugh) by Linux users.


Briefly going through their website (sudo.ws) I am seriously wondering why anyone would want to put some of those features in a privilege management tool.


Todd Miller is a sharp developer and core OpenBSD contributor. I can only imagine the deluge of requests and pressure he faces to expand sudo. There's no end to the crazy stuff corporations demand, especially when it comes to integration--audit, logging, ldap, etc.


> Todd Miller is a sharp developer and core OpenBSD contributor.

I wonder why OpenBSD wrote their own version. Could it be that, knowing how the sausage is made, they thought it was better to have a salad...?


> I wonder why OpenBSD wrote their own version.

Wonder no more: https://flak.tedunangst.com/post/doas

> I started working on doas quite some time ago after some personal issues with the default sudo config. The “safe environment” was under constant revision and I regularly found myself unable to run pkg_add or build a flavored port or whatever because the expected variables were being excised from the environment. If I had been paying attention, keeping sudoers up to date probably would not have been such an ordeal, but I don’t like change.

> The core of the problem was really that some people like to use sudo to build elaborate sysadmin infrastructures with highly refined sets of permissions and checks and balances. Some people (me) like to use sudo to get a root shell without remembering two passwords.

> […]

> Talking with deraadt and millert, however, I wasn’t quite alone. There were some concerns that sudo was too big, running too much code in a privileged process. And there was also pressure to enable even more options, because the feature set shipped in base wasn’t big enough. (As shipped in OpenBSD, the compiled sudo was already five times larger than just about any other setuid program.) Hurray, tension. It wasn’t the problem I was trying to solve, but it was an opening from which to launch my diabolical plan.


Lol - now that you quoted it, I actually remember reading this post back then... but at the time I had just assumed sudo was a linuxism they didn’t particularly appreciate (openbsd people can be... petty), I didn’t know one of their core devs was maintaining it.


> I wonder why OpenBSD wrote their own version.

With most other projects, I would smell a major case of Not-Invented-Here, but the OpenBSD developers seem to have an impressive track record of actually learning from mistakes, both from their own and those made by others.

> knowing how the sausage is made, they thought it was better to have a salad

I love that phrase! (Coincidentally, an engineer working in food processing once explained to me how chicken nuggets are made (while we were eating!), I have mostly avoided them ever since...)


> There's no end to the crazy stuff corporations demand, especially when it comes to integration--audit, logging, ldap, etc.

Why should that be of concern to casual home use? Why do parts of a factory have to trickle down into my home? Wouldn't that be like the need to have a cow to drink milk, or a farm to have something to eat instead of a more apt product to buy for a reasonable price and in good quality?


The bigger a piece of software is, the more opportunities there are for bugs. And the correlation isn't linear.

For home users there is doas, also written by a OpenBSD developer. It's really simple, but I never found anything to be missing for my use case. All the logging and auditing and whatnot can (and imo should) be performed somewhere else.


doas has a much smaller attack surface, and is worth checking out.


> Like "ls" sometimes actually being aliased to a bash script, or maybe BSD having one implementation and Ubuntu another

It is true that BSD and linux sometimes have different implementations of posix commands.

The vast majority of linux distros are using the same gnu coreutils though. There are alternate implementations (like busybox, among others), but they're not often used in desktop distros.

I'm curious if you have any example of a linux distro that does treat ls so weirdly; that uses anything other than gnu coreutils or busybox for it.


They probably didn't mean replace wholesale, but that `ls` in a shell is a wrapper around the underlying coreutils `ls` with some extra flags by default. Eg:

    $ (. /etc/os-release; echo "$NAME:$VERSION_ID")

    openSUSE Tumbleweed:20210121

    $ command -v ls

    alias ls='_ls'

    $ grep -A6 -B1 '_ls ()' /etc/profile.d/ls.bash

    bash|dash|ash)
        _ls ()
        {
            local IFS=' '
            command ls $LS_OPTIONS ${1+"$@"}
        }
        alias ls=_ls
        ;;


Wikipedia has a history section https://en.wikipedia.org/wiki/Sudo#History

But every tool has to be maintained by someone. It's not like GNU is a faceless corporation.


> introduced in July 2011 (commit 8255ed69), and affects all legacy versions from 1.8.2 to 1.8.31p2 and all stable versions from 1.9.0 to 1.9.5p1, in their default configuration.

It looks like this is pretty far reaching. All of my boxes were vulnerable to this before updating today.


How does this story not have a billion upvotes? HN should introduce sticky posts just for this bug and keep it at the top of the homepage for weeks.

> exploitable by any local user [...] without authentication

> introduced in July 2011 [...] in their default configuration

> full root privileges



Thanks for sharing this. I was looking for it on the mercurial repo at sudo.ws, but the commit didn't match. I found it here: https://www.sudo.ws/repos/sudo/rev/f666191a4e80


Developers: your moment has come at last to humble your local system administrator for wearing those "I read your emails" t-shirts. This is as day zero as day zero gets. Red Hat and Debian published their security announcements just two hours ago at the exact same moment this was posted on Hacker News. It would have been more responsible to keep something this bad under wraps a bit longer. Because all the people who still use things like cpanel virtual hosting are at risk.


cpanel is a web-based thing though, isn't it?

You'd need shell access to the host to execute `sudo` and attempt to exploit it.


CPanel is a web GUI for managing Linux systems. It's mainly used to configure and resell Apache virtual hosts. Shell accounts are one of the things it manages. These companies normally have hundreds of customers per server, since they charge ~$1/month for hosting. So anyone who pays one dollar a month extra for shell access can compromise a whole lot of people. I tried tweeting at these virtual hosting providers to bring the vulnerability to their attention, but no one's responded.


Is it normal for a security issue of this magnitude to have a 12 day notification period for everyone? That seems... short.


Yes. This was coordinated on the distros mailing list, which has maximum embargo period of 14 days, with periods shorter than 7 days preferable:

https://oss-security.openwall.org/wiki/mailing-lists/distros...


Still no update for Centos 8, so I'm not sure that worked too well.

