
Buffer overflow when pwfeedback is set in sudoers - masklinn
https://www.sudo.ws/alerts/pwfeedback.html
======
masklinn
Just in from “C is fine, just know what you’re doing”.

> Exploiting the bug does not require sudo permissions, merely that pwfeedback
> be enabled.

> the stack overflow may allow unprivileged users to escalate to the root
> account. Because the attacker has complete control of the data used to
> overflow the buffer, there is a high likelihood of exploitability.

~~~
stefan_
Best part is the fix:

[https://github.com/sudo-
project/sudo/commit/b5d2010b6514ff45...](https://github.com/sudo-
project/sudo/commit/b5d2010b6514ff45693509273bb07df3abb0bf0a)

Time to delete sudo.

~~~
akersten
Every time I see a fiddly fix like this where we're changing a buffer counter
to a postfix instead of a prefix increment, I always think "why are we doing
this shit ourselves?"

We should have standardized a C API to safely read/write buffers from files,
networks, etc. decades ago. But here we are in 2020 manually moving pointers
around - and what a surprise that we're finding "whoops, another off-by-one
error, you know how it goes."

No one should be writing new code to read/write bytes. This should be settled.
It's like a carpenter building their own nail gun each time they want to build
a house.
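For illustration, here is a minimal sketch of the kind of bounds-checked buffer API the comment is asking for. The `struct buf` and `buf_append` names are made up for this example; nothing like this is in the C standard library:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* A buffer that carries its own capacity and length, so every write
 * can be checked in one place instead of open-coded at each call site. */
struct buf {
    char  *data;
    size_t cap;   /* total capacity of data */
    size_t len;   /* bytes written so far (len <= cap) */
};

/* Appends n bytes; refuses (and reports failure) rather than overflow. */
static bool buf_append(struct buf *b, const char *src, size_t n) {
    if (n > b->cap - b->len)   /* safe: cap >= len always holds */
        return false;
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return true;
}
```

With an interface like this, an off-by-one in a caller turns into a refused write instead of a stack smash.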

~~~
codeflo
The prefix/postfix change is a red herring; it doesn't do anything (since the
expression result isn't used). Which means that the security fix is
interspersed with code style changes. Maybe there's a reason to do that here,
but it's often not considered good practice because it makes the code harder
to review.

AFAICT, the only real changes are in (new) line 411, resetting the cp variable
after the loop, and line 416, ignoring write errors. The former is probably
the relevant one.
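To make the bug class concrete, here is a hypothetical sketch (not sudo's actual code): a buffer tracked by a moving position pointer `cp` and a separate remaining-length counter `left` falls out of sync when an erase path resets only one of them, exactly the kind of desync the advisory describes:

```c
#include <assert.h>
#include <stddef.h>

struct linebuf {
    char   buf[8];
    char  *cp;     /* next write position */
    size_t left;   /* bytes believed to remain */
};

static void linebuf_init(struct linebuf *lb) {
    lb->cp = lb->buf;
    lb->left = sizeof lb->buf;
}

/* Buggy erase: resets the remaining length but not the position. */
static void erase_buggy(struct linebuf *lb) {
    lb->left = sizeof lb->buf;   /* BUG: lb->cp is left where it was */
}

/* Fixed erase: resets both, keeping them consistent. */
static void erase_fixed(struct linebuf *lb) {
    lb->cp = lb->buf;
    lb->left = sizeof lb->buf;
}

/* How far past the end would a full `left`-byte write run? */
static size_t overrun_bytes(const struct linebuf *lb) {
    size_t used = (size_t)(lb->cp - lb->buf);
    return used + lb->left > sizeof lb->buf
         ? used + lb->left - sizeof lb->buf
         : 0;
}
```

After the buggy erase, a caller trusting `left` writes past the end of `buf`; after the fixed one, position and length agree again.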

~~~
kohtatsu
PSA: git add -p and git commit -p exist and are beautiful.

~~~
jwilk
FWIW, sudo uses Mercurial, not git.

[https://github.com/sudo-project/sudo](https://github.com/sudo-project/sudo)
is just a mirror.

~~~
masklinn
Doesn't really make a difference, hg offers similar facilities (formerly the
record extension, now hg commit -i). There's no index, but it's essentially
the same as git commit -p.

------
_-___________-_
This reiterates the idea that you should avoid rarely-used features of
security-critical software, and is perhaps an argument that those features
shouldn't exist in the first place. An extremely-minimal `sudo` alternative
would be a nice idea.

~~~
dharmab
There is an alternative: OpenBSD's doas:
[https://man.openbsd.org/doas](https://man.openbsd.org/doas)

There is a Linux port, too

~~~
quotemstr
Why change the name of the command just to switch to a different
implementation? It should be possible to make a drop-in replacement for sudo
itself in Rust or your favorite safe language, and a drop-in safe replacement
is likely to get much more traction than some completely new thing.

~~~
mikelward
What do you really need other than "mysudo <command to run> <command's args>"?

All of sudo's command line options and config options are part of its
complexity.

~~~
comex
`sudo -u username` is pretty useful.

~~~
dharmab
Which is one of the few options that doas also supports.

------
quotemstr
The thing about C that's always baffled me is the resistance to abstraction
that makes its lack of memory safety worse than it has to be. Nothing stops
you, even in C, from defining a "string" data type that maintains an internal
buffer size and read position. You can even give such a thing an elegant API,
as many people have when they've implemented things like this.

But all over the place in C code, you see people open-coding all sorts of
fiddly nonsense over and over again. You see people manually manipulate buffer
positions and lengths; they'll laboriously type "!strcmp(a, b)" instead of
defining a tiny little "streq" function. It's silly. However weak
C-the-language's memory safety is, the problem is exacerbated by
C-the-culture's resistance to using the abstraction features of the language.
Why so little imagination?

Maybe it's a kind of self-selection thing: maybe if you're the kind of person
comfortable with abstraction, you switch to C++, which is much better at it
than C and just as efficient, leaving only the open-code-all-string-
manipulation people writing raw C.
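For illustration, the tiny wrapper described above costs three lines to define and removes the easy-to-misread `!strcmp(a, b)` idiom:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Reads as what it means, unlike !strcmp(a, b). */
static bool streq(const char *a, const char *b) {
    return strcmp(a, b) == 0;
}
```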

~~~
pjmlp
Exactly that, and that is why every attempt to fix C security holes has
failed: the culture just isn't willing to accept it.

Even the C++ community is aware of the flaws inherited from C, hence why the
standard library supports arrays and strings with bounds checking enabled in
debug builds, and eventually in release as well, if one so wishes.

Papers are going forward to reduce the amount of UB in C++.

Meanwhile, on C's side, Annex K was made optional in C11, instead of trying
to fix whatever implementation issues it might have had in C99.

~~~
loeg
Annex K has never been non-optional. No one has ever implemented a fully
compliant Annex K, and it's a terrible abstraction. It predates basic static
analysis and suffers from more or less the same problems as the non-Annex K
variants. E.g., if you pass the wrong buffer length to the K variant, you
still get buffer overflows. How do you tell if the wrong length is passed?
Static analysis. The same analyzers work with the existing libc functions. I
do think C, or at least POSIX, should have adopted strlcpy and strlcat,
though; strncpy doesn't do what most people think it does.

Source, and more info: [http://www.open-
std.org/jtc1/sc22/wg14/www/docs/n1967.htm](http://www.open-
std.org/jtc1/sc22/wg14/www/docs/n1967.htm)
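To illustrate the difference: a sketch of strlcpy's contract, written here as `my_strlcpy` to avoid clashing with systems that ship the real one. Unlike strncpy, it always NUL-terminates (when the destination has any size at all) and returns the source length, so truncation is easy to detect:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* strlcpy-style bounded copy: copies at most size-1 bytes, always
 * NUL-terminates if size > 0, and returns strlen(src) so callers can
 * check `ret >= size` to detect truncation.  strncpy, by contrast,
 * leaves dst unterminated whenever src does not fit. */
static size_t my_strlcpy(char *dst, const char *src, size_t size) {
    size_t srclen = strlen(src);
    if (size > 0) {
        size_t n = srclen < size - 1 ? srclen : size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return srclen;  /* >= size means the copy was truncated */
}
```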

~~~
pjmlp
I got that wrong, thanks for correcting me.

I do agree that Annex K doesn't provide a proper solution, as data and length
remain separated.

However, what I disagree with is the way it was left to rot without adoption,
instead of actually being fixed.

Static analysis is not a solution, given that all surveys point out that only
a minority cares to actually use it, or works in environments where it is
welcomed, with supportive toolchains.

Also, it doesn't work across third-party binary dependencies, which are
heavily used in corporate environments.

So the unwillingness to improve C language security in the standard shows how
little those driving WG 14 care about security.

At least C11 made VLAs optional, which were never a good idea to start with.

~~~
loeg
I'm not saying static analysis is a complete solution, just that Annex K
doesn't actually function without it. I.e., any criticism you want to level
at static analysis also applies to Annex K.

(The compiler is one form of static analysis that users can't really get away
from, though. Users can turn all the warnings off, but only the extremely
novice or foolhardy do so.)

I agree letting Annex K hang out optionally in the standard isn't especially
useful; in its current form it should be removed. It isn't clear how it would
be "fixed" without being something completely different from Annex K.
Unfortunately, C hasn't had an actual language update since C11; C18 was just
a clarification and errata incorporation update. The C2x committee is in
progress but hasn't yet made any binding decisions on formal language changes,
AFAIK.

I don't know that 3rd party binary dependencies are heavily used in corporate
environments. I find that a little difficult to believe. I work in an
enterprise environment with a huge codebase and we have no 3rd party binary
libraries. We have a number of 3rd party open source or NDA source libraries,
but we can and do use static analysis on them. It would be impossible to get
to the kind of reliability numbers essential to our business with unauditable,
unfixable 3rd party code in our product.

WG 14 don't view bounded string copying as some core part of the language, I
think. The C standard is about providing a minimal definition of a portable
abstract machine; this leaves implementors or more concrete standards, like
POSIX, free to provide their own safe abstractions. Quite a lot of the C
standard library is sort of legacy accidents that might not be put in the
standard language again if reinvented today. But there is obviously a lot of
motivation to retain backwards compatibility.

~~~
pjmlp
Binary dependencies are quite common in Windows products.

The problem with POSIX is that not all environments support it, and those
that do don't always support the same version.

Agree with the rest of your points, though.

------
Santosh83
Probably just a naive question, but is there any particular reason why
security-related executables like sudo aren't replaced with versions written
in a bounds-checked language like, say, Go or Rust?

~~~
simion314
There are people doing that; there are one or more kernels written in Rust.
But even Rust evangelists are not using them daily, while demanding that
other people spend time rewriting things in Rust.

A technical reason for GNU/Linux not to use Rust or Go is that C supports a
lot more platforms, so you can't replace core components at this moment. I
also want to remind you that memory-safe languages existed before Go and
Rust, and the only project I am aware of that tried to create a safe OS and
utilities was Midori by Microsoft.

Firefox and its dependencies are not yet 100% Rust, so I honestly expect a
new browser started from scratch in a safe language to be done before Firefox
is "ported".

~~~
the8472
> I honestly expect a new browser started from scratch in a safe language to
> be done before Firefox is "ported".

That is a gargantuan project, considering all the standards you need to
support to have a competitive web browser. The JavaScript engine, video
codecs, and WebGL in particular are attack surfaces that are difficult to
replace with code written in a safe language.

~~~
simion314
>That is a gargantuan project

I think it is less work than pressuring existing projects to be rewritten in
Rust. I would suggest Rust fans pick one of those dependencies, like a codec,
and reimplement it in Rust, or gather money to pay someone to do it; I think
it would be faster and less hostile.

~~~
the8472
What do you think Mozilla is doing? They're certainly not being pressured.

~~~
simion314
From what I read, Mozilla fired one or more Rust developers working on the
JIT or the interpreter, and IMO the JIT should have been the first thing
rewritten in Rust, since it is one of the components that runs arbitrary
code. I really hope they can finish the port, and I can't wait to see how
many zero days it will have.

~~~
khc
Most of the JIT bugs were probably due to the unsafe code being generated,
not to the JIT being written in unsafe code. Rewriting the JIT from scratch
will probably increase the number of zero days.

------
vortico
I've always considered root escalations to be pretty minor because you can
just launch a phishing attack by adding the following to the user's .bashrc:

    alias sudo='sudo ./badscript; sudo'

Most Linux/Mac installations are single user anyway, so you can steal the
user's private information without even gaining root.

~~~
dooglius
Doesn't seem like a viable threat model--a user can edit the .bashrc of a
sudo-enabled user but isn't sudo-enabled himself?

~~~
vortico
Running sudo requires the user password, which a script might not know. Unless
of course sudo is configured with NOPASSWD.

~~~
loeg
This alias just prompts the user twice, as if the user made a typo the first
time. Unless you know you're a perfect typist, it's somewhat subtle.

~~~
vortico
Not if you have a sudo timeout enabled. Or, the proof of concept could be
changed to

    alias sudo='sudo ./badscript'

where badscript runs `exec "$@"`.

~~~
loeg
Good considerations. Thanks.

------
jimrandomh
Has anyone started a project to rewrite `sudo` in Rust or some other memory-
safe language? It seems like a prime candidate: very security sensitive, well-
defined functionality and API, low total complexity.

~~~
pjmlp
Long time ago there was a project to rewrite Linux userspace in Ada, but it
died due to lack of support.

Also there are userspace rewrites in Go and OCaml.

~~~
pdimitar
Could you point us at them?

~~~
vips7L
Core utils in Rust:
[https://github.com/uutils/coreutils](https://github.com/uutils/coreutils)

~~~
pdimitar
Awesome, thank you.

------
tyingq
_" Just a tiny change, password feedback is now enabled by default. So when
you type your sudo password you’ll see asterisks instead of wondering if
you’re typing anything at all ;)"_

Argh. [https://medium.com/elementaryos/loki-updates-for-the-new-
yea...](https://medium.com/elementaryos/loki-updates-for-the-new-year-
ff6b6ba65531)

~~~
staticassertion
Seems super reasonable tbh.

~~~
nitrogen
The standard for non-Windows/DOS terminals has always been to show nothing
during password entry. Does elementary show password feedback in all terminal
contexts, or just for sudo?

~~~
staticassertion
I think my response to all of that is "Who cares?". It's a totally fine and
reasonable UI change and it's incredibly unreasonable to think "Ah, this minor
UI change might lead to a buffer overflow because we code like it's 98".

~~~
tyingq
Fwiw, my "argh" was a "been there" argh. Not a judgey one. Unintended
consequences suck.

~~~
staticassertion
I see. Sorry for the misunderstanding.

------
floatingatoll
> _Sudo versions 1.7.1 to 1.8.25p1 inclusive are affected but only if the
> `pwfeedback` option is enabled in sudoers_

------
salawat
Uh...

>The code that erases the line of asterisks does not properly reset the buffer
position if there is a write error, _but it does reset the remaining buffer
length_...

Then the next paragraph:

> _Because the remaining buffer length is not reset correctly_ on write error
> when the line is erased, a buffer on the stack can be overflowed.

So which is it? Does it correctly reset the remaining buffer length or not?

These kinds of contradictions in a bug description do not fill me with
confidence that someone has actually fixed the issue. Guess I'll have to dig
into this one myself. Just what I wanted to have to do. Excuse me while I go
and learn way more about the innards of sudo than I ever wanted to.

~~~
jlgaddis
> _Excuse me while I go and learn way more about the innards of sudo than I
> ever wanted to._

Or just disable pwfeedback, if you went out of your way to turn it on for some
reason.

------
musicale
Buffer overflows in sudo (seen on debian/ubuntu mainly) have bothered me for a
long time since they lead to crashes. I'm glad they are finally fixing some of
them!

------
yoningao
I want to make a presentation about this for school. How can I adapt this in
order to, for example, make the normal user executing it become root?

------
based2
[https://www.reddit.com/r/netsec/comments/exmr98/buffer_overf...](https://www.reddit.com/r/netsec/comments/exmr98/buffer_overflow_in_sudo_versions_171_1825p1_when/)

------
pdimitar
I'm willing to work on replacing most GNU / UNIX tools with Rust equivalents.
Anyone willing to fund such an effort?

Seriously though, as others are saying sarcastically, I'll join them right up:
"C is completely fine, you are just doing it wrong, it's a safe language when
you know what you are doing".

Yeah, about that. _<Glances at the article again>_

It's not a matter of "if", guys. It's only a matter of "when".

~~~
cycloptic
>Anyone willing to fund such an effort?

No, not yet. The Rust ecosystem is still not mature enough. One of the big
problems is Cargo which sidesteps distro packaging and pulls dependencies from
some random server. This doesn't fly if you actually want reproducible builds.
The other problems are the complete disregard for dynamic linking and the
amount of churn in the standard library and compiler. Give Rust a few more
years for things to stabilize and for people to figure out a reasonable way to
handle these problems. The work will probably fall on the distro maintainers.

My opinion is that if you're looking for a short-term investment that someone
might want to make towards fixing these problems, try working on static
analysis security tools for C and C++. It could not be easier to make these
now with libclang. Alternatively, maybe get your company to fund a purchase of
PVS-Studio or something like that.

~~~
JoshTriplett
Distribution maintainers have already figured out how to package Rust in a
modular fashion and build without touching the network. Cargo specifically
provides facilities to help distribution maintainers provide dependencies
themselves. Based on those facilities, I built the initial prototype of Rust
packaging for Debian ("debcargo"), and the current Debian Rust maintainers
have made further huge advances in packaging.

So no, Cargo doesn't "sidestep" distribution packaging, it works with
distribution packaging.

Also, neither the standard library nor the compiler have "churn"; we're very
careful to make sure old code still compiles with current Rust.

~~~
cycloptic
Yes, debcargo is a great solution. Thank you for writing that. And really, I
mean it, it's awesome. But my point (which you seem to be agreeing with) is
that we have a while to wait while the distro people do their thing. Not just
Debian. We can't just take random packages off github and put them in any
distro, there is more to it than just running debcargo.

>Also, neither the standard library nor the compiler have "churn"; we're very
careful to make sure old code still compiles with current Rust.

When did this happen? Last I checked Rust still had no LTS policy and there
were no stability guarantees around the syntax, ABI, API, compiler flags,
Cargo file format, etc. If some policy has changed here, please link me to any
info.

~~~
JoshTriplett
> Yes, debcargo is a great solution. Thank you for writing that. And really, I
> mean it, it's awesome.

Just to reiterate: I wrote the initial prototype, and _massive_ credit goes to
the current maintainers who picked it up and ran with it into production.

> there were no stability guarantees around the syntax, ABI, API, compiler
> flags, Cargo file format, etc.

There are absolutely stability guarantees around syntax, around compiler
flags, around API, and around the Cargo file format. Code from old Rust and
old Cargo will build and run with new Rust and new Cargo. We take a great deal
of care to preserve that stability guarantee. The policies here have remained
essentially the same since 1.0 ([https://blog.rust-
lang.org/2014/10/30/Stability.html](https://blog.rust-
lang.org/2014/10/30/Stability.html)), though we've since taken further care to
specify them more precisely and to stabilize further areas.

(Source: I'm one of the leads of the Rust language team, and I'm on the Rust
cargo team. Other Rust teams, such as the Rust library team, follow the same
principles.)

~~~
cycloptic
That post doesn't have any concrete promises about what stability actually
means. Furthermore I don't see that in practice. To be able to say you're
stable you have to have a spec and stick to it over a number of years. Is
there one? Unit tests in rustc don't count because contributors just regularly
delete or change those. I still see statements coming out from other
contributors describing regular regressions and breakages to the core language
happening even in the last few months:

[https://www.pietroalbini.org/blog/shipping-a-compiler-
every-...](https://www.pietroalbini.org/blog/shipping-a-compiler-every-six-
weeks/)

[https://blog.ryanlevick.com/rust-2020/](https://blog.ryanlevick.com/rust-2020/)

Disclosure, I don't work on Rust or any other language, I just keep an eye on
it out of interest. I give it another few years before these big regressions
stop happening. Please keep at it and don't take any of my statements as being
discouraging.

~~~
jcranmer
Compilers are big, complex programs, which means they of course have bugs in
them. If you think rustc is unstable because of a half-dozen regressions in
the last release cycle, then gcc must seem to you to be a dangerously rickety
piece of software:

[https://gcc.gnu.org/bugzilla/buglist.cgi?cf_known_to_fail_ty...](https://gcc.gnu.org/bugzilla/buglist.cgi?cf_known_to_fail_type=allwords&cf_known_to_work_type=allwords&chfield=%5BBug%20creation%5D&chfieldfrom=6w&chfieldto=Now&order=Importance&query_format=advanced&short_desc=regression&short_desc_type=allwordssubstr)

That query lists all regressions filed in the last 6 weeks, and there are
nearly 150 of them.

~~~
cycloptic
I don't see how that's relevant. Most of those bugs are in GCC 10 which isn't
released yet specifically because those regressions are all still open.
Meanwhile, GCC 7 just got a bugfix release a few months ago. It just doesn't
make sense to compare C/C++ compilers to rustc; my point is that rustc doesn't
have a stable LTS release yet. The only way I could see this happening is if
developers of other Rust compilers put pressure on your team to make a spec
and stick to it for a number of years. Otherwise you're headed for a situation
like Haskell where there is only one real compiler and it's permanently in a
state of flux. Distro packaging for Haskell and its libraries is _still_ a
pretty complex task BTW.

~~~
JoshTriplett
I feel like this is combining several different things that don't inherently
need to be combined.

There's active work on Rust specifications with varying degrees of formality,
but I don't think that would address the problem you're describing.
Specifications don't make bugs go away.

There are, occasionally, regressions in rustc. We catch the vast majority of
those before they hit the stable branch, but bugs do happen. We try to
minimize them, and we also put out stable point releases with fixes.

I also don't think that having multiple Rust compilers would affect the
problem you're talking about. On the contrary, you'd then have _multiple_
different sets of occasional bugs to deal with, and multiple implementations
with differences that you'd have to cope with.

Speaking personally, (and _not_ wearing my language team hat), I personally
don't see value in having multiple compiler frontends for Rust, given that
rustc is Open Source. (I absolutely see value in having multiple code
generation backends, but not in having multiple frontends.)

Finally, regarding an "LTS" release: stable releases of Rust are supposed to
be stable, not stagnant.

------
awinter-py
that logo

~~~
malux85
that bikeshedding

