Colorizing Stderr: racing pipes, and libc monkey-patching (repl.it)
71 points by amasad 40 days ago | 62 comments



I came across stderred a few months ago while looking to do the same thing on my terminals. I ultimately didn't go with it because people have reported destroying their systems while using it[1]. Wrapping libc (which is itself wrapping the actual syscall) is a tricky thing to do safely.

I looked into accomplishing the same thing with URxvt's pre- and post-text rendering hooks, but couldn't get it to work.

[1]: https://github.com/sickill/stderred/issues/63


Not sure what update-initramfs does exactly, but if you ever close(2) file descriptor 2, you're going to be in a world of hurt, because the next file you open is going to be assigned fd 2 and have a bunch of junk written to it.

At repl.it we don't put it in our .bashrc or anything like that, we call the wrapper program manually when we launch the user's binary.


Oof. We missed this issue -- good to know. So this is running in production right now for Repl.it beta users and seems to be working fine. But we only use it on the interpreter process and not globally.


Disappointed in the response over at Ubuntu: https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+b...

IMO this should be treated as a bug in initramfs-tools.


This isn't an initramfs-tools bug: if you're LD_PRELOADing into a binary, presumably you know what you're doing. The "bug" here is setting LD_PRELOAD as root (even if accidentally through sudo -s) and running a sensitive operation, knowing full well that stderred is at best a hack (an admittedly very useful one, since I use it!).


I suggest that it actually is an initramfs-tools bug. As pointed out elsewhere in this very discussion, the program in question must be using file descriptor #2 as its output file descriptor for the files that get corrupted, because stderred only inserts its hardwired control sequences on writes to that file descriptor. That's a bad thing to be doing independent of stderred, because there is potentially other (library) code in the relevant program (whatever it is) that assumes that file descriptor #2 is usable for diagnostics. Even without stderred in the mix, it is a bug waiting to be triggered via another route.


Argh. This is a turn-off for sure, but it seems like the issue can be easily remediated by stripping the color codes from the initrd image.


Ideally, but it might not be reversible: if the same bytes that make up an ANSI escape + red color code just happen to already be in initrd, then the strip would remove those as well.

I think the best solution here is probably to avoid `sudo -E`, and for `stderred` to be patched to not perform any monkeypatching when `uid == 0`.
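
If stderred did grow such a guard, it could be as simple as a constructor that turns the interposition off under root. A hypothetical sketch (not actual or proposed stderred source; built as the preloaded shared object):

  #define _GNU_SOURCE
  #include <unistd.h>

  static int stderred_enabled;  /* consulted by the interposed write() */

  /* Runs when the preloaded library is loaded: refuse to colorize
     anything when uid or euid is 0, so an LD_PRELOAD that leaks into a
     root shell (e.g. via sudo -s re-sourcing ~/.bashrc) is a no-op. */
  __attribute__((constructor))
  static void stderred_init(void) {
    stderred_enabled = (getuid() != 0 && geteuid() != 0);
  }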


LD_PRELOAD is not respected for setuid binaries, right? So why do we need a special case here?


Because sudo is the setuid binary here (so it's not affected), but everything else isn't setuid; it's all just running under sudo with LD_PRELOAD set...


sudo -E should not pass LD_PRELOAD through anyways; at least, neither the sudo I'm using on macOS 10.14.2 nor the one on the Ubuntu 14.04 box I just SSHed into does this. It looks like the issue is that sudo -s causes LD_PRELOAD to be set again as root because ~/.bashrc is sourced again.


This works better if there are distinct escape sequences to mark the start and end of stderr output, as in DomTerm (http://domterm.org/Wire-byte-protocol.html#Special-sequences...). I created a fork (https://github.com/PerBothner/stderred) of stderred: if DOMTERM is set, then it uses the DomTerm-specific escape sequences. This has two advantages:

- If the error message has its own ANSI escape sequences (as gcc's output does), you avoid the ugly partially-red line that blindly wrapping everything in red produces.

- You can do fancier styling with CSS, since the DomTerm escape sequence creates an element explicitly marked as error output, which is more semantically meaningful than "red".


Neat! We were actually considering doing something like this with custom CSI sequences. Glad to see someone else has already picked some sensible ones.


Bash has process substitution. It supports sending stderr to a process. So...

$ somecmd 2> >(someutility)

Should do roughly the same thing. To highlight stderr with red color:

$ somecmd 2> >(perl -ne '$|++;print "\033[31m$_\033[00m"')

I assume you could roll that into a bash function or script easily enough.

Here's another approach too: https://github.com/dmoulding/hilite/blob/master/hilite.c


Unfortunately, as explained in the post, both approaches -- which are similar to red.c in the post -- suffer from the out-of-order issue.


Could you prepend all messages with a timestamp (including stdout), then reorder in your post-processing?


No, because we're running user programs we don't have control over, and we're trying to avoid doing any magic in each language environment (which was the original approach that we decided to move away from).


I meant in a similar approach to capturing the output of each fd, i.e. outside of the user's programs.


We're back to square one. If it's outside the program you've already lost precision.


Depends on the shell. The shell I've been writing actually does the red highlighting of STDERR output natively, without needing to monkey-patch libc, and still keeps the ordering precise.

Ironically it’s one feature I’ve never advertised because I thought it would irritate other users besides myself. I had no idea there was a demand for that kind of feature.


So what's your secret?


Ostensibly it's the same as the Bash example above, but because it's baked into the shell itself, I'm able to add a very minimal wrapper around the shell's PTY to append some ANSI escape sequences.

As an aside, I also started work on a feature in the shell to strip all SGR escape sequences (colour and text formatting) for people who prefer their shell output text-only. But that proved a little more problematic, because it turned out to be impossible to reliably differentiate processes that expect a TTY (eg top, lynx, etc), which would need to bypass the SGR-stripping wrapper, from those that don't.


In principle, there's no way to fix the interleaving issue. But maybe you can minimise it by reading both from a single process, without monkey patching libc.

The first solution (with the JSON adaptor) is the best one; the only problem is the implementation, because it doesn't guarantee that everything written to stdout/stderr goes through the JSON adaptors.

You could fix this problem by using a little C program to do the JSON wrapping.

- The C program would start by setting up two pipes, let's call them pipe_stdout and pipe_stderr.

- Then the C program forks

- The child replaces file descriptors 1 & 2 with the write end of the pipes and closes the read ends.

- The parent process closes file descriptor 0, closes the write ends of the pipes, and calls select() on the read ends of the pipes

- The child process calls execve to start the interpreter. All output to stdout/stderr now goes into the pipes to the parent process.

- The parent process reads data from the pipes, wraps it in JSON, and sends it to stdout (sketched below).
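
A rough sketch of that wrapper (error handling and proper JSON escaping elided; the names and JSON framing are just illustrative):

  #include <stdio.h>
  #include <sys/select.h>
  #include <unistd.h>

  /* Run argv[1..] with stdout/stderr captured through two pipes and
     wrap each chunk read in a naive JSON envelope on stdout. */
  int main(int argc, char **argv) {
    int out[2], err[2];
    if (argc < 2 || pipe(out) < 0 || pipe(err) < 0) return 1;

    if (fork() == 0) {             /* child: becomes the interpreter */
      dup2(out[1], 1);             /* fd 1 -> write end of out pipe */
      dup2(err[1], 2);             /* fd 2 -> write end of err pipe */
      close(out[0]); close(out[1]); close(err[0]); close(err[1]);
      execvp(argv[1], argv + 1);
      _exit(127);
    }

    close(out[1]); close(err[1]);  /* parent keeps only the read ends */
    int fds[2] = { out[0], err[0] };
    const char *name[2] = { "stdout", "stderr" };
    int open_fds = 2;

    while (open_fds > 0) {
      fd_set r;
      FD_ZERO(&r);
      int maxfd = 0;
      for (int i = 0; i < 2; i++) {
        if (fds[i] < 0) continue;
        FD_SET(fds[i], &r);
        if (fds[i] > maxfd) maxfd = fds[i];
      }
      if (select(maxfd + 1, &r, NULL, NULL, NULL) < 0) break;

      for (int i = 0; i < 2; i++) {
        char buf[4096];
        if (fds[i] < 0 || !FD_ISSET(fds[i], &r)) continue;
        ssize_t n = read(fds[i], buf, sizeof buf);
        if (n <= 0) {              /* EOF: child closed this stream */
          close(fds[i]); fds[i] = -1; open_fds--;
          continue;
        }
        /* Real code must JSON-escape buf; omitted for brevity. */
        printf("{\"fd\":\"%s\",\"data\":\"%.*s\"}\n", name[i], (int)n, buf);
        fflush(stdout);
      }
    }
    return 0;
  }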

Is there a reason why you didn't go this route? This way you should have minimal interleaving, and there's no way the interpreter writes data directly to stdout breaking the JSON stream.


We tried something like this. Even "minimal" interleaving was still confusing a non-trivial percentage of the time: testing locally it was around 0.9%, but in production under Docker, where execution is constrained to a single core, it seemed to happen much more often.

It is really confusing to the user to have the prompt in the middle of their output.


Interesting. I guess the specific issue with the prompt could be fixed by always printing data from the stderr pipe first, but then you probably get incorrect ordering in other cases...


Why don't you just pass different pipes for stdout and stderr? Then you can treat the two differently, you can do whatever tricks you want to prevent interleaving, you don't have to inject anything...


> whatever tricks you want to prevent interleaving

What tricks do you suggest to prevent interleaving? It's not clear it's possible at all.


Did you set PYTHONUNBUFFERED=TRUE?

Python by default starts buffering output when it's not connected to a PTY, which causes a lot of issues with interactive output.


We do!


Just line buffer what you read from stdout/stderr. Or even display them in separate sections in your UI.

There's no guarantee of ordering between stdout/stderr, so line buffering them, or displaying them in separate UI sections, or even doing some non-blocking reading to flush one fully before flushing the other, should be sufficient.


> Or even display them in separate sections in your UI.

That's a non-starter. We think that, barring a great UX improvement, the environment should look as predictable and as close to a local setup as possible.


Then line buffer stdout and stderr to the same UI element, and also provide an option to toggle showing just one or the other. Sounds pretty useful to be able to switch between showing only one stream and showing them line-interleaved by timestamp, even if you do want to default to the latter.

If line buffering isn't enough, you can use heuristics around read sizes and non-blocking reading to guess whether a given write to the pipe was intended to be done as a block.
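
One possible shape for that heuristic (a sketch, assuming the fd was already reported readable by select/poll): drain the pipe with non-blocking reads until EAGAIN, and treat whatever arrived at once as one intended block.

  #include <errno.h>
  #include <fcntl.h>
  #include <unistd.h>

  /* Drain fd until it would block; what arrived "at once" is treated as
     one write-sized block. Call only after select/poll reports the fd
     readable, since a 0 return is ambiguous (EOF vs. no data yet). */
  ssize_t read_block(int fd, char *buf, size_t cap) {
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
    size_t total = 0;
    for (;;) {
      ssize_t n = read(fd, buf + total, cap - total);
      if (n > 0) {
        total += (size_t)n;
        if (total == cap) break;           /* buffer full */
      } else if (n == 0 || errno == EAGAIN || errno == EWOULDBLOCK) {
        break;                             /* EOF or drained */
      } else {
        return -1;                         /* real error */
      }
    }
    return (ssize_t)total;
  }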


> There's no guarantee of ordering between stdout/stderr

There absolutely is. If the two file descriptors refer to the same open file description or the same pipe/socket/character device, then the ordering of write syscalls is preserved--and that is the default for terminals.
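
A toy demonstration (mine, not from the article): run this on a terminal and the out/err lines always alternate; redirect the two streams to different files and that shared ordering disappears.

  #include <unistd.h>

  /* On a terminal, fds 1 and 2 normally share the same open file
     description, so the kernel preserves the order of these writes. */
  int main(void) {
    for (int i = 0; i < 5; i++) {
      write(1, "out\n", 4);  /* stdout */
      write(2, "err\n", 4);  /* stderr */
    }
    return 0;
  }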


oh please no. please don’t just simply set all stderr to red - it doesn’t help at all, and makes reading multi-line things near impossible (there is a reason why most image/video codecs use 1/2 as many bits to encode red colour data).

besides, what happens to colourisation of output from stderr when a process gives useful color output?

UX is not really my remit (you could maybe guess :) but i’d greatly prefer bold, or stdout as grey, stderr as white - only if no colour control codes are present in the written message.

still, kudos for monkey patching libc :)


You can actually set an environment variable with any color code you want: STDERRED_ESC_CODE. As for color codes in stderr, they work in most cases: if all the color codes are in a single write(), the output will be more or less unaffected by the extra red and reset codes that get stuck on either end.


It would be much better for those of us who are red/green colorblind if it were just bolded white.

It can be quite hard to read deep red text on a black background. You don’t always have enough access to systems to change the default coloring to prevent this.


ANSI red is not great for the reasons you state, but italic, reverse video, or underline might work too.


This is probably a stupid question, but doesn't the PTY itself already have that out-of-order problem? I mean that it reads both file descriptors and does interleave them in the end. If it didn't they could still colorize things after the PTY.

So I guess my question is: why can't whatever it does to handle the ordering and merge the two be replicated in a small program that reads two named pipes and does the same, but with extra colorizing?


Not normally, no. The file descriptors sort of act like pointers. During normal execution in a PTY, stderr and stdout both point to the same device with the same buffer. Nothing is handling the order because there is only one buffer being written to.


> Nothing is handling the order because there is only one buffer being written to.

Ah, that's what I was missing. Thank you, that makes sense.


First: Standard error is not just for formatted error messages. It is ironic that this is based upon dealing with REPLs when Unix shells, one of which is even available on the WWW site, use standard error for their prompts and for their interactive line editors. Users do like to put control sequences in their prompts.

* https://unix.stackexchange.com/a/434839/5132

Second: Even as an example that read() implementation is exceedingly bad. It is stack frame corruption waiting to happen.

Third: This is another example of hardwiring control sequences for a particular class of terminal and not respecting TERM=dumb. It probably won't be a problem for the WWW site, but it will for the underlying general-purpose tool.


Small nitpick: LD_PRELOAD will override the write() function from libc, which is a wrapper around the system call, not the write system call itself.
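
For the curious, the general shape of such an interposer is roughly this (an illustrative sketch, not stderred's actual source; build with gcc -shared -fPIC -o libred.so red.c -ldl and run with LD_PRELOAD=./libred.so):

  #define _GNU_SOURCE
  #include <dlfcn.h>
  #include <unistd.h>

  /* Overrides libc's write() wrapper; a program invoking the system
     call directly bypasses this entirely. Not thread-safe; sketch only. */
  ssize_t write(int fd, const void *buf, size_t count) {
    static ssize_t (*real_write)(int, const void *, size_t);
    if (!real_write)
      real_write = (ssize_t (*)(int, const void *, size_t))
                   dlsym(RTLD_NEXT, "write");  /* look up libc's write */

    if (fd == 2 && isatty(2)) {        /* only color a tty's stderr */
      real_write(2, "\033[31m", 5);    /* red */
      ssize_t n = real_write(fd, buf, count);
      real_write(2, "\033[0m", 4);     /* reset */
      return n;
    }
    return real_write(fd, buf, count);
  }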


This is significant for programs written in Go, which tends to make system calls directly instead of using the libc wrapper functions. LD_PRELOAD trickery often doesn't work with Go programs for this reason.


On macOS, Go just straight up prevents the use of DYLD_INSERT_LIBRARIES (the equivalent of LD_PRELOAD), because macOS runs dyld and this mucks with some sort of threading setup.


Might be nice to splice this kind of functionality into tmux, probably right about here: https://github.com/tmux/tmux/blob/486ce9b09855ae30a2bf5e576c...

Wouldn't need any LD_PRELOAD tricks...


Great post, but I'm not sure running a monkey-patched libc is a "small" sacrifice. I would be concerned this leads to very obscure behaviour down the line, especially when your main purpose is to be a REPL and run the code correctly without any "surprises" like these.


That's a valid concern. However, given our scale, we find those obscure behaviors, if they exist (like interleaving output), pretty quickly.

We're keeping an eye out in the beta. You can try it by going to your account and adding the explorer role.


stderr is not for errors! (at least, not necessarily) It's for messaging things to the user. These are often errors, but there are some examples like curl that use stderr for progress.

Leave it up to the CLI to decide what colors to use.


In Common Lisp, you have the following streams defined in the spec:

  *debug-io* - bidirectional, for interactive debugging
  *error-output* - output, for warnings and non-interactive error messages
  *query-io* - bidirectional, for asking user questions and reading answers
  *standard-input* - stdin
  *standard-output* - stdout
  *trace-output* - for tracing functions and timing execution

You can redefine each of them independently. I think this granularity is much better than the usual stdin/stdout/stderr split we're used to. Note e.g. *query-io* being separate from stdin/stdout, which gives a properly defined way for a program to handle interactivity while simultaneously having data piped into stdin and out of stdout.

It was IMO a good idea, and I wonder how the world ended up adopting just three standard streams in the end.


Standard error is for errors. If you need one more stream with your own rules, just open it.

  #include <stdio.h>
  int main(void) {
    /* Open third stream. Stream must be opened in shell using 3>... */
    FILE *user = fdopen(3, "w");
    /* If third stream is not open, then print to first stream. */
    if (!user) user = fdopen(1, "w");
  
    printf("Output.\n");
    fprintf(stderr, "Error!\n");
    fprintf(user, "Important message to user!\n");
  
    fclose(user);
    return 0;
  }

  $ gcc test.c -o test
  
  $ ./test 
  Output.
  Error!
  Important message to user!
  
  $ ./test 1>out.txt 2>error.log 3>/dev/tty
  Important message to user!


Well, first of all, the docs say it's for "diagnostic output" (see http://pubs.opengroup.org/onlinepubs/9699919799/functions/st...), not "errors".

Secondly, it doesn't matter what's right or not; the reality is that CLIs (that you didn't write) generally use stderr for progress, so you can't assume everything on it is an error.


Actually, the Linux manual says:

  DESCRIPTION
       Under normal circumstances every UNIX program has three streams opened for it when it starts up, one for input, one for output, and one
       for printing diagnostic or error messages.  These are typically attached to the user's terminal (see tty(4)) but might instead refer to
       files or other devices, depending on what the parent process chose to set up.  (See also the "Redirection" section of sh(1).)

       The  input  stream  is  referred to as "standard input"; the output stream is referred to as "standard output"; and the error stream is
       referred to as "standard error".  These terms are abbreviated to form the symbols used to refer to these files, namely  stdin,  stdout,
       and stderr.


Right, as I said: stderr is not only for errors


Generally, CLIs turn off progress output and colors when output is not connected to a tty, so I can safely assume, in most cases, that everything on stderr is about errors. If not, I can always send a patch upstream.


Why does curl use stderr over, say, stdout? (Happy to read any links you have; trying to learn.)


Because stdout is used for the output of the program, i.e. the stuff downloaded from the URL.



I don't see out-of-order issues when using interactive SSH, and SSH still funnels stdout and stderr over a single TCP connection (keeping them separate on the client's side).

Seems to me that you could just look at how SSH does it?


SSH only separates the streams when you don't request a PTY. For example try:

  ssh localhost -t 'echo hi 1>&2'  2>test
vs

  ssh localhost 'echo hi 1>&2'  2>test


all this complexity for what? seems like a lot for a little.


We care about the user experience enough to justify the cost. While some experienced hackers might not mind undifferentiated colors, novices surely do, and since Repl.it is increasingly a place where a lot of people start their coding journey, this is important.


If you care about the user experience that much, then you should do as others have suggested and just separate the output into two different text areas. If you're concerned about your UI matching the local behavior then it just shouldn't be colorized at all. Dirty hack solutions are incompatible with both goals.



