How does apt render its fancy progress bar? (mdk.fr)
516 points by julienpalard 15 days ago | 158 comments



APT's progress bar does have issues though. If you shrink the terminal while the progress bar is visible, you get gibberish in the terminal. That's because the terminal commands that move the cursor by lines operate on visual lines, not semantic lines, so they aren't invariant under resizing the window. That's a really rough area of Linux terminals: when you resize a window, the terminal rewraps the lines according to semantic lines (based on line feed characters), so all major terminals clearly understand this distinction, but the terminal API gives applications no way to move by semantic lines, so we're left with bugs like these.
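The repaint technique at issue can be sketched in a few lines of Python (an illustration only, not APT's actual code): print a status line, then move the cursor up one line (ESC[1A) and erase it (ESC[2K) before repainting. The cursor math is in visual lines, so if the terminal rewraps output after a resize, the erase lands on the wrong text.

```python
import sys

# Move the cursor up one *visual* line and erase it. After a resize
# rewraps long lines, this hits the wrong text -- hence the gibberish.
UP_AND_CLEAR = "\033[1A\033[2K"

for i in range(3):
    sys.stdout.write(f"progress: {i}/3\n")
    sys.stdout.write(UP_AND_CLEAR)  # overwritten by the next iteration
sys.stdout.write("done\n")
```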

I think PowerShell has progress reporting as a specially recognized concept? That sounds like a good idea. It could be used to render a GUI progress bar while leaving the pipeline alone for the objects. That's just my vague idea, since I didn't spend much time with PowerShell.


PowerShell does have progress bars, but rendering them is (was?) on the critical path. I once had a script that was downloading a large file and took ages to finish; you could speed it up by well over 10x by disabling the progress bar.

https://github.com/PowerShell/PowerShell/issues/2138


Are they synchronizing on the UI every time they update the progress bar? That's a rookie mistake.


I have a hard time believing real programmers would allow something like that in production code... But I guess if it's corporate, nobody cares about the quality.


Outputting text to a terminal is synchronous. Try connecting your favorite program's STDOUT to a program that doesn't read its STDIN. The program will stop running after the output buffer fills, because "write" blocks. Obviously you can buffer internally, and have a "UI thread" for writing stdout... but literally nobody does this.

This is why people try to write the fastest terminal emulator -- a slow terminal emulator slows down apps. (And, it all makes sense. You have to apply backpressure when you are receiving data more quickly than you can process it. You can buffer to smooth over spikes, or you could drop data on the floor, but both of those would suck for output in a terminal!)


> have a "UI thread" for writing stdout... but literally nobody does this.

The Erlang runtime does it for you by default :)


stdout is by default buffered in almost every language. Outputting lots of non-critical stuff to stderr, or manually flushing stdout on every write, is definitely a rookie mistake. Raw terminal output should not be a big performance sink either if you do the sane thing and only write on every 100th or 1000th or whatever iteration of your main loop. No threads needed.
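The every-Nth-iteration idea is a one-liner in practice; a minimal Python sketch (the million-iteration loop and the `repaints` counter are just there to make the effect visible):

```python
import sys

# Throttled progress output: repaint only every Nth iteration instead of
# writing (and flushing) stdout on every step of the main loop.
TOTAL = 1_000_000
STEP = TOTAL // 100          # aim for ~100 repaints over the whole run

repaints = 0
for i in range(TOTAL):
    # ... real work would go here ...
    if i % STEP == 0:
        sys.stdout.write(f"\r{i * 100 // TOTAL:3d}%")
        repaints += 1
sys.stdout.write("\r100%\n")
```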


It's buffered, but buffers fill up. I wrote a small program [1] that calls os.Stdout.Write to write one byte, increments a counter by the number of bytes written, and records the time of the last write. Another thread then prints the byte count and the time since the last time Write() returned every second. Running "that program | sleep infinity" yields a buffer size of 65536 and writes stop returning after the first millisecond or so of program runtime. And that makes sense; nothing is reading the output of the program, and it is written to produce an infinite stream of bytes. There is no buffer you can allocate that stores an infinite number of bytes, so it has to block or abort.

[1] https://gist.github.com/jrockway/a5d96151e1c69407f491988df70...
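For the curious, here's a rough Python analogue of that experiment. Making the write end of a pipe non-blocking turns the stall into a BlockingIOError, which reveals how much the kernel buffers before a blocking write() would stop returning:

```python
import os

# Fill a pipe that nobody is reading to measure its kernel buffer.
r, w = os.pipe()
os.set_blocking(w, False)    # fail with BlockingIOError instead of stalling

written = 0
try:
    while True:
        written += os.write(w, b"x" * 4096)
except BlockingIOError:
    pass

print(f"pipe capacity: {written} bytes")  # typically 65536 on Linux
os.close(r)
os.close(w)
```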

Going back to the original context of the comment, we're calling some programmer at Microsoft an amateur because their progress bar blocks progress of the application. And indeed, that design could be improved (sample the progress whenever the terminal can accept however many bytes the progress bar takes to render), but it's a very common mistake. Any program that calls printf will eventually run into this problem. A fixed-length buffer smooths over quirks, but if your terminal can render 1 million characters per second and your program produces 1 million and 1 characters per second of log output, then there is no way your program can ever complete. You don't notice because programs don't output much information, terminals are fast, and 65536 is a good default. But in theory, your program suffers from the same bug as Microsoft's program. So it's pretty unfair to call them amateurs, unless the Linux kernel, GNU coreutils, etc. are all also written by amateur programmers. What the grandparent meant to say is "I've never noticed having made this mistake before" and nothing more.

This is something that programmers need to think about pretty much every time they do IO. Buffer, drop, or slow down. printf picks "buffer, then slow down", writing to syslog over UDP picks "drop", but what you really want depends on your application, and it is something that you have to explicitly think about.


Things weren't so nice in Windows before Microsoft added VT control sequences to Windows 10. Before then, if you wanted to draw fancy stuff in a console window, you had to call a synchronous out-of-band IPC API to conhost.exe. I don't know how PowerShell did it specifically, but you'd need to make the same sequence of slow IPC calls to draw the progress bar even if you did buffer them.


I've been around long enough to see programs which print lots of output to stdout get a lot faster by piping their output to /dev/null.


I've been around long enough to hit ctrl-o to make that happen


What does a programmer that's not a "real programmer" do? Only write code in films? Slap the keyboard on hackertyper.net?


I imagine the argument would be that the person that writes code at a corporation is not the same person who writes it for hobby. Even if they share the same body, one is just doing it for the cash and the other for fun.


npm was about 6 years old, and I'm pretty sure a lot of corporations had adopted Node by then.

I don't think it has so much to do with quality but rather we like to re-invent shiny "new" wheels and then the new tooling must iron out all the corner cases.

https://github.com/npm/npm/issues/11283


One solution is to install a signal handler for SIGWINCH and redraw everything.

In practice this works best for terminal apps that already make use of the entire screen (say vim or emacs) but not so well for mixed text and some visual gadgets.
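A minimal Python sketch of that approach, assuming a Unix terminal (SIGWINCH doesn't exist on Windows); `render_bar` is a made-up helper, kept pure so the sizing logic is easy to check:

```python
import shutil
import signal
import sys

# Repaint a single-line progress bar whenever the terminal is resized.
def render_bar(fraction, cols):
    width = max(cols - 8, 1)  # leave room for the brackets and percentage
    filled = int(width * fraction)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {int(fraction * 100):3d}%"

progress = 0.42  # whatever the application currently knows

def redraw(signum=None, frame=None):
    cols = shutil.get_terminal_size().columns
    sys.stdout.write("\r" + render_bar(progress, cols))
    sys.stdout.flush()

signal.signal(signal.SIGWINCH, redraw)  # repaint at the new width
redraw()
```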


APT already has a signal handler for SIGWINCH, in order to resize the progress bar. The problem is that after APT finds out about the resize it clears the last line on the screen, but it's a visual line, which is only a part of a semantic line which was wrapped after shrinking the terminal window. APT doesn't know how many visual lines the progress bar takes after being rewrapped by a resize, so it doesn't know how many lines to clear. It could try to compensate by remembering the old width in characters, looking at the new one and calculating what rewrapping would result in, but that would probably not be very reliable across different terminals.

Redrawing everything is annoying from a UX point of view, because it breaks copying lines out of the terminal. Instead of getting a bunch of lines separated by line feeds corresponding to semantic lines, the user gets a bunch of lines corresponding to the visual lines in the terminal (so a lot of lines broken, some of them midword), including loads of meaningless whitespace.

I wish the modern terminal was less of an old terminal emulator and more of a GUI for displaying pipelines. So no ncurses-like applications, such as vim, more first-class support for progress bars (instead of the hacks that we have), support for jumping between the lines in scrollback where the user typed in the commands, no possibility of breaking the terminal state (such as what happens when I hit Ctrl+C in the password prompt of openvpn) forcing the user to run "reset", etc. etc.


The problem you’re highlighting is a simple design issue.

My problem with progress bars is they are usually nonsense estimates based upon random programmer choices.

I know former MSers who worked on progress bars for old Windows versions. Specifically, I remember them saying the progress bar for copying files to USB estimated how long until a USB hardware buffer was clear, without any idea how much more data was going to be jammed into the buffer once it cleared. The buffer would immediately refill and increase the wait time.

Given how slowly Windows deprecates code, who knows what subroutines any given progress bar is relying on.

They’re nonsense features meant to soothe users who seek progress. They don’t need to be interesting visually. I’d accept a simple countdown that made sense.

I’m hoping manufacture of application specific chips takes off. That we embed a 3D engine into silicon and this “software engineering is life” mind virus can go away.

We simply did not do it that way before because we lacked the manufacturing capabilities.

If manufacturing hopes and dreams of colleagues in the chip biz come to fruition, software is on its way out as a routine part of developing new technology. But I mean they’re biased; attention on hardware versus software makes them more valuable.


I've been starting work on a long rant titled "Counting is Harder Than You Think". In general, I think most people think counting is one of the easiest things for computers to do because people learn counting in elementary school and just forever associate it with "easy". (Someone's never asked the elementary school teacher's opinion of that.)

"How hard can it be, it's just a Select Count() in SQL!" Uh, that Count() is possibly doing a ton of work in CPU/IO that the server could be doing for other things, and sure an Index might speed that up, but you can't really index an Index and eventually you get right back to where you can't afford the CPU/IO time.

People just assume computers have exact numbers at all times. Some of that is just a problem of bad UX design ("why are we showing a meaningless estimate number like 1,492,631 and not 'about a Million things to do'?"), but so much of it just seems to be that people think counting is easy.


If we are gonna bitch about progress bars, Microsoft's are almost always the worst. So many of them get to 99% and then stall out… dunno how they get their progress bars so bad.


The problem isn't Microsoft. The problem is that progress bars are the worst way to indicate progress ever invented except for all the other terrible ways to indicate progress we've invented. Percentage numbers are always a lie and shouldn't even be shown, but some people like the soothing comfort of "number go big".


No, you want to know if the command spewing out text is going to be done in about 5 seconds (I'll wait), 5 minutes (time for coffee or whatever) or 5 hours...

They also are there to indicate progress if no other visuals are present (no, it hasn't crashed yet).

Progress bars solve that given some uncertainty in most cases. And that is very much appreciated. Everyone knows or learns that they aren't perfect, and that is fine.


I was relatively deliberate with my wording. I'm not saying "get rid of progress bars", I'm saying they are bad at their job but we've never managed to really make anything better.

Take your time-estimation problem as the direct and obvious example: a progress bar on its own only really gives you a sense of timing if it moves at a deliberate linear pace, which most can't do or promise (hence all the complaints about progress bars "stalling out at a percentage" when the system hits an outlier or discovers a lot more work to do), and even then people are really bad at estimating the linear speed of a progress bar. So a progress bar alone isn't great for judging speed.

Other threads around here make jokes about trying to add time estimates near progress bars as another way to indicate progress. Again, they work some of the time, but also need assumptions that for instance past speed predicts future speed that are often hard to guarantee in practice.

About the best progress bars can do is that "no, it hasn't crashed yet", and spinners are generally better at that particular task (because progress bars don't have the granularity to show very small progression below 1/100th or 1/1000th or even 1/10000th of the overall workload), to the point that Windows added a spinner animation on top of its progress bars way back in Vista to make them better at the one job most users count on them for (something of a best of both worlds, kind of, if you squint).

Progress bars solve problems, they just solve them badly, and yeah, that was the point of my message that we also haven't found anything much better. I agree with you that they aren't perfect and are mostly fine. We just sometimes need to admit that they are bad at their jobs and we'd replace them in a heartbeat if we actually found a better progress indicator of some sort.


That said, to take things in a more constructive direction: this is an area I've experimented with and tried to solve.

My big idea was radial progress indicators to try a different "best of both worlds" approach to progress spinners versus progress bars especially for "composite" progress indication where you have an unknown number of subtasks all running at their own speeds and can be discovered/initiated independently (such as downloading files). People are worse at estimating percentages of circles than lines, which I see as something of a benefit (because the exact progress percentage should be fuzzy).

It's still not great at giving an indication of overall speed/estimated time, but it's potentially very great at "the application hasn't crashed and is busy".

It was fun to experiment with/prototype, but I don't expect it to replace progress bars any time soon. (I think it should, but it's trade-off space where every option has drawbacks and while it fits closer to what I think is my personal "ideal", it probably won't make everyone happy either.)

Demo: http://worldmaker.net/compradprog/

Source: https://github.com/WorldMaker/compradprog/

Blog post on intentions/thought process: http://blog.worldmaker.net/2015/03/17/compradprog/


If they are tied to actual progress, then at least you have an indication of whether a task is hanging or still (slowly) working.


Except 0/100 isn't a lot of granularity to indicate "still slowly working". (0/1000 or 0/10000 if you show percentages to the second or third decimal point aren't much better either, especially if the working set is in the millions or billions of things to do.)

> especially if the working set is in the millions or billions of things to do

That depends on how fast they get done.

I'm not saying it covers every case, which I think would be the thesis you're countering. I'm just saying they are sometimes useful.


I wasn't saying they didn't have their uses, just that they are generally poor at doing them. Though again we've also never really invented anything better. I'm not saying they aren't useful, just that they aren't great.

> I know former MSers who worked on progress bars for old Windows versions.

https://explainxkcd.com/612


Yep. That was around the time I worked with those folks.

I would not be surprised if the topic came up because of that comic.


I hate this in homebrew and macports: they use a progress line that mostly works, until your terminal is too wide or too narrow.

    #
    ##
    ###
    ####
    ######
    #######
    #########
    ############
    ,,,
    #####################################################
    #########################################################
    ###############################################################

    ##########################################################################################################


IMO this isn't a big enough issue to worry about. Most progress bars break when you resize the terminal, I just expect it at this point.


For those who wonder how these "\033[1A..." notations are specified, look up "ANSI escape code" on the net, e.g. https://en.wikipedia.org/wiki/ANSI_escape_code


And if you want to play around with escape sequences from your shell, you can do so using `echo -e "\e[1A..."` in most shells (the `-e` flag enables backslash interpretation, while `\e` stands for the ESC character, alias `\033`).

Note also that (traditional) escape sequences cannot execute arbitrary code, so you can't really break anything that closing and reopening the terminal won't fix.


> you can't really break anything that closing and reopening the terminal won't fix

Or typing `reset`, which is sometimes also necessary after accidentally piping binary to stdout.


`reset` won't work if you have used the escape sequence that disables raw mode, at which point the shell won't be able to process keyboard input anymore and you simply cannot issue commands. I'm not aware of any method for fixing this that doesn't involve restarting the terminal.


You should still be able to `reset` in line-buffering mode.


`stty sane` not work?


And if you really want to have fun, put them in your git commits and stand back as `git log` destroys your coworkers' terminals.


You can be sloppy and hardcode escape sequences in your shell script, or you can do it right and use the tput program to look up the right sequences at runtime.


ANSI escape sequences are a standard. Relying on that standard isn't sloppy, any more than relying on any other standard. Terminfo is today basically a hack to work around the problem that some uncommon sequences differ between terminals, but others (like cursor movement and SGR, which cover 99% of use cases) have been well-established for 30 years. I much prefer using such sequences directly over the complexity of shelling out to an external program. I'm not going to pretend that there is a chance my script might have to run on some obscure text terminal from the 80s some day.


Standardness notwithstanding, using tput is more readable and memorable. I can type this from memory:

    echo "I'm $(tput setaf 4)blue$(tput sgr0)."
but I couldn't get this one right without looking it up:

    echo -e "I'm \033[0;34mblue\033[0m."


Because you're using a lot more characters than you need.

    echo -e "I'm \e[34mblue\e[m."
However, `tput` is definitely the more 'correct' approach. What actual benefit it affords you over the hardcoded escapes is left to edge-cases and very, very old or otherwise niche terminal emulators.

By the way, the `3x` means foreground, the `4x` means background. `x` is a number between 0 and 7 (inclusive) that indicates the color. `x=8` means "extended"/non-standard and generally takes a few more control codes afterward, and `x=9` means "reset".

If you know binary, you can remember the 0-7 colors. It's a 3-bit code corresponding to BGR (where B is the MSB).

    BGR
    000 = 0 = black
    001 = 1 = red
    010 = 2 = green
    011 = 3 = yellow
    100 = 4 = blue
    101 = 5 = magenta
    110 = 6 = cyan
    111 = 7 = white
You can also add 60 to the code to make it "bright", e.g. red foreground (31) can be made bright red by adding 31+60=91. Same can be applied to backgrounds (4x+60=10x).

The bright codes are less well supported, though admittedly I've never seen a modern emulator that doesn't handle them. They also give you bright colors on old Windows cmd.exe prompts without needing bold mode (1).
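That arithmetic is easy to mechanize; here's a small Python sketch (the `sgr` helper is made up for illustration):

```python
# Build SGR color escapes from the 3-bit BGR scheme described above:
# 3x = foreground, 4x = background, +60 = bright.
COLORS = ["black", "red", "green", "yellow", "blue", "magenta", "cyan", "white"]

def sgr(name, *, background=False, bright=False):
    code = (40 if background else 30) + COLORS.index(name)
    if bright:
        code += 60  # 3x -> 9x, 4x -> 10x
    return f"\033[{code}m"

print(sgr("blue") + "blue" + "\033[m")
```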


Still not particularly readable or memorable. You still need to remember that blue is color number 4 and that "sgr0" is "exit_attribute_mode", what you might call "reset".


Store the sequences in variables and use those instead. You get readability and locality!


I was about to post the exact same reply so I will post it as an example:

  #!/bin/sh
  BLUE=$(tput setaf 4)
  RST=$(tput sgr0)
  echo "I'm ${BLUE}blue${RST}."


  alias red='echo -ne "\e[31m"'
  alias yellow='echo -ne "\e[33m"'
  alias green='echo -ne "\e[32m"'
  alias blue='echo -ne "\e[34m"'
  alias cyan='echo -ne "\e[36m"'
  alias violet='echo -ne "\e[35m"'
  alias grey='echo -ne "\e[90m"'
  alias gray='echo -ne "\e[90m"'
  alias white='echo -ne "\e[37m"'
  alias bold='echo -ne "\e[1m"'
  alias flash='echo -ne "\e[7m\e[5m"'
  alias normal='echo -ne "\e(B\e[m"'


Back when termcap and later terminfo were developed, most terminals didn't support ANSI escape sequences, at all, and used completely different control codes. But that was 30+ years ago. It was for actual dumb terminals that could only display text. These days, it's a safe bet that any terminal emulator will support the ANSI escape sequences.


Is it though? What if I have my TERM set to 'dumb' because I'm running your CLI command inside a very limited console embedded inside a text editor or other tool? At the very least you should `test -t 1` in your script to ensure STDOUT isn't being redirected to a file, so you don't fill it with garbage escape sequences.
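In Python the equivalent check might look like this (a sketch; `supports_color` is a made-up helper, not a standard API):

```python
import os
import sys

# Only emit color escapes when stdout is a real terminal and TERM isn't
# "dumb" -- the Python analogue of a `test -t 1` check in a shell script.
def supports_color(isatty, term):
    return isatty and term != "dumb"

use_color = supports_color(sys.stdout.isatty(), os.environ.get("TERM", "dumb"))
blue = "\033[34m" if use_color else ""
reset = "\033[0m" if use_color else ""
print(f"{blue}status: ok{reset}")
```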


> ANSI escape sequences are a standard.

Hi, Chalk maintainer here. They are most certainly not standardized, despite frequently shown with the "ANSI" nickname - at least, not by any widely accepted standard I'm aware of.

The only standardization I could ever find myself was that of a document format similar to PostScript, but it didn't describe the codes that XTerm originally adopted. You would have to ask the original implementors how/why they chose the codes that they did.

Serial/teletype (TTY) escape codes are one of the most archaic, still-widely-used functions of terminal emulators and as such sprung up before the dawn of widely agreed-upon standards. There are thousands of them, not just those for rendering, and they were historically used for serial transmission control for e.g. printers and other UART devices.

Thus, many (most, actually) escape codes have nothing to do with rendering and instead control the behavior of the two nodes at either end of a serial line.

Since most of these devices used proprietary codes - including those for rendering - a common subset of those related specifically to terminal emulation/TTYs were derived and a database of those codes was formed. This database is usually installed on *nix installations via the `ncurses` package and is generally referred to as the "terminfo database" or simply "terminfo" or "termdb".

Each file in the (very, very extensive) database is a binary file format that includes a complete stack-based scripting language for string-based escape codes (there are a few datatypes supported in the file format, some of which are purely to indicate capabilities and whatnot).

I wrote an implementation of a Terminfo parser with scripting language support in a C++ library if anyone is interested. I'd have to dig it up if someone would like to use it - just let me know.

The entry for the given terminal session is selected via the `TERM` environment variable in most cases, which ncurses-based applications (and other applications that support terminfo) reads in order to select and parse the correct terminfo entry.

Most of the common escapes we use today stem from the XTerm set of codes - namely, `TERM=xterm-256color` - since supporting Terminfo is a beast and can hurt performance when properly supported (since each 'invocation' of the escape is really an evaluation of a small script).

Since XTerm made the codes simple to render out (CSI, i.e. `\x1b[`, followed by `<mode>[;<mode>[;...]]` and a final byte such as `m`), and since most terminal emulators chose to model their escapes off of XTerm, many application developers started to hardcode the XTerm escapes directly in lieu of using Terminfo. This was further set in stone when the often-referenced Wikipedia page[0] for ANSI escapes was written, claiming that those were the only codes - which is very far from the case.

XTerm in particular re-uses a lot of codes from popular-now-archaic terminals such as the VT100, which introduced bold type among other things (but not color!). I believe it was the VT520 or something like that that had the first semblance of color codes, but my memory is fuzzy and I'm not in a position to research it right now. I'll leave that as an exercise to the reader.

For the most part this has worked out okay. Naive/toy emulators recognize these codes, some shells (not many, but some) actually transform the escapes, some libraries translate them to Windows API calls (e.g. libuv, the I/O and event loop behind Node.js), and most established terminal emulators choose to use the xterm-256color by default anyway.

There have been a number of closed-door discussions between terminal emulator vendors about modernizing all of this but, incredibly, PowerShell is the only effort I've seen widely adopted or used that has made strides in improving this whole debacle.

I'm sure there are some errors in what I wrote - feel free to correct me if I'm wrong. But this is my understanding of the messy world of "ANSI" escape characters.

[0] https://en.wikipedia.org/wiki/ANSI_escape_code


The standard is ECMA-48; for example this 1991 version[1] lists all the SGR codes to colour stuff (page 61), the cursor movement codes, etc. The 1991 is just the first version that came up, I believe some of this goes back to the late 70s.

Terminfo goes back to the early 80s, during the time of the Great Unix Wars, when everyone was doing everything differently for the fun of it. It contains a lot of cruft. People are mentioning ADM-3A terminals here and that's all very nice, but these are machines from the 70s. No one is using them, except some people for the fun/historical value. Do people write software to be compatible with Bell Labs Unix from 1976 or the 1BSD that Bill Joy was working on? Of course not.

AFAIK there are very few modern terminals (or rather, terminal emulators) that don't support that basic set of escape codes. I can't really find any in a quick glance at my system's terminfo database.

For basic operations it's pretty safe to rely on, at least if you care about Unix only (not entirely sure about Windows, I believe it's all different there). Once you go beyond that it gets a bit more iffy.

There are loads of programs that hard-code these things, including very popular ones. The issue trackers of these projects are not getting filled with people reporting garbled output.

[1]: https://www.ecma-international.org/wp-content/uploads/ECMA-4...


ECMA-48 was what I was referring to, thanks. And no, I'm not convinced it was the standard many were conforming to. Perhaps Xterm, but not everyone.

Further, ECMA and ANSI are two separate standards bodies. While ECMA-48 is listed as one of the standards on the Wikipedia page, I've yet to find anything that claims to conform to that standard. The same goes for the other listed "standards".


Actually that Wikipedia page lists the history of standards:

"The ANSI standard attempted to address these problems by making a command set that all terminals would use and requiring all numeric information to be transmitted as ASCII numbers. The first standard in the series was ECMA-48, adopted in 1976. It was a continuation of a series of character coding standards, the first one being ECMA-6 from 1965, a 7-bit standard from which ISO 646 originates. The name "ANSI escape sequence" dates from 1979 when ANSI adopted ANSI X3.64. The ANSI X3L2 committee collaborated with the ECMA committee TC 1 to produce nearly identical standards. These two standards were merged into an international standard, ISO 6429. In 1994, ANSI withdrew its standard in favor of the international standard."

So ANSI X3.64 was the original, but things have since converged and it has been superseded by the ECMA/ISO one.

The document that many people use is probably the Xterm ctlseqs page, which is just the same as the ECMA codes with additional notes, extensions, etc. So I suppose that's the de-facto standard now.


Below is the original manual for a Lear-Siegler ADM-3A terminal, which did not use any of the VT100-derived escape codes (and was much less expensive at the time).

The ADM-3A is notable as it was used by Bill Joy to develop the "vi" editor, and the key layout should be familiar.

http://www.bitsavers.org/www.computer.museum.uq.edu.au/pdf/D...


OpenGL is a standard too, but an OpenGL program will break on a system supporting only Vulkan. Terminfo exists for a reason and that reason is still there today. For example, I can run under TERM=dumb to disable fancy output for various reasons.

No, terminfo isn't a hack, ANSI escapes aren't ubiquitous, and every single time I see someone hardcode escape sequences in some script, my opinion of that person declines.


Imagine living in a world where OpenGL has been around since the 1970s, stable for a long time, and where there’s no sign of Vulkan on the horizon.

That’s the world of terminals. There’s no sign that the way terminals work is going to change. These standard escape sequences are old.


Until your program is run in an emacs window (lint-staged's progress bars mess up magit's output buffer, making it really difficult to figure out what the problem is) or is piped through grep.

tput will do the right thing because of terminfo, hand-coded escape sequences will make someone curse you.


To make things portable look up termcap(5) and tput(1) to ask the termcap database for the right control sequence for your terminal, and use those.
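In Python you can do the same lookup without shelling out to tput, via the terminfo bindings in the standard curses module (forcing `term="xterm-256color"` here only makes the sketch reproducible; normally you'd omit it and let setupterm read $TERM):

```python
import curses

# Query the terminfo database for this terminal's escape sequences,
# which is exactly what tput does under the hood.
curses.setupterm(term="xterm-256color")

setaf = curses.tigetstr("setaf")  # what `tput setaf N` uses
sgr0 = curses.tigetstr("sgr0")    # what `tput sgr0` uses

blue = curses.tparm(setaf, 4).decode() if setaf else ""
reset = sgr0.decode() if sgr0 else ""
print(f"{blue}blue{reset}")
```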


Interestingly, apt doesn't do this; it hardcodes sequences, as I did.

Maybe they bet that those sequences have been the same on all terminals since 1970, especially those marked "DEC Private", and won't change soon?

I honestly don't know.


Most terminal emulations follow the ANSI terminal https://en.m.wikipedia.org/wiki/ANSI_escape_code , so these hard coded escape sequences simply happen to work on most virtual terminals, including windows console (running CMD) and windows terminal.


This in-band control signalling was mind bending the first time I learned about it. "How come other programs print text in colour, but mine is only plain? Wow - I can print special sequences and that controls color?! This means the console must be interpreting what I'm sending, not doing a simple lookup and display!".


... and in case you messed up your console, typing `reset` followed by enter will fix it.


I had to do it quite a few times when implementing this one ;)

I often type "C-c" or "enter" before typing reset to ensure I have nothing before it in the buffer.

A bit like I often do `Enter Enter ~ .` instead of just `~.` to kill a dying SSH session, to ensure the ~ is at the beginning of the buffer (if it's not, it won't work). (I don't know why I hit enter twice here... a single time is enough.)


Or, if you managed to mess up the terminal modes and enter doesn't work, try pressing ctrl-j instead.


It's good to know the underlying low-level terminal commands that make this work, as this article explains. But I guess most people will use a terminal abstraction library with a higher-level API to move the cursor, clear the screen, etc.

Terminals are complex beasts and they can do powerful things. If you're interested in this, I can recommend reading the source code for Tmux, JLine3, and some other shell-formatting or TUI libraries.


Terminal abstraction libraries feel like a relic of the past to me, because there used to be a lot of diversity in command sets and capabilities. Nowadays you can pretty much just assume that most commands work everywhere, so there is a lot less reason to drag around ncurses and friends if you want to make a progress bar that doesn't break your output or have some colored text.


Oh, but you just can't. macOS's zsh can't deal with my tmux shell when I ssh in. There are all kinds of great features that allow for basically using pixel buffers in a shell if you stray away from the default terminal implementations. Getting colors to work properly is also still a pain.


Ncurses and friends could be overkill but there is still value in small wrapper libraries that provide a higher level API on top of escape codes.

`setCursor(lines, 0)` is more readable than `write(f"\033[{lines};0f")`
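Such a wrapper is only a few lines; a sketch in Python (the names `set_cursor` and `clear_to_end_of_line` are mine, not from any particular library), emitting the same CUP escape sequence:

```python
ESC = "\x1b"

def set_cursor(out, line, col):
    """Move the cursor to 1-based (line, col) using CUP (ESC [ line ; col f)."""
    out.write(f"{ESC}[{line};{col}f")

def clear_to_end_of_line(out):
    """EL: erase from the cursor to the end of the current line."""
    out.write(f"{ESC}[K")
```

The value is readability at the call site, not hiding any real complexity.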


There is still a ton of diversity (underline color, curly underlines, synchronization, and image support for example), but it's nearly impossible to figure out what features are available in a given terminal. I just finished adding vt400 support to iTerm2 this week but I doubt any of it will be used because it's not widely available enough for people to bother.


> some other shell formatting or TUI libraries.

Any that you’d recommend?


Rich (for formatting) and textual (for TUI) from Will McGugan are great libraries:

https://github.com/willmcgugan/rich

https://github.com/willmcgugan/textual


I recently started using python click [1], which is awesome after wrapping my head around some things. Click also provides a useful abstraction for creating progress bars.

[1] https://palletsprojects.com/p/click/


I only have some experience with JLine (Java) and Termion (Rust). Heard great things about Textual (Python).

There are also smaller libraries focusing on just one aspect like coloring. For example the Colorize Ruby gem.


Bubbletea [1] for Go is quite polished

[1] https://github.com/charmbracelet/bubbletea


just don't read gnu screen!


tqdm in Python was also very nice and easy to use.


+1 for tqdm - not only does it take less than one line to "automatically" turn any loop into a progress bar, it also supports native progress bars in Jupyter Notebooks and even a basic GUI one


This kind of fancy terminal manipulation is usually an accessibility nightmare.

Real GUI frameworks usually provide semantic information to the OS, which allows screen readers to determine that a given control is a progress bar that is 50% filled, and then render that information by playing an appropriately-pitched tone. In a TUI, all they get is a line of fancy symbols changing quickly, with no context as to what those symbols mean. Same for e.g. ncurses-based menus. When programs use colors to signify which menu option is selected, screen readers usually get lost.

Screen readers specifically designed for text-based systems (such as DOS) usually had ways to work around this issue, but modern, GUI-focused ones don't really offer those options any more.


All terminal applications should have a 'basic' (or to use Git terminology, 'porcelain') output mode by default, and an alternative renderer for more visually inclined people as an option.

Most recently, Yarn 3 went way overboard with their installer, to the point where it's... visually overwhelming? Tree views, colors, dotted underlines, multiple scrolling segments, etc. It's a bit better if running in CI mode (no coloring), but still not ideal.


On 'porcelain', the way Git uses that term is a pet peeve of mine. On the one hand, the intended meaning of it is that "porcelain" commands are meant for humans, not for other programs - scripts should use the "plumbing" commands (apparently, it's a toilet analogy). On the other hand, the flag that makes "porcelain" commands write machine-readable output is called... "--porcelain". Implying it does the opposite of what it does.


It makes more sense if you look at the whole option.

  ... --porcelain[=<version>]
The idea is to present the porcelain at a specific version. So really you should read it as "give me the porcelain (for v1)", e.g.:

  git status --porcelain=v1
If it was an environment variable instead it would be

  GIT_PORCELAIN_VERSION


But that's not what it does. Try it.

    $ git status
    On branch master

    No commits yet

    Untracked files:
      (use "git add <file>..." to include in what will be committed)
     foo

    nothing added to commit but untracked files present (use "git add" to track)
    $ git status --porcelain=v1
    ?? foo
    $ git status --porcelain=v2
    ? foo
    $ git status --porcelain=v3
    fatal: unsupported porcelain version 'v3'
No --porcelain option gives you the output in the format of the git status porcelain, either at the current version or at Git 1.x or whatever, i.e., "give me the porcelain" is the wrong reading of the option.

The correct reading is "I am implementing a porcelain myself, give me a format I can parse in order to do that."
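That machine-readable contract is what makes the flag useful: each `--porcelain=v1` line is two status columns (index, then worktree), a space, then the path. A hedged parser sketch; rename entries ("R  old -> new") are deliberately left unhandled:

```python
def parse_porcelain_v1(text):
    """Parse `git status --porcelain=v1` output into (staged, worktree, path).

    Column 1 is the index (staged) status, column 2 the worktree status;
    '??' marks untracked files. Renames ("R  old -> new") would need
    extra handling, omitted in this sketch.
    """
    entries = []
    for line in text.splitlines():
        if len(line) < 4:
            continue  # skip anything too short to be an entry
        entries.append((line[0], line[1], line[3:]))
    return entries
```

Feeding it the `?? foo` output from the sibling comment yields `[("?", "?", "foo")]`, exactly the "format I can parse" reading.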


It’s funny how wrong this is.[1] No, `--porcelain` is machine output format, and the machine output format is (sensibly) versioned (you of course need to version such output—the output of `git status` writ large is not versioned afaik).

This [2] answer by VonC even quotes a mail where someone said that it was “[their] fault, to some degree.”

[1] Because it proves the opposite point.

[2] https://stackoverflow.com/a/6978402/1725151


Gah, I'm beginning to think the git CLI is trying to be confusing on purpose.



> The novice thought for a few moments, then asked: "Surely some of these could be made more consistent, so as to be easier to remember in the heat of coding?"

> Master Git snapped his fingers. A hobgoblin entered the room and ate the novice alive. In the afterlife, the novice was enlightened


Is the zen of git to stop using git? Somewhat inspired by OpenBSD I spun up my most recent side project with cvs. I'm amazed how much less stressful code management has become. Makes me want to go further back in time and learn how to properly use rcs.


Now try a merge.


I've done a few, it's been fine. If you try and use cvs like git sure you're going to get burned. There's a reason Linux was using Bitkeeper and not cvs after all. But if your project organization works well with cvs, you can seriously avoid the massive wad of incidental complexity that is git.


I mean, when I was using cvs because cvs was the only thing I could use I don't think I was trying to use it like git and I was still pretty happy when better things came along. Subversion and perforce in particular improved merging massively long before git came onto the scene.

CVS was fine for small projects with a couple devs at most. Anything more than that and you needed someone on your project with far more domain knowledge in cvs than git requires now to avoid losing all your data in a botched operation on the thousands of independent rcs files a repo consisted of, so I really can't agree about avoiding massive wads of incidental complexity.

Git is, if anything, far simpler than cvs in just about every way imo. I could see myself using subversion for a lark, but I'm very happy if I never have to touch cvs again.


I think the author of that blog post prefers Mercurial.


> ... scripts should use the "plumbing" commands (apparently, it's a toilet analogy) ...

Ah this term finally makes sense to me, since it's meant to deal with human output.


Wasn't machine-readable output historically obtained via --terse? Some toolkits we use provide a --terse flag to dump output in a pipe-friendly manner.


Better yet, an OS-wide flag to force the most compatible output format. No need to rob everybody of fancy graphics.


> All terminal applications should have a 'basic' (or to use Git terminology, 'porcelain') output mode by default, and an alternative renderer for more visually inclined people as an option.

Also makes it easier to integrate them to scripts/pipelines.


The terminal is a visual tool. That makes it inaccessible, almost by definition. Terminals are text-based, but they can still be used to convey non-textual information (ie. progress bars, graphs, etc).

There's no sense in expecting a visual application (such as the terminal) to be somehow accessible out-of-the-box.

Trying to automatically translate a visual interface to an accessible interface is bound to be error prone and not work in all cases. What's accessible for a blind person, may not be accessible to a deaf person.

Applications should share the core business logic, but have entirely separate "frontends" for every situation: a graphical frontend, and an accessible frontend. We need an open cross-platform standard for this, and enforcement.


This argument has been brought up in the accessibility community multiple times, and it's not as straightforward as you might believe.

The problem with accessibility-focused frontends is that they lack in features. There's a much smaller number of users able to detect bugs and a much smaller number of developers willing to fix them and add new features whenever they appear in the mainstream app. This works pretty well for e.g. accessibility-first Twitter clients, but it probably wouldn't for your local utility company that serves ten or so blind customers, none of whom are programmers.


> "The problem with accessibility-focused frontends is that they lack in features."

Isn't that a consequence of building different tools that fit people with different situations? We do exactly this with physical tools and constructions: we build a wheelchair-friendly ramp alongside a traditional stairway. It's going to be a trade-off either way, right?

If we "choose" (I fully understand this is not an actual choice made by anybody) to stick to tools which try to auto-magically translate visual-first frontends into accessible frontends, we must accept that there will be things that mess with that translation process and cause it to fail. It's unrealistic to expect a translation that works 100% of the time, reliably. Sometimes it's a minor technicality that can be worked around, but other times it's because data presented visually in a certain way can't always be translated to something else (e.g. a heatmap, a 3D multiplayer game, etc). How do we make the Mona Lisa accessible to a blind person? By the same token, should TUIs not be allowed to have progress bars or other visual elements?


The solution here is to augment our tools with the information a screen reader needs.

To follow your example, we don't build separate bank branches for wheelchair users, we just extend existing branches with ramps, elevators and anything else they might need.

We already do this with other kinds of user interfaces. For example, web pages can be annotated with ARIA attributes, which don't influence how a page looks, but tell the screen reader that something is a checkbox which is currently unchecked. This is only required if you don't use the native HTML checkbox control, of course. Other platforms have their own ways of doing this, see IAccessible and UIA on Windows, AT-SPI on Linux and AXUIElement on the Mac, for example.

Terminals were never meant for use cases like this; look at how hacky tty control codes are. If not for the fact that they're basically a bunch of ugly kludges that somehow work, there would probably be an API for that too. For now, though, we have to live with what we have.


> "The solution here is to augment our tools with the information a screen reader needs."

So we are in agreement that there should be different interfaces that suit each experience, we just disagree on the exact implementation details. I submit it's best to build entirely separate experiences, but you submit it's enough to take the graphical-first experience and annotate it enough so that a screen-reader can generate an equivalent experience on the fly (using different kinds of annotation technologies). My response to this is:

1. Annotations and metadata (ARIA labels, et al.) make it easier for the screen-reader to display relevant information in an accessible manner, but they create an unnecessary coupling between the visual-first frontend and the accessible frontend, when in reality they are built for different kinds of users.

2. Annotations are a decent starting point, but they are NOT a substitute for building an "accessibility-first" experience, because they are too limited. You can't annotate a graph, or a progress bar, for example. But you could've built an entirely separate experience which conveys the same data a graph would, in an accessible manner (given the right tools and frameworks).


On the other hand if we know something will break one use case totally, to provide marginal utility in another, then it's probably not a good trade off.


> but it probably wouldn't for your local utility company that serves ten or so blind customers, none of whom are programmers.

This is exactly why section 508 and ADA liability rules exist. If they don’t know how to make their programs accessible, a very expensive lawsuit will show them how. Also, why would a small utility be writing their own software? Shouldn’t they be using COTS?


"Applications should share the core business logic, but have entirely separate "frontends" for every situation."

I think it's clear from your post that you have never dealt with the visually impaired or even explored the accessibility options of iOS/macOS/Windows. We have working accessibility for all applications thoughtfully designed with native OS controls.

Having separate front ends will never happen; most businesses produce a single bloated JS app that crashes equally on all platforms.


> I think it's clear from your post that you have never dealt with the visually impaired or even explored the accessibility options of iOS/macOS/Windows. We have working accessibility for all applications thoughtfully designed with native OS controls.

...which is a simple consequence of Windows people not being used to polyvalent software. You're putting the cart before the horse.


> "We have working accessibility for all applications thoughtfully designed with native OS controls."

The comment I'm responding to says that progress bars in the terminal, and TUIs in general, are an accessibility nightmare.

Are you disagreeing? Are you claiming that it's a solved problem? Would you mind elaborating on what you think can be done to solve this?


You're misinterpreting the GP's post; TUIs are not "thoughtfully designed with native OS controls".


No, that's exactly my point.

The vast majority of applications people use today are not "thoughtfully designed with native OS controls". Sprinkling some ARIA labels around is better than nothing, but the experience itself is extremely limited and is not on par with what those people deserve.

I'm talking about Javascript/CSS-heavy websites in a web browser, computer games, or software such as Blender/SketchUp/Google Earth, to name a few. ARIA labels aren't good enough here; we would need to develop an entirely different interface to accommodate each individual accessibility scenario. You can imagine a blind person and a deaf person using entirely different versions of Blender, each built with a different interface accommodating a specific disability.


Many command-line tools have for years detected whether they're attached to a TTY, so that if you run the command through a pipe you get simpler output that is easier to parse and record. If you're at an interactive shell you get all of the 'special effects'.

You will still find many that don't do this properly, especially if you run the commands through a CI tool which is typically a dead giveaway. Browser based log viewers aren't going to handle VT100 escapes and so you see garbage in the output. In this regard using unicode emojis to pretty things up works more reliably.

I would presume that a screen reader should do the same thing. Turn off tty mode so that the stream is easier to parse.
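That detection is usually a one-line `isatty` check; a sketch (the function name is mine, and honoring the NO_COLOR convention is an optional extra, not something every tool does):

```python
import os
import sys

def wants_fancy_output(stream=sys.stdout):
    """True only when the stream is an interactive terminal.

    Pipes, files and most CI log collectors fail isatty(), so they
    should get plain, parseable output. Respecting the NO_COLOR
    environment variable is a common additional courtesy.
    """
    return stream.isatty() and "NO_COLOR" not in os.environ
```

A screen reader environment could then get plain output the same way a pipe does, without the tool knowing anything about screen readers specifically.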


This feels very short sighted, oddly. Your visual terminal interface is visual, sure. But there is nothing that says that has to be how you interface with a terminal. It is just a character/symbol stream connection. With interrogation commands that can go back and forth, there is nothing that says it can't be as accessible as you want it to be.


> "there is nothing that says it can't be as accessible as you want it to be."

And there is nothing that says that it should be accessible. Which is exactly why a terminal, going by the very definition you yourself provide, is very difficult to work with from an accessibility perspective. There are simply too many degrees of freedom, allowing you to represent data in too many different ways, making it very difficult for something like a screen reader to "parse" the buffer (which can contain anything, not just text) and convert it to something meaningful.

Using a terminal, one can implement a progress bar in hundreds of different ways - and it would be impossible for a screen reader to handle all such use-cases.


Out of curiosity, when you have a large amount of text flying by like with package installs, what is the expected accessibility behavior? I imagine a screenreader trying to keep up with the install logs in any way would be... chaotic.


It tries to read everything, but you can flush it with control. If you notice it stopped speaking, you usually use some review commands to look at the last few lines. If you're looking for something specific, you'd usually use something like grep, even in situations where a sighted person could just glance at the screen.


Might sound a bit insensitive, but is there a video I can see of this? I can't even imagine how this would look in the hands of an expert.


Not exactly what you asked for but the "Designing for the Visually Impaired" episode of the Roguelike Radio podcast has a blind player play a roguelike. The intro has a bit of screenreading but the gameplay is starting at around 1:15.

While listening, you should put on headphones and concentrate very hard. It's quite fast and other sensory input might be disturbing so also close your eyes.

http://www.roguelikeradio.com/2012/10/episode-48-designing-f...


Saqib Shaikh, a blind developer at Microsoft, gave this short talk on using Visual Studio: https://www.youtube.com/watch?v=94swlF55tVc

I realize this isn't a console, but I think it is a good demonstration of an expert doing typical developer activities.


No, not to my knowledge. I could probably find a podcast or two, there are quite a few of those aimed at a blind audience, but that's about it.


A good point and one I hadn’t considered before (shame on me). Does anyone have a good reference on making accessible terminal programs?

As a web developer I’m intimately aware that there are many gotchas and nuances to be aware of - I have to figure the same is true of the terminal.


If it works correctly on an old-school TTY with a printer and a roll of paper, you're pretty much done. That's how you should view a terminal screen reader, except it talks instead of printing.

Using it with a non-monospaced font would probably be a good test too. Monospace makes some implicit assumptions (i.e. being able to see how things are aligned) which screen readers don't follow. Also avoid preceding important messages with long strings of text, in particular containing numbers, i.e. overly detailed timestamps in logs. A screen reader reads text line by line, so those usually make using the app less efficient.

If you want an actual user interface, not merely a command prompt, avoid the terminal like the plague. Exposing a simple web frontend might be a good idea here.


In apt at least the fancy UI can be disabled completely, see 'Dpkg::Progress-Fancy "0";'


apt is explicitly flagged as "shall not be scripted", and older tools (apt-*) shall be used instead. I'm guessing that they're also more accessible all-over.

Considering Debian installer has braille support, I don't think Debian could overlook something like this.


For apt the easy way around this is to just use apt-get since that has a traditional output.

But you make an interesting point. I've lamented the current practice of hard coding escape sequences in the past for novelty reasons: https://news.ycombinator.com/item?id=26013556 But a screen reader could theoretically implement a terminfo entry that could be checked by an application to see if cursor control is supported. If not then it could fall back to a plain output method which I believe should make following along audibly easier.


Accessibility isn't the only problem; terminal logs are also screwed up. Try scrolling up when apt is showing its fancy progress bar: you'll realize logs beyond a screenful are simply "eaten". Overall not a fan of the unnecessary TUIzation.


No problem with tmux...


Nice. I also looked into this a while ago and came up with a very simplistic (few lines) Python implementation that does this and even allows for some customization: https://gist.github.com/fladd/36c422f1c0e9bf02f41f9fad19609d...


There's https://tqdm.github.io/ to do nice progress bars in Python.

But my goal was not really the progress bars, but the placing of it below the logs.
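For reference, the core trick the article describes is DECSTBM: shrink the scroll region so the last row never scrolls, then save the cursor, jump to that reserved row, redraw the bar, and restore. A minimal sketch assuming standard VT100/xterm sequences; the bar style and function names are my own placeholders:

```python
ESC = "\x1b"

def reserve_bottom_row(out, rows):
    """Limit scrolling to rows 1..rows-1 (DECSTBM), keeping the last
    row fixed, then move the cursor back inside the scrolling region."""
    out.write(f"{ESC}[0;{rows - 1}r")
    out.write(f"{ESC}[{rows - 1};0f")

def draw_progress(out, rows, pct, width=40):
    """Redraw a bar on the reserved bottom row without disturbing the
    scrolling log output above it."""
    filled = width * pct // 100
    bar = "#" * filled + "-" * (width - filled)
    out.write(f"{ESC}7")                 # DECSC: save cursor position
    out.write(f"{ESC}[{rows};0f")        # jump to the reserved row
    out.write(f"[{bar}] {pct:3d}%")
    out.write(f"{ESC}8")                 # DECRC: restore cursor position

def release_bottom_row(out, rows):
    """Restore the full scroll region when done."""
    out.write(f"{ESC}[0;{rows}r")
```

In a real program you would call `reserve_bottom_row` once (and again on SIGWINCH), print log lines normally, and call `draw_progress` after each update.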


Yeah, when I was reading the article I started out thinking "What? It's just simply rewinding the line", and then you mentioned that it's always at the bottom of the window, and I thought "Now that is a good trick, I wonder how it's done".

And then I didn't wonder. Good work.


For building rich terminal UIs in Python, I recommend the Rich library:

https://github.com/willmcgugan/rich


Good explanation. It is a nice progress bar, but not particularly unique; many command line tools use progress bars like this.

I've always been curious how the Heroku CLI renders its progress bar, which is unique as far as I've seen. It does not seem to use characters, it's seemingly a smooth, pixel-based fill like you'd get in a non-terminal app. Downloading a pg backup is the most common place I see it.


They use this package under the hood: https://github.com/jdxcode/smooth-progress

It uses glyphs of boxes that have varying widths. When joined together, they appear as a solid bar.
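The Unicode block elements (U+258F "left one eighth block" up to U+2588 "full block") give eight sub-cell widths, so a bar can advance in 1/8-character steps. A sketch of the idea, not smooth-progress's actual code:

```python
# Nine entries: an empty cell, then the partial blocks U+258F..U+2589,
# then the full block U+2588.
EIGHTHS = " ▏▎▍▌▋▊▉█"

def smooth_bar(fraction, width=20):
    """Render fraction (0.0..1.0) as a bar with 1/8-cell resolution."""
    total_eighths = round(max(0.0, min(1.0, fraction)) * width * 8)
    full, rem = divmod(total_eighths, 8)
    return ("█" * full + (EIGHTHS[rem] if rem else "")).ljust(width)
```

Because only the last cell ever shows a partial glyph, the joined characters read as one solid, smoothly growing bar.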


In Python you can use tqdm for this:

    python -c 'import tqdm; i=[j for j in tqdm.trange(int(1e8))]'


There are characters that are boxes of varying width, but maybe you've confirmed it's not even that?


I implemented something like this in Bash. I called this status-bar-style implementation a "message bar": it shows the ongoing status of the script on reserved lines at the bottom of the screen. I had to implement the progress bars inline, one per task (not on the bottom bar), and constantly had to switch "states" to save/restore cursor positions on screen.

This message bar would also become a prompt and take in user inputs on certain events apart from being a static status bar - super useful for the use case.

Cursor positions, scrolling screens, events, progress bar updates, managing child sessions, handling screen resolution changes, taking user inputs...everything had to be catered for in this.

I thoroughly enjoyed developing this and still think it was a bit over the top for what I was developing at the time, but I was sent on this path of constant learning so couldn't look back. It was worth it.

Here's the project: https://github.com/PrajwalSD/runSAS


I wouldn't call it fancy: I can't change the color so on light backgrounds I can't read it. And I've seen certain error messages slice it in two.

I'd call it annoying.


Neat.

This is one of those things I always wondered about but never cared enough to actually go and look into myself


You can't imagine how many times I wondered about it before testing it and writing the article ;)


A lot of terminals support Unicode, so fancy can be actually fancy in such cases, using block characters that produce a smooth progress animation, e.g. those from "Left one eighth block" to "Full block".

https://en.wikipedia.org/wiki/Block_Elements

I once added such progress bar for lgogdownloader.


I use a similar approach in libmish [0], i.e. I "reserve" the bottom 2 rows and output the scrolling text in the top part. Pretty straightforward.

[0]: https://github.com/buserror/libmish


I'd prefer a "Double Debian" distro that does the following:

1. When I run an apt command, it goes ahead and runs an automated version of the command on a "shadow" apt on the same machine.

2. It measures the time it took to do that operation on the shadow apt.

3. By the time I respond "y" to the command, it uses the duration from step 2 to display a time-based progress bar that animates smoothly over said duration.

4. If the shadow apt command hasn't finished by the time I respond "y" to the command, it brings up Ms. Pacman for me to play for a bit.

I bet I'd be a much more satisfied user!


> std::cout << ... << std::to_string(nr_rows - 1)

> to_string

What if I told you, iostream can output numbers?


Terminal rendering is fun. It's often limited, but the little it offers gets you going nicely.


Nitpick about the website, but please don't use PGP short fingerprints like 0x46EBCD72F08E6717. They can be brute forced [0] very fast to get a different private key with the same fingerprint. You can also use them to generate keys with vanity fingerprints, which is pretty funny [1].

[0] https://github.com/lachesis/scallion

[1] I have 0xDE4444AAAAAAAAAA, took me 5 minutes of bruteforcing on a laptop.


How come? The fastest GPU in scallion's README has hash rate 11646 MH/s. To brute-force a 64-bit key ID at this rate you would need, on average, 50 GPU-years.


To brute-force a specific 64-bit ID, sure. But if your only requirement is that the ID consist of two random characters, then a run of one character, then a run of another character, the time can be cut dramatically, since there are tens of thousands of 8-byte sequences that fit this pattern, and you can simply stop as soon as you find one of them.
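A back-of-envelope check, under one reading of that pattern (two arbitrary hex digits, then two nonempty runs of distinct digits filling the remaining 14 positions); the counting convention is my assumption:

```python
HEX = 16
SPLITS = 13  # run lengths (m, n) with m >= 1, n >= 1, m + n = 14

# prefix choices * split choices * first run digit * distinct second run digit
matching = HEX**2 * SPLITS * HEX * (HEX - 1)  # 798720 acceptable IDs

expected_tries_any = 2**64 // matching        # to hit *any* acceptable ID
rate = 11_646e6                               # the README's 11646 MH/s figure
seconds_any = expected_tries_any / rate       # on the order of half an hour
years_specific = 2**64 / rate / (3600 * 24 * 365)  # ~50 GPU-years
```

So this loose pattern admits hundreds of thousands of acceptable IDs, cutting the expected search from roughly 50 GPU-years for one specific ID down to well under an hour on that GPU (longer on a laptop, but still minutes-to-hours rather than years).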


50 GPU-years is fairly cheap, certainly within the realm of days for a 3 letter agency or a botnet.


What fingerprints should be used?


According to this guide [1] (outdated, but this still applies), use `--with-fingerprint`, as even the long ID can be spoofed.

[1]: https://riseup.net/en/security/message-security/openpgp/best...


Shouldn't the title be without question mark or "How does apt render it fancy progress bar?"?


Yeah except you dropped an s, so maybe we shouldn't be so critical.



Not everybody is a native English speaker.


Including many native English speakers.


I'm not a native English speaker myself, but the phrasing of questions in English is one of the most basic things, English 101 if you will, so if you consider your English good enough to write a blog post, you should be capable of doing that...


I'm French ;)


Fixed, thanks! (I'm French :D)


Wasn't meant as a critique or nitpicking, just an honest question. I am not a native speaker either.


Ooh so I hope you were right ;)


Yes.


I hate it when programs hardcode control sequences like this. Just look them up at runtime with terminfo --- it's really not that hard.


How would you do the DECSTBM without hardcoding it? Honest question.
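For what it's worth, terminfo does carry a capability for it: `csr` (change_scroll_region). A sketch with Python's curses bindings, falling back to the hardcoded CSI sequence when the entry is missing; the function name is mine:

```python
import curses

def scroll_region_seq(top, bottom, term=None):
    """Escape sequence setting the scroll region to 0-based rows
    top..bottom, looked up from terminfo (`csr`) when available."""
    try:
        curses.setupterm(term)
        csr = curses.tigetstr("csr")  # change_scroll_region capability
        if csr:
            return curses.tparm(csr, top, bottom).decode("ascii")
    except curses.error:
        pass
    # DECSTBM fallback; the raw sequence uses 1-based row numbers
    return f"\x1b[{top + 1};{bottom + 1}r"
```

For xterm-like entries, `csr` is `\E[%i%p1%d;%p2%dr`, so the terminfo path and the fallback produce the same bytes; terminals whose entry differs (or lacks `csr`) are exactly why the lookup is worth doing.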



