
Lsd: The next gen ls command - tambourine_man
https://github.com/Peltoche/lsd
======
lifthrasiir
Every time I see a project using icon font sets (this particular project
recommends Nerd Font [1]), I feel something is very wrong. We already have a
sort of way (or two) to represent an image in the terminal, but icon fonts are
a different beast because they essentially support a vector representation.
Code point collisions are already happening (Nerd Font is already coping with
this [2]), and in the long run this seems unsustainable. So... is there any way
or proposal to encode a vector image in the terminal? (And no, iTerm doesn't
seem to support SVG in its inline image support [3].)

[1] [https://github.com/ryanoasis/nerd-fonts](https://github.com/ryanoasis/nerd-fonts)

[2] [https://github.com/ryanoasis/nerd-fonts/wiki/Codepoint-Conflicts](https://github.com/ryanoasis/nerd-fonts/wiki/Codepoint-Conflicts)

[3] [https://www.iterm2.com/documentation-images.html](https://www.iterm2.com/documentation-images.html)

~~~
pcwalton
Unfortunately, the Unix terminal is a gigantic pile of hacks accumulated since
the late 1960s. It's a miracle the software stack manages to stay upright at
all; adding features and getting the entire ecosystem to support them would be
tough. Introducing the concept of an external resource that has to be fetched
doesn't seem impossible to me, but there are a lot of issues that would have
to be thought through.

Anyway, one nice thing about TrueType/OpenType fonts is that the vector
representation is extremely well compressed. SVG or similar would be an
efficiency loss. In this light, fonts are actually a fairly decent and
practical way to represent vector symbols that will be cached and reused.
A minimal practical solution might be to add an ANSI extension that
requests an appropriate font for a particular area of the screen, perhaps with
a URL from which a specific webfont can be fetched.
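
For illustration only, such an extension could take the shape of an OSC-style sequence. Everything here is invented for the sketch - the code 7770, the payload format, and the URL; no terminal emulator implements anything like it:

```shell
# Entirely hypothetical: an OSC-style sequence asking the emulator to fetch
# a webfont and use it for subsequently drawn cells. "7770" and the
# "font-url=" payload are made up; the sequence is terminated with ST.
printf '\033]7770;font-url=https://example.com/icons.woff2\033\\'
```

A real proposal would also need to answer the hard questions (caching, fetch failures, untrusted URLs in `cat`-ed files), which is exactly the "lot of issues that would have to be thought through."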

~~~
hollerith
>Unfortunately, the Unix terminal is a gigantic pile of hacks accumulated
since the late 1960s.

Agreed, and I don't understand why most of the things still done in the Unix
terminal have not transitioned to newer, less hacky environments.

I managed to replace the vast majority of what I was using the terminal
for with about 400 lines of Emacs Lisp code that I wrote about 5 years
ago.

A large fraction of my uses of the terminal since then have been for getting
my Emacs environment and my data files installed on a fresh install of my OS,
and for recovering from the times when I inadvertently introduced a bug into
my Emacs environment that my regression tests did not catch.

The 400 lines of Emacs Lisp code submit a string I typed to bash via the -c
flag, then asynchronously (i.e., without blocking Emacs's UI) wait for output
from the bash process. I also added a way for the user to arrange for a bell
to ring when the process finishes. Notably, there is no provision for the user
to cause anything to be sent to the process's stdin after the process is
started, which allows the 400 lines of code to use a pipe instead of a pty
(which is relevant to this comment thread because if OSes didn't need to
support the pty abstraction, they could be significantly simpler). In the rare
case when I want to send a string to the stdin of some process I use the shell
to set things up:

    
    
        echo "here is the string" | some process
    

That whole command line becomes one argument to a call to bash. The 400 lines
of code I wrote cannot be used to interact with a REPL. E.g., if you use my
400 lines of code to run "python" by itself as a command line, the python
process waits for user input, which will never come because there is no way to
write to the python process's stdin.
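
Outside Emacs, the same pattern reduces to a plain shell sketch (the example command line here is just an illustration):

```shell
# Each submitted command line runs in a fresh bash via -c, with stdin a
# pipe (here redirected from /dev/null) rather than a pty. Anything
# interactive - a bare `python`, say - would block forever waiting for
# input that never comes, which is the limitation described above.
bash -c 'echo "here is the string" | tr a-z A-Z' < /dev/null
# prints: HERE IS THE STRING
```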

The closest analog in standard GNU Emacs to the 400 lines I wrote is "shell
mode" (shell.el), which consists of about 3 or 4 thousand lines of Emacs Lisp
code. My code re-uses shell mode's code for interpreting (ignoring in my case)
the escape sequences for ANSI color and for interpreting the escape sequences
for "carriage control", but (like shell mode) does not handle the escape
sequences for cursor addressing. IIRC the main reason I was dissatisfied with
shell mode was that it contains hacky code to keep the bash process's
"opinion" on what the current working directory is in sync with the shell mode
buffer's "opinion" on what it is. In contrast, in my code, the buffer's
"opinion" is the only one that matters (because every command line is
processed by a fresh bash process, and that bash process inherits its current
working directory from the Emacs process when it is created).

If I ever were to start to need to issue a great many command lines on a
variety of remote Unix servers, I would probably take the time to write a
replacement for ssh along lines similar to what I just described.

~~~
jcrites
I've been wanting to prototype the idea of an HTML-based terminal. Basically
take the graphical user interface of the web and connect it to the system
administration capabilities of the Unix command line. I'd probably draw
inspiration from PowerShell: have programs produce some kind of data structure
as output that can be rendered into HTML and displayed using stylesheets.

It would not be a trivial amount of work to build this since, to take good
advantage of the UI capabilities, you'd need an entire new CLI OS - something
to replace GNU Core Utilities. I haven't been sure where to start since the
choice of programming language or expression language will significantly
affect the overall experience.

~~~
actondev
you have it already: [https://hyper.is/](https://hyper.is/)

~~~
iforgotpassword
Great, so now in addition to having to support everything the terminal does
(the whole pile of garbage hacks that have accumulated since the 60s), you
have this electron crap running, which on its own is probably more complex
than 99% of software out there. This basically took the terminal and made it a
hundred times worse.

OTOH, if you do what your parent suggests and start from scratch (no terminal
support) you have a platform that isn't supported by anything yet, so unless
you port or reimplement everything you need you have to keep using both in
parallel which sounds just cumbersome.

------
cjauvin
Even though of course nothing beats the power of a set of standardized and
ubiquitous tools that you can always count on whenever you find yourself on a
new machine, I must admit that I'm favorably inclined toward this trend (or
what I perceive as one) of rewriting classic CLI tools with sometimes radical
performance improvement promises, but most importantly (for me at least)
better UIs: `grep` has always been acceptable for me, but I prefer `ag` these
days, and I recently discovered `fd`, which seems like it will replace `find`,
as I could never remember its weird syntax, even for the most basic cases.

~~~
rlue
Try rg (ripgrep) over ag:

[https://github.com/BurntSushi/ripgrep](https://github.com/BurntSushi/ripgrep)

~~~
krick
Never managed to switch to ripgrep from ag. The performance difference is not
_that_ much of a dealbreaker in practice, and rg is not totally backwards-
compatible with ag. I don't exactly remember what the problem was. I guess it
has this annoying habit of ignoring everything that's in .gitignore, while
for ag I could set up an additional .agignore per-project.

~~~
burntsushi
For ripgrep, if you want to use PCRE2 regexes (assuming that's what you meant
by backwards compatible), then the -P flag will do that.

Otherwise, ripgrep's support for .gitignore should be substantially better
than what's in ag. You can use .rgignore just like .agignore. Both ripgrep and
ag also support .ignore. These can override whatever is in your .gitignore.
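
As a hypothetical sketch of that layering (directory name and patterns invented for illustration):

```shell
# .gitignore keeps *.log out of version control; a sibling .ignore file
# (honored by rg and ag, but not by git) takes precedence for searching,
# so re-including *.log there makes the search tools look at logs again.
mkdir -p demo && cd demo
printf '*.log\n' > .gitignore   # git (and rg, by default) skips *.log
printf '!*.log\n' > .ignore     # override: search tools include *.log
```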

See the guide for more details:
[https://github.com/BurntSushi/ripgrep/blob/master/GUIDE.md#a...](https://github.com/BurntSushi/ripgrep/blob/master/GUIDE.md#automatic-filtering)

> Performance difference is not that much of a dealbreaker in practice

This is true for smallish inputs. But the difference should become larger for
bigger inputs. Hell, sometimes ag just refuses to search inputs that are too
big. e.g., For a ~9.5GB file:

    
    
        $ time rg '\w+\s+Sherlock Holmes' OpenSubtitles2016.raw.en -c
        3003
        
        real    1.608
        user    1.105
        sys     0.496
        maxmem  9473 MB
        faults  0
        
        $ time ag '\w+\s+Sherlock Holmes' OpenSubtitles2016.raw.en -c
        ERR: Skipping OpenSubtitles2016.raw.en: pcre_exec() can't handle files larger than 2147483647 bytes.
        
        real    0.016
        user    0.004
        sys     0.011
        maxmem  6 MB
        faults  0

~~~
krick
Thank you for your answer.

What I mean is that it doesn't seem possible to make it forget about .gitignore
files completely. For me, it doesn't make any sense for my grep tool to pay
any attention to .gitignore files (let alone info/exclude and such); these are
2 completely separate tools. I might occasionally want, for some project, to
put into .rgignore the same line as I put in .gitignore, but that's an explicit
choice. Actually, most of the time I have a few global ignores (which is
possible to do with ripgrep) and an occasional .agignore file, while pretty
much every git directory has a .gitignore of some sort. This is not something I
can work around, since while .agignore (.rgignore) is totally personal,
.gitignore affects everyone who works on the project and serves explicitly to
indicate that you ought not commit those files.

If I put "--no-ignore" in my .ripgreprc, it skips both .gitignore and
.rgignore files, which obviously isn't what I'm trying to achieve.

One more thing (even though it is possible to get around it by writing my own
bash function that would invoke rg) is that unlike ag, rg doesn't seem to
have a "--pager" option, which I always use heavily, setting half a dozen
parameters for `less` when grepping.

So these are what I consider deal breakers and the reasons why I didn't switch
to rg: while it's true that in some cases I would appreciate better
performance, it's extremely rare that I grep something over a 9.5GB file,
whereas conveniently searching for a substring in a project is something I
need multiple times a day.

BTW, noticed one more minor quirk right now: export
RIPGREP_CONFIG_PATH="~/.ripgreprc" doesn't work (but
RIPGREP_CONFIG_PATH="/home/$USER/.ripgreprc" does, so it's probably almost
never an issue).
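
That quirk is plain shell behavior rather than anything ripgrep does: tilde expansion only happens when the `~` is unquoted, so a quoted `"~"` reaches the program as a literal character. A quick sketch:

```shell
# Inside quotes, ~ is not expanded: the variable holds a literal tilde,
# and a program reading it would try to open a file literally named
# "~/.ripgreprc".
cfg="~/.ripgreprc"
echo "$cfg"              # prints: ~/.ripgreprc

# $HOME (or an unquoted leading ~) expands as intended.
cfg="$HOME/.ripgreprc"
echo "$cfg"              # prints the absolute path, e.g. /home/you/.ripgreprc
```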

~~~
burntsushi
You want --no-ignore-vcs. There are several other --no-ignore-* flags. ripgrep
is a lot more flexible than ag there. And its implementation of the gitignore
matching rules should work a lot better than ag's. (Search ag's issue tracker
to see just how many bugs there are. I'd be surprised if you weren't hitting
any of them!)
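
To make that the default, the flag can go in the config file ripgrep reads via RIPGREP_CONFIG_PATH, one argument per line. The second flag below is just an example of what else such a file might hold:

```
# ~/.ripgreprc: one flag (or flag=value) per line; # starts a comment
--no-ignore-vcs
--smart-case
```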

As for a pager, that's pretty easy to solve: `rg foo | less -F`. Add `-R` to
make colors work.

You don't need to search a 9.5GB file to make ag upset. All it takes is ~2GB.
A key advantage of ripgrep over ag is that you don't have to use it just for
code searching. It actually works just as well in any place you'd use standard
grep. ag just doesn't have an implementation that's good enough to be that
robust.

~~~
krick
> --no-ignore-vcs

Oh, thanks, I've missed it somehow.

> rg foo | less -F

Yeah, that's what I meant by writing my own wrapper, since I wouldn't want to
type that in every time I search for something. And since I use bash, $@ in
the middle of an alias is not possible, so I use a function:

    rg() { /usr/bin/rg "$@" | less -LFXRqn; }

It doesn't seem to be working as expected for me without explicitly adding
--pretty, though (obviously, rg sees it isn't operating in a terminal and
tries to be smart). And even then it's... weird. I don't know if that's a
`less` issue or an `rg` issue; I'm guessing something about it being
multi-threaded: it sometimes doesn't display some found items at the top of
the terminal, but if I scroll the screen down and up again -- it's there!
Cannot reproduce with ag. Gonna play with this for some more time later, but
it surely doesn't work as solidly with a pager as ag --pager='whatever' does.

EDIT: actually found a bug report that seems to describe exactly the same
problem.
[https://github.com/BurntSushi/ripgrep/issues/513](https://github.com/BurntSushi/ripgrep/issues/513)

~~~
burntsushi
Interesting. I can't reproduce that problem. Piping to less works perfectly
fine on both my Linux and macOS machines.

If you're willing, I'd definitely appreciate it if you could update #513 with
your environment. In particular, by answering these questions:
[https://github.com/BurntSushi/ripgrep/issues/513#issuecommen...](https://github.com/BurntSushi/ripgrep/issues/513#issuecomment-308542044)

If it's convenient and you could try it in a different environment (perhaps a
different terminal emulator or a different shell), then that would be great.

Also, this doesn't sound like a multi-threading issue to me, but if you want
to rule that out, then `-j1` will force ripgrep into single threaded mode.

------
chmln
One major showstopper I have with this utility is that it doesn't obey my
terminal color scheme and instead hardcodes some colors that probably just
look good on the author's terminal. [1]

Would love to see this resolved, cause it seems rather nice otherwise.

[https://github.com/Peltoche/lsd/issues/90](https://github.com/Peltoche/lsd/issues/90)

------
xwdv
This project becomes pretty ridiculous when you realize you can already
accomplish the same features with regular ls, if you just bother to read a
bit. That means the target audience for this is very junior engineers.

~~~
tambourine_man
Can ls use nerdfont? Does ls's color coding display anywhere near the
granularity that lsd and colorls feature?

So please drop the dismissive patronizing tone.

~~~
ddeokbokki
OP's point is that if you know how to use ls you don't need icons and extra
granularity in color scheme to understand the output.

~~~
moomin
The same is true of reading a hex dump of the inode table. Improved UIs can
matter.

------
caymanjim
This is pretty and all, and there's nothing wrong with it, but...why? Do
people run ls enough that they need something more than a plain ASCII list of
filenames? Are people's directories so disorganized that this adds enough
value to justify it over /bin/ls?

I probably run ls a few dozen times a day, but it's usually just to answer
"what did I call that file again?" or "did I remember to copy that over?"

I can't imagine a scenario where I'd require too many extra visual cues. I use
color ls, mostly because I find the optional symbols like '/' to be
distracting, and differentiating files and directories via color is about all
I care about.

There's overhead in becoming too attached to non-standard tools: you have to
go out of your way to install them, your muscle memory needs retraining, etc.

Most importantly, tools like this break a bunch of core command line
standards, like doing one thing and doing it well, and producing clean output
that can be pipeline-filtered with all the other Unix standards.

~~~
JdeBP
One need only consider that Orthodox File Managers have been in existence for
decades to realize that yes indeed people do like to work with more than plain
lists of filenames.

Playing the "doing one thing" card when it comes to ls is rather risky, as ls,
with its options for doing things like sorting its output, is one of the
favourite examples that people point to of tools that do not adhere to this
philosophy in practice.

* [https://news.ycombinator.com/item?id=4023265](https://news.ycombinator.com/item?id=4023265)

* [https://news.ycombinator.com/item?id=9673975](https://news.ycombinator.com/item?id=9673975)

* [https://news.ycombinator.com/item?id=8484718](https://news.ycombinator.com/item?id=8484718)

... and so on.

------
anderspitman
Has anyone made an "awesome" list of new CLI tools designed to replace ancient
ones?

~~~
h1d
Found some links.

[https://github.com/alebcay/awesome-shell](https://github.com/alebcay/awesome-shell)

Only for Rust, but not a bad list, I think.

[https://lib.rs/command-line-utilities](https://lib.rs/command-line-utilities)

Some blog post introducing cli tools.

[https://darrenburns.net/posts/tools/](https://darrenburns.net/posts/tools/)

~~~
bmn__
[https://altbox.dev/](https://altbox.dev/)

------
nikeee
How does it compare with exa?

[https://the.exa.website](https://the.exa.website)

------
hyh1048576
I've noticed a lot of command line utilities are being rewritten in Rust. Is
there a reason for this?

~~~
gnulinux
Possibly because

1. People predict Rust will be more popular in the future, so they're trying
to learn/master the language by practicing it.

2. People think speed and memory safety are important guarantees for unix
programs.

3. People are testing whether Rust deserves the fame it has.

~~~
megous
> 2. People think speed and memory safety are important guarantees for unix
> programs.

I don't think any of the typical unix tools has ever segfaulted on me in the
15 years I've used Linux.

~~~
coder543
Segfaults are the ideal case for memory errors, and those are the most easily
caught and fixed, so you're least likely to see them. But, often those memory
errors result in silent corruption which can be exploited, and that's harder
to detect, especially if it relies on very specific corner cases. `curl` has
had a number of these vulnerabilities over the last several years, for
example.

Something as simple as `ls` is probably so small and battle tested that it's
not an issue, but if you're writing an all-new, not-battle-tested tool, why
wouldn't you want stronger guarantees? The new tool is being written for the
features, but it's not fun to write vulnerabilities into something that should
be simple and "just work."

Languages like Go and Ruby are also mostly memory safe, so those are generally
fine picks here too, but every language has trade-offs. In this case, the
author clearly cares about performance, which Ruby does not prioritize.

Rust also has a built-in testing harness, which is a lot more convenient IMHO
than using whatever C testing framework you might have a predisposition
towards.

~~~
hermitdev
Regarding segfaults, when I was interviewing devs for C++ roles, I'd ask
questions about a simple function like this:

    
    
      std::string foo(bool flag)
      {
        if (flag)
          return "true";
      }
    

Questions I'd ask:

* Is the function well formed? (Yes - functions need not return a value on
all paths due to C ancestry, even if the return type has a non-trivial
constructor. Not sure if this is still considered well formed, but I think it
was at least up to C++11.)

* What happens if 'foo' is called with true? (Returns "true" as one would
expect.)

* What happens if 'foo' is called with false? (Undefined or
implementation-defined behavior, but generally nothing nice - segfault,
access violation, etc.)

* If it crashes, where, when, and why does it crash? (Technically, since it's
undefined, nearly any answer here suffices, if it can be backed up.
Practically, most optimizing compilers assume UB can never happen, so when
you return nothing from a non-void function, the compiler will attempt to
invoke the destructor of a non-existent object instance (assuming non-POD)
and boom.)

I asked this because it was a distilled example of a real-world rare crash
that was extremely difficult to track down, because the crash location is
often nowhere near the offending function.

I remember getting into a heated argument with a coworker when I claimed it
should have been a compilation error. IIRC, he claimed it to be a halting
problem and that the compiler couldn't determine that all paths didn't return
a value. I called BS, citing at the time (circa 2004) that the new compiler on
the block for C# could reliably emit errors when not all return paths returned
a value.

I also like that in C++ it's a rare case of a very terse example touching a
number of topics - undefined/implementation-defined behavior, debugging,
compilation settings (warning levels, etc.) - all in a mere 4 lines of code.
With 4 LOC, which is straightforward and simple for the candidate to mentally
parse, I can glean a lot about their understanding of the language (and its
potential pitfalls).

Sorry if this got a little long winded and ranting.

~~~
im3w1l
> he claimed it to be a halting problem and that the compiler couldn't
> determine that all paths didn't return a value.

Theoretically we can't determine whether a function will return a value or
not. In practice, heuristics get 99% of the way and the last 1% you can make
the programmer put in a possibly redundant return statement.

~~~
adrusi
There are two different questions: "do all paths lead to returning a value?"
and "will the function return a value?"

Answering the second in the general case is equivalent to solving the halting
problem. But answering the first question is much simpler. Static analyzers
aren't using a heuristic approach to the second question, they're solving a
completely different problem.

~~~
mortb
Don't get me wrong, but if whole classes of errors can be avoided by not
choosing C++, why should I choose to use it?

~~~
monsieurbanana
Because you already know C++ and don't want to, or can't justify, learning
another language. That's a very valid reason.

------
estomagordo
Given the name, I was surprised at how much sense the choices of colours made.

------
empath75
How does this compare to exa?

~~~
tambourine_man
I don’t think exa uses nerdfont for one.

It’s also a lot faster, according to the benchmarks.

~~~
h1d
exa does have a fork which does, but exa's development seems to have paused.

[https://github.com/ogham/exa/pull/368](https://github.com/ogham/exa/pull/368)

------
ksherlock
Wish it would disable ANSI stuff (--classic) if it's not a vt100/xterm/etc
TERMinal.

------
gremlinsinc
wow this is amazing... I love the file icons and it's beautiful... ls --tree
ran almost instantly on a laravel folder which has a huge number of
packages....

------
jenhsun
I use the ones below sometimes. Hope you guys like these.

A modern replacement for ls, written in Rust.
[https://the.exa.website/](https://the.exa.website/)

ls with coloring and icons
[https://github.com/Electrux/ls_extended](https://github.com/Electrux/ls_extended)

------
lucasmullens
Is there a list of features? Or can I assume it implements the same features
as colorls?

------
sridca
Why would I use this over exa?
[https://github.com/ogham/exa](https://github.com/ogham/exa)

------
jasonhansel
We really need a way for terminfo/termcap to indicate whether a user has
Powerline fonts. Would be great for programs like this.

------
fao_
Looking at the example -- do the symbols really help any more than just normal
highlighting, and, `x --> y` and `/` at the end?

------
thatguy1
There is an easier way to do this - Use File Manager UI

~~~
fouc
Is that something you can use from the terminal?

~~~
vidugavia
Of course [https://ranger.github.io/](https://ranger.github.io/)

~~~
oblio
At that point, just use MC... [https://midnight-commander.org/](https://midnight-commander.org/)

------
reddotX
To install: "sudo snap install lsd --classic"

------
superconformist
It doesn't install a man page for itself, requires weird-ass fonts, has a
fraction of the features of the real ls [0], and runs about half as fast ...

However it's written in our lord and savior RUST and has lots of colors so
this shit is definitely "next gen."

0:
[http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ls...](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ls.html)

~~~
saagarjha
I'd call this a "shallow dismissal"
([https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)):
perhaps you could reword your response to be a bit nicer?

~~~
paulddraper
I think there were a number of reasons given.

~~~
saagarjha
It wasn't particularly nice, though, and I'm not quite sure all of the
comments given qualify as non-shallow.

------
incanus77
They’re gonna need a better list of reasons to try it out than:

1. It’s written in the author’s preferred language

