Nnn – a terminal file manager for programmers (github.com)
298 points by max_sendfeld 88 days ago | 193 comments

Show HN is for sharing your own personal work. We got a complaint saying that this is not the submitter's work, so I've taken Show HN out of the title.

Posting as a Show HN something you didn't create is effectively taking credit for someone else's work, so please don't do that!

Please read the rules: https://news.ycombinator.com/showhn.html.

Hi, author of `nnn` here. I didn't submit it but I am actively responding to the comments and it is actually helping me to get feedback and feature ideas.

I am grateful OP posted it. Thanks for fixing the title.

Thanks, that's good to hear.

Sorry about that, I wasn't aware. As you said, I didn't create NNN - it's just something I found, used, really liked and wanted to share.

Thank you for the compliment!

A lot of good features in nnn. Unfortunately, for me, the huge deal breaker is the (sorry, but I have to say it) terrible choices for one of the most important hotkey-functionality combinations.

I am not talking about some rarely used keys/functions, but the problem lies at the very core of the experience: the simple hjkl, vim-style navigation.

You see, as in vim, k/j ('up'/'down') obviously move up/down in the list of dirs/files. h ('left') moves up the directory hierarchy ('Parent dir', as nnn's help aptly describes the function). Of course, l ('right') then moves down the directory hierarchy: IF you are sitting with the highlight on a DIRECTORY entry, it obviously moves down into that directory.

But here is the kicker: what if you have as your current highlight something OTHER than a directory, that is a FILE ?

It OPENS the file. (The open operation goes through xdg-open, so whatever that is configured to recognize.) If xdg-open is not configured defensively and the file is executable, it runs the program. Or if the file has some associated program, that is launched with the file as an argument. (This was happening even on my machine, and I have a relatively conservative xdg-open config.) At a minimum it launches an editor, opening the file in read/WRITE mode. In nnn's help, listed among the keys activating this 'Open file/enter dir' is the <Return> key, in addition to the usual move-right keys ('l' and right arrow). A clear expression that in the author's intention those keys have equivalent functionality.

So imagine you're browsing around the file system as fast as your fingers and eyes allow (after all, that's the beauty of fast text/ncurses interfaces, yes?). You just want to examine stuff, look at files, poke into configurations, maybe into an archive you just downloaded. An archive that might or might not have executable stuff and/or potentially harmful things. You've located the dir path you want to go into, you dive in, and before you realize you need to hit the brakes on that 'l' key, you end up moving RIGHT on SOMETHING (it happened so fast you're not even sure what it was), and that SOMETHING was executed/loaded. Now some application is open, and on top of that you were still typing, so that application receives your keys and does who-knows-what (well, depending on what was launched).

WTF ??? The first time it happened, I could not believe such a UI design decision: mixing NAVIGATION and OPEN operations on the SAME KEY. OPEN is a sensitive, potentially disruptive operation: there SHOULD be some minimal friction provided in the interface before one is allowed to proceed.

It moved nnn from potentially one of my favorite tools into a totally love/hate relationship. Yes, "love" because indeed it does some things well. But the mistake of this RIGHT/OPEN snafu is enough to more than cancel the other qualities.

That's the way it works in most file managers... Usually a click would open a directory or a file just the same

First of all: when using a mouse 'a click would open a directory or a file just the same'. A mouse/trackpad has pretty much one button (or equivalent) dedicated to interaction, so it is expected to overload some functions on it. Besides, "navigation" with the mouse is a different thing: you don't need to "move" the selection so opening is a clear voluntary decision, decoupled from navigation.

But focusing back on keyboard operation... It is true that many text-based file managers do this 'open file on right navigation move'. In my opinion that doesn't make it right: I personally consider it a bad thing that can lead to startling, unnecessary and potentially sensitive application launches; a separation between movement (hjkl, arrows) and opening (the obvious choice here being <Return>) is the sensible design.

Another point of view: the unwanted file-open-by-mistake situations are a form of involuntary file previews. As nnn's own 'design considerations' state [0], previews are to be avoided: "previewing ... is a wastage", potential "slow/faulty ... access makes the experience very painful" and "Users should see only what they explicitly want to see".

Lastly, there is an important aspect that mitigates the problem for 'most file managers': customization.

Here's a quick glance at the players in this text-fm arena:

- ranger and vifm, the larger managers: a cursory look in their help/configs hints that they can be heavily customized
- out of the box ranger displays a preview of the selected file; that acts as a big visual hint that you don't need to 'navigate' further; however I agree that default previews are bad
- ranger and vifm remember the position along the path, so you don't get shunted moving back down
- noice has about the same behavior as nnn (nnn was forked from it), but at least it has a standard suckless-style config.h header allowing customization by recompiling
- speaking of suckless style, a last-resort solution is to do just that with nnn, making your own fork: its header largely keeps the structure of the original

And here are examples that work more along the lines I prefer:

- rover has the best behavior IMO: it can be similarly configured using environment variables, but by default it uses hjkl simply for navigation and NOTHING ELSE; it has separate, dedicated keys for open, preview or other things
- midnight commander has the same separation: arrows (actually just up/down) strictly for navigation (in the local dir ONLY, no less); explicit directory navigation and opening with <Return>; there is no implicit 'file open'

[0] https://github.com/jarun/nnn/wiki/nnn-design-considerations

> It OPENS the file.

Oh, don't worry! I can add an env var to disable the right -> open file thing.

However, all dirs are listed on top separately and in a clearly different color. How often do you end up pressing right on a file? I am surprised, because `nnn` has many users and no one has ever mentioned this as an issue before!

Thanks. I look forward to giving nnn another spin then. While you're at it, please add more customization for mapping keys to operations. Nothing heavy, just a bit of freedom for the most-used operations. (e.g. customizing the key for the 'preview in PAGER' would be great!) Also, I guess I owe you a bit of an apology: today I noticed noice has the same behavior and you simply carried it forward in nnn when you forked it (in the header, all those keys mapping to SEL_GOIN). So if there is blame, it should be shared with the original design.

> dirs ... in a clearly different color

Hey, unless you launch nnn -c 10 I guess ? ;-) Or nnn -l -c 10 and there is even less to distinguish them other than that slash.

> How often are you

Maybe it's the file hierarchy on my machine, but I end up doing it a lot, somehow. One easy example is a deeper path ending with, say, an .odt or .xcf as the first entry in the last, leaf directory, e.g. a/b/c/d/e/bigfile.xcf; oops! one extra l and gimp or libre office starts up (for fun, try that on a RPi ;-) As someone else mentioned in the discussion, the intuition is that going up and down the SAME path will take you to the SAME positions in each directory: the example given was "/a/b/c/d -> /a/b/c -> /a/b -> /a/b/c -> /a/b/c/d". That is not the case in noice/nnn and it can trip you up. To make it more clear, consider this tree:

  tmp
  |-- a
  |   `-- 1b
  |       `-- huge.odt
  `-- x
      `-- y
          `-- c

You are inside 'c'; you quickly go up 3 levels ('hhh') to glance at the content of 'tmp'; then you go back down 3 levels by quickly tapping 'lll', intuitively expecting to end up in the same place, inside 'c'; instead you go into 'a', then into '1b' onto the 'huge.odt' file, and the 3rd l launches libre office...

> many users and no one ever mentioned

Selection bias, or whatever you call this effect? People who are bothered by this simply moved away and didn't mention a thing?

Anyway, thanks again.

I have added a new env var DISABLE_FILE_OPEN_ON_NAV to handle this case. Thanks for bringing it out!

Hey no problem! Would it be possible for you to raise a PR? There's a case for regular files.

You make it sound like people are just hammering the keys hjkl at random to explore a directory. This certainly isn't the case for me. It's probably the case that this file manager just isn't for you, but have you considered being a little more careful and deliberate with your keypresses? I can't see this being a problem for most users.

I do not use a terminal based file manager - I find that a better workflow for me is to use sshfs to create a local mount on my laptop and then just use the Finder to browse ... which gives me preview, etc.

HOWEVER, there is a specific use-case - renaming a bunch of files in a directory - where I do use a "file manager" and that file manager is 'vimv'.[1]

vimv is a very simple tool - you run vimv in a directory and suddenly the vi editor opens up with the contents of that directory. You then use the vi editor to edit the filenames any way you like. Upon :wq, the files in that directory are renamed.

Very useful - and in many cases, much faster than crafting a sed/awk one liner to do the renaming (especially if some of the renaming is arbitrary or random).

[1] https://github.com/thameera/vimv
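The mechanism vimv and vidir use is simple enough to sketch. Below is a toy Python version of the idea (a hypothetical `bulk_rename` helper, not vimv's actual code); the `edit` callback stands in for the interactive $EDITOR session:

```python
import os

def bulk_rename(dirpath, edit):
    """Toy version of the vimv/vidir idea: 'edit' maps the list of
    current filenames to a list of new ones, standing in for the
    interactive $EDITOR session."""
    old = sorted(os.listdir(dirpath))
    new = edit(list(old))
    if len(new) != len(old):
        # refuse if the line count changed: the old->new mapping
        # would be ambiguous
        raise ValueError("filename count changed; refusing to rename")
    for o, n in zip(old, new):
        if o != n:
            os.rename(os.path.join(dirpath, o), os.path.join(dirpath, n))
```

The real tools do the same thing with a temp file and your editor: each line of the buffer is a filename, and editing a line renames the corresponding file on save.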

There's renameutils[1] which has qmv

And as someone else mentioned nnn integrates well with vidir which does exactly what you describe. In most distros it's installable through moreutils [2]. vidir uses $EDITOR , so it works with pretty much anything.

[1] https://www.nongnu.org/renameutils/

[2] https://joeyh.name/code/moreutils/

The terminal has its own advantages. But surely it's subjective.

Coming to your use-cases:

> use the Finder to browse ... which gives me preview, etc.

Preview is not available, and there are other emerging file managers which agree that preview is redundant for a fast workflow. People may not want a jumpy experience previewing an 8K-res image all the time. And that image can be compromising as well. It's easy to say 'make it optional', but it adds tons of deps on other tools which `nnn` doesn't want to bear. It integrates with sxiv, so you can open a dir of images in sxiv and browse at will.

> renaming a bunch of files in a directory

`nnn` supports in-place renames and it integrates with `vidir`, which allows you to batch rename in vi, exactly the same thing your favourite utility does. I mentioned this earlier but it got lost among other responses. Probably you haven't read the features section or given `nnn` a try, which is why you missed `vidir`. I recommend at least a quick look at the features list. If you want to try it, `nnn` is available on Homebrew.

Obligatory mention of emacs wdired-mode, which sounds very similar. I mention it because I used emacs for almost a decade before noticing that there's a wdired mode as well as dired mode.

I like the idea but it's often confusing and frightening to edit shit like that in emacs, especially if you start automating through keyboard macros

C-x C-q from dired-mode to enter editable/writable mode, then C-c C-c when done.

There's also sunrise mode, which combines (w)dired with orthodox file managers. Easily one of the best ways to deal with files. Particularly in combination with Tramp...

What is sunrise mode? Are you referring to https://www.emacswiki.org/emacs/Sunrise_Commander ?

Sunrise Commander is my favorite https://github.com/escherdragon/sunrise-commander

> I do not use a terminal based file manager - I find that a better workflow for me is to use sshfs to create a local mount on my laptop and then just use the Finder to browse ... which gives me preview, etc.

I had no idea you can do that, thank you for sharing!

Ranger (command line file manager) also has a :bulkrename command. Useful.

Does anyone know of a vimv equivalent for vscode?

`nnn` integrates `vidir` for batch renaming.

The search is extremely cool and fun to use. Directories all seem to load very very quickly. It's got vidir integration, giving it bulk-renaming. I wasn't familiar with vidir, but the ":bulkrename" command in ranger was my favorite part of it. After a few minutes of poking around with nnn, I ended up installing it on all my machines. The only weird thing I noticed was that I couldn't find a straight-forward way to delete files. The first thing I came up with was to hit ! to go into a shell and then just use rm. Not sure if I was missing something or if this was deliberate to make sure you don't accidentally delete files.

In the just-released version 2.1, you can just select a file and press `^X` to delete it.

I am testing Nnn via Homebrew and I am also missing the ability to delete. The in-program help lists '^X' as Quit.

Is anyone else who installed via Homebrew able to delete?

The DEL key wasn't handled in rename prompt. The fix is available on master branch.

Just noticed you mentioned 2.1 and Homebrew installed 2.0. That is probably why I am not seeing it.

It's not available in 2.1 either. Last night a user reported this after release of 2.1. :)

This looks incredibly cool, I love the speed and minimalism. It doesn't seem as configurable and feature-packed as vifm[1], so I'll probably stick with that, but it certainly fills a good niche.

> nnn vs. ncdu memory usage in disk usage analyzer mode (400K files on disk):

I assume that's because nnn isn't keeping the entire directory structure in memory, which means that browsing to a subdirectory involves rescanning the entire directory. That's a fair trade-off, but an unfair comparison.

1. https://vifm.info/ - I'm surprised this hasn't been mentioned yet in this thread.

Author of `nnn` here. Thanks for the appreciation.

Please let us know what you like in vifm which isn't available in `nnn` and we will consider the features. However, I must say the last thing we want to see in `nnn` is feature bloat. You can extend it as you wish through scripts which `nnn` supports in any number.

No, `nnn` scans the complete dir tree otherwise du won't work. It rescans because data on disks keep changing e.g. on our server where 17 people run VMs. It can be easily changed to static but the rescan is very fast and users specifically asked for it. The memory usage is less because `nnn` uses much less memory for all operations in general.
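The low-memory side of that trade-off can be sketched in a few lines: a recursive scan that sums sizes but retains nothing, so revisiting a directory means rescanning (a toy Python illustration, not nnn's C implementation):

```python
import os

def du(path):
    """Toy recursive disk-usage scan: returns total bytes under path
    without retaining any per-file records. Nothing survives the call,
    so a revisit rescans -- the low-memory side of the trade-off."""
    total = 0
    for entry in os.scandir(path):
        if entry.is_dir(follow_symlinks=False):
            total += du(entry.path)
        elif entry.is_file(follow_symlinks=False):
            total += entry.stat(follow_symlinks=False).st_size
    return total
```

The alternative (ncdu's choice) is to keep a record per file so navigation never rescans, which is exactly where the extra resident memory goes.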

I appreciate your work, but you're not being very honest with your claims.

nnn is not keeping information about 400K files in memory in that benchmark. As a result, the rescan is necessary when changing directory. The rescan may be fast in many cases and in some cases it may even be what you'd want, but I can also name many cases where you certainly won't want it (large NFS mounts being one example).

Sorry for the pedantry. I spent a fair amount of time optimizing ncdu's memory usage, so I tend to have an opinion on this topic. :)

I think we are saying the same thing in different lingo. I am trying to say, you do not need to store it if you can have fast rescans.

Coming to memory usage, if you store the sizes of every file you need 400K * 8 bytes = ~3 MB.

Now `ncdu` uses ~60 MB and `nnn` uses ~3.5 MB. How do you justify that huge gap?
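For concreteness, the per-entry overhead implied by those two figures can be worked out directly (back-of-envelope only; the 400K files, ~60 MB and 8-bytes-per-size numbers are the ones quoted in this thread):

```python
files = 400_000

# figures quoted in the thread (approximate)
ncdu_bytes = 60 * 2**20         # ~60 MB resident with the full tree kept
sizes_only = files * 8          # a bare 64-bit size per file

per_entry = ncdu_bytes / files  # implied cost of each in-memory record
print(round(per_entry))              # ~157 bytes per entry, vs 8 for size alone
print(round(sizes_only / 2**20, 1))  # ~3.1 MB
```

So the gap is not mysterious: keeping a full record (name, pointers, counters) per file costs an order of magnitude more than keeping a size alone, which is the design difference the two sides are arguing about.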

> but you're not being very honest with your claims

No, I am completely honest within the limits of my technical understanding. Your tool uses 57 MB extra which would be considerable on a Raspberry Pi model B. To an end user, it's not important how a tool shows the du of `/`, what's important is - is the tool reasonable or not? I don't know how `ncdu` manages the memory within, I took a snapshot of the memory usage at `/`.

In fact, now I have questions about your very first line beginning with `This looks incredibly cool` and the subsequent comparisons with different utilities in a negative light. (I must be a fool for realizing it only now; I should have seen it coming.)

And I'm saying you can't have fast rescans in all cases - it very much depends on the filesystem and directory structure.

I'm not trying to downplay nnn - I meant it when I said it's a cool project! I'm saying each project has its strengths and weaknesses, but your marketing doesn't reflect that (or I missed it).

ncdu's memory usage is definitely its weak point - that's not news to me - but it's because I chose the other side of the trade-off: No rescans. If you're curious where that 60MB goes to, it's 400K of 'struct dir's: https://g.blicky.net/ncdu.git/tree/src/global.h?id=d95c65b0#...

I honestly don’t understand why you’re getting down voted when all you’re doing is explaining the design decisions behind your own utility.

You’re being very snarky considering how quick you were to start the debate, where you boasted about how much better optimised your tool was in your GitHub README.

I grow rather tired of comparisons where one tool tries to make itself look better than another based purely on a solitary, arbitrary metric like memory usage. It’s not a fair benchmark and really it’s just an excuse to make yourself look better by bad-mouthing someone else’s code.

What’s to say the other tools haven’t employed the same algorithms you vaguely stipulated you had (I say “vaguely” because you don’t even state which highly optimised algorithms you’ve used)? Have you read the source code of the other projects you’ve commented on to check they’re not doing the same - or even better? Because your README is written like you’re arguing that other projects are not optimised.

What’s to say that the larger memory usage (which is still peanuts compared to most file managers) isn’t because the developer optimised performance over literally just a few extra KB of system memory? Their tool might run circles around yours for FUSE file systems, network mounted volumes, external storage devices with lower IOPS, etc.

But no, instead you get snarky when the author of one of the tools you were so ready to dismiss as worse starts making more detailed points about practical, real-world usage beyond indexing files on an SSD.

It wasn't a debate. You asked, I answered. And if you read carefully, there is _not_a_single_comment_ on the quality of a single other utility in the README. We recorded what we saw and I have shared the reason why.

I am not going to respond any further and would appreciate it if you refrain from getting personal with "not being completely honest", "being very snarky" etc. Please don't judge me by the project page of a utility which is a work of several contributors. That's all.

I’m not related to the GP.

Let me explain the point further:

Your readme has a performance section; that section focuses on nnn vs two other tools. You only benchmark memory usage under normal circumstances (ie no other performance metric, no other file system nor device types, etc). Then you have a whole other page dedicated to “why is nnn so much smaller”, which is directly linked to from the performance comparisons. There’s no other way to take that than that you’re directly comparing nnn to other tools and objectively saying it’s better.

So with that in mind, I think the developers of the other tools are totally within their right to challenge you on your claims.

Edit: the “multiple contributors” point you made is also rather dishonest. It’s your personal GitHub account for a project you chiefly contribute to, and the documents in question were created and edited by yourself (according to git blame). Yes, nnn has other contributors, but it was you who wrote and published the claims being questioned.

> totally with in their right to challenge you on your claims

Yes, and within the limits of common courtesy.

The other utility does only one thing - reports disk usage so there's not much to compare. The dev did mention that `ncdu's memory usage is definitely its weak point`.

> no other performance metric, no other file system nor device types

because lstat64() is at the core of the performance metric of the feature we are comparing here, and with the same number of files on the same storage device the number of accesses is exactly the same. The only metric that differentiates the utilities is memory usage.

> Edit: the “multiple contributors” point you made is also rather dishonest too.

Not really, I prefer to edit the readme myself because I want to keep the documentation clean. You will see features contributed by other devs for which I have written the docs from readme to manpage. Regarding the metrics, sometimes I have taken the data and sometimes I have requested someone else to collect it. Or doesn't that count as contribution?

What I actually care most about a file manager is how they perform on mounts with low IOPS and how gracefully they handle time outs and other exceptions.

RAM is cheap and any file manager will be snappy on an SSD. But edge cases are where most file managers fall apart yet are situations where you might need to depend on your tools the most.

However now I understand the point of this project was purely to optimise against memory usage, I can better understand the arguments you were making.

> or doesn't that count as contribution?

Not in this case, no. You published it, so you’re still ultimately accountable for it.

You cannot request figures then play the “nothing to do with me guv’” card when someone queries the benchmarks that you subsequently published. At best it comes across as an unlikely story; at worst you’re still complicit.

>RAM is cheap

This is the wrong mindset. RAM is only cheap if you don't use it. As soon as you go just 1 byte over the maximum RAM it turns into the most precious resource of the entire computer.

If an app uses more memory than another then it is not better because RAM is cheap. It is better because it provides more or higher quality features at a reasonable cost of increased memory usage. But at the same time it is also worse for people who do not need those features.

Here is an example: when I launch the graphical file manager Nautilus it consumes roughly 26 MB of RAM showing my home folder, but when I go to my "Pictures" folder it suddenly shoots up to 300MB. There is a reason for that and it is not "RAM is cheap"; if that were the case it would always use 300MB regardless of what I do with it (electron apps are a major offender here). Nautilus consumes that much RAM because it has more features, like displaying 1000 thumbnails of all those pictures.

Now this feature would get in my way if I set up something like a Raspberry Pi Zero to take a photo every hour. Nautilus will crash because it needs too much memory to display the thumbnails.

I agree from an idealistic point of view (I've often made the same argument myself with regards to non-native applications and self-indulgent GUIs) but you're missing the context of the argument here.

We're not talking about burning through hundreds of megabytes (nor even gigs) on a pretty UI that adds no extra function; we are talking about only a few megabytes to save stressing low bandwidth endpoints.

It isn't 1990 any more, sacrificing a few megabytes of RAM in favour of greater stability is very much a worthwhile gain in my opinion. Hence why I say RAM is cheap - we don't need to cut every corner just to save a few bytes here and there.

In the example we were discussing, the idle RAM usage was higher because it doesn't hammer low-bandwidth endpoints with frequent rescans. Caching is actually a pretty good use for spare RAM - your kernel does it too. So we are not talking about anything out of the ordinary here and we're certainly not talking about wasting RAM for the sake of wasting RAM. We're talking about trading some cheap RAM for the sake of potentially expensive (computationally speaking) access times.

However I do feel I need to reiterate a point I've made throughout this discussion: there is no right or wrong way; it's just a matter of compromise. Eg the goals for an embedded system would be different from the goals for a CLI tool on a developer's laptop.

> then play the “nothing to do with me guv’” card when someone queries the benchmarks that you subsequently published

No, you are cooking things up. I did respond as per my understanding. My statement was very clear - "_Please don't judge me_ by the project page of a utility which is a work of several contributors."

I had problems with the _personal remarks_. And I am not surprised you chose to ignore that and describe it in the light that I have problem with someone challenging the benchmarks.

I am yet to come across figures that can challenge the current one.

I have no idea how people convince themselves to contribute to open source and/or participate regularly in online discourse. It’s basically working really hard for the easiest-to-offend and least-likely-to-appreciate-anything-you-do people on earth, so they can throw shade at everything you do, and then get mad because you did something different than they would have done.

FWIW, I appreciate everything you all are doing, even the stuff I’m not using right now. You all don’t hear it enough, and you’re certainly not getting paid enough for what you’re doing.

I contribute loads to open source too.

It’s actually not that hard to do so without talking trash about other projects. In fact I find those kind of comparisons are often the laziest and least researched ways of promoting a particular project as anyone who’s spent any time dissecting other peoples work in detail (not just running ‘ps’ in another TTY) will usually gain an appreciation for the hard work and design decisions that have gone into competing solutions.

But that’s just the opinion of one guy who has been contributing to the open source community for the last two decades. ;)

> talking trash about other projects

Cleverly fabricated and blown out of proportions. Yes, 2 decades of rich experience sure teaches that!

To clear the context for others, you are talking about a list of performance numbers here, and as I said, I am yet to come across figures that can challenge the current one.

The author of the other project did challenge them though. Hence this entire thread ;)

Yes, and he has received pointers on how he can reduce memory usage in his program (an issue he mentioned exists).

Actually what he said was he knew memory was a problem but it is designed for a different use case. The key part being the bit you’ve conveniently left off.

Your comments come across as very arrogant (particularly considering you’ve not reviewed the other projects’ source code) and intentionally deceptive too. While I’m normally the first to defend open source developers - because I know first hand how thankless it can be at times - you’re doing to others exactly what you ask not to be done to yourself. Which is not fair at all.

Anyhow, this is all a sorry distraction away from what should have been the real point of this HN submission; and I’m angry at myself for allowing myself to get dragged into this whole sorry affair. so I’m going to log off HN for the rest of the day to avoid any further temptation.

I do sincerely wish you the best with this and your other projects. It should be the code that speaks for itself rather than any other politics and that’s not been the case here for either your project nor the others it has been compared against. But let’s move on and get back to writing code :)

> Actually what he said was he knew memory was a problem but it is designed for a different use case. The key part being the bit you’ve conveniently left off.

No, read the other thread where I said his use case can be satisfied at much less memory usage. There's nothing arrogant in suggesting there's a more efficient way to achieve something.

Yes, let's get back to work. Loads of feature requests to cater to, thanks to this share. :)

> What I actually care most about a file manager is how they perform on mounts with low IOPS

Purely technical questions (let's put `being very snarky` and `dishonesty` and my other irrelevant personal traits aside):

- How does "low IOPS" affect readdir()/scandir() and lstat64() in two C utilities differently?

- What else would be affected?

It’s about how and when they get used. Eg if you’re running on a lower-performing mount then you might want to rely on caches more than frequent rescans. What I would often do in performance-critical routines running against network mounts was store a tree of inodes and time stamps and only rescan a directory if the parent’s time stamp had changed. It meant I’d miss any subsequent metadata changes to individual files (mtime, size, etc) but I would capture any new files being created, which is the main thing I cared about, so that was the trade-off I was willing to make.
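A minimal sketch of that parent-timestamp scheme (a hypothetical `DirCache` class, not taken from any of the tools discussed): cache each directory's listing keyed by the directory's own mtime, and rescan only when that changes. A directory's mtime is bumped when entries are created, deleted or renamed, so those are caught; in-place metadata changes to individual files are not, which is exactly the trade-off described above.

```python
import os

class DirCache:
    """Sketch: rescan a directory only when its own mtime has moved.
    Misses metadata-only changes to files inside it (their mtime does
    not touch the parent), but catches creations/deletions/renames."""

    def __init__(self):
        self._cache = {}  # path -> (mtime_ns, entries)

    def listdir(self, path):
        mtime = os.stat(path).st_mtime_ns
        cached = self._cache.get(path)
        if cached is not None and cached[0] == mtime:
            return cached[1]          # cache hit: skip the rescan
        entries = sorted(os.listdir(path))
        self._cache[path] = (mtime, entries)
        return entries
```

Note the timestamp granularity depends on the filesystem (some only store whole seconds), which is one reason such a cache is a heuristic rather than a guarantee.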

There’s no right or wrong answer here though. Just different designs for different problems. Which was also the point the other developer was making when he was talking about his memory usage.

> There’s no right or wrong answer here though.

The processor cache plays an important role which you are ignoring.

External storage devices: most of the time they are write-back and even equipped with read-ahead. Yes, I know there are some exceptions but if you are write-through non-read-ahead you _chose_ to be slow in your feedback already and this discussion doesn't even apply.

Network mounts: cache coherency rules apply to CIFS as well. And again, if you _choose_ to ignore/disable, you are OK to be slow and this discussion does not apply.

If `nnn` takes n secs the first time, another utility will take around the same time on the first startup (from a cold boot).

Now the next scans where you go into subdirs would be much faster even in `nnn` due to locality of caching of the information about the files (try it out). The CPU cache already does an excellent job here. And if you go up, both `nnn` and the other utility would rescan.

> point the other developer was making

Yes, he was saying - my memory usage may be 15 times higher because of storing all filenames (in a static snapshot!!!), but you are dishonest if you show the numbers from `top` output without first reading my code for an education on my utility.

I’m not sure what you mean by processor cache here. The processor wouldn’t cache file system metadata. Kernels will, but that is largely dependent on the respective file system driver (eg none of the hobby projects I’ve written in FUSE had any caching).

Different write modes on external hardware also confuses the issue because you still have the slower bus speeds (eg a USB2 for an older memory stick) to and from the external device than you might have with dedicated internal hardware.

> The processor wouldn’t cache file system meta data

What _file system metadata_? The processor doesn't care what data it is! I am talking about the hw control plane and you are still lurching at pure software constructs like the kernel and its drivers.

All the CPU cares about is the latest data fetched by a program. The CPU cache exists to store program instructions and _data_ (no matter where it comes from) used repeatedly in the operation of programs, or information that the CPU is likely to need next. If the data is still available in the cacheline and isn't invalidated, the CPU won't fetch it from an external unit (so a bus request is not even made). _Any_ data coming to the CPU sits in some Ln cache, source notwithstanding. The external memory is accessed in case of cache misses. However, the metadata these utilities fetch is very small, so the probability of misses is greatly reduced. Moreover, your hypothetical utility also banks on the assumption that this data won't change and that it wouldn't have to issue too many rescans to remain performant.

It's the same thing you see when you copy a 2 GB file from a slow storage to SSD and the first time it's slow but the next time it's way faster.

You can see it for yourself. Run `nnn` on any external disk you have (preferably with a lot of data), navigate to the mountpoint, press `^J` (1. notice the time taken), move to a subdir, come back to the mountpoint again (2. notice the time taken). You would see what I mean.

> none of the hobby projects I’ve written in FUSE had any caching

On a side note (and though not much relevant here), all serious drivers (e.g. those from Tuxera, btrfs) maintain a buffer cache (https://www.tldp.org/LDP/sag/html/buffer-cache.html). They always boost performance. If our Ln cache misses, this is where we would get the metadata from, and _hopefully_ not from the disk, which is the worst case.

Yeah, that’s the kernel caching that (as I described), not some hardware-specific thing the CPU is doing. Not disagreeing with you that L1 and L2 caches exist on the CPU (and L3 in some instances), but they are much too small to hold the kind of data you’re suggesting. It’s really the kernel's freeable memory (or the file system driver - eg in the case of ZFS) where file system data - inc file contents too - will be cached. The CPU cache is much too valuable for application data to fill it with file system meta data (in my personal opinion; you might override that behaviour in nnn but I’d hope not).

However regardless of where that cache is, it’s volatile, it’s freeable. And thus you cannot guarantee it will be there when you need it. Particularly on systems with less system memory (remember that’s one of your target demographics).

If you wanted though, you could easily check if a particular file is cached and if not, perform the rescan. I can’t recall the APIs to do that off hand but it would be platform specific and not all encompassing (eg ZFS cache is separate to the kernels cache on Linux and FreeBSD) so it wouldn’t be particularly reliable nor portable. Plus depending on the syscall used, you might find it’s more overhead than an actual refresh on all but a rare subset of edge cases. As an aside, this is why I would build my own cache. Sure it cost more RAM but it was cheaper in the long run - fewer syscalls, less kernel/user space memory swapping, easier to track what is in cache and what is not, etc. But obviously the cost is I lose precision with regards to when a cache goes stale.

While on the topic of stale caches, I’ve actually run into issues on SunOS (or was it Linux? I used a lot of platforms back then) where contents on an NFS volume could be updated on one host but various other connected machines wouldn’t see the updated file without doing a ‘touch’ on that file from those machines. So stale caches are something that can affect the host as well.

> but it is much too small to hold the kind of data you’re suggesting

My toy has a 3 MB L1 cache, 8 MB L2. You'll notice from the same figures that the resident memory usage of `nnn` was 3616 KB. And in the default start (not du) right now the figure is:

      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                
    23034 vaio      20   0   14692   3256   2472 S   0.0  0.0   0:00.00 nnn -c 1 -i
So it required around 350 KB for `du`. Oh yes, it can be cached at will.

> check if a particular file is cached and if not

I am a userland utility. I don't add deps on fs drivers (and maintain redundant code... I discourage code bloat as much as feature bloat) when standard APIs are available to read, write, stat.

> However regardless of where that cache is, it’s volatile, it’s freeable.

Yes!!! The worst case! And the worst case for you is where all dirs have changed. There you issue rescan, `nnn` issues rescan. `nnn` is accurate because it doesn't omit independent files for _personal_ optimization preferences. You do. You show _stale_, _wrong_ info in the worst case.

3MB is not enough to store the kind of data you’re talking about. In your example you were copying files from an SSD to a removable device; what happens if that file is 4MB big? Or 20MB?

Some program executables are multiple megabytes in size these days, your CPU cache is much more valuable storing that than it is storing a random 20MB XLSX file you happened to copy half an hour earlier. ;)

> You do. You show wrong info in the worst case.

Yes, I’d repeatedly said that myself. However I was making a case study rather than giving a lecture on how your tool should be written. Remember when I said “there is no right or wrong answer”? (You have ignored my point that you cannot guarantee the kernel's / fs driver's cache isn't also stale though. That does happen in some rare edge cases.)

My point is you shouldn’t be so certain that your method is the best, because there are plenty of scenarios where accuracy and low memory usage are more costly (eg the fastest file copying tools are those that buffer data - costing more RAM during usage). Plus let’s not forget that syscalls are expensive too. But that’s the trade-off you take, and that is what is advantageous for the purpose you’re targeting nnn for.

So there really isn’t a right or wrong answer here at all, and memory usage is in fact only a very small part of the equation (which is what the other project's author was trying to say).

> Some program executables are multiple megabytes in size these days

I have shared the figures from my system right now in my earlier comment. The memory required for the meta info is eminently cacheable.

And yes, let's call it a day now. In fact, I am happy we had a very informative discussion towards the end. This is what I expected from the other dev and from you from the very beginning. I am always ready for a productive, non-abusive technical discussion. And if I someday find `ncdu` takes comparable or less memory than `nnn`, I will update that data verbatim as well. But I wouldn't take an uncalled-for shitstorm in public forums from strangers lying low.

But that’s what you’re storing in the system memory, not CPU cache. Ok, let’s say the kernel does cache recently accessed memory in the L2 cache; you’re still competing with everything else - particularly on a multi-user or multi-threaded system. So that is not going to stay in L2 for long.

You’re making so many assumptions about the host platform and they simply don’t stack up for the majority of cases - it’s just that most systems are fast enough to hide your incorrect assumptions.

Also your understanding of how file system caching works is completely off. Granted that’s not something most normal engineers need to worry about, but given the claims you’re making about nnn, I would suggest you spend a little time researching the fundamentals here.


> I am happy we had a very informative discussion towards the end. This is what I expected from the other dev

That was exactly what the other dev was doing. You just weren’t ready to listen to him - which is why I started making comments about your “narky” replies. ;)

Well, I shared the numbers, I gave you ways to test it. We are done here.

But you’re not proving that the data you expect to be in L2 actually ends up in L2, let alone persists long enough to be recalled from L2 when you do your rescans. Your test is crude and thus doesn’t prove anything. Which is what we have all been telling you right from the start!!!!

You cannot just throw numbers up and say “look, I’m right” if the test is flawed from the outset.

But yes, it’s probably better we do call it a day.

I expected you would run the test. Anyway, here's the data, and it aligns with my expectation and figures.

  ## Fired `nnn` in my home dir.
     - all cached, no major page faults
  ~$ /usr/bin/time -v nnn
  	Command being timed: "nnn"
  	User time (seconds): 0.00
  	System time (seconds): 0.00
  	Percent of CPU this job got: 0%
  	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:03.77
  	Average shared text size (kbytes): 0
  	Average unshared data size (kbytes): 0
  	Average stack size (kbytes): 0
  	Average total size (kbytes): 0
  	Maximum resident set size (kbytes): 3400
  	Average resident set size (kbytes): 0
  	Major (requiring I/O) page faults: 0
  	Minor (reclaiming a frame) page faults: 274
  	Voluntary context switches: 6
  	Involuntary context switches: 1
  	Swaps: 0
  	File system inputs: 0
  	File system outputs: 0
  	Socket messages sent: 0
  	Socket messages received: 0
  	Signals delivered: 0
  	Page size (bytes): 4096
  	Exit status: 0
  ## Fired `nnn` on an external disk root in du mode, went into a subdir, came back to root.
     - no major page faults
     - File system inputs: 14680
     - 220 KB cached extra
  ~$ /usr/bin/time -v nnn -S /media/vaio/49462fdf-010c-40cc-89db-ef125e7dae99/tmp/
  	Command being timed: "nnn -S /media/vaio/49462fdf-010c-40cc-89db-ef125e7dae99/tmp/"
  	User time (seconds): 0.02
  	System time (seconds): 0.27
  	Percent of CPU this job got: 3%
  	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:08.30
  	Average shared text size (kbytes): 0
  	Average unshared data size (kbytes): 0
  	Average stack size (kbytes): 0
  	Average total size (kbytes): 0
  	Maximum resident set size (kbytes): 3620
  	Average resident set size (kbytes): 0
  	Major (requiring I/O) page faults: 0
  	Minor (reclaiming a frame) page faults: 337
  	Voluntary context switches: 576
  	Involuntary context switches: 2
  	Swaps: 0
  	File system inputs: 14680
  	File system outputs: 0
  	Socket messages sent: 0
  	Socket messages received: 0
  	Signals delivered: 0
  	Page size (bytes): 4096
  	Exit status: 0
  ## Fired `nnn` again (around 2 mins later, I was formatting the above text) on the external disk root in du mode.
     - File system inputs: 0
  ** So even between 2 distinct instances of nnn, _all_ the data fetched in the first instance remained cached.
  ~$ /usr/bin/time -v nnn -S /media/vaio/49462fdf-010c-40cc-89db-ef125e7dae99/tmp/
  	Command being timed: "nnn -S /media/vaio/49462fdf-010c-40cc-89db-ef125e7dae99/tmp/"
  	User time (seconds): 0.00
  	System time (seconds): 0.01
  	Percent of CPU this job got: 0%
  	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:02.31
  	Average shared text size (kbytes): 0
  	Average unshared data size (kbytes): 0
  	Average stack size (kbytes): 0
  	Average total size (kbytes): 0
  	Maximum resident set size (kbytes): 3616
  	Average resident set size (kbytes): 0
  	Major (requiring I/O) page faults: 0
  	Minor (reclaiming a frame) page faults: 329
  	Voluntary context switches: 5
  	Involuntary context switches: 0
  	Swaps: 0
  	File system inputs: 0
  	File system outputs: 0
  	Socket messages sent: 0
  	Socket messages received: 0
  	Signals delivered: 0
  	Page size (bytes): 4096
  	Exit status: 0

Your calculation (`400K * 8 bytes = ~3 MB`) is way off. What would be the point of storing only the size? You need to map it back to the file.

60MB gives you about 150 bytes for file path or file name and its size, which sounds plausible.

Maybe you shouldn't store the file path but just the name, and a parent pointer. That brings you down to 8 bytes size + Parent pointer + a short string. Regarding the string you can go for offsets into a string pool (memory chunk containing zero terminated strings).

So I think 50 bytes per file is easy to accomplish if (name/parent/path) + size is all you want to cache. For speed-up, I would add another 4 or 8 bytes index to map each directory to its first child.

I can think of at least 3 possible algorithms to use much much less memory even with a static snapshot of the complete subtree. And all of them are broken because the filesystem is supposed to stay online and change. It's realistically useful to scan an external disk to find the largest file etc., but not accurate on a live server, a desktop with several ongoing downloads, video multiplexing etc.

Earlier in the thread you suggested that it's hard to justify ncdu using 60MB while it takes only 3.5MB to store 400K * 8-byte numbers. The number you came up with is just silly and overlooks the actual complexity of the problem.

Given that you are making an implicit judgement about the other program, don't be sloppy with your estimates.

> don't be sloppy with your estimates

I'm not. You can, and I'm sure eventually you will arrive very close to the approximation.

I'd been a big time fan of `ncdu` for years and even wrote in an e-journal about it once. Maybe that's why the sharp adjectives became more difficult to digest. Anyway, good luck!

About vifm (don’t know if nnn has this):

- split screen (files left, file contents on the right)

- customise file viewers

- quick file search

- customise key-bindings

Every time I see interesting file managers it makes me want to try and like them, but I always end up abandoning them after either a few minutes or a few hours. I don't know why. I'm quite sure I could be more productive using a file manager, even a graphical one. But for more than ten years the only file manager I have actually used is Bash and the GNU coreutils.

Author of `nnn` here. It would really help if you let us know what you are looking for in a file manager. `nnn` is under active development for more than a year now and we are very open to constructive feedback and feature requests.

I actually don't know what I'm looking for. I'm quite happy without one. It just feels strange, as file managers seem to be most people's main entry point to their computer.

Two of my most used shell commands are "j" (https://github.com/wting/autojump) and "fd" (https://github.com/sharkdp/fd).

Autojump makes it easy to quickly jump into any directory I've visited and then I can open my editor or Finder (file explorer) from there. Would be great if I can be inside nnn and jump to directories using autojump.

fd is very useful when I want to directly open a file. I might be in a project's root directory and want to edit a file a few levels down. Currently I use fd to locate the path of the file and then copy-paste it into the shell. If `fd` were combined with nnn, I could type a few letters of the file I expect and jump to its parent directory, with the file highlighted.

Is this workflow possible in nnn, or can it be written using its scripting?

autojump: `nnn` comes with bookmarks for frequently visited dirs. You can look up the list anytime.

fd: `nnn` integrates seamlessly with fzy (another blazing fast C utility).

Moving or copying files a single time might be a bit faster using a GUI (or TUI). But when you factor in starting the file manager and navigating to the right place, it's not a big difference.

And when you need to do a more complicated operation you're looking at a shell oneliner vs. multiple (possibly repetitive) manual steps in the UI.

If simple copying of files made up a great deal of my daily work, I'd certainly use a file manager (if I couldn't write a script to do it).

The primary benefit of any GUI over a functionally equivalent CLI is the improved discoverability, especially for rare, one-off tasks. CLIs over GUIs primarily offer better repeatability.

Any comparison between the two systems should stem from there: you’ve only offered half of it.

Eg getting a remote virtual disk set up in OS X's Finder is much easier than on the terminal, without googling/general research, for a one-off drive. Getting 50 such disks up and going is far easier on the terminal. (CLI has a better O(), but a worse constant factor.)

I somewhat disagree. Yes, if the GUI has five possible actions and you're only familiar with one of them, it's much easier to learn to do the other four things with the GUI. But it's tedious to do something the GUI wasn't designed for.

Once the GUI reaches hundreds of actions it's no longer easy to find the functionality you're looking for.

I disagree partly here: hitting tab twice in a shell usually lists available commands, possibly with some prefix. A GUI has the advantage of context, in that eg menu items of the disk utility make sense as you know they’ll be some operation on disks.

On the CLI, eg, I don’t remember the mkfs command for a file system, so I type mkfs and hit tab twice.

It could be better though, of course.

That is the problem right there, you need to know mkfs exists to start with.

apropos file system

You'll be searching (or using previous knowledge) for what you need to format it as, anyway.

Well one needs to know the Disk Utility exists as well, or disk management thingy in Windows.

Sure, but that is available on a simple right mouse click on the device.

In any case you are focusing too much on mkfs; I can think of plenty of other examples, many of which aren't even portable across UNIX environments.

While GUIs at least can offer navigation and visual cues as means to discover features.

One issue is that cli apps expect a man page to be available, and that you’ll be willing to read it, so options by name alone tend to be more obtuse. Better if you already know it, worse when you don't.

Eg ffmpeg is confusing no matter what, but the cli is particularly obtuse in how options interrelate (or even the format string required for them!). A gui for ffmpeg will naturally have fewer options (less expressive), will group related options into windows, and might even do things like visibly disable options based on the current flags set, or even offer a preview mode.

When new to the program, you could probably get by to a certain degree on an ffmpeg gui for simple operations; even the most basic CLI action will be difficult with just the options that appear on a double tab.

There’s simply a lot more a gui can do to help the learning/interactive process than a cli can (tui is a different story: its just a gui that happens to be in your terminal, and offers the same positives/negatives that any other gui does).

A major issue with GUIs is that they are not scriptable [1], at least in the way CLIs are, with a bash-like shell and all the stuff of the Unix philosophy [2]: I/O redirection, pipelines, wildcards, and other Unix special characters that expand to, or mean, something (all of which you can use in your command pipelines).

[1] I mean, I know there are some tools to automate GUI tasks, like AutoHotkey, AutoIt, etc., but due to various window-related issues, they are fragile, I've heard.

Not saying that this makes CLIs superior for all uses, of course. Just better for some.




I am surprised how fast I have become when using Double Commander. I really needed to learn the keyboard shortcuts, but that paid off. It is really made for keyboard control, not mouse-based interaction. Seemingly minor details, like search-by-typing and selection being a toggle (space bar), help me a lot.

Very occasionally I use ranger to explore directories whose contents I've forgotten; but it is so infrequent that I'm often halfway through exploring with the shell before I remember it is better and faster with ranger

I agree; after 20 years I ended up with Bash and binutils.

Some time ago I did a survey looking for a neat, nice terminal fm. After trying a lot of different kinds I ended up with vifm.

For copying multiple files regularly, I recommend using Python Fabric.

I always feel the need for an ncurses-based SQL database explorer, with list/search/add/edit/delete utilities.

Always miss it. But never had the time to write it myself.

Maybe try visidata (http://visidata.org); it handles sqlite and postgres among others. But personally I've only used it for csv/tsv/excel files, which it handles beautifully, with vim-style bindings.

Use midnight commander. It has the command line embedded in it, so you can still use all your shell wizardry. You get the best of both worlds.

You can fallback to the subshell in `nnn` with `!` or `^]`.

In the case of mc, ^O switches to the same background shell, even from the editor. So variables, functions, and command history are all the same. Moreover, I can run a command in the shell and return to mc or mcedit to continue work while the shell command is still running (with the limitation that the current working dir will not follow the selection in mc). It feels like a real shell, not a subshell.

In `nnn` selection works seamlessly between multiple instances.

The shell in `nnn` is also a real shell with your complete env settings. So it remembers everything.

I can't think of a workflow where you are working both in the shell and the file manager when you have selection in place. It's a convenience, yes. But both are exclusive in nature.

Will the subshell have all your shell aliases?


Look into ranger. I occasionally use it to quickly look at file contents. It's a nice thing to use for 10 seconds a few times a week

I can recommend ranger, however I am not consistent with it.

If you're open to a graphical option, try "gentoo" (not the distro, the GTK+ file manager).

Disclaimer: I wrote it.

Very nice! I often end up running MidnightCommander but this would be enough in most cases. I really like the idea!

+1 for mc! Using it for almost 2 decades now. First on linux, now on macOS.

I use it on my RPis, but now I am going to try nnn if it runs on ARM. MC is legendary though, and the fact that it supports multiple transfer protocols is very helpful.

`nnn` integrates with lftp. I have added a wiki page on how to automate transfers or copy selection easily.

Just a minor note, `nnn` is available on Homebrew.

My friends have been using it for a few days and they seemed quite happy with it. The only thing that stops me from using it is that I need thumbnails, so I am stuck with spacefm for now.

Any particular reason spacefm stands out to you over the rest?

I tried multiple file managers in the past. Dolphin is really good but it has a lot of KDE dependencies. Thunar had quite a few bugs that annoyed me; for example, when I configured backspace to go to the parent directory it activated even when I was editing the path. In the past I used Nautilus, until it replaced the quick search feature with a recursive search thing and removed the ability to change keybinds; plus I kept getting crashes and a lot of lag when visiting a directory with 10k+ images later on. Most of the other file managers I tried either did not have thumbnails or did not support tabs.

Spacefm does not have any of the above issues, which is why I am currently using it.

I use Thunar and avoid Nautilus pretty much for the same reasons. I like Thunar because it keeps things simple, and I have bash for anything complex, which a lot of my file operations tend to be. I'll check out spacefm and see what I think, thanks for the recommendation.

doublecmd, other than two panels, also has tabs and thumbnailed directory listings.

Thumbnails? Perhaps “imgcat” can help?

Will look into spacefm though.

Emacs Dired + Tramp -> no need for any other tool.

One super awesome Dired feature is C-x C-q to edit a folder like a file (all Emacs commands work here, like macros which are easier & faster to get right than most shell commands). To save your changes after editing, press C-c C-c.

`nnn` is not tied to a single editor. It can work with anything. Last night I ended up writing a plugin for vim as one of the users requested. So probably someone can contribute a plugin (?) for emacs as well. Unfortunately my understanding of emacs is limited. And that's probably one of the problems with editor-specific plugins.

Well, emacs used like the GP suggests is not really an editor, and is more akin to a text-based application platform (with modes to browse the web, read your mail, play music, etc).

Aka EmacsOS!

I would love to hear other dired features that people use. I'm just now getting a comprehensive understanding of how great dired is.

Does it support file transfer over ftp, sftp, s3, etc?

Yes, that's where tramp comes in, although s3 might need a separate plugin.

For me the best part of dired is "do-what-i-mean" mode, dired-dwim. You basically get the output of `ls -la` and edit it as a text file. When you save, dired figures out changed names/permission bits/etc and does what you mean to those files.

Does dired have some sort of picture browse mode/functionality? That would be awesome!

I agree, that solution has endless possibilities; I can’t think of any feature it doesn’t have.

`vidir` does the same in vim and readily integrates with `nnn`.

Except it takes a full second to open emacs.

Why would emacs ever be closed?

Anyway, if you run emacs as a server, you can connect to it with emacsclient in less than a second.

Even starting emacs from scratch can take less than a second, if your config is small enough or optimized enough.

... never close it :)

Lack of file previews makes it a deal breaker. Ranger is a bit slower, but it does so much more.

I agree. Would be very nice if this was an option at least (that could be turned on by configuration). On the design considerations [1] they state:

> Previewing large files on selection is a wastage of processing power and time. A slow/faulty disk or over the network access makes the experience very painful.

You only need the first lines of the file, not the whole file, so this argument is not valid.

> It's also a risk to privacy and confidentiality. Users should see only what they explicitly want to see.

Not valid if preview is off by default. A hotkey could toggle preview mode.

Yes, you can open each file in a pager. Exit the pager, go to the next file and repeat. That is slow and painful. Being able to browse a code repository with file preview is the main reason I will probably not use this and stick with ranger. It's a deal breaker for me.

I really like that it's fast (I've tried using ranger on the USB Armory – it was very slow), and if preview was added as an option I would seriously reconsider.

[1]: https://github.com/jarun/nnn/wiki/nnn-design-considerations

Yes, `nnn` doesn't have file previews. It needs the user to explicitly open the file to view it. Or open a dir of images in something like sxiv.

> nnn is probably the fastest and most resource-sensitive file manager you have ever used.

Volkov Commander is written in assembler; the binary is 32KB with zero deps and memory requirements far below 1MB. It has a better UI and more features.

It's for DOS and is shareware.

> better UI

I doubt that from the screenshots. But it's a personal preference.

How does this compare to https://github.com/ranger/ranger ?

I started writing `nnn` because of 2 reasons:

- I needed something that _performs_ on the Pi which I was setting up as a _media player_ for my 3-yr-old.

- `ranger` wasn't an option because of slow python deps, and I found ranger's changing/shifting panes very confusing. Please don't get me wrong here, I have several popular utilities in python. But when it comes to low-power, low-freq devices, the performance of interpreted or scripting languages is questionable.

`nnn` also has the `du` and `navigate-as-you-type` modes which aren't readily available in `ranger` (AFAIK; I don't use it). Then you get to copy (un)quoted file path(s) even without X, `vidir` integration, terminal locker support and finally, all the painstaking optimization.

The binary size is ~60KB (much smaller than the C source file that generates it). The requirements are minimal too - a libc (say, musl) and a curses library.

Copying file path(s) between two instances of `nnn` works seamlessly (with or without clipboard).

I think the philosophy behind `ranger` and `nnn` are different and they were written to cater to different use-cases and a different audience at the very beginning. I had to write `nnn` on my desktop and at some point I noticed it works like a charm with my drop-down terminator session (I never lose any context anymore) so it became a desktop software.

more minimalist, smaller, a tiny bit more responsive. Probably a lot less capable since ranger seems quite featured.

In fact every operation would be slower in an interpreted or scripting language. It becomes evident on a Raspberry Pi, the Linux subsystem for Windows (even on an i7, yes), when opened within vim as a file chooser, or in a Termux env (Android). `nnn` finds its users in all these constrained environments.

Also `nnn` is not a feature by feature replacement for ranger. The last thing we want is a feature bloat. And we are very open to useful feature requests. Let us know if you are missing something.

We really should all develop on slow machines.

develop > test

That too. I did so for a website: if it loads fast and renders neatly under elinks, I'll keep it.

Why doesn't nnn remember where you are in a directory? It seems natural to me that "hl" should preserve your state, e.g. "hhll" should take you back to the same place you were in.

Was it an intentional design choice to make "l" put you back on the first entry in the directory? Why?

Yes, we didn't want to remember the last file selected in every directory visited. In other words, it's like remembering the state of every directory visited. `nnn` selects the parent dir when you come out of a child dir.

You don't need to remember the state of every directory visited. The functionality GP is asking for is similar to the forward button in the web browser. I.e., if you're starting in `/a/b/c/d`, to be able to go

  /a/b/c/d -> /a/b/c -> /a/b -> /a/b/c -> /a/b/c/d
However, if you instead go into `/a/b/x` after the first two steps, the "forward history" would be lost.

Ah, OK! I thought every file in every dir. To do this you would have to remember each absolute path which is 4K max per path (or allocate dynamically and free every time when changed). We try to keep `nnn` light on memory.

I must admit this is the first time I am hearing it's a problem. Probably because of the abundance of other navigation options available in `nnn`.

Reminded me of xtree gold


You would have my attention if it could rename files like I'm used (on Emacs) with Dired, specifically the Wdired mode... ^__^;

`nnn` supports in-place file renaming. It also comes with `vidir` integration.

Briefly reading the description, vidir seems similar to Emacs' Wdired and I suppose it's a useful and valid choice, but I decided a long time ago that Vi/Vim is a hard pass for me... ^__^

How is this better than Midnight Commander?

Author of `nnn` here. I'm not very familiar with MC (sorry about that) as I don't use it so can't provide a meaningful/unbiased feature comparison. However, you can find the list of features in `nnn` here: https://github.com/jarun/nnn#features

In addition, last night I added a vim plugin to use `nnn` as a file picker within vim.

I understand you haven't used it, but for many, mc is the gold standard when it comes to features and ease of use. I've been using mc/nc for 30+ years and have never found a file manager as good (although I'll admit I stopped looking at them a decade ago).

It is the file manager many will compare against. I would suggest you request volunteers to put a comparison page for it. That alone will give you ideas on features you may want to add.

Thank you for your reply, and no worries about not being familiar with mc. I addressed the question generally to HN, hoping for some feedback from anyone who happens to have used both.

EDIT: One nice thing that mc has is that it lets you open a remote shell via ssh. I use that a lot (I find scp syntax a big pain) so I'd miss that a lot in nnn. Sorry if that feature is included and I failed to spot it.

> Sorry if that feature is included and I failed to spot it.

Don't be, it's not in yet. It's on the ToDo list. I would really appreciate a PR.

A PR? Well, I wouldn't completely rule it out, but my C is really rusty and I am being buried under a ton of stuff I'm expected to do (I'm doing a PhD, so I basically have no free time).

Is it just you on this project?

It was for anyone who reads the comment.

I went through the homepage of mc to understand the features, and it seems the features available in mc but not in `nnn` are remote protocol integration (like FTP or SSH) and file+dir compare.

However, I do not see several `nnn` features in mc as well:

- left, right arrows don't work

- relies heavily on mouse (menu driven), `nnn` doesn't need the mouse

- couldn't open a text file without extension

- filter workflow needs more key/mouse presses

- no du mode

- no navigate-as-you-type

- no terminal locker

- no vim plugin

- no shortcuts like `-`/`&`/`~`

- no pin and visit dir

- no media info

- no cd on exit

I might have missed certain things due to my lack of familiarity with mc.

I think you can do most of those things in mc. The following is from my experience running mc 4.8.18 on Win 10, inside powershell (the executable is from cygwin).

-- Arrow keys work in the menus - press F9 to bring them down, then you can navigate around them with the arrow keys. In the main view, tab moves you from one pane ("window") to the other.

-- I don't think you can use the mouse in mc. It's text based. The menus are all operated by key presses.

-- I can open a text file without extension, on the setup above. I navigate to the file, then press F3 and it opens (in the mc file viewer which I don't generally use).

-- If I understand navigate-as-you-type correctly, yes, mc doesn't have that.

-- Yes, I don't think there's a vim plugin.

-- (No shortcuts) Well, the shortcuts are all on the function keys. Also, after F9 each menu item has its own shortcut (its first letter, which is highlighted).

-- (No pin & visit dir) There's the possibility to bookmark paths. Is that what you mean?

-- (No media info) I think you're right about that.

-- cd on exit is configurable; e.g. instructions:

-- Sorry - I don't know what "filter workflow" and "du mode" are, and I'm not sure what you mean by terminal locker.
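On cd-on-exit: the usual way to get it with nnn is a small wrapper function sourced into your shell config. This is a hedged sketch only - it assumes the `NNN_TMPFILE` convention from nnn's docs (nnn writes a `cd '<last dir>'` line to that file on quit), and the `n` function name is my own choice, not something nnn mandates:

```shell
# Sketch of cd-on-exit for nnn, for a bash/zsh rc file.
# Assumes nnn writes `cd '<last dir>'` into $NNN_TMPFILE on quit.
n () {
    export NNN_TMPFILE="${TMPDIR:-/tmp}/nnn_lastdir.$$"

    nnn "$@"

    if [ -f "$NNN_TMPFILE" ]; then
        . "$NNN_TMPFILE"      # runs the `cd <dir>` line nnn wrote
        rm -f "$NNN_TMPFILE"  # clean up so stale files don't linger
    fi
}
```

With that in ~/.bashrc, running `n` instead of `nnn` leaves the shell in the directory you quit from.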

mc's been around for a while. It's a good piece of software. I think I might have been a bit unfair asking for a comparison, after all nnn is not an Orthodox File Manager, so it occupies a completely different space than mc.

Apologies again. I didn't mean to make you defend your work, and I didn't mean to attack it.

I remember using the mouse a lot in MC, for activating the Fx actions, the command history and some dialogs (...I think in the virtual consoles you needed something called gpm to make it work, uff. But it works in X, too).

There's one more thing. I was trying mc to find out more about the features. Unfortunately, I found the multi-level pop-up menus in several workflows counter-productive.

One of the important goals behind `nnn`: extremely smooth workflows that don't get in your way.

TFA has a benchmark comparing Midnight Commander to NNN.

I'm sorry, what is TFA? The internet is no help.

Alternatively, could you please link to details about the comparison?

"TFA" = "the <expletive deleted> article".

wenderen is indicating that the question you raise is addressed on the nnn Github page itself: https://github.com/jarun/nnn#performance

Performance alone doesn't really answer the question, or e.g. we'd all be using assembly for our programming.

As explained in another comment, I'm interested in the experience of users of both programs. It's clear that nnn is not offered up as a specific alternative to mc, of course, but I consider mc a staple for file management in the terminal and I think that's not unreasonable of me, or controversial.

Actually, the most used orthodox file manager [1] for DOS in Eastern Europe, Volkov Commander [2], is written purely in assembly language.

[1] http://www.softpanorama.org/OFM/index.shtml

[2] https://en.wikipedia.org/wiki/Volkov_Commander

Urban Dictionary is generally pretty good when a word/term isn't defined in a more general dictionary, though in some cases what you're looking for might not be the first entry.


Aside: a quick shortcut url for the free dictionary;



Why _for programmers_? What secret features support programming in particular?

I see it as using commands vs direct manipulation, keyboard vs mouse, and - as a result - prioritizing efficiency at the cost of a steeper learning curve. Pretty much like vim vs notepad.

Works great in Termux on the Gemini. Plus, there's a decent man page.

Thanks for the appreciation! I myself have the habit of using `nnn` on Termux.

Does anyone who has used both nnn and ranger (the one I use, and am very happy with) care to explain what are the selling points of that fm compared to this one?

Check other threads and replies above

How do I launch an executable file from nnn? For example:

  > 2018-11-15 09:44   167.8M* Krita-4.1.5-x86_64.AppImage*

There was a feature to run executable scripts in `noice` (on enter) but it was dropped in `nnn` as potentially dangerous. At this point you can spawn a subshell and launch it manually.
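To spell that out: press `!` in nnn to spawn a subshell in the current directory, then run the file by hand. The sketch below demonstrates the same two commands on a throwaway stand-in script (so the example is self-contained) rather than the actual AppImage:

```shell
# Stand-in file for the AppImage, so this runs anywhere.
f="${TMPDIR:-/tmp}/demo-app.$$"
printf '#!/bin/sh\necho launched\n' > "$f"

chmod +x "$f"   # same idea as: chmod +x Krita-4.1.5-x86_64.AppImage
out=$("$f")     # same idea as: ./Krita-4.1.5-x86_64.AppImage
echo "$out"     # prints "launched"
rm -f "$f"
```

Typing `exit` in the subshell drops you back into nnn afterwards.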

Is there any way to remap the key bindings?

Not without re-compiling.

Thanks. So it's not much use for non-qwerty'ers as is, I'll have a look at patching it.

For several operations, there are multiple keybinds. But I can think of something interesting: if you can come up with a keybind profile for non-qwerty, we can have a separate config for that set.

I'm stuck with a long-ago-forked version of vfu: https://github.com/cade-vs/vfu-dist Works well, as old-school as it may look.

Anyone have any idea of what I am supposed to use as a file opener on OpenBSD?

If you can confirm, please share, so I can update the program to use the default opener on OpenBSD.

I think it might be open(1): https://man.openbsd.org/file.1

Same as on macOS. But I need confirmation before making the change. Please raise an issue on the project page for the same.

Is xdg-open available?
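One hedged way to cope until the default is settled: probe for a known opener at runtime and fall back to the editor. The candidate names here (`xdg-open` from xdg-utils, `open`) are assumptions for illustration, not something nnn does today:

```shell
# Pick the first opener found on PATH; fall back to $EDITOR, then vi.
opener=""
for cand in xdg-open open; do
    if command -v "$cand" >/dev/null 2>&1; then
        opener="$cand"
        break
    fi
done
[ -n "$opener" ] || opener="${EDITOR:-vi}"
echo "using opener: $opener"
```

The `command -v` probe is POSIX, so the same snippet works on OpenBSD, Linux, and macOS.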

Any chance this has/might get support for Windows natively?

`nnn` needs ncurses. AFAIK, that's not available natively on Windows yet. I use it daily on the Windows Subsystem for Linux, and it also supports Cygwin.

Excellent point, thanks for the reply, I'll give it a spin on WSL.

Have fun!

Is there a browser-based file manager somewhere?

Local or remote?

Konqueror and w3m are both web-aware local file managers.

Still, nobody has made anything better than mc.

Does mc have a navigate as you type mode? Just curious.

I recommend vifm - or why not tmux, which is very configurable.

... and much much slower, yes.

I see you vouching for alternatives in every comment thread here. No issues. But please explain: what are you seeing in other utilities that's missing in `nnn`? I would be repeating myself, but we are very open to reasonable feature requests.

I did that in the main thread about vifm above.

Sorry, but my comments were invisible for a long time, then suddenly appeared. I thought something was wrong, then all the comments became visible. Hard to know if you don't know the HN comment system.

I didn't realize that. Thanks for the note!
