Hacker News | new | past | comments | ask | show | jobs | submit | tstack's comments

> Kinda neat but I had trouble using it. Not sure what it is doing or what it is even showing me.

Can you elaborate a little more? lnav behaves like a pager with the conventional hotkeys for basic stuff. I'm not sure what else you are expecting.

> Also a nitpick but the colors are quite garish

I enjoy colors, so there's a lot going on by default. Several themes are built in. You can switch to the "grayscale" theme by running:

    :config /ui/theme grayscale

Thanks. I didn't know what logs it had opened, or how to open others. It had menus and drop-downs, but I didn't understand what they were listing.

I need to read the manual, I guess. Not a big deal, but it should be obvious for a log viewer. That's why I recommended CUA, though I understand it is not so common on Unix.


Oof, sorry you had such a bad experience.

> but there is no obvious way to exit. I tried Q,q

It's not very responsive during initial indexing, which is something I need to improve. Pressing `q` should work to exit in general, though. Pressing CTRL-C three times in quick succession will force quit it.

It would help to know which version you tried. Things have gotten better over the years.

> I tried `man lnav` in separate terminal - but no man page is provided.

A man page exists, but it only contains basic information. The built-in help text is much more extensive and can be viewed by running:

    lnav -H

There is also the documentation website: https://docs.lnav.org/

> `ps` shows 3 processes which would not die with SIGTERM, have to `kill -9`.

Older versions of lnav would use readline for the prompt and had to run it in a separate process because of "reasons". More recent versions have a custom prompt and don't require the extra processes.


I've installed it from the Snap store.

re: man page - It looks like there is no support for man pages from the snap infrastructure. So, there's not much I can do.

The "stable" version of the snap is really old (circa 2023) at this point because I have been shy about bumping it. The candidate and edge versions are more recent and should be more usable.

Thanks for your time.


Yep, I would say the stiffest competition for lnav comes from the old tools[1]. I would just hope folks could have an open mind and give "new" things a chance (although lnav has been on GitHub for 17 years).

[1] - https://lnav.org/2013/09/10/competing-with-tail.html


> At that time lnav basically just kept everything in memory.

lnav has never really kept the contents of files in memory. It does build an index of every line in a file. One exception is that it will decompress small gzip files and keep them in memory, as a tradeoff against decompressing on the fly.
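To illustrate the general idea (this is a minimal Python sketch of line indexing, not lnav's actual C++ implementation): record a byte offset for each line once, then seek to fetch any line on demand, so only the index lives in memory rather than the file contents.

```python
def build_line_index(path):
    """Return the byte offset of the start of each line in the file."""
    offsets = [0]
    with open(path, "rb") as f:
        for line in f:
            offsets.append(offsets[-1] + len(line))
    return offsets[:-1]  # drop the offset past the final line

def read_line(path, offsets, n):
    """Fetch line n from disk using the precomputed index."""
    with open(path, "rb") as f:
        f.seek(offsets[n])
        return f.readline().decode()
```

The index costs a few bytes per line instead of the full line contents, which is what keeps memory consumption low even for large files.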

The memory consumption has never been a problem for me. So, it's not something I've ever focused on.


Speaking as the author, I too wish it was written in Rust. But I started it in 2007, when I needed to get practice with C++ for work. At this point, there's so much code in lnav that rewriting it would be a long process. There are some sub-components[1] that are written in Rust, though.

A new project called logana[2] is written in Rust and is headed in a good direction. Use/contribute to that if you're really interested.

[1] - https://github.com/tstack/lnav/tree/master/src/third-party/l...

[2] - https://github.com/pauloremoli/logana/


Thanks for the reply and the tip about logana.

As I mentioned in the following comment[1], that was meant more as a joke. Thanks for your work!

[1] - https://news.ycombinator.com/item?id=47514276


To elaborate on this, lnav (https://lnav.org) is always polling files to check for new data and will load it in automatically. It does not require the user to do anything.

As far as following the tail of the file: if the focused line is at the end of the file, the display will scroll automatically; otherwise, the display will stick to the current position. Also, if there is a search active, matches in the new data will be found and highlighted.
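A rough sketch of that polling loop (hypothetical Python, not lnav's implementation; `follow` and `on_new_line` are names invented here): check the file size periodically, and when it grows, read and deliver only the new bytes.

```python
import os
import time

def follow(path, on_new_line, poll_interval=0.25, stop_after=None):
    """Poll a file for growth and pass each newly appended line to a callback.

    stop_after limits the number of polling rounds (None = run forever),
    which is only here to make the sketch testable.
    """
    pos = 0
    polls = 0
    while stop_after is None or polls < stop_after:
        size = os.path.getsize(path)
        if size > pos:
            with open(path, "rb") as f:
                f.seek(pos)          # skip what was already delivered
                for line in f:
                    on_new_line(line.decode(errors="replace"))
                pos = f.tell()
        polls += 1
        time.sleep(poll_interval)
```

A real viewer would additionally decide, as described above, whether to scroll: auto-follow when the cursor is on the last line, otherwise hold the current position.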


> It’s a bit odd to use

I would say it's a bad UX and not just odd. I can't see any benefit to making it modal. It should just load new data as it becomes available without making the user do anything.


I tend to agree with ProZD's tier list[1] where "Kiki's Delivery Service", "Porco Rosso", and "Totoro" are at S rank. Those might also be a good introduction since they're pretty "normal".

[1] - https://www.youtube.com/watch?v=g_8uHtL6V0Y


> There's also a new "https boot", which is supposed to be a PXE replacement, but TLS certs have time validity windows, and some clients may not have an RTC, or might have a dead CMOS battery, and those might not boot if the date is wrong.

I think the lack of entropy right after boot can also be a problem for the RNG. But, maybe that has been solved in more modern hardware.


> ... push it into overload ...

Oh, oh, I get to talk about my favorite bug!

I was working on network-booting servers with iPXE, and we got a bug report saying that things worked fine until the cluster size went over 4 or 5 machines. In a larger cluster, machines would not come up from a reboot. I thought QA was just being silly; why would the size of the cluster matter? I took a closer look and, sure enough, was able to reproduce the bug. Basically, the machine would sit there stuck trying to download the boot image over TCP from the server.

After some investigation, it turned out to be related to the heartbeats sent between machines (they were ICMP pings). Since iPXE is a very nice and fancy bootloader, it will happily respond to ICMP pings. Note that, in order to do this, it would do an ARP lookup to find the address to send the response to. Unfortunately, the size of the ARP cache was pretty small since this was "embedded" software (take a guess how big the cache was...). Essentially, while iPXE was downloading the image, the address of the image server would get pushed out of the ARP cache by all of these heartbeats. Thus, the download would suffer since it had to constantly pause to redo the ARP request. So, things worked with a smaller cluster because the ARP cache was big enough to keep track of both the download server and the peers in the cluster.

I think I "fixed" it by responding to the ICMP using the source MAC address (making sure it wasn't broadcast) rather than doing an ARP.
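The failure mode can be modeled with a toy simulation (Python, entirely hypothetical; the actual cache size is not stated above, so the 4-entry capacity here is an assumption for illustration): a tiny LRU cache where each peer's heartbeat insertion can evict the image server's entry.

```python
from collections import OrderedDict

class ArpCache:
    """Toy LRU "ARP cache": maps IP -> MAC, evicting the least recently used."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.misses = 0

    def lookup(self, ip):
        if ip not in self.entries:
            self.misses += 1               # would stall on a real ARP request
            self.entries[ip] = f"mac-of-{ip}"
        self.entries.move_to_end(ip)       # mark as most recently used
        while len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        return self.entries[ip]

def simulate(cluster_size, cache_size=4, rounds=100):
    """Interleave image-server traffic with heartbeat pings from every peer."""
    cache = ArpCache(cache_size)
    peers = [f"10.0.0.{i}" for i in range(2, 2 + cluster_size)]
    for _ in range(rounds):
        cache.lookup("10.0.0.1")           # TCP segment for the boot image
        for peer in peers:                 # respond to each peer's heartbeat
            cache.lookup(peer)
    return cache.misses
```

With a 4-entry cache, a 3-machine cluster only misses while warming up on the first round, while a 5-machine cluster (plus the server, that's 6 addresses competing for 4 slots) misses on every single lookup, which is exactly the "works until the cluster grows" symptom.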


Yeah, broadcast with iPXE commonly has this issue; I've run into it in my career more than once.

