
Just added it to the showcase section https://github.com/alexpasmantier/television/wiki/Showcase if you wish to add any details :-)

Holy moly, forgot to add that too :p

tv isn’t intended to be a direct competitor to fzf, but since the two share similarities, here’s a quick breakdown of how tv differs:

- Batteries-included experience: tv is designed to work out of the box, with minimal setup.

- Smart shell integration: Autocomplete intelligently selects the right source for you.

- Interactive data source switching: You can change the data source on the fly without exiting the application.

- Centralized configuration: All settings are managed in one place, eliminating the need for custom shell scripts.

- Transitions feature: Enables interactive piping of results through multiple steps (e.g., git-repos > files > textual content in files).

- Built-in syntax-highlighted previews: More robust and integrated compared to configuring something like fzf --preview 'bat -n --color=always {0}'.

That said, tv is still an early-stage project (while fzf has been evolving for over 11 years). I’m also planning to draw inspiration from some of fzf’s excellent features in the future.


Great work!

From the README:

> is designed to be easily extensible

Can you elaborate a little on that?


Television is designed with a "framework-like" approach in mind. Here’s what makes it easy to extend (in a nutshell):

- Custom Cable Channels: You’re not limited to the built-in channels. You can create your own "cable channels"—essentially custom data sources—by tweaking a simple config file. As the project grows, the vision is for users to contribute their own channel recipes to the wiki, making it easy for others to pick and choose what suits their needs.

- Smart Shell Integration: Television integrates seamlessly with your shell, offering features like smart autocomplete and interactive piping of results. It’s also fully customizable, so you can tweak the smart autocomplete behavior or other features to match your workflow.

- Coming Soon: User-definable custom actions within cable channel recipes for an even more streamlined and tailored experience.

This probably doesn’t answer everything, but I’d be happy to dive into more details about any of the points above! :-)
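For illustration, a custom cable channel recipe could look something like the sketch below. The exact field names here are assumptions for the sake of the example, not necessarily tv's actual configuration schema:

```toml
# Hypothetical cable channel recipe (field names are illustrative and
# may not match tv's actual configuration schema).
[[cable_channel]]
name = "git-log"
source_command = "git log --oneline --color=always"
preview_command = "git show -p --stat --color=always {0}"
```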


Author here—thanks for trying out tv and for sharing your feedback, I really appreciate it.

The preview window in tv is designed to compute its content asynchronously. This approach ensures smooth input handling and maintains overall UI responsiveness. You can test this yourself by using a custom preview command that simulates a delay, like so:

  tv --preview "sleep 1 && echo 'the preview'"

You'll notice that even with a delay, the UI remains responsive and doesn't block.

That said, I'm sure there's always room for improvement, and I'd be happy to hear your thoughts or suggestions if you're open to discussing it further.


Seeking curious testers and enthusiastic contributors! :-)


Changed to mp4, should work now.


Thanks for the feedback, will update the video!


Posting this here as well for reference https://news.ycombinator.com/item?id=41380671

@burntsushi

Hi! First of all, thank you for taking the time to write this. I've been using ripgrep for quite some time, and it's an amazing piece of software. Having your comment here is truly an honor.

> I'm not sure I totally get the motivation here to be honest

This is primarily a small project I started to familiarize myself with Rust. I thought that exploring the basics of ripgrep and attempting to build something similar would be a good way to get started.

> Also, the flags that it does support are overriding long-held custom that are likely to be confusing to users

Noted. I'll consider making these changes to avoid potentially confusing anyone.

> It's also pretty annoying to share screenshots of benchmarks instead of just showing a simple copyable command with a paste of the results.

I've updated the documentation with the actual commands and included a copy of the results.

> I also can't quite reproduce at least the curl benchmark

I just ran the curl benchmark again on the same machine (my work laptop, an M3 Apple MacBook), and here are the results:

  $ hyperfine "rg '[A-Z]+_NOBODY' ." "gg '[A-Z]+_NOBODY'" "ggrep -rE '[A-Z]+_NOBODY' ."

  Benchmark 1: rg '[A-Z]+_NOBODY' .
     Time (mean ± σ):      38.5 ms ±   2.2 ms    [User: 18.1 ms, System: 207.3 ms]
     Range (min … max):    33.8 ms …  42.8 ms    72 runs
  
  Benchmark 2: gg '[A-Z]+_NOBODY'
     Time (mean ± σ):      21.8 ms ±   0.8 ms    [User: 15.4 ms, System: 53.1 ms]
     Range (min … max):    20.2 ms …  23.8 ms    115 runs
  
  Benchmark 3: ggrep -rE '[A-Z]+_NOBODY' .
     Time (mean ± σ):      73.3 ms ±   0.9 ms    [User: 26.5 ms, System: 45.7 ms]
     Range (min … max):    70.8 ms …  75.6 ms    41 runs
  
  Summary
     gg '[A-Z]+_NOBODY' ran
       1.77 ± 0.12 times faster than rg '[A-Z]+_NOBODY' .
       3.36 ± 0.13 times faster than ggrep -rE '[A-Z]+_NOBODY' .

> It looks like it's assuming that the `ArrayQueue` it uses is never full?

I used a default maximum size for the queue (configurable via the --max-results argument) to pre-allocate it, as I thought this might improve performance. However, I'm currently not handling errors properly and just allowing the program to panic when the number of results exceeds the set limit.

> So why doesn't it have the same performance profile as ripgrep?

Given the differences in execution times between our benchmarks, I suspect that because ripgrep's (and, by extension, gg's) performance bottleneck is primarily disk I/O, variations in filesystems and underlying storage hardware could explain the significantly different results we're observing. What do you think?



It's not disk I/O, because we're using hyperfine for measuring. It does warm-up runs first, and unless your machine has a teeny amount of RAM, everything is in cache. You can put your corpus on a ramdisk to verify this (on Linux, `/tmp` usually is one and I believe `/dev/shm` always is; IDK about macOS).

Since you're running on macOS, I'll do the same. I have an M2 mac mini. My previous benchmarks were on my Linux workstation. Your `curl` benchmark:

    $ hyperfine "rg '[A-Z]+_NOBODY' ." "gg '[A-Z]+_NOBODY'"
    Benchmark 1: rg '[A-Z]+_NOBODY' .
      Time (mean ± σ):      20.3 ms ±   0.7 ms    [User: 18.6 ms, System: 96.0 ms]
      Range (min … max):    18.4 ms …  21.3 ms    126 runs

    Benchmark 2: gg '[A-Z]+_NOBODY'
      Time (mean ± σ):      17.9 ms ±   0.7 ms    [User: 15.6 ms, System: 38.6 ms]
      Range (min … max):    17.0 ms …  19.9 ms    141 runs

    Summary
      gg '[A-Z]+_NOBODY' ran
        1.13 ± 0.06 times faster than rg '[A-Z]+_NOBODY' .

So slightly edged out by `gg` here, but not as big of a difference as you're seeing. What version of ripgrep are you using?

Also, as I said before, these times are pretty short. Try a bigger corpus. For example, in my clone of Linux (also on my M2 mac mini):

    $ git remote -v
    origin  git@github.com:BurntSushi/linux (fetch)
    origin  git@github.com:BurntSushi/linux (push)

    $ git rev-parse HEAD
    84e57d292203a45c96dbcb2e6be9dd80961d981a

    $ hyperfine "rg '[A-Z]+_NOBODY' ." "gg '[A-Z]+_NOBODY'"
    Benchmark 1: rg '[A-Z]+_NOBODY' .
      Time (mean ± σ):     343.3 ms ±   4.2 ms    [User: 359.3 ms, System: 2243.3 ms]
      Range (min … max):   339.0 ms … 352.7 ms    10 runs

    Benchmark 2: gg '[A-Z]+_NOBODY'
      Time (mean ± σ):     351.1 ms ±   4.6 ms    [User: 326.4 ms, System: 1059.1 ms]
      Range (min … max):   348.2 ms … 363.8 ms    10 runs

    Summary
      rg '[A-Z]+_NOBODY' . ran
        1.02 ± 0.02 times faster than gg '[A-Z]+_NOBODY'

It is very interesting that the differences are almost zero on macOS but quite a bit bigger on Linux. That might be worth investigating.

IMO, if you're advertising "circumstantially faster than ripgrep," then you should be able to characterize the circumstances in which that occurs.


Oh... I see the problem. It's probably the thread heuristic. When running gg and rg, make sure -T and -j, respectively, are set to the same number. Because I think gg always defaults to `4`, whereas ripgrep is probably defaulting to a higher number. On very small corpora, like curl, this can actually lead to overall slower times due to the overhead of starting the threads.

This also explains why the times are faster on Linux. My Linux workstation has a lot more CPUs than my M2 mac mini. My mac mini has 8 logical CPUs while my Linux box has 24. ripgrep won't necessarily start one thread per core, but at 8 cores, it will indeed start one thread per core, whereas gg will start 4. You can see ripgrep's heuristic here: https://github.com/BurntSushi/ripgrep/blob/e0f1000df67f82ab0...

I suppose thread count heuristics are fair game for benchmarks, but in order to measure those better, you need a bigger variety of corpus sizes. Even with the Linux kernel, the difference between 4 and 8 threads for `gg` is not that big:

    $ hyperfine "gg -T4 '[A-Z]+_NOBODY'" "gg -T8 '[A-Z]+_NOBODY'"
    Benchmark 1: gg -T4 '[A-Z]+_NOBODY'
      Time (mean ± σ):     364.3 ms ±   2.5 ms    [User: 331.1 ms, System: 1108.6 ms]
      Range (min … max):   360.8 ms … 369.1 ms    10 runs

    Benchmark 2: gg -T8 '[A-Z]+_NOBODY'
      Time (mean ± σ):     349.3 ms ±   3.1 ms    [User: 454.2 ms, System: 2056.2 ms]
      Range (min … max):   345.4 ms … 355.8 ms    10 runs

    Summary
      gg -T8 '[A-Z]+_NOBODY' ran
        1.04 ± 0.01 times faster than gg -T4 '[A-Z]+_NOBODY'

But go to a bigger corpus and a difference becomes much more apparent:

    $ hyperfine "gg -T4 '[A-Z]+_NOBODY'" "gg -T8 '[A-Z]+_NOBODY'"
    Benchmark 1: gg -T4 '[A-Z]+_NOBODY'
      Time (mean ± σ):     16.777 s ±  0.351 s    [User: 1.868 s, System: 12.301 s]
      Range (min … max):   16.376 s … 17.396 s    10 runs

    Benchmark 2: gg -T8 '[A-Z]+_NOBODY'
      Time (mean ± σ):     10.273 s ±  0.628 s    [User: 1.931 s, System: 12.215 s]
      Range (min … max):    8.980 s … 11.066 s    10 runs

    Summary
      gg -T8 '[A-Z]+_NOBODY' ran
        1.63 ± 0.11 times faster than gg -T4 '[A-Z]+_NOBODY'

This is on a checkout of the Chromium repository.

The increased variety of benchmarks is important here because you might have a simpler heuristic for thread count that does result in overall marginally faster times in some cases, but this obscures what you're giving up: substantially slower times in other cases. Moreover, the cases where 4 versus 8 threads results in faster times for 4 threads tend to have very small absolute differences. i.e., Not hugely perceptible by humans.


Ahh! Great catch, and thanks for taking the time to put that in writing.

I did set gg to default to 4 threads, which seemed to be the optimal number on my machine for the typical repo sizes I navigate daily. Increasing the number of threads beyond that often results in unnecessary overhead for my personal use cases.

I appreciate you pointing out the heuristic used in the ripgrep project. From what I understand, it also uses a fixed, machine-dependent number of threads, predetermined regardless of the task at hand (except for single-file tasks).

This is something I was curious about while writing the code but couldn't fully answer due to my limited knowledge of the subject: could we potentially use a filesystem-specific heuristic to estimate the workload and dynamically adjust the number of threads accordingly?

What I mean is a method, perhaps within the ignore crate, to estimate the amount of data to process—such as the number of files, file sizes, or number of lines—based on easily and cheaply accessible filesystem metadata.


I'm not aware of one. Any tool that tells you disk space has to actually crawl the directory tree to report it. But that is precisely the thing we want to parallelize.

The only other option I can think of is to dynamically adjust. Maybe after a certain amount of work has completed, spin up more threads. But I'm not sure it's worth doing.


Looking at inode metadata—specifically the number of links for directory nodes—might iteratively provide a one-step-ahead view of what's left to crawl, allowing for preemptive thread adjustments during recursion.

e.g. looking at the `Links: 101` metadata for `src` in the `curl` codebase:

  $ stat -x src
  
    File: "src"
    Size: 3232         FileType: Directory
    Mode: (0755/drwxr-xr-x)         Uid: (  501/    alex)  Gid: (   20/   staff)
  Device: 1,22   Inode: 5857579    Links: 101
  Access: Tue Aug 27 22:21:23 2024
  Modify: Tue Aug 27 22:21:19 2024
  Change: Tue Aug 27 22:21:19 2024
   Birth: Tue Aug 27 22:21:19 2024
But then that still involves dynamically adjusting and might be kind of overkill for a relatively uncertain benefit...


Hi! Thanks for your comment. I uploaded a couple of tests using `hyperfine` to show cases where it might be faster, and will put in more work towards a proper benchmarking session in the days to come.


Thank you for having added those!

What makes it circumstantially more performant, by the way?

Off-topic, but consider changing "circumstancially" to "circumstantially" in the README; the latter is the correct term.


> What makes it circumstantially more performant, by the way?

The thread above might help provide the start of an answer.

> Off-topic, but consider changing "circumstancially" to "circumstantially" in the README; the latter is the correct term.

Done, thanks for spotting the typo.


Oh okay, thank you, and you're welcome. :)

