Hacker News
Show HN: McFly, a smart Bash history search CLI in Rust with a neural network (github.com)
130 points by tectonic on Dec 3, 2018 | hide | past | favorite | 56 comments

TIL I can use ctrl+R to search bash history! This is going to save me a lot of time going forward!

I rarely use ctrl+R, because most of the time I'm searching by command name, which is at the start of the line (unless it's in a pipeline, etc.). So I use this instead in `.inputrc`:

    # use up and down arrow to match search history based on typed starting text
    "\e[A": history-search-backward
    "\e[B": history-search-forward

This is wonderful, thank you!

This is actually a general feature of the GNU readline library, which a lot of programs, including bash, use. For instance, many REPLs like ghci use it.



As does my favorite underappreciated Unix CLI tool, units(1).

  $ units --verbose
  You have: 1 mile * 20 pounds force
  You want: joule
          1 mile * 20 pounds force = 143174.38 joule
          1 mile * 20 pounds force = (1 / 6.98449e-06) joule

I had no idea of that command. Thanks.

For dumb terminals, you can also use rlwrap: https://linux.die.net/man/1/rlwrap
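For example, wrapping a tool that has no line editing of its own (a sketch; assumes rlwrap is installed, and the wrapped programs are just illustrative targets):

```shell
# Give readline editing and ctrl+R history search to a plain REPL:
rlwrap ocaml

# Keep a dedicated history file for the wrapped tool with -H:
rlwrap -H ~/.nc_history nc localhost 8000
```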

Additional tip: if you overshoot and miss the command you wanted by hitting ctrl+R too many times (as sometimes happens when looking for an older command), you can go forward with ctrl+S. [EDIT: As Hikikomori pointed out (thanks!), I got my shortcuts mixed up; ctrl+S is the shortcut for zsh, and the equivalent for bash is ctrl+shift+R]

(ctrl+S and ctrl+R are based on emacs shortcuts, where ctrl+S is search and ctrl+R is reverse search)

Note that your terminal emulator needs to pass ctrl+S through. In my experience, ctrl+S won't work by default in a lot of terminals.

Doesn't work on GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin18)

ctrl+shift+r works though.

Yep, you're right, I got my shortcuts between bash and zsh mixed up. I edited my post, thanks for the correction.

(Not sure why bash did not go the obvious route here, I guess they already had something on ctrl+S when they implemented forward search...)

EDIT: actually, you can activate ctrl+S for search-foward on bash by using `stty -ixon`. See more info about why here https://unix.stackexchange.com/questions/141422/searching-hi...
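A minimal `.bashrc` fragment for this (assuming you don't need XON/XOFF software flow control):

```shell
# Disable flow control so ctrl+S reaches readline (forward-search-history)
# instead of freezing terminal output. Guard with an interactivity check,
# since stty complains when there is no controlling terminal.
[[ $- == *i* ]] && stty -ixon
```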

ctrl+S is usually for pausing terminal output (XOFF flow control), where ctrl+Q (XON) resumes it.


Now you should try fzf and take your command searching to a whole new level with fuzzy find!


I did virtually the same thing in about 1991. Was really into NNs back then. Apparently that was the dark ages for NNs.

I learned about them back then too. I programmed simple neurons in BASIC. In 1993, a friend of mine wrote back-propagation networks in C on his Amiga.

For zsh, zaw[1]'s history is insanely useful. I don't really think a neural network is needed. Usually I get to the exact line I wanted with a few keystrokes.

[1]: https://github.com/zsh-users/zaw

Would it be much work to make this work with other shells? Personally I use fish, and would love to give this a spin with fzf.

It shouldn't be too hard if someone wants to work on it. https://github.com/cantino/mcfly/issues/3

Same thing.

Also, my first Rust project!

This is such a cool project! Plus a pretty awesome example of using Rust for CLIs. I'm curious about the neural network, though. Is that also written in Rust? Or are you embedding something else?

Thanks! It is written in Rust. It's a very simple NN with one hidden layer and back propagation.

Just saw the implementation. It goes a little over my head, unfortunately, but I am curious about its effects. Did you compare the non-neural-net version with the neural-net version? If so, what differences did you find?

I found that the neural network does a little better than a simple linear regression function for weighting the parameters. I train it on a few months of my own shell history to predict, based on the last commands, what I'll type next.

Are the default weights I see in the code the result of your own local training?


Minor nitpick: probably better to append to PROMPT_COMMAND, rather than overwriting it.

I do actually append to it:

    PROMPT_COMMAND="__last_exit=\$?;history -a \$MCFLY_HISTORY;mcfly add --exit \$__last_exit --append-to-histfile;history -cr \$MCFLY_HISTORY;${PROMPT_COMMAND}"
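The general pattern for appending a hook without clobbering whatever another tool has already put in PROMPT_COMMAND looks like this (a sketch; `my_hook` is a hypothetical function):

```shell
# my_hook is a placeholder; substitute your own function.
my_hook() { :; }

# Prepend the hook, keeping any existing PROMPT_COMMAND after a semicolon.
# ${PROMPT_COMMAND:+;$PROMPT_COMMAND} expands to nothing if it was unset.
PROMPT_COMMAND="my_hook${PROMPT_COMMAND:+;$PROMPT_COMMAND}"
```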

What made you choose Rust? And what were your impressions of it for this project?

I wanted a fast, safe systems language. I enjoyed it! The learning curve is steep but then you start to get it.

Haha, same feeling here. My first program in Rust was https://github.com/AkshayIyer12/advent-of-code/blob/master/2... .

Felt good finally writing some code in Rust after the rust lang book.

How does this compare with fzf?

I think fzf is just for finding files?

No, fzf is highly extensible. It can be used for files, grep, rg (ripgrep), git grep, bash history, and tons more.

No, fzf also does history; it uses `fc`. I think it should be doable to integrate fzf with McFly.
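A rough sketch of that idea, piping shell history through fzf by hand (assumes fzf is installed; `fc -lnr 1` lists the whole history newest-first, without event numbers):

```shell
# Pick a past command interactively and re-run the selection.
# fzf also ships a proper ctrl+R binding in its key-bindings.bash,
# which does this more robustly.
eval "$(fc -lnr 1 | fzf)"
```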

Good to know, thanks!

Cool project and well executed. My only complaints are:

1. The UI should display at the bottom of the screen rather than at the top. I found myself constantly having to jump my eyes from the bottom of the terminal, where my command was, to the top of the screen.

2. Show the full commands in the search results. Maybe they could be wrapped? Long commands with changes towards the end of the string get chopped off, making it hard to tell the differences between them in the display.

Nifty! Is the neural network constantly being adjusted from the user's own patterns of history re-use?

This also reminds me of one of my dream features for a shell: full dataflow input/output/provenance histories, on a per-file(-version) basis.

For example: show all the commands a certain file has been read from, show the command(s) that wrote to a file, show true steps by which a certain file was constructed from predecessor files/commands, etc.

Good question. At the moment it's static. I plan to make it do online learning at some point where it updates the weights as you use it.

This is a really nice example of a simple, practical, "non-deep" NN. I'm wondering, are there any good learning resources you could recommend for starting a project like this?

It makes me want to try a personal project that makes use of a tiny light-weight NN as well. I see giant AlphaGo neural nets mentioned so frequently that I forgot they could be lean :)

I read some tutorials on back propagation for this, but if you want to get into ML in general, I highly recommend fast.ai and Andrew Ng's free Stanford ML course.

I’m still quite new to neural networks. Can you explain the material benefit to using a neural network over a priority queue with a similar weighting system? Are you giving any other input to the neural network than simply the metrics you listed?

P.S. e.g. a polynomial over the metrics. P.P.S. I imagine you’re using the neural network to tune the weights?

I started with a list of commands prioritized by a linear function of the metrics listed. It did okay, but since I was learning the linear function with back propagation, I figured a "real" network would do slightly better, and it seems to (but possibly only slightly).

That’s interesting. What made the second network more "real" than the first one you trained with back propagation?

Rather, the first one was simply a single node linear perceptron (a linear function) that I trained with backprop because I could, even though there are better techniques for fitting a linear function. Now that it's a "real" network, backprop is appropriate.

Makes sense. Thanks.

> I plan to make it do online learning

From what source? Do users need to worry about people gaming this to have ads show up in results?

Sorry, I agree that was confusing. Online learning is a term in ML for when you train a model incrementally over time. McFly won't connect to the Internet.

Ah, thanks for the clarification! To a non-ML person, this brought back memories of what Canonical was trying to do with search functionality on Ubuntu, by fetching ads over the internet to interleave in local search results.

Is there any reason that you hardcoded the model? I think it's the right decision for a small model, I was just sort of surprised.

You can retrain it once you have a bunch of data with the train command. But I'll probably make it do online learning at some point too.

Great name


Neat application! I was curious whether the Rust community has existing machine learning/neural net libraries, and found this:


Thought I'd share in case you or others want to avoid coding your own, or want to experiment quickly with other types of models.

It's fantastic and very useful! Good work!!

Thanks :)

