
Mcfly – neural-network powered directory and context-aware shell history search - yankcrime
https://github.com/cantino/mcfly
======
oefrha
Regarding

    
    
      if [[ -r "$(brew --prefix)/opt/mcfly/mcfly.zsh" ]]; then
        source "$(brew --prefix)/opt/mcfly/mcfly.zsh"
      fi
    

Tip: hardcode brew --prefix. It's /usr/local for most people anyway. brew is
written in Ruby, so invocations are expensive; brew --prefix takes ~30ms on my
systems, so the above snippet adds ~60ms to shell startup, which is a complete
waste. Add a couple of frivolous snippets like this and you have people
complaining about slow startup.
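
The same snippet with the prefix hardcoded would look like this (a sketch assuming the default /usr/local install; check what `brew --prefix` prints on your machine first):

```shell
# ~/.zshrc -- same check as above, but with brew's usual prefix baked in,
# so shell startup never pays the cost of invoking brew itself.
if [[ -r "/usr/local/opt/mcfly/mcfly.zsh" ]]; then
  source "/usr/local/opt/mcfly/mcfly.zsh"
fi
```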

~~~
antihero
Or, as a compromise, `export BREW_PREFIX=$(brew --prefix)` once, early in your
startup files.

~~~
smichel17
Optimization on that:

    
    
        if [[ -d /usr/local/opt/mcfly ]]; then
            # Default prefix: skip the ~30ms `brew --prefix` call entirely
            export BREW_PREFIX=/usr/local
        else
            # Non-standard install: fall back to asking brew once
            export BREW_PREFIX=$(brew --prefix)
        fi
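
With that cached, the original check can reuse the variable instead of shelling out again (a sketch, using the same paths as the snippet upthread):

```shell
# Reuse the cached prefix; no further brew invocations at startup.
if [[ -r "$BREW_PREFIX/opt/mcfly/mcfly.zsh" ]]; then
  source "$BREW_PREFIX/opt/mcfly/mcfly.zsh"
fi
```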

~~~
rovr138
I like what pyenv does when you do

    
    
        pyenv init
    

It prints the command for you. They could add the check there, generate the
static path, and then print the snippet with the path already hardcoded.

Edit,

For reference, this is what it prints on the different shells,

On ZSH,

    
    
        # Load pyenv automatically by appending
        # the following to ~/.zshrc:
        
        eval "$(pyenv init -)"
    

On Bash,

    
    
        # Load pyenv automatically by appending
        # the following to ~/.bash_profile:
        
        eval "$(pyenv init -)"
    

On Fish,

    
    
        # Load pyenv automatically by appending
        # the following to ~/.config/fish/config.fish:
        
        status --is-interactive; and source (pyenv init -|psub)
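
An mcfly subcommand following that pattern might look something like this (purely hypothetical; mcfly has no such command as far as I know, and the prefix detection is a sketch):

```shell
#!/bin/sh
# Hypothetical `init`-style helper: detect the brew prefix once,
# then print a snippet with the path already hardcoded, pyenv-style.
if command -v brew >/dev/null 2>&1; then
  prefix=$(brew --prefix)
else
  prefix=/usr/local  # assume the default when brew isn't on PATH
fi
cat <<EOF
# Load mcfly by appending the following to ~/.zshrc:
if [[ -r "$prefix/opt/mcfly/mcfly.zsh" ]]; then
  source "$prefix/opt/mcfly/mcfly.zsh"
fi
EOF
```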

~~~
oefrha
If every module you load does its own 30ms check, it might cut 60ms to 30ms
at each call site, but they add up just the same.

~~~
rovr138
Of course.

I’m saying they print what you should add. I’m not saying have an init command
that runs from your shell script at startup; I’m saying have an init command
that tells you what to put.

I think smichel17's hardcoded approach with a fallback is a good one, except
that if you ran this command, it would print a different hardcoded path
depending on where it found the prefix.

------
ihm
> The command's historical exit status. You probably don't want to run old
> failed commands.

I'll often use Ctrl-R to re-run tests, and they keep failing until I fix all
the bugs.

~~~
gpvos
I suppose the idea is that the neural network will learn that you do that, and
maybe that you do that more often in some directories than others, or for
commands with "test" in the name, etc.

------
mijoharas
So, I just learnt from this that brew apparently now supports Linux.

Can someone tell me why someone would want to use brew rather than their
existing package manager on Linux? Is it due to existing familiarity with
brew, or the large number of snippets in people's docs that show how to
install with brew?

Genuine question; I'm sure there are reasons to want to use it over a built-in
package manager on Linux, I'm just not sure what they are.

~~~
staycoolboy
It depends on the package.

Some folks want the newest versions, and brew might update faster than Debian;
e.g., CMake was at 3.10 on Ubuntu for the longest time, but brew had it at 3.15.

Me personally, I've run into waaaaaaaaay too many issues trying to run more
than one package manager. The best solution I've encountered: use the native
package manager first, and if you need a newer version of something, build it
manually BUT SAVE THE BUILD DIRECTORY for uninstalling later.

(My favourite bug was autoconf tools requiring dependencies that were mixed
between brew and macports: each one required the other to be first in the
PATH!!)
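
The keep-the-build-directory workflow looks roughly like this (illustrative commands, assuming an autotools-style project whose Makefile provides an `uninstall` target; many don't):

```shell
# Build and install from source, keeping the build tree around:
./configure --prefix=/usr/local
make
sudo make install
# Months later, from the SAME directory, the Makefile still knows
# every file it installed:
sudo make uninstall
```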

------
tetris11
Is this not overkill for a reasonably straightforward recommendation system?

~~~
donquichotte
Of course it is. fzf [1] is all you need, and it behaves deterministically.

[1] [https://github.com/junegunn/fzf](https://github.com/junegunn/fzf)
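
As a trivially deterministic baseline (a sketch; fzf also ships its own Ctrl-R history widget via its shell-integration scripts):

```shell
# Fuzzy-search shell history interactively and print the chosen command.
# `fc -ln 1` lists history without event numbers; `--tac` shows the
# most recent entries first.
fc -ln 1 | fzf --tac
```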

------
malux85
Would this be a good candidate for the neural Turing Machine architecture?
Where the goal is to learn the memory access patterns of an internalised
memory bank ...

If the shell commands were vectorised into a formalised encoding, this
encoding was stored in the neural memory bank, and the objective function of
the neural network was to learn the access patterns (reads and writes) of this
memory, then it would have enough contextual awareness (through access
patterns) and internal knowledge (through the formalised command
vectorisation) to know that if I “cd (somewhere)” and run a few commands, a
likely next command might be “cd (back)”, without supervised training data?

All learnt through essentially observation by the network?

And if so, learning the symbolic nature of the directory name, and the
relationship to “cd” and how that is accessed would be a rudimentary form of
abstract reasoning, no?

~~~
sqrt17
I wish someone would build that and force you to use it for the next six
months.

NTMs and similar are very slow at learning, and would (e.g.) not pick up on
the fact that you've created a directory until you've interacted with it a
couple thousand times.

~~~
malux85
> I wish someone would build that and force you to use it for the next six
> months.

Ok, ok, I was only asking / wondering, sorry I asked, sheesh

------
oweiler
This is an amazing tool right next to ripgrep, starship and lsd.

~~~
rovr138
I've heard so much about starship lately but I found it super slow.

------
dang
If curious, see also

2018
[https://news.ycombinator.com/item?id=18593015](https://news.ycombinator.com/item?id=18593015)

