
Xeon – Node.js tool for creating modular, reusable shell scripts - drkraken
https://github.com/hzlmn/xeon
======
koyao
The nice thing about shell scripts is that they're ubiquitous and have very
few external dependencies. Having a shell script depend on node.js seems a
bit counter-intuitive?

~~~
drkraken
It doesn't depend on Node.js.

Xeon just adds the ability to use npm as your package manager. Why use tools
like bpkg or something else when you can use a great ecosystem trusted by
thousands of people?

The Xeon bundle should be made at the dev step; you should not bundle it on
the real server etc. where you use this script.

~~~
subway
Because thousands of people place their trust poorly.

~~~
nilliams
Do you have any specific complaints with npm?

~~~
subway
Reliance on transport security instead of providing cryptographic verification
of code is my biggest beef, very closely followed by what is essentially a
nonexistent reputation system (or, in lieu of a code reputation system, a
curated selection of packages).

------
jamescun
What is the benefit of this over the `source` builtin?

    
    
        source bye.sh

or

    
    
        . bye.sh

~~~
drkraken
Better management of your dependencies, also bundling to a single file for
easier distribution. In the future, transforming scripts with plugins will
mean reusing fish shell functions in bash scripts, etc. Also, each shell has
different syntax for sourcing files.
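On the sourcing-syntax point: POSIX only guarantees the `.` builtin, while `source` is an extension in bash and zsh (dash, for example, rejects it). A minimal portable sketch, using a throwaway temp file for the demo:

```shell
# `.` is the POSIX spelling of "source" and works in every Bourne-style
# shell; `source` is a bash/zsh extension.
set -eu
work="$(mktemp -d)"                 # throwaway dir for the demo file
echo 'msg="sourced"' > "$work/lib.sh"

. "$work/lib.sh"                    # POSIX-portable sourcing
echo "$msg"                         # prints: sourced
```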

~~~
mbreese
If you need to manage dependencies for a bash script, are you sure you're
using the right language for the job? I'm not convinced that mixing and
matching functions from different shells is a good thing.

~~~
techdragon
I like to modularise my shell scripts for maintainability reasons. It would be
nice to be able to just "pull in the pieces" as needed from my collection of
components instead of, as I currently tend to do, re-including them wherever
I'm using them.
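The "pull in the pieces" idea can be done in plain bash with a small include-once helper; here's a hedged sketch (the `include` function, `LIB_DIR`, and the `counter.sh` component are all hypothetical names for illustration):

```shell
# Sketch of a tiny "include once" helper for modular bash scripts:
# components live in one directory, and each is sourced at most once.
set -eu

LIB_DIR="$(mktemp -d)"            # stand-in for a real component collection

# A demo component that bumps a counter every time it is loaded
cat > "$LIB_DIR/counter.sh" <<'EOF'
LOAD_COUNT=$(( ${LOAD_COUNT:-0} + 1 ))
EOF

_INCLUDED=""                      # space-separated list of loaded modules

include() {
  local name="$1"
  case " $_INCLUDED " in
    *" $name "*) return 0 ;;      # already loaded, skip
  esac
  . "$LIB_DIR/$name.sh"
  _INCLUDED="$_INCLUDED $name"
}

include counter
include counter                   # second call is a no-op
echo "$LOAD_COUNT"                # prints 1, not 2
```

This avoids the double-inclusion problem that naive re-`source`-ing everywhere can cause.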

~~~
mbreese
I guess I'm having trouble seeing the use-case for having large enough shell
scripts to require splitting them up into modules.

What types of functions do you write in shell that you need to re-import?

------
bio4m
Is it really a good idea to name your open source module after a trademarked
name? Xeon is the brand name for Intel's line of server and workstation
processors, and they don't strike me as the kind of firm that would take
co-opting of a brand name lightly.

~~~
dogma1138
Seems like there are multiple registrations for Xeon
[https://trademarks.justia.com/search?q=Xeon+](https://trademarks.justia.com/search?q=Xeon+)

IIRC you can register identical trademarks as long as they represent something
sufficiently different, especially if it's a commonly used word; no one is
going to confuse a CPU with an e-cig or a scooter.

------
mbreese
I'd reconsider that name, if I were the author...

~~~
userbinator
Agreed, both for the sake of everyone trying to find your tool and to stay out
of the way of Intel's lawyers.

------
chriswarbo
I've not played with node.js since the very early days, and never really used
NPM. However, I do see the need for modular, composable shell scripts.

Personally, I've been using Nix in a similar way, since it also has nice
features like caching, laziness, splicing into indented strings, dependency
management, etc. For example, if you have a Nix expression stored in "my-
script.nix" you can use the following (e.g. in "my-script.sh") to invoke it:

    
    
        nix-instantiate --read-write-mode --show-trace --eval -E 'import ./my-script.nix'
    

The `--eval` tells Nix to evaluate an expression, rather than build a package.
`-E` is the expression to evaluate (in this case, importing our "script"
file). `--read-write-mode` allows the script to add things to the Nix store.
`--show-trace` is to aid debugging.

~~~
koolba
> ... splicing into indented strings ...

That sounds pretty cool, how does it work? Does it match the indentation of
the parent that issued the splice for the entire child? How about nested
splices?

~~~
chriswarbo
Well, this is basically two features I condensed into one phrase. I didn't
mean context-sensitive splicing (e.g. splicing together Python code, whilst
adhering to the off-side rule).

Firstly, indented strings mean that long strings (such as scripts) can be
embedded inside other expressions quite naturally. For example:

    
    
        runCommand "foo"
                   { buildInputs = [ python imagemagick ]; }
                   ''
                     I am an indented string
                     I will be executed as a bash script, with the following dependencies available:
                      - python
                      - imagemagick
    
                     Since these lines, and those above beginning with "I", have the least indentation,
                     they will appear flush to the left. The "list" above will hence be indented by 1
                     space.
                   ''
    

Secondly, splicing allows Nix expressions to be embedded inside strings. A
splice begins with "${" and ends with "}". The expression should either
evaluate to a string, which is inserted as-is, or a "derivation" (e.g. a
package), which gets "instantiated" (i.e. installed) and its installation
directory is inserted into the resulting string. Splices can be nested too.

For example, instead of giving "python" as a dependency in the buildInputs, we
could splice the full path into a string, e.g.

    
    
        ''
          "${python}/bin/python" my_script.py
        ''
    

Although this is probably a bad idea, since there may be transitive
dependencies, etc. missing when the script gets executed.

If we want to build up a result incrementally, with each step getting cached,
we can use "runCommand", and write the results to "$out". For example:

    
    
        with import <nixpkgs> {};
        with builtins;
    
        let
    
        # Takes a script and runs it with jq available (Nix functions are curried)
        runJq = runCommand "jq-cmd" { buildInputs = [ jq ]; };
        
        step1 = runJq ''
                        echo "I am step 1" 1>&2
                        echo '[{"name": "foo"}, {"name": "bar"}]' | jq 'map(.name)' > "$out"
                      '';
        step2 = runJq ''
                        echo "I am step 2" 1>&2
                        I won't be executed, because Nix is lazy and nothing calls me
                      '';
        step3 = runJq ''
                        echo "I am step 3" 1>&2
                        jq 'length' < "${step1}" > "$out"
                      '';
    
        in readFile step3
    

When run, this gives the following:

    
    
        $ ./go.sh
        building path(s) ‘/nix/store/5ks08zbvmgzbhg9kr0k4g75nf2ymsqsr-jq-cmd’
        I am step 1
        building path(s) ‘/nix/store/v1svcqq6cmi4xc9650qz9w2x177w4pfr-jq-cmd’
        I am step 3
        "2\n"
        $ ./go.sh
        "2\n"
    

The results are cached, and will be re-used as long as the commands aren't
edited, and their dependencies don't change (e.g. if a newer version of jq is
available, they'll be re-run with that version).

In this case, each "step" represents the data, which is common in lazy
languages. Alternatively, we can use "writeScript" to write more 'traditional'
process-oriented scripts:

    
    
        with import <nixpkgs> {};
        with builtins;
    
        let
    
        # Takes a script and runs it with jq available (Nix functions are curried)
        runJq = runCommand "jq-cmd" { buildInputs = [ jq ]; };
    
        step1 = writeScript "step-1" ''
                  echo "I am step 1" 1>&2
                  echo '[{"name": "foo"}, {"name": "bar"}]' | jq 'map(.name)'
                '';
        step2 = writeScript "step-2" ''
                  echo "I am step 2" 1>&2
                  I won't be executed, because Nix is lazy and nothing calls me
                '';
        step3 = writeScript "step-3" ''
                  echo "I am step 3" 1>&2
                  "${step1}" | jq 'length'
                '';
    
        in readFile (runJq ''
             "${step3}" > "$out"
           '')
    

Of course, we need _something_ to invoke these scripts, which is why I used
"runJq" in the final expression. When run, we get:

    
    
        $ ./go.sh 
        building path(s) ‘/nix/store/fnw68cmkib5fkmhls4fkdhx0vb2cyka8-step-1’
        building path(s) ‘/nix/store/1kiwa6m11d0apxfjbwpqq3vl6jbv3sdx-step-3’
        building path(s) ‘/nix/store/9hv1jcrglyx8x6xa64pnds6vzcp35zl5-jq-cmd’
        I am step 3
        I am step 1
        "2\n"
        $ ./go.sh 
        "2\n"
    

This time the _scripts_ are cached, but we execute them both together in a
normal pipe. The overall result of the "runJq" call is still cached though.
This is how you'd run non-bash scripts too: by using "writeScript" to save
your code to disk, and "runCommand" to invoke it with a bash one-liner. For
example, if we want "step4" to use Haskell we might do the following:

    
    
        runJq = runCommand "jq-cmd" { buildInputs = [ jq ghc ]; };
    
        ...
    
        hsScript = script: writeScript "hs-cmd" ''
                     runhaskell "${writeScript "hs-script" script}"
                   '';
    
        ...
    
        step4 = hsScript ''
                  doTimes :: (Show a) => a -> String -> String
                  doTimes str n = show (replicate (read n) str)
    
                  hello = "hello world"
    
                  main = interact (doTimes hello)
                '';
    
        in readFile (runJq ''
             "${step3}" | "${step4}" > "$out"
           '')
    

This reads the length given by jq, and writes out a list of that many "hello
world"s:

    
    
        $ ./go.sh
        building path(s) ‘/nix/store/2d7wrd78dk1ilj84adnyq8ddgzy6m2rr-hs-cmd’
        building path(s) ‘/nix/store/haqcwssfbzbj5s4ampv322qbpll1gw1h-jq-cmd’
        I am step 3
        I am step 1
        "[\"hello world\",\"hello world\"]"
    

Unfortunately, this can end up separating the code from its dependencies, i.e.
we needed to give "ghc" as a dependency to whichever script _invokes_ "step4"
(via "runJq"), rather than being able to add it in "hsScript". If we used the
original data-oriented approach, this wouldn't be an issue.

It's also pretty easy to transfer data between the Nix language and the
processes we're invoking, using "readFile" and "builtins.fromJSON", or
"builtins.toJSON" inside a splice; although Nix doesn't support floats, so you
might need to turn them into strings first. This is useful for doing tricky
transformations on small amounts of data, which may be error-prone in bash,
but where invoking a full-blown language like Haskell or Python would be
overkill. It can also be useful for things like assertions.

------
mrmondo
May I suggest that the naming of the project might make it hard for people to
discover / find / seek information on the product?

~~~
scoot
Nope:
[https://www.google.com/search?q=xeon+script](https://www.google.com/search?q=xeon+script)
(#2 result)

~~~
drkraken
also, xeon.js, xeon github - 1 result

~~~
astrodust
The point isn't whether you _can_ find it; it's that the name is obviously in
wide use as a trademark in the technology space.

Intel.js or the.js would be just as easy to find, yet that's not the issue
here.

~~~
mrmondo
Exactly, and he knows exactly what he's looking for. Also, because of the HN
post and the release, Google will rank it higher now than it will in the
future. Say you search for
[https://www.google.com.au/search?q=xeon+code](https://www.google.com.au/search?q=xeon+code)
or something similar: you're not going to find it. OK, maybe you should be
better with your search terms, but still, can we stop naming tech things the
exact same name as other common tech search terms?

------
seniorsassycat
Woah, this has been on my short list of projects to build.

Advantages over `source`: require is relative to the script file instead of
the execution `pwd`, and a build process can create a single distributable
file by cat-ing the required files.
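The cat-based build step can be sketched in a few lines; this is a toy version of the idea (the `bundle` function and file names are hypothetical), which inlines each `source <file>` line at build time to emit one self-contained script:

```shell
# Minimal "bundle by concatenation" sketch: resolve each sourced file
# at build time and emit a single self-contained script.
set -eu
work="$(mktemp -d)"

cat > "$work/greet.sh" <<'EOF'
greet() { echo "hello, $1"; }
EOF

cat > "$work/main.sh" <<'EOF'
source greet.sh
greet world
EOF

# "Build step": inline every `source <file>` line, keep everything else
bundle() {
  while IFS= read -r line; do
    case "$line" in
      "source "*) cat "$work/${line#source }" ;;
      *)          printf '%s\n' "$line" ;;
    esac
  done < "$work/main.sh"
}

bundle > "$work/bundle.sh"
bash "$work/bundle.sh"            # prints: hello, world
```

A real bundler would also need to handle quoting, nested requires, and double inclusion, which is presumably where a tool like xeon earns its keep.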

Down the road I'd like to see a `babel` for bash. I think import / export, and
functions with arguments would make my time with bash more enjoyable and
productive.

As with other tools, `xeon` does a little too much:

    
    
      require("http://some.external.domain/awesome_script.sh")

~~~
Hurtak
> require is relative to script file instead of execution
    
    
      cd "$(dirname "$0")"
    

can solve that
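One caveat with `cd "$(dirname "$0")"` is that it changes the working directory for everything that follows, which can break later relative paths. A sketch of sourcing relative to the script without cd-ing (the `lib.sh`/`answer` names are just for the demo):

```shell
# Resolve a sibling file relative to the script itself, without
# changing the caller's working directory.
set -eu
work="$(mktemp -d)"

cat > "$work/lib.sh" <<'EOF'
answer() { echo 42; }
EOF

cat > "$work/main.sh" <<'EOF'
# Source lib.sh from the script's own directory, wherever we're run from
script_dir="$(cd "$(dirname "$0")" && pwd)"
. "$script_dir/lib.sh"
answer
EOF

cd /                               # run from an unrelated directory
bash "$work/main.sh"               # prints: 42
```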

------
clux
Is there really any benefit to this over explicit child process calls? I
realize the syntax is shorter, but now you're hiding the fact that you are
shelling out.

Overloading require for this purpose is a guaranteed way to break static
analysers and module bundlers.

~~~
drkraken
If I understand you right, child process is made by utility that check your
local version and laters to notify your for updates. It should only be called
once an hour

------
amatus
How does this compare to shlib.import?

[https://github.com/major0/shlib/blob/master/doc/import.md](https://github.com/major0/shlib/blob/master/doc/import.md)

~~~
drkraken
main difference from all of this tools, that they try to create bridge between
different shell types, instead of providing same API, integrated with other
tools.

------
nikolay
Outside of the packaging benefits (which could be accomplished in many other
ways, and which come at the cost of prefixing stuff), I don't see much use.

