Show HN: Aviary.sh – A tiny Bash alternative to Ansible (github.com)
276 points by dxchester 7 days ago | 206 comments





I'm not sure if I am understanding this (aviary.sh) correctly, but it looks like this requires an agent to run on the configured hosts.

One of the things that I love about Ansible's model is that we never need an agent on the host. Once you configure something with Ansible, there's no artifact of the configuration left on the machine.


At some point in a previous company we had a lot of individual VPSes set up basically the same way. I was sick of internal documentation that listed step-by-step commands intertwined with descriptions and manual actions, and any attempt at Puppet or Ansible just blew up because it was something else for the team to learn (believe me, I tried; it just wouldn't stick with anyone).

So I created `apply`[0].

This small bash thing pushes bash scripts called 'units' over ssh and executes them via an uploaded 'run' script:

   ./push units/update units/sshd units/ssh_authorized_keys root@foo.example.com
By writing those 'unit' scripts to be idempotent you can just run them again and again. 'units' can be aggregated in 'groups', which can themselves reference 'groups':

   ./push groups/base groups/ruby units/dockerd root@foo.example.com
Finally, you can define 'hosts', which are like 'groups', only they save you some typing; they can be applied in sequence or in parallel:

   ./apply hosts/foo.example.com hosts/bar.example.com
And since units/, groups/, and hosts/, are just directories and files, autocompletion works immediately and you could get creative with shell expansion for arguments.

It was a deceptively simple experience, immediately accessible, trivially enabled literate coding, and overall extremely useful both to set up and maintain those VPSes and to create dev environments, or local VMs to test e.g. a one-shot unit performing a change or migration.

[0]: https://gitlab.com/adhoc-gti/apply

[1]: https://github.com/lloeki/apply (personal fork)
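
For illustration, a 'unit' in this scheme is just an idempotent bash script. A hypothetical sketch (the file name and setting are made up, not from the project):

    #!/usr/bin/env bash
    # units/sshd -- hypothetical unit: converge sshd config, safe to re-run
    set -euo pipefail
    if ! grep -q '^PasswordAuthentication no' /etc/ssh/sshd_config; then
      echo 'PasswordAuthentication no' >> /etc/ssh/sshd_config
      systemctl reload sshd
    fi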


The agentless approach is mostly OK; it tends, however, to suffer once you have more than a few hundred machines (I've found, anyway) in that your ansible code rots a bit. Say the playbook for that one server, you know the one, the weird ETL thing BI uses, was run once, and during a prod problem you suddenly have to start fixing the ansible code to rebuild it.

With an agent, it’s applying all the time and this doesn’t happen.

We actually moved to salt from ansible and we’re happy..


I manage thousands of machines with rrconf.

I'm a bit confused - agent vs. agentless isn't obviously correlated to continuous vs. on human action to me. Write a cronjob / systemd timer / scheduled task / Jenkins job / Travis cronjob / GitHub Actions scheduled event / CloudWatch + Lambda / whatever you like to run your Ansible playbook, from one machine/container/whatever, on your entire fleet. (It's certainly no harder than writing a cronjob or whatever to run your config management on every machine - if you can schedule tasks on all your machines, you can certainly schedule them on one.)

That gets you the standard advantages of agentless setups, including not requiring the runtime of your config management tool to be everywhere, being able to reprovision ephemeral + immutable cloud resources, and being able to centrally report errors, without any more risk of configuration drift or bitrot.
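
As a concrete sketch of the simplest variant, a cron entry on one control machine (the user, paths, and inventory here are illustrative, not from any project):

    # /etc/cron.d/ansible-converge
    # re-apply the site playbook across the whole fleet every 30 minutes
    */30 * * * * deploy ansible-playbook -i /srv/ansible/inventory /srv/ansible/site.yml >> /var/log/ansible-converge.log 2>&1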


While ansible does push more commonly (and check out mitogen for speed increases there), it's also trivial to do pull mode, with some loss of multi-node orchestration, and just cron/systemd-timer it.

Have also had issues with agents in high security areas.

Machines sitting behind VNets, governed by security review boards, make getting agents approved a bit tricky.


This behaviour is why I'm using (and liking) Chef these days, whereas six years ago I absolutely hated Chef.

I still love Ansible, especially for small quick things. As with many tools, though, it's not the only one I reach for any more.


Salt agents in particular call out to the master, making NAT a non-issue. While it does have problems of its own (connection scaling), it made automating servers across 600 physical locations much easier.

That is not entirely true. Ansible needs python installed on the target host, and a lot of modules (plugins?) require additional python libraries on the target.

The av agent that GP refers to is a script that runs periodically on target hosts to pull the latest configuration using Git. Ansible has a Python dependency but no agent running on target hosts, instead relying on SSH to push configuration changes.

Ideally, I’d like to see a configuration tool that uses Ansible’s SSH push model but without the Python dependency.


Strictly speaking, shouldn't it be possible to bodge ansible to send and execute a static python interpreter to bootstrap a host? I've been bitten by "oh no, that Python's too old" or similar several times now. The idea of bootstrapping from only having a `sh` on the far end seems like it ought to be a thing, but (for instance) mitogen only goes part-way.

There was a pen-test tool designed on these lines once called Mosquito Lisp, which as far as I can see evolved into something called WaspVM: https://github.com/swdunlop/WaspVM I have no idea if it's in use, alive, or even working these days.

Is anyone working on this side of the problem and I've missed it?


If python is considered an agent, isn't bash now an agent?

It doesn't need any python. Ansible opens a regular ssh session and can run anything from it - including python if available.

Only with the “raw” module... which is what you use to ensure Python is installed on a machine managed by Ansible.

you can use the "raw" module without python.
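
For reference, a minimal sketch of that bootstrap pattern (the host group and package names are illustrative):

    - hosts: unprovisioned
      gather_facts: false   # fact gathering itself needs Python, so skip it
      tasks:
        - name: Ensure Python is present before any normal module runs
          raw: test -e /usr/bin/python3 || (apt-get update && apt-get install -y python3)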

Ansible does create a bunch of temp files. I've even had issues where my ansible barfed because my uid changed and the old temp files couldn't be accessed anymore.

The key selling points of Ansible for me were --check and --diff, which make it much easier to test and debug a playbook.

This is more similar to using ansible-pull than the traditional approach.

If you need a simple way to inject secrets into your aviary variables you may find encpass.sh (https://github.com/plyint/encpass.sh) a convenient choice.

That's a pretty cool project. It's written in bash as well!

Ansible is agentless tho. That's a pretty big difference.

Yeah, it was the thing that made me choose it instead of other alternatives. I manage machines in heterogeneous client networks without direct Internet access most of the time and agents for managing infrastructure are not useful at all.

Yes, but it's also horrible to use. I would rather set up a Puppet agent once than struggle with Ansible for an eternity.

Would I be right if I guessed you have a development background?

For reference, I have an ops background.


I have a dev background and think that Ansible is much nicer to work with than Chef and Puppet.

Nope, I'm an ops guy. Been banging my head against these tools for 15 years.

This looks really promising. I've been wanting a very lightweight solution for this which runs on a single host (no need for another host just to run the scripts) and pulls its configuration from a GitHub repository periodically.

I ended up writing my own - and hating that I had to do that - because I couldn't figure out how to use any of the existing options. Their documentation all seems to assume multiple hosts and a separate machine that you run the scripts on.

"Each host periodically fetches the latest version of the inventory to see what roles should it be performing" - that's exactly what I want.


> which runs on a single host (no need for another host just to run the scripts)

This isn't a limitation of Ansible; you can run playbooks against localhost.
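
A minimal sketch, assuming a playbook named site.yml:

    # inline inventory (note the trailing comma) plus a local connection
    ansible-playbook -i localhost, -c local site.yml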


True, but I think OP's point at least in part was that an awful lot of the tutorials out there start with the assumption that the audience are sysads and SREs who want to manage a fleet.

Yes, exactly. I could tell that Ansible could do it but so much of the documentation assumed you would be using it with a fleet that I eventually gave up figuring it out.

CFEngine is the most lightweight solution I have seen (in terms of resource utilization). The agent is a small C binary. It’s fast! You can set it up to pull its configuration from git.

Sounds like you're looking for ansible-pull
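
A sketch of that pull mode as a cron entry on each host (the repo URL and playbook name are illustrative):

    # every 30 minutes: clone/update the config repo and apply it to this host
    */30 * * * * ansible-pull -U https://git.example.com/config.git local.yml >> /var/log/ansible-pull.log 2>&1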

rrconf is more modular, smaller, and simpler than aviary: https://github.com/rrconf/rrconf

> Install from the command line, on a box to be managed by aviary.sh:

It would be even cooler if I didn't need to do anything to do the box besides install openssh-server. I should be able to:

    $ sudo apt-get install aviary
    $ aviary install my_server 192.168.1.25 # a fresh ubuntu 18.04 install
and everything else should just be magic from there.

If you like this - Fabric is awesome.

https://www.fabfile.org/

Use python over SSH. Super fast to get started: if you can ssh into the server, you can run the equivalent of shell commands (subprocess calls in python) remotely. With python, you can abstract and reuse, with the scriptability of bash but the higher-level niceties of the language, libraries, and toolchain (linting, formatting, autocompletion, imports).

Can it pull the latest version of your app, build it on your server? Yep. Restart daemons? Update libraries (npm, python, etc)? Update system packages? All work fine.

Can it configure a vanilla server from scratch? Yes; it helps to bring in a higher-level reusable library like https://github.com/fabtools/fabtools for that.

When does it hit limitations? More complex orchestration where you have multiple servers (and variations of them) with configuration to talk to each other over networks. Eventually you get to the kind of setup where having a tool like Ansible / Salt / Puppet / etc. makes sense.

For a basic PHP/python/ruby/node site that's just running on a single cloud server? A declarative config manager would be overkill for me.
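
A minimal sketch of that workflow (host, paths, and service name are illustrative; this uses the Fabric 2-style API):

    from fabric import Connection

    def deploy():
        c = Connection("foo.example.com")
        c.run("git -C /srv/app pull")           # pull the latest version of the app
        c.run("npm --prefix /srv/app install")  # update libraries
        c.sudo("systemctl restart app")         # restart the daemon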


> Use python over SSH. Super fast to begin, if you can ssh into the server, you can run the equivalent of shell commands (subprocess calls in python) remotely.

This literally describes ansible. In fact, ansible one-ups this because you can specify raw commands if python is not installed remotely [1].

Ansible provides lots of modules, but you don't have to use any of them. I have plenty of one-off ansible playbooks where I don't care about idempotency; they're just a bunch of 'cmd' statements. It's a very flexible tool.

1: https://docs.ansible.com/ansible/latest/modules/raw_module.h...


Not really. Ansible is a complex DSL based on YAML. Fabric is really simple and uses real Python.

Can't say I've ever really thought of YAML as complex. Especially when the Ansible modules are translating things so you can just define the action you want to take, and each module is very well documented.

Embedding python in yaml has a ton of gotchas for cases where string processing doesn't quite line up with the way yaml wants things. Variable interpolation is also a real mess.

Typically the recommendation is to write your own python plugin/filter/test if you find yourself writing a lot of python in yaml, that way the yaml stays sane and readable, and just has some light Jinja.

> In fact, ansible one-ups this because you can specify raw commands if python is not installed remotely

GP's description of fabric is slightly wrong - it's not python over ssh, it's ssh in python. You only need python locally, fabric runs shell commands remotely.


This is my main issue with Ansible: as soon as you run a single raw shell command, haven't the main benefits of Ansible just disappeared? And wouldn't it be likely that you'll probably need to do this pretty regularly?

Some of the modules seem pretty pointless as well, like the synchronize files one; why not just use rsync directly? And while you're at it, just SSH to servers from shell! Sure, you would need something similar to the inventory, but that wouldn't be too difficult to come up with...

I think investing time in building a library of idempotent shell scripts instead is the way to go personally.


You can have your shell commands be idempotent. For instance, I have a number of places where I trigger a shell command only if an Ansible managed file changes.

It's only non-idempotent if the shell command runs every time, and the shell command itself isn't idempotent.
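
A sketch of that trigger-on-change pattern (the paths and service name are illustrative):

    tasks:
      - name: Install app config
        template:
          src: app.conf.j2
          dest: /etc/app/app.conf
        notify: restart app    # fires only when the rendered file actually changes

    handlers:
      - name: restart app
        shell: systemctl restart app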

The reason for some of the "pointless" modules is that they integrate into Ansible better. I.e. for synchronize, you can run Ansible with the "--check" option, which won't change anything, but will report back what would be changed if you actually executed it. Running rsync in a shell command doesn't have that option because Ansible has no idea what that command does.

My bigger issue with Ansible is that the YAML is annoying. The looping behavior is frustrating, variable precedence is weird, IDE autocompletion is generally lacking. I wish they had designed a sane way to just write Python code.


What’s the status of the fabtools project? It supports up to Ubuntu 14.04 (as in, 2014) and has very little commit activity. Without fabtools, fabric operates at a much lower level of abstraction than Ansible.

I really hate that fabric got rid of roles in the new version. I want to be able to still do "fab task -R prod".

Try fab-classic:

https://github.com/ploxiln/fab-classic

We decided that migrating to Fabric 2.0 isn't worth it.


This was written by team.video, which is in a hyper-competitive space. Every hour spent on Aviary is an hour not spent on differentiating customer features. Why? There are a ton of open source CM tools that give you the same functionality and do it better. This is a problem you should have solved after you were a) profitable, b) at scale, and c) probably not even then.

So like, unless you're one of their investors upset at how your money's being spent, why is this any of your business? Startups often take unusual paths to profitability. They all seem crazy at first, many are in fact crazy, a few turn out to be quite prescient.

Not being an investor doesn't preclude me from having an opinion. There is nothing prescient here... just an engineering team burning cycles better spent elsewhere at current stage of the company.

Unfortunately all I could think of was "that sounds cool but I'd want to stick to dash instead of bash"

Also didn't realize it wasn't agentless which is pretty critical imo


> Unfortunately all I could think of was "that sounds cool but I'd want to stick to dash instead of bash"

Well, POSIX sh, ideally. That way you get free portability to ~everything; it's not a great burden to install bash or dash on e.g. NetBSD or AIX, but sh is as close to universally preinstalled as you're going to get.

> Also didn't realize it wasn't agentless which is pretty critical imo

Ouch, yeah that outright disqualifies it for me.


Now this is something I can get behind! Very cool work. I think bash gets a bit of a bad rap and it's very popular to bandwagon hate on it, but it's one of my favorite tools ever.

This looks really effective! I do something similar to keep track of what's on my workstations: I just have one large idempotent shell script to set up all the software I use. To help myself not lose track of updating it, I have a little helper script to detect packages that aren't tracked in the config. I've recently thought about breaking it up into smaller "module-style" scripts. Maybe this could be a good fit (with minor adaptations)!

Really neat. I still need to evaluate it more, but this is the kind of thing I've been looking for for small infrastructure that isn't likely to grow (like home services or a small office, etc.).

I'm all for trying simpler approaches and have written some tooling in bash myself, but at this point Python is pre-installed on many servers just like bash is, and a large benefit of Ansible is the vast number of modules already written for it.

Imagine if this grew to support all of Ansible's features and modules, but did so in bash.

shudders


it requires an agent, so it's a tiny Bash alternative to Puppet, but that doesn't sound remotely as cool

I mentioned it above already, but I've been experimenting with an agentless puppet-lite:

https://github.com/skx/marionette/

Early days, but the primitives are enough to do useful things on my own systems.


Currently I do a fresh install of Ubuntu every time I start new work (every 6 months). Normally I spend about 6 hours getting my machine configured (installing software, configuring my terminal, etc.).

Will Aviary be able to solve this?


Maybe you have an involved set up and already do these things, but surely you could just write a bash script that installs all the software you need and store it with your dotfiles.

After a new Ubuntu install you can just pull down your dotfiles and run your install script. Aside from some things like ssh keys, gnome tweaks, etc. - this approach would likely get you 90% of the way to set up and would take ~15 mins.

Arriving at this set up would take a little bit of time, and require periodic maintenance, but given a weekend you could have it set up and tested.

Here[0] is an interesting way to store dot files.

[0] https://www.anand-iyer.com/blog/2018/a-simpler-way-to-manage...


I've been working on a localhost-only system for automation myself, inspired by puppet but with far fewer modules:

https://github.com/skx/marionette/

As of this week it can now pull down docker-images from dockhub, which was something I'd been needing to rebuild my own systems. Early days, and it is as much proof-of-concept as anything else, but it seems robust and reliable in my own use.

Of course I need to add "written in go" to advertise properly!


Does this save time in writing your own personal automation or make that personal automation less error prone? How would you compare it to just writing your own bash scripts or using something like Bashible (https://github.com/mig1984/bashible)?

Making things repeatable and scripting will usually be a time-saver (unless you spend 100+ hours writing the automation and never reinstall your hosts!)

For me it was a learning experience as much as anything else. I'd probably have stuck with ansible, puppet, or similar established project if I didn't want to learn/experiment.


On first glance this looks awesome. Certainly will be better than the bunch of scripts I've written to provision my laptops.

Fingers crossed it will work for you, and others!

It replaces a simpler alternative I wrote in the past which used ssh/scp to upload files and run commands:

https://github.com/skx/deployr

Right now I have a git-repository containing per-host config. So I can run something like this via cron:

     cd /srv/config/$(hostname --fqdn)/
     marionette main.in
I should document that, and the inclusion-facility explicitly. But otherwise feel free to file bug-reports/feature-requests if you have ideas for modules that would be useful.

Wow, perfect timing. I was just looking for something like this. I haven't looked at it too in-depth yet, but the design goals align with what I want. Minimal configuration management that pulls config from git and runs locally. Ansible pull, salt masterless, chef solo, etc. work, but they weren't designed to be run that way, so have unneeded complexity (and are kind of heavy for what I want).

I have been working lately on something a bit similar, but in pure POSIX sh. It's called primer (https://github.com/efrecon/primer); it has no agent and cannot work on remote hosts, but supports amalgamation for easy remote installation and operation. Primer is very much docker-oriented in the sense that it seeks to automate the installation of all tools before a docker project, then supposes you run docker-compose for the rest of your projects, or orchestrate in any other way.

> Installation

> curl https://aviary.sh/install | sudo bash

I would really try to avoid this, especially when it's targeted at sysadmins (I guess DevOps nowadays).

What's the big deal about having this instead:

> $ wget https://gitlab.com/dchester/aviary.sh/-/archive/1.3.2/aviary...

> ./aviary-install.sh


Better still:

    wget https://gitlab.com/dchester/aviary.sh/-/archive/1.3.2/aviary-install.sh
    less aviary-install.sh
    ./aviary-install.sh

Any sysadmin (or devops engineer) worth trusting with root on production machines a) knows how to convert from one to the other b) knows there isn't a difference between the two unless you actually go look at the script in between the wget and the execution (and, probably, go look at what it downloads too and read through all the code).

On non-production machines the first one is easier.


I like having the option, tbh. I usually self host the install scripts, which removes most of the potential issues.

I’ve been waiting for a tool like this.

Here you go! Yeah, I've used a bunch of CM tools over the years, and it always seems like you're learning a DSL of one sort or another, just to have it translate into the bash that you already know.

And don't get me wrong, I love bash. I automated my first job using bash. But relying on bash just relegates aviary to a certain job description and thus to low-value, domain-specific tasks. Seems like a neat way for a linux admin to run a homelab, but not something that's easily adopted by larger teams working across OSes, network platforms, or integrating with external services.

Not everything needs to be for large teams. There’s value in simple solutions for small teams.

YAML is a standard for so many different tools now that I don't think it's really considered a DSL.

I think there is a Genuine Enterprise Need explosion at certain scales. It is not box-ticking exercises or empire building, but sometime after (my guess) 1,000 units of anything (users, servers, customers, whatever) you start finding that "simple as possible" does not help that much anymore. LDAP support becomes useful, and so on.

Maybe, I hope maybe, it's not a real portal that we pass through, but it does seem so.


The metric is number of different things you do as an organisation, or number of bored staff.

You could probably keep things simple for longer if you issue everyone with a new fidget spinner monthly. A lot of what I see is just people who can’t resist fiddling with stuff that works until it doesn’t.


The usual approach (cheaper than fidget spinners) is to have so few people and so much work they only do the barest essentials

Yeah, but that doesn't work when you have middle managers whose pay is tied to the number of people that they manage.

I don't care what language or paradigm the tool is written with, Configuration Management tools are evil and should be avoided at all costs. I would literally rather maintain a crappy server and fragile software install manually and treat it as a pet, than use any CM tool at all and pretend my servers are cattle. In reality, the CM becomes the pet.

The solution to CM is to use immutable infrastructure, and versioned immutable artifacts. With these paradigms, state never drifts, so there is never a need for configuration management at all. Everything becomes stable and predictable and you no longer have to maintain a finicky pet.

But how do you bootstrap your systems, you say? The simplest way possible: make a crappy procedural program that bootstraps the very beginning of your system just enough to push versioned immutable artifacts and run arbitrary commands (essentially just "scp", "reboot", "docker pull", "exec"). With cloud-based systems, you shouldn't need to bootstrap anything at all. Build your versioned system images and containers, deploy them, destroy them, re-deploy them.
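
A minimal sketch of that "crappy procedural bootstrap" idea (the host argument, registry, and container name are all made up):

    #!/usr/bin/env bash
    # push.sh HOST VERSION: pull a versioned image and replace the running container
    set -euo pipefail
    host="$1" version="$2"
    ssh "root@$host" "docker pull registry.example.com/app:$version"
    ssh "root@$host" "docker rm -f app 2>/dev/null || true"
    ssh "root@$host" "docker run -d --name app registry.example.com/app:$version"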

No offense meant to Aviary, I'm sure there are still legacy systems that require some CM before they can be abandoned, but I really hope people building new systems will abandon them ASAP.


If you like rerun, drist, or bashing, I made a short Go program to replicate the experience.

https://github.com/tongson/rr

Yeah, naming conflict with the debugger. Was a simpler replacement for rerun so the shorter command name made sense to me.


I do something sort of similar, also using git to maintain the inventory of what to run on what machine.

In my case, I'm using docker-compose.yml files for each machine, and then running:

  docker-compose pull
  docker-compose up -d
We initially wanted to use Watchtower, but it didn't work well for us.


This is a great feature, introduced in Feb this year. It addresses a substantial pain point with docker-compose.

Thanks for pointing this out.


Indeed this does look very useful.

I had a quick glance over this, but for contexts which are defined as remote ssh hosts, I couldn't see where the images are obtained. Are they being built remotely on the host, or are they being downloaded from your local machine over a tunnel or via a docker registry?


Nitpicking: why pull separately? It will pull if no image is stored locally, or even if the `docker-compose.yml` changed.

Stale images can be a reason. If I specify `image: foo:latest`, then push new versions, it won't automatically pull new versions unless you do an explicit pull.

Two reasons I've done this in my own Makefiles: predictable start times, and separation of failure-prone steps.

How is this idempotent?

  cat <<EOF > /etc/motd
  "Ever make mistakes in life? Let’s make them birds. Yeah, 
  they’re birds now."
  --Bob Ross
  EOF

Paraphrased from Wikipedia: "A function looking up a customer's name and address in a database is typically idempotent, since this will not cause the database to change. Similarly, changing a customer's address to XYZ is typically idempotent, because the final address will be the same no matter how many times XYZ is submitted. However, placing an order for a cart for the customer is typically not idempotent, since running the call several times will lead to several orders being placed. Canceling an order is idempotent, because the order remains canceled no matter how many requests are made."

The greater-than redirection operator fits that description, as it is normally implemented as an atomic write. Note that the definition says nothing about optimization or efficiency, only that the result remains unchanged.

Web developers have bastardized the term and take it to mean many things it does not because of the definition of HTTP GET, even though no GET operation is ever idempotent in the real world.


Looked good but I needed to leave the site after seeing they suggested doing

curl https://aviary.sh/install | sudo bash


What’s the problem with running a script this way? Several major tools install like this (docker, netdata...) here’s a take on this that was posted to HN a while back: https://www.arp242.net/curl-to-sh.html

The problem is that this way, you're giving total control of your machine to a script that curl will download from a website, before you check the script.

A better way to try a new software is to download the sources, check them, build/install/run them under a low user with lowest required access to your system. Even better, do this in a virtual machine.


Discounting an entire tool just because you don't like their recommended installation method (which you're hardly forced to use) sounds rather strange to me.

I think you should reconsider. Curl-bashing (or curl-kshing or whatever) at least ties the download to a domain name in control of the project and uses https. The alternatives - such as serving up a .deb or .rpm with a digest - aren't a good fit when the whole point of a project is to provide a single portable shell script (ok it's only a bash script), and especially when the project itself is acting as an installer of sorts, like this one. Would you rather have dozens of aviary.sh scripts with different versions and potentially malicious modifications flying around on github and download sites?

Very nice! Ansible can be a little heavyweight for some of the things that me and a lot of people do. I'll definitely be taking a look at this.

Typically I use Packer + Terraform and it works generally well (if you're going with the immutable-servers philosophy). However, oftentimes immutable servers are hard to actually do in practice. If Hashicorp added a new product similar to Ansible / Aviary.sh / Fabric, I think it would complete the entire DevOps loop under a single ecosystem.

If you're sticking with hashicorp tools and use consul, then something like consul-template should work. Your packer builds would just have to copy consul-template configs and template files for the things you want to manage. Each template watches the consul kv store for matching keys, and rewrites the target config file using their values. You can instantly rewrite configs and execute follow up actions across your fleet by updating a consul key value.
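
A sketch of a single consul-template stanza doing this (the paths and command are illustrative):

    template {
      source      = "/etc/consul-template/app.conf.ctmpl"  # template referencing consul kv keys
      destination = "/etc/app/app.conf"                    # rewritten when a watched key changes
      command     = "systemctl reload app"                 # follow-up action after each rewrite
    }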

Agreed, Consul is an elegant solution for managing files, but what about running commands?

I don't see it used very often, but Consul actually can execute remote commands. You do have to enable it in the config though.

https://www.consul.io/docs/commands/exec

edit: Also, Nomad would be another choice if you're looking at hashi tools.


cloud-init?

You can hook ansible roles into your Packer process pretty easily. That's what I do at least.

Nice, I plan on using this

> bash is just fine (yes, it is)

No, it's not. It's 2020, we need to stop it with the stringly-typed, footgun-filled madness that is bash scripting. We deserve better than this.

Unless I'm missing something, this README example already contains a bug:

    template $(dirname $0)/motd.template > /etc/motd
This will fail if the script path contains a space.
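
Quoting both expansions avoids the word-splitting:

    template "$(dirname "$0")"/motd.template > /etc/motd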

Ha, Ansible parses and 'executes' YAML files using Python, but not before passing them (repeatedly?) through a template library (Jinja2) originally designed to produce HTML pages. They even came up with a way to write for loops in YAML... What I'm trying to say is that sometimes great software is written using (in some people's opinion) terrible technology. Not everything needs to be written in idiomatic Haskell or Rust.

Personally I have to say that I started writing more bash scripts again recently, I even wrote one that interfaces with the GPIO ports on a Raspberry Pi. Works like a charm, is concise and (I find) also quite readable: https://gist.github.com/adewes/7a4c20a5a7379e19d78ba54521d3d...

But then again I'm also a person who enjoys Perl, so maybe it is time for me to be banished to a lonely island :D


> originally designed to produce HTML pages

Jinja2 can produce HTML pages, but it was never exclusively designed for that. In fact one of the earliest uses of Jinja2 was generating LaTeX.


I stand corrected, didn't know that!

Perl was great for encouraging that kind of thinking with the old motto "there's more than one way to do it". I'm glad I started with Perl because it's helped me think about solutions I may have otherwise dismissed.

Bash is far from perfect, but it's readily available at many people's fingertips, and it gets you to some solutions really fast. This is cool.


Yes it is.

If you have two type systems and you don't want them to interact, gluing them together with stringly typed bash is a good way to keep their scope small. As a user interacting with multiple programs by different authors, that's a very common use case.

Giving your type systems boundaries is a good idea. The alternative tends towards one-type-system-to-rule-them-all, an idea that Russell thoroughly explored in the early 1900's, Godel proved to be unmaintainable in the 30's, and that everybody except the Java and C# crowd has since abandoned as a dead end.


It is a pretty sloppy and archaic language. We need an updated version that keeps the string typing and inline execution of external binaries, but has modern syntax, plus a few more conveniences and guardrails.

You might be interested in Powershell. I used bash for 20 years and considered myself an experienced user and programmer with it. Comparatively, my experience with Powershell is relatively short, but I switched to it and the first time I needed to handle a piece of structured data I was hooked and haven't looked back.

I recently discovered Powershell, in a Windows context. It's weird, but has interesting ideas, and is better than bash for sure.

Supposedly it supports Linux, but I haven't actually tried.


It does, and MacOS, and I found that once I changed to bash-style tab-completion and used PSReadLine to get the keybindings I was familiar with, it becomes a pretty comfortable shell. I remove some default shell aliases because PowerShell's notion of file modes and permissions ends up not mapping well, but you have all your normal Unix commands available so it's not really a big deal. Quoting is weird, but it's not worse than bash, where quoting is also weird.

It's surprising how well objects-as-arguments and objects-as-pipelines work, even when mixing with standard Unix commands. I appreciate standard parameter handling and command naming, and I think it pays off on the "Unix philosophy" (small program units doing one thing well) better than bash and shell tools.

Those standard approaches produce an aesthetic I didn't like for a long time (Invoke-RestMethod -Url ...), but aliases cut down on your typing (irm instead of Invoke-RestMethod) and I now appreciate the discoverability of standard commands. I keep discovering little gems like default parameter values and the various ConvertFrom-* commands, some from modules.

I was only taking issue with the objection to string typing. I totally agree that the rest of bash requires a bit of stockholm syndrome to love.

I haven't yet taken the time to play with it but I'm hoping www.oilshell.org addresses some of the warts.


I don't disagree with your point, in this context. Bash can be good enough for all kinds of things. Or maybe not. But it's not a simple binary equation--there are tradeoffs that might lead you to it or away from it for a particular situation.

I do think there are a few issues that make bash non-robust for this kind of task. If you only look at this through the lens of the "type system", you have a good point about gluing views together, but I think "just strings" is not enough typing. I think I and the industry have had better luck with a base system more like JSON, instead of "just strings". It provides enough structure so that there's a straightforward way to turn wire protocols or other external data into your Java objects or whatever, lets dynamic scripting languages handle it in their native base types (which often map pretty well to what JSON offers, while bash's really don't), and is fairly reasonable to manage on its own without special tooling.

However, that aside, even if "just strings" were fine for everything, the fact that bash is designed as an interactive shell with barewords for strings really exposes a lot of gotchas for handling data from different sources, just from the sometimes-diabolical problems with quoting and I/O. In the absence of actual function return values, capturing output and storing a process exit code, which might sometimes be the last process's exit or maybe something inside the pipeline if you've set the right global options, is error-prone. It's not robust to hope that the only output coming (on two different output streams) constitutes a complete and correctly-anticipated return value from a subprogram. Errors associated with empty variable values are frequent for this reason.

Secondly, bash lacks a lot of features which you need external programs to provide, and the dependency-management glue between them is not good. You'll need find, ls, sed, awk (and you may not remember the days with multiple significant Unix platforms and differences in behavior between gawk, nawk, and awks of different flavors), cut, sort, ps (/usr/bin/ps or /usr/ucb/ps), all stored in different platform- and environment-specific locations and exhibiting different behaviors, without a clear way of constraining your dependencies or even asserting them correctly. In the modern world, you can add to that list curl, jq, parallel, and other shell tools. Fairly common tasks like YAML handling are even more of a clusterf. The fact that these tools were all designed to work for their own reasons and not as extensions to a common language isn't *their* problem, but it becomes yours if you're trying to deliver a robust and serious piece of software with it. You'll do a lot of DIY around discovering their edge cases and handling them.

There are a number of other problems but I think they're lesser in scale, like the lack of a standard way to deliver and use extensions. It's hard to do safe file access. Bats is a pretty good testing framework but no one (including the author of this script) seems to use it. But what I don't really see are many advantages over a regular scripting language, for something like this. The fact that "bash is already installed" is only an advantage as long as your script isn't going to do anything interesting; once it starts doing interesting things, it needs to worry about all kinds of other things being installed.

That said, I do think it's a little unfair to tear the author apart over something that really shouldn't be any more ambitious than a cron job doing "git clone https://config-repo/config.git && config/manage.sh". The fact that they call it an "alternative to Ansible" is a little eyebrow-raising, but I'm sure it is: to them. The question isn't really whether it does what Ansible does, but if you can manage a machine's configuration with a shell script, and of course the answer is yes. But the answer to the question, "Is bash a fine language to write a general-purpose configuration management system in?": the answer is no.


I don't really mind the string typing of Bash. Shell commands, their arguments, and their outputs are all strings anyway, so it makes sense to give special treatment to strings.

That said, features such as word-splitting are terrible footguns that only exist because of the burden of backwards compatibility. Hackernews, what other languages do you suggest as Bash replacements that don't have as many opportunities to shoot yourself in the foot?

This is an area where I never found something I truly liked. A lot of people suggest Python for this kind of task, but when I have a small script that is mostly doing pipelines and command substitutions with "$()", the python version with subprocess is much more verbose.


There's a wonderful library called `plumbum` [0] that lets you execute external binaries and capture output using Bash-like piping syntax, while preserving all the goodness of Python's syntax and flow control. Highly recommended.

[0] https://plumbum.readthedocs.io/en/latest/
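
A minimal plumbum sketch (the commands themselves are illustrative):

    from plumbum import local

    grep, wc = local["grep"], local["wc"]
    # build a pipeline bash-style; calling it runs the commands and returns stdout
    count = (local["cat"]["/var/log/syslog"] | grep["error"] | wc["-l"])()
    print(count)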


Are you really really really doing mostly pipelines and command substitutions? Most of the time when opening a script with such a claim, there is maybe one pipe that makes sense, and all the others are just things like cut, grep, and awk to trim off trailing newlines. Substitutions are usually from things like date. All things that are easier to manipulate in a typed language.

When I open `av` in vim, shellcheck spits out 167 warnings. This is not written robustly.

FWIW shellcheck complains about many false positives.

But in this case, their code is comparing strings to ints like on line 126:

    while [[ "$#" -gt 0 ]]; do
Spot checking code for basic bash mistakes like this tends to tell me the quality of the code overall, as these are basic mistakes. $0.02

If $# is a number, aren’t "$#" and $# exactly the same in Bash? Enclosing things in double quotes doesn’t change their type, since everything is already treated as a string.

https://www.gnu.org/software/bash/manual/html_node/Double-Qu...


While the other reply to you is correct, the actual answer to your question is no: bash interprets leading 0s (zeros) in the C sense, where they indicate octal, not decimal. So this very small bash will fail for you:

    X="08"
    if [[ "$X" -eq 8 ]]; then
      echo Y
    fi
Use `bash -x` to run it in debug mode:

    $ bash -x test.sh 
    + X=08
    + [[ 08 -eq 8 ]]
    test.sh: line 2: [[: 08: value too great for base (error token is "08")
This comes into play if you're dealing with things like IPv4 addresses, where it is technically allowable to have leading zeros; by convention it's never done, however, because of code traps which see those leading zeros as octal, not decimal.

Your example only demonstrates the effect of `[[`. If you replace it with `[` the error won't occur. In both cases double-quoting (or not) `$X` makes no difference in the context of your example.

[ is an external program called "test" and [[ is a bash builtin, they have many different behaviours beyond just trying to use one to prove the other wrong on a random internet comment.

I merely used that to show that the error message was tied to `[[` and not to the use of double quotes (which your parent comment was asking about).

It's true, but you're semantically overloading the variable syntax (by hinting that it's treated as string).

Plus, left and single operands in bash conditions don't require quoting, so a lean style is actually not to use them at all.


> …left and single operands…

That's false:

  x='foo bar'; [ $x = 'foo bar' ]; echo $?
  -bash: [: too many arguments

  x='foo bar'; [ "$x" = 'foo bar' ]; echo $?
  0

As I replied to your other comment, you cannot just replace [[ with [ and get the same behaviour. If you use the actual bash builtin [[ which we're discussing, it works as expected. Also, please use == when comparing, = is an assignment operator (in almost every programming language).

My bad. I interpreted “bash conditions” differently.

Is this actually a shellcheck error? It just evaluates to the integer number of arguments in $@, so I doubt it is (and I doubt it matters whether it's quoted? unless there's a bash back-compat issue here?)

Is there a bug in the command line parsing there? I can't see it.

In this specific case it's more of a nitpick, I think. Even shellcheck won't demand double-quoting `$#` because everyone(?) knows that it'll always be a number and have no whitespace in it nor evaluate to an empty string.

Do you refer to shellcheck in general or in this specific case?

I do a non-trivial amount of shell scripting, and there is only one warning I disable in shellcheck on a general basis (SC2016).

The author of this script clearly doesn't like quoting where the underlying value is guaranteed not to include special characters (or spaces), although this is somewhat inconsistent with the fact that he's using quotes in several patterns where it's not needed (e.g. left or single operands in bash conditionals, or in variable initialization).

One can consider always quoting a defensive practice. In a fragile language like Bash, I personally think defensive practices are very important, so the first impression I get from a Bash program not adopting them is "not great".


It whines at `ldd`, too, but that ain't stopping me.

Can't say anything about this tool, but compared to ansible, bash is mostly fine. In my opinion ansible is a (and pardon my language) clusterf* of a tool compared to the alternatives. Certainly, if bash had the likes of inventories, roles and other features to make maintenance and reuse easier, I'd choose that over ansible.

Ansible is writing bash scripts in YAML, templated using a system that doesn't grok YAML and causes parsing errors due to whitespace, interpreted by schizophrenic Python that might be 2 or might be 3.

... I guess pure bash might be better.


I have my issues with Ansible, and I'm really not a fan of "templated YAML"-based tools, which has been an unfortunate trend that's not limited to Ansible (GCDM and helm share this problem). But if you're writing a lot of bash scripts into your plays you're probably doing it wrong.

You can run pure bash from ansible, either by copying a script or using the shell module. Examples (both with sudo):

    ansible all -m shell -a "hostname" -b
    ansible all -m script -a "script=/home/user/myscript.sh" -b

I love to throw together a quick inventory list and run adhoc ansible commands against a bunch of servers.


I think you are forgetting that serious operators do not have spaces in their paths.

*People who are used to working with bad tools don't have spaces in their paths.

"C:\Documents and Settings" still hurts after all these years.

It still hurts today, because Microsoft puts a space between your first and last name in your profile if you are using a local account but not for a domain account, which causes all kinds of weird issues if you ever have to use cmd.exe...

"C:\Program Files(x86)". Who ok'd that?

It's a design decision and a good one, at that. Forcing 1 million developers to handle spaces properly while they spend 1 billion hours developing their software is preferable to 1 billion regular users having to remember not to use space, a perfectly good separator, in their billions of billions of hours of usage of said software.

Speculation, but I suspect that Microsoft explicitly put both user data and programs in paths with long filenames and spaces to help force software to support those things. That always seemed like a good thing to me, it's ridiculous that we still have software that breaks on paths with spaces.

I haven't looked in a while, but last I checked you could also still get to it via C:\PROGRA~1? (This was years ago, that might have been 'fixed'?)

These are called 8.3 filenames. They're a filesystem feature. They're still created by default in Windows 10 on NTFS, but can be disabled or stripped using `fsutil`.

cough make cough

I don't know about "serious," but trustworthy operators use tools that work correctly in the presence of weird states because they take responsibility for their work instead of blaming others.

"Serious operators" never want to put their files into iCloud Drive? They must be using butterflies then

https://xkcd.com/378/


"Serious operators" probably use a cloud storage solution that supports WebDAV/FTP/SFTP, like box.com or Google Drive - "real operators", on the other hand, have a self-hosted solution (-; And IMHO relying on iCloud for essential data storage probably isn't a good idea.

I find iCloud much better than Google Cloud and I'm pretty sure I qualify as a "serious/real operator" any way you want to call it. I have no time or interest in hosting a custom solution. And I certainly have no need for FTP.

But even if I wasn't experienced, quit this gatekeeping crap. Let people use the tools that work for them. It does not in any way reflect on how "serious/real" they are.


> "Serious operators" probably use a cloud storage solution that supports WebDAV/FTP/SFTP, like box.com or Google Drive - "real operators", on the other hand, have a self-hosted solution (-;

I didn't mean to discredit you for using iCloud - if it came across that way, I apologize. My original comment was meant to be tongue in cheek. I've got to admit that I'm a bit gob-smacked by your reaction... After all, my comment was similar in structure to yours. I even quote-unquoted the terms, and the "real operator" part was a reference to the editor-war xkcd you mentioned. Taking your post literally, I could accuse you of doing the same thing by assuming that real operators were Mac users. And I'm sure there are many use cases where hosting your own cloud storage would be absurd. But, re-reading my comment, it didn't read like a flame-war-instigating piece of propaganda: "Calling all real sysops to arms against the user threat. Grab your cattle prod, fellow comrades, and crank up the voltage!"

> And IMHO relying on iCloud for essential data storage probably isn't a good idea.

I don't condone gatekeeping (although, to be perfectly honest, I don't see how my comment could be interpreted as elitist. It's not like the learning curve for GDrive or any other solution with a better cross platform support is _that_ high), but I don't see a reason why I shouldn't voice my opinion on a tool, even if responding to something obviously meant as a joke. Of course you're free to use iCloud. Personally, I wouldn't feel comfortable with the thought of my critical data being stored on a platform that, historically, hasn't always been known for being extremely stable. Maybe I'm just biased. If you feel comfortable using it, then use it. I won't stand in your way.


I genuinely apologize as I think I misunderstood. Of course I'm fine with a bit of jokes about tech but I think I didn't catch any intended sarcasm in your comment—in fact I think I took the direct opposite intention.

I'm admittedly sensitive to gatekeeping. I do think it's one of the biggest reasons the tech industry is the monoculture it is.

Anyhow, I appreciate you clarifying and hope I didn't offend too much.


No problem. The subtleties of human speech don't carry over to text very well ;-)

rsync or btrfsync for the win

On the contrary, serious operators write programs to handle spaces in the paths.

That’s just objectively false.

> No, it's not.

Then come up with something better.

It will need the same or better ease-of-use for non-programmers, should function as a shell (so that their everyday use also teaches them scripting), and should keep the flexibility for people who don't care about typing or other fancy programmer stuff and just need to get work done.

If your first instinct is to say "python", you don't understand the problem.


A Python script is (generally) easier to read, easier to write, easier to modify/maintain, easier to debug and easier to test.

Bash programming is not easy and full of loads of gotchas.

Useful for running multiple commands, but when you start to need to do programming things like conditionals/loops/variables, then using an actual programming language makes more sense.


You are misunderstanding or misrepresenting what I said. If it wasn't obvious, I meant that bash scripting is not okay _for this sort of complex work/projects_.

On paper I agree with you, but that has not been our experience in practice.

Bash "just works" with some very well understood limitations. Every one of these "better" solutions introduces 10x the complexity with 10x the limitations (always undocumented).

After spending dozens of hours configuring them thanks to terrible documentation, generally taking multiple support tickets (if there's anyone to talk to at all), we get it working. Then, after using it for a period of time, they fall flat on their face in far worse ways than bash.

Granted I'm only managing hundreds of servers, not thousands or more, but more often than not a

    for i in {000..199}; do ssh machine$i "some command"; done
is more reliable, with better error handling, than all of the other solutions.

Edit: As long as linux/unix maintains the "everything is a (plaintext) file" approach, I have trouble picturing a better solution than text/string based interface like Bash. You can build more front-end complexity on top of it, but you're always going to end up back in the same place.


> Granted I'm only managing hundreds of servers, not thousands or more, but more often than not a

>     for i in {000..199}; do ssh machine$i "some command"; done
>
> is more reliable, with better error handling, than all of the other solutions.

This is absolutely not the case:

Off the top of my head: there is a network error that hits while running this against 5% of machines. What is the state of those servers?

If one is managing more than a few machines, one should absolutely use something like Puppet/Ansible/Salt/Chef with a repo to ensure that nothing is ever lost.


And all of those tools (at least the ones I've looked at) use SSH internally as well and are just as susceptible to network errors, except they tend to fail with less feedback than the raw ssh command.

I never take it on blind faith either:

    for i in {000..199}; do ssh machine$i "echo new_setting > /file"; done
    for i in {000..199}; do ssh machine$i "grep new_setting /file"; done
    for i in {000..199}; do ssh machine$i "reload service"; done
    for i in {000..199}; do ssh machine$i "verify reload"; done

As I was taught "never make a change without verifying it". All changes are documented as well.


> SSH internally as well and are just as susceptible to network errors, except they tend to fail with less feedback than the raw ssh command.

Maybe some. Puppet, for example, fetches a manifest and applies it on the server. Failure to apply a manifest triggers re-running the manifest. Small installations will ask the puppetmaster(s). Large installations run masterless, where they fetch the repo and run it. I would be highly surprised if Ansible did not have this mode, even if it is not a popular way of doing it.

> for i in {000..199}; do ssh machine$i "echo new_setting > /file"; done
> for i in {000..199}; do ssh machine$i "grep new_setting /file"; done
> for i in {000..199}; do ssh machine$i "reload service"; done
> for i in {000..199}; do ssh machine$i "verify reload"; done

You are running this sequentially and by hand?! What happens when the server is dead and you need a new one? You provision it by hand and go through the stages to get it into the state where it can accept the same cattle command?


That's fair. Maybe it's worth looking into again.

> You are running this sequentially and by hand?! What happens when the server is dead and you need a new one? You provision it by hand and go through the stages to get it into the state where it can accept the same cattle command?

Depends on the change, but generally something like that. Change + Verify. It really doesn't take much more than 10-15 minutes for any given change. We've been at a stable state for about 2 years so changes are rare at this point, other than our security update cycle.

For rebuilds, we pull periodic disk images (monthly + any major changes) and can stand up a new server in about 20 minutes, almost all of which is hands-off.


We also use a base image, which is a core OS configured to our liking with puppet, plus a minimal bootstrap core that allows the system to ask "What am I supposed to be, again?" If we know that any of the instances are currently functioning or were functioning as intended, then we know that a new instance will be functioning as intended.

After that it takes between 60 seconds and 5 minutes for it to be inserted into service regardless of the complexity of its configuration and registration.


I have the complete opposite experience.

Places that have hundreds of bash scripts that they manually run in a loop like your example are normally a one-off nightmare of random changes in random places with zero consistency, tons of edge cases, and a lot of failure scenarios.

Ansible is very entry level friendly, we've had devs grok it in 20-30 minutes and write their own playbooks. We've administrated thousands of servers at scale. We can even use it to package immutable servers with Packer and Terraform, with mostly expected results.

I'd choose the latter every time.

At the end of the day it's a collection of python modules that people have invested in that lets you write steps as plainly as possible in yaml with some light logic involved.

Why reinvent the wheel?


We do realize that's a downside and are very careful about documentation. We only have 2 admins on the system, and both of us are well aware of everything that has been done.

Our experience was that Ansible was amazing until it wasn't. It worked for a while, then any failure would leave systems in an unknown state that had to be verified via ad-hoc testing like this anyway.

Then, when Ansible ran into errors, it was just a pain to get working again. Granted we haven't tried in a couple years.

Funny, because to us, Ansible just feels like reinventing the wheel. It's a more complicated tool for pushing text based changes to a group of servers.


I fail to see how I could be doing either.

"[W]e need to stop it with the stringly-typed, footgun-filled madness that is bash scripting" is pretty categorical.


While I agree with the sentiment that bash is not the solution for an ansible-like thing, I also agree with what you're saying.

I consider myself an accomplished system engineer and software developer. I use bash all the time. Not writing bash scripts is like saying not to use the terminal; it's just cacophony.


With bash at least I don't have to read the documentation each time I want to create a file or anything else...

> We deserve better than this.

What have we done to deserve better? Have you seen a community of people working on a bash scripting replacement lately? Has it hit HN? Does it have the same confluence of features, convenience, portability, simplicity, and utility that bash scripting has? If so, why aren't we using it yet?

The fact is, we all use bash because it's the best we have for what we really want to do. Bash is what we deserve. 31 years of "good enough".


> No, it's not. It's 2020, we need to stop it with the stringly-typed, footgun-filled madness that is bash scripting.

Ansible is written in Python. You could make the same argument.


How? Python has an actual type system. A programming language that does strings for everything would be, for example, PHP.

Python has dynamic typing, which can be equally dangerous. In fact I’d argue it’s more dangerous. You will not throw errors until runtime, which will literally blow up in your face.

> which can be equally dangerous

Bash is weakly typed, which is a different concept than dynamic typing.

Dynamically typed languages prevent bugs that weakly typed languages don't, so from this point of view, the latter are unambiguously "more dangerous" than the former.


A dynamic type system is an incremental improvement over a weakly typed type system, and still will not guard against runtime errors. They are equally dangerous w.r.t runtime errors.

Not that I endorse the “equally dangerous” comment, but if you're interested in configuration management with a language based around a powerful static type system, may I note the existence of Propellor? https://propellor.branchable.com/

It gives you the option to use static typing, if you so desire.

I'm not sure how not having a type system is safer than having a type system. What makes it feel more dangerous?


When talking about Python in 2020, I think most people imply you are running Mypy, which does a very good job of statically analyzing python, almost to strongly-typed-language level. Plus, as others said, weak and dynamic are not the same thing.

So does Bash, though.

And Bash does not support static typing at all, unlike Python. Not sure what their argument even is.

Python doesn't support static typing either unless you're using mypy which is experimental. Bash and Python both however have strong types, although Bash's are weaker than Python's.

Right, but Python gives you the illusion of safety. And the person I was replying to was trying to dismiss this interesting Ansible alternative because it was written in a language he doesn’t like.

Python with type annotations can actually be pretty safe. There are also typechecked Python variants being developed. At any rate, I’d say the defect rate in Bash is incredibly higher than it is in Python, and 90% of that is due to unquoted string expansion. Try putting spaces in your paths and you’ll quickly see how 75% of the Unix stack breaks.

Somehow that is not a real problem in practice. Maybe whitespace is not needed in paths that much. If a need to store a random byte string in a file name appears once in a year, one can encode it into hexadecimal or base64 with % replacing /.

That's sort of like saying, lack of pointer safety is not a real problem in practice. Yes, you can work around it, but when it's a problem for production systems dealing with sensitive or untrusted data it's a huge problem.

I don't see how the two compare. As a programmer, you need pointers for dynamic allocation. You don't need to use whitespace in paths, ever. It is a convenience for the user.

Whitespace in paths occurs, similar to how buffer overflows occur. It's the job of the programmer to understand this and guard against it, but many don't. And all you need is one bash script that does something like this

  rm -rf $path
to go from working one day to having deleted an entire directory tree just because someone named their file

  "foo ."

> But it's too funky you say. Well, yes, bash can have its quirks. But we need some funk every now and then. Let's just embrace it!

My god, no, stop. Funkyness in production equals outages.


I'm not sure how bash is considered funky; it's nearly first-class on all boxes. The nice thing about this tool is you just use `script`, type out your commands, copy them to your modules, and you're done.

As bad as Ruby is, it’s still better than Bash and you can do all the same effortless execution of random commands in your script like you would in bash. If one can’t be bothered to write it in a real language like Rust or even Python, you could at least do Ruby.

What makes python more of a real language than ruby?

It’s more mainstream for tools like this.

Please explain. Have you heard of chef? That is ruby and it can be beautiful. Chef > Ansible. Chef server is meh.

The point wasn’t that Python is better than Ruby, they are about the same IMO. The point was they are both better than Bash, and Ruby actually makes it super convenient to write bash-like scripts, so there’s no excuse to not be using Ruby.

Eh.. Puppet? [ed: and chef..]

Python and Ruby are both functionally equivalent in my book. The difference is prevalence; for systems tools, Python is more common because it comes installed by default with more Linux distros and in my experience anyways, is faster to respond to breaking upstream issues and build failures, which is almost certainly just due to more widespread usage. I have nothing against Ruby, it’s a pretty language and does what you need and has good metaprogramming support. Just saying for a tool you want lots of people to adopt, it’s more mainstream to choose Python.

Right. I just can't think of a more mainstream tool in this space than puppet - that's why I thought your statement seemed odd.

Yeah, I was thinking more broadly about systems utilities rather than specifically state management.

Ruby has fallen by the wayside in recent years largely because its lunch has been eaten by Python and JavaScript, but mostly by happenstance, not as a result of the merits of the language itself. Ruby has comparable speed and arguably more robust niceties for building complex software vs. Python. See, e.g., https://leanpub.com/combinators https://computationbook.com/ https://www.toptal.com/ruby/ruby-metaprogramming-cooler-than...

When it comes to server admin, the libraries available for Ruby are miles ahead of anything available for Python.

Bash: just use bash! Nothing new for developers to learn or to debug. OK, I'm ready for my downvotes...


