Bashible: An Ansible-inspired deployment/automation tool written in Bash DSL (github.com)
132 points by vitro 8 days ago | 83 comments





Portability is pretty important in the deployment/automation space and it's hard to beat bash in that capacity. It just sucks that bash sucks so much.

Ansible already is pretty portable/flexible on target systems (the joys of being agentless and working mostly over SSH). You tend to need python and sometimes some specific modules (python-apt, for instance) on the target, but that's hardly a burden.

I think the Ansible modules can be executable binaries as well. I have heard of modules written in go.

Having written Ansible modules in POSIX sh, I can assure you that this is absolutely the case :) Internally, Ansible modules are just normal executables that expect a very specific set of parameters and return a very specific JSON-encoded format. I believe there's also a newer pure-Python API, but I ignored it after realizing that it was much easier to just use the old one. (...and, of course, I really enjoyed the notion of writing a module in shell.)
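For anyone curious, a minimal "old-style" module in plain sh can be tiny. This is only a sketch from memory (the argument parsing is deliberately naive and the file/variable names are mine; real modules handle quoting and the WANT_JSON variant properly):

    #!/bin/sh
    # Ansible invokes old-style modules with the path to an arguments file as $1.
    # The file holds key=value pairs; here we naively grab a single "name" argument.
    ARGS_FILE="$1"
    NAME=$(sed -n 's/.*name=\([^ ]*\).*/\1/p' "$ARGS_FILE")

    # Do the real work here, then report the result as JSON on stdout.
    printf '{"changed": false, "msg": "hello %s"}\n' "$NAME"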

Agreed. Ansible works, but it works so very slowly because it zips and transfers scripts/binaries over to the remote host without caching.

I hacked up a simple system, in golang, which uses an SSH connection to run commands and transfer files. Of course I simplified the problem to really only uploading files, running commands, and simple template expansion. For a lot of services where you just need to download a binary and write a config file it works well, but it's nowhere near as useful as Ansible's module system:

https://github.com/skx/deployr/


It's not so cut and dried, eg OS X ships with an ancient bash that's missing many features, and of course many operating systems (eg BSDs, Windows) and Linux distributions don't include bash by default.

bash 4.3 introduced a lot of changes (like the addition of declare -n); there are many versions of bash, so it's not something set in stone.

https://lists.gnu.org/archive/html/info-gnu/2016-09/msg00012...
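For anyone who hasn't seen it, declare -n is the nameref feature; a quick sketch (assumes bash 4.3+; the variable names are just for illustration):

    #!/usr/bin/env bash
    # declare -n makes "ref" an alias (nameref) for another variable, by name.
    declare -A config=([host]=db1 [port]=5432)
    declare -n ref=config
    echo "${ref[host]}:${ref[port]}"   # prints db1:5432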


How about this? Show HN: Posixcube, a shell script automation framework alternative to Ansible (github.com) https://news.ycombinator.com/item?id=13378852

I’m surprised there’s no explanation of why Bash was chosen for this.

I'd imagine it's because it's available everywhere (even openwrt routers and those things), and it's easier to get away with not having to use sudo (depending on the task you are doing, of course). With Ansible, you almost always need python installed on the target host. Which, in 95% of cases, is not a problem. Still, it's not ideal to install a system package just because you want to do some provisioning (there are ways around that, but it gets hacky pretty quick...).

I haven't looked at this yet, but I really like the idea. Always wanted to have something like this for a few special cases. And if it's well designed, it could be a nice, lightweight and portable solution to do server provisioning, development project setups, use in Docker builds, etc.


> it's available everywhere (even openwrt routers and those things)

BASH itself? Or a Bourne-family shell? I thought most embedded systems were shipping busybox's ash as /bin/sh


Most *nix server and cloud nodes use bash for the shell. You're right about embedded systems but they're not usually the target platform for a config management system like this one or Ansible.

Yeah, I was mostly thrown by the reference to openwrt. I'm sure most/all GNU/Linux systems ship BASH.

Debian ships dash as /bin/sh, doesn't it?

But it also ships bash for interactive use, so #!/bin/bash will just work, which is what you should be using anyway in scripts that want bash.

If you use `env` instead, it'll pick up more recent versions of bash available on the path, such as the one installed by homebrew

    #!/usr/bin/env bash

/usr/bin/env bash?

It's common even for embedded systems which use busybox to provide /bin/bash because bash itself is not that large.

In my experience, Perl is available everywhere. Some sort of bourne shell is likely available everywhere, but Perl is a safe bet. (EDIT: actually, some systems have recently started removing Perl from their base I hear... Fedora?)

Neither are available on common small embedded systems that rely 90% on busybox.

Full circle DevOps. Nice :)

No more DSLs for deployment/config management tasks please. Too many already.

I'd rather more DSLs than more things with complex logic encoded into YAML.

I'd argue that Ansible is using a DSL, just one that happens to be expressed in YAML. I mean, it has loops, conditionals, various forms of subroutines (roles, plays that can be included)... I don't think that using a YAML format rather than being C-shaped makes it any less a DSL.

At that point you should at least use something elegant like s-expressions.

There are so many things I hate about ansible.



Could you elaborate on how and why s-expressions address the issues of YAML-based DSLs? Thanks

S-expressions have keywords, and don't require commas between list elements. That alone goes a _long_ way towards making s-expression based languages way more readable/writable than a similar data structure encoded in something like JSON or YAML.

Here's an example of a JSON coded query taken from a PuppetDB tutorial (PuppetDB uses a very lispy query language):

  ["and",
    ["=", "type", "User"],
    ["=", "title", "nick"]]
And the same code as you could write it in native s-expressions:

  (and
    (= :type "user")
    (= :title "nick"))
...and one way you could encode it in yaml...

  - and:
      - =:
          - type
          - user
      - =:
          - title
          - nick

Why are you encapsulating everything in lists in your YAML example? You aren't in your other examples.

    - filters:
        - operator: =
          type: user
        - operator: =
          title: nick
If the module author wants to, they could even remove the secondary list because it's the same operator, making this:

    - filters:
        - operator: =
          type: user
          title: nick
That's just as readable as your other examples, IMO. YAML doesn't have to be an unreadable mess; it's just possible to make bad APIs, which remains true for YAML just as for any other format.

Thanks for the example and explanation, this makes good sense. I apologize if this is a naive question, but is there a reason we don't see configuration management taking advantage of S-expressions?

The YAML syntax is awful for expressing those constructs though. It might be a DSL, but it's a very awkward one.

It can read pretty comparably to other languages, IMO

    # Write each item in the list to a file
    for ITEM in $LIST; do
      # …
    done

    - name: Write all items to individual files
      for_each: list
      do:
        # …

> complex logic encoded into YAML

This should stop too.


What's the answer then?

Pulumi is the only imperative one I know about, but I can't really opine on any of them.

Except...

When we find a new problem, we solve it the way we always solve things. And we like to solve things with tons of impenetrable abstraction. We hardly care about legibility of stack traces let alone ease of tracing through the code.

I'm sure 24-year-old me would disagree with this, but I think there's a qualitative difference between an arcane abstraction for getting text onto a scroll area on a screen and arcane abstraction that lets me accidentally undeploy all of my servers (24 year old me would be equally frustrated by both).

BDD test frameworks mostly look like imperative code, and after the initial shock of learning that some things are declared/registered immediately and executed only after everything else has gotten a chance to register, it's not that difficult to figure out how things work (until you have to step into the internals, and some test frameworks are okay, while others are a disaster).

I kinda think infrastructure as code should just be code. Give me something that looks like a test framework but with better adherence to Least Surprise.


I think a good analogy may be tests vs type systems, especially if you think about new vs old type systems.

Traditional deployments are just like traditional unit tests. All imperative, easy enough to work with, all useful, but if you don’t have good discipline it’s still really easy to sneak past them (ie, writing a test that doesn’t really test anything).

Some traditional DSLs like Puppet are like the type system in a language like C. It wants to help you declare invariants and validate them statically, but it kind of just sucks at it, and you still need a lot of discipline. And now you also need to be aware of your type system pitfalls. (Still better than no type system, though.)

I think what people want out of DSLs (and what others have mentioned in this thread) is something like a high-end modern type system like Idris: the ability to put useful, sensible invariants into your scripts and have them usefully validated. Now, like Idris, we probably won’t reach Nirvana on that for a while. (Stateful databases can throw a wrench in things, for example.) But for a lot of tasks, I think we are starting to know enough in general programming language theory as well as systems design that, as a culture, we can solve 80% of these problems with a really high level of reliability soon.


Applications with features for user stories.

AWS CLI is a very useful tool, because it can do simple things with AWS. In order to do more complex things, usually you have to "script" multiple commands together or write a Boto script in Python. But if the tool just came with those complex things already pre-designed as a feature, we wouldn't have to script it, it would be more reliable, easier, simpler, and nobody would need to bang their head against a DSL to get the functionality.

Nearly everyone on AWS who has wanted to try Fargate could use the following function: "Build me an ECS cluster, create everything I need for it (a VPC, security groups, etc), create an ECR registry, upload this Docker container to the registry, create me a Fargate service with tasks for the thing I uploaded to ECR, create one ALB for the cluster, add target group forwarding rules for each Fargate service in the cluster, and manage everything in Route53, including domain and certs".

Now, you could spend hours/days trying to set up Terraform or Ansible to do all of that, or script it with Bash or Python in a few hours. Or you could run something like "aws ecs user-story fargate-apps --cluster-name foo --service-container docker-image:latest --service-name some-img-name:latest --service-url https://my.custom.domain/some-img-name/ ". No DSL. No plugins. No cobbling together 100 lego pieces and reading 200 pages of documentation. No scripting. It's a very common user story, and the AWS CLI already has literally all of the functionality you would need.
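For contrast, the hand-scripted version today looks roughly like this (just a sketch; account, region, and resource names are placeholders, and most required flags and the registry login step are omitted):

    aws ecr create-repository --repository-name some-img-name
    docker tag some-img-name:latest "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/some-img-name:latest"
    docker push "$ACCOUNT.dkr.ecr.$REGION.amazonaws.com/some-img-name:latest"
    aws ecs create-cluster --cluster-name foo
    # ...plus task definitions, the Fargate service, the ALB and target groups, Route53, certs, ...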

But that won't happen, because the IT industry is purposely designed to be unnecessarily inefficient, complex, and expensive. If a tool like AWS CLI or Terraform just bundled this user story natively without you having to "compose" it like a DSL, the companies that produce them wouldn't make as much money, and could potentially incur more cost through the maintenance of it. An open-source community could support it, but it'd mostly be written by engineers of private companies, and companies are pathologically terrified of releasing any intellectual property without lawyers and contracts and CLAs and so on.

Literally half of the reason my job exists at all is that nobody has yet bothered to release the cobbled-together lego components of an enterprise organization under the public domain.


Check out https://github.com/aws/amazon-ecs-cli-v2 - there are some very motivated people working to make getting started and going with ECS easier. The CDK (https://github.com/aws/aws-cdk) is different, but through its higher-level constructs it's looking to make it easier to build and compose useful abstractions (like a "load balanced Fargate service"). Useful along with https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ecs-patt...

> But that won't happen, because the IT industry is purposely designed to be unnecessarily inefficient, complex, and expensive.

As a fellow developer, I feel your frustration. There have always been UX problems in AWS across the various pieces of first-party software you use to access them (console, SDK, etc.)

But as a developer at AWS (my opinions are my own), I think I disagree? I'll write out a stream of consciousness, it might be wrong. It comes down to at least a couple things:

1) Good UX is hard, especially for newer services that are still learning customer usage and building foundational features. There's no test I can write to know I did it well now and for the future.

2) We have limited resources and tend to focus on building (hopefully) good APIs and features (and operating the services). Good UX requires investing a lot of time and effort, I think.

I don't think I've ever heard anyone say we shouldn't focus on providing a better UX, but most of the time we want to get through the mountain of features that our customers need. Sometimes an org grows large enough, and has been around long enough to understand their customers, and has the right leadership, that they can invest in building out UX improvements, whether that is console integrations or a custom CLI or whatever else. In the meantime, there are individuals/groups/companies that build abstractions - though I also don't think I've ever heard leadership say we should rely on third parties to make our services usable. Sometimes we adopt improvements, or partner with companies, or commit developer time to help maintain a project.

The tricky thing with first-party software is that once something goes in, it's supported forever-ish and is difficult to deprecate, so you have to be very deliberate. The wrong abstraction ends up being costly. We get a ton of value just by being able to auto-generate the CLI from the same models we use to define our service API, and the AWS CLI is pretty well suited for that. Though some teams do provide custom CLIs bundled in, I think.

Though in a more specific sense, I totally agree that the experience of "I have a Docker image. Run it for me." should be easy and I'm glad we have people doing something about it.


A declarative configuration that specifies deployment state, using a powerful language like Cue[1], and a reconciliation mechanism - written in a real programming language - that ensures that real state matches declared state (like k8s).

The Ansible/Puppet/Chef/... approach of establishing and then haphazardly poking at highly stateful servers is fundamentally flawed. Top-tier systems engineering organizations like Google figured this out a decade ago and moved to mostly stateless deployments that can be redeployed and reconfigured with confidence. There's always some state you can't get rid of, but it must be minimized to reduce the "entropy tax".

[1]: https://cuelang.org


Replace "language" with "standard interface/protocol", and replace "programming language based reconciliation method" with "standard interfaces for managing infrastructure and network service components", and I agree.

"One-size-and-tool-fits-all" solutions will never work for everyone. But if you standardize the operations, and the language for communicating operations in between components, and then let anyone implement any one of these parts in a platform agnostic way - including kernels, SDNs, hypervisors, storage, processing, etc - then you have platform independent ways of managing state.

At that point state will be important but trivial, the way state in a TCP connection is trivial. We care what the state is, but every component in the network can interact with the state in a standard way, to the point that nobody tears their hair out about "oh no the network protocols are sending state everywhere!!!" If any tool can read and understand the same state in the same way, any tool can manage it (meaning multiple tools managed by multiple people). That's how network protocols work; let's just extend the model into general computing.


I believe the format/DSL of Homebrew would be excellent for this purpose. In a certain sense, you could actually use it as is: take the formulas you want, add specific configuration, and deployment is just cloning your repository and running install for all your formulas.

Each installation runs in a couple of steps, each of which has a default but can also be configured individually. It has lots of helpers for adding/replacing in configuration files, for testing, for temporary files and cleanup, for patching, and for specifying options. It’s got dependency management built in, etc.

And if (when!) you need more flexibility, you don't have to learn about the insanity that is a for-loop in YAML, but can use a language that you either already know or should have no trouble understanding: Ruby.


Make packages that describe the state of individual app configurations, version your config files, and describe the server state as a set of packages.

Ansible made sense when people were physically provisioning things, no one (sane) runs apps on physically provisioned servers now except people deploying cloud platforms.


Terraform, I guess.

But that is its own nightmare.


Terraform is just back to "quirky DSL", though

Write Python or similar that evaluates YAML. This works super well for config management. Give it a shot!

No more DSLs without tooling to work with them, at least.

Are there any other Ansible alternatives that avoid the whole YAML mess?

Idempotency and reusability are what I like about Ansible. YAML is a dependency I could do without.

Maybe what we need is a set of CLI commands that mirror Ansible modules in performing idempotent operations through SSH and APIs.
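Something like that can even be prototyped as plain shell functions. A rough sketch of an idempotent "line in file" over SSH, roughly what Ansible's lineinfile module does (the function name, host, and file are made up, and the quoting is deliberately naive):

    # Append the line only if it isn't already present; re-running changes nothing.
    ensure_line() {
      host=$1 file=$2 line=$3
      ssh "$host" "grep -qxF '$line' '$file' || echo '$line' >> '$file'"
    }

    ensure_line web01 /etc/hosts '10.0.0.5 db1'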


Is Ansible really reusable? We had an Ansible deploy for bringing up a MySQL database. I stopped using the codebase and came back to it months later, and I spent two days trying to get it to work (it was an unholy combination of local and community YAML). Eventually I just rewrote the damn thing as a bare sequence of literal MySQL (in 20 minutes, I might add) and disabled verification - definitely worst practices - to get it to deploy.

Maybe the problem is that I'm not an ansible professional.


Ansible roles solve that. They're a way of organizing tasks so that you can reuse them from other playbooks and supply configuration parameters to them.

And yes, there are public roles for bringing up MySQL; here's one of the more popular ones:

https://galaxy.ansible.com/geerlingguy/mysql
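For reference, pulling it down is a one-liner (a sketch; wiring it into a playbook is the usual YAML):

    # Installs the role locally; it can then be referenced as "geerlingguy.mysql"
    # in a playbook's roles: section.
    ansible-galaxy install geerlingguy.mysql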


Some of that could be due to versioning... Ansible's releases always have breaking changes, so you can get into trouble if your setup is vastly different from when the playbooks were created. I think this is part of the point of a dedicated control machine, but unfortunately the documentation (last I checked) seems to neglect the importance of it.

Along the same lines, there are also .cfg files and roles that can live outside the root directory.

I've been able to get decent reusability out of roles, but that's mostly because I can live with my servers all being deployed the same way (e.g. they all have the same nginx config with minimal customization points and share the same certificates).


> the point of a dedicated control machine

Usually virtualenv + requirements.txt + all configuration inside playbook repository should make for a painless experience without one (save for speed).
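In practice that can be as simple as pinning Ansible inside the playbook repo (a sketch; the requirements file contents and playbook/inventory names are just examples):

    python3 -m venv .venv
    . .venv/bin/activate
    pip install -r requirements.txt    # e.g. pins "ansible==2.9.*"
    ansible-playbook -i inventory site.yml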


This is fairly anecdotal, but... The Ansible Galaxy is a repository of code/libraries that people have written and published to be consumed by others. I've _never_ been able to use anything I've found on the Ansible Galaxy in my environment because everything I've looked at has been so opinionated or inflexible it would be less work to write something that does the same job from scratch.

This is in stark contrast with my experience with the Puppet Forge, which my environment uses 20+ modules from for managing different parts of our systems.


Sounds like that was written as a single Ansible play and not using the more standard role-based structure. That's the equivalent of writing an entire program in a single main() with no other functions.

Honestly, this is going to sound absolutely terrible, but the easiest way I found around some limitations of mysql modules in Ansible is to create SQL statements using a template and just load that in.

Ansible is not idempotent, unless you implement that idempotency yourself in its awkward little YAML ecosystem.

Example: V1 of a playbook installs htop. V2 doesn't. Now, a machine that went through V1 then V2 has a different state than a machine that just went through V2. Of course, you can make V2 explicitly uninstall htop - but that's a hack, and while it's easy for this particular example, it sucks for real-life problems. And even if you implement explicit 'clean up after previous versions' steps, you then have to manage those as well, figure out how long to keep them, and make sure they don't have side effects on other machines (maybe something else installed htop and actually needs it?), etc.

If you want idempotency, try NixOS.


>Example: V1 of a playbook installs htop. V2 doesn't. Now, a machine that went through V1 then V2 has a different state than a machine that just went through V2.

This has nothing to do with idempotence. Idempotence means you can apply the same action more than once, and all applications after the first are no-ops. In Ansible's case, that applies to its tasks; eg if you run a task that starts a service but the service is already started, the task simply succeeds without any other effect, as opposed to failing or some other behavior.
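In shell terms, the service example is roughly this (assuming a systemd host; nginx is just an illustration):

    # Idempotent "ensure nginx is running": the second run changes nothing.
    systemctl is-active --quiet nginx || systemctl start nginx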


I haven't used Puppet for 6 years, but it had its own DSL. I'm not sure that's any better than YAML, though, and it's much more work to set up and requires a client installed on each target machine.

Would NixOps fall into the same category?

yup, cdist https://cdi.st

Exciting, very nice idea, I'll keep my eyes on this. More than once I've needed something like this, and I've stewed over how to implement it for quite a while. Haven't looked at it in detail yet, but from a first glance it looks well thought through.

Bash should be recognized as the tech equivalent of asbestos.

A fair analogy, given how much money I have made as someone who is willing to deal with metaphorical asbestos.

The question is, were you the one cleaning up the metaphorical asbestos or creating it? :)

There's plenty of money to be made in both.


Y'all just have bias against the string system.

Writing decent code in bash is just as possible as any other language.

I stick to POSIX sh but same difference.


Why POSIX sh? Bash is everywhere. Once in my career I had access to, and needed jobs run on, a mainframe running AIX. It had bash, it had ruby, it had day, it had fish shell. I couldn't and can't understand why people insist on asceticism (Korn shell, POSIX sh) when nicer options are available.

Eh, I simply don't need bash.

Every few dozen scripts or so I'll change the shebang to bash to have more than one array, but other than that I seldom pine for anything it has to offer. I believe it's a bloated mess along with most GNU utils, so avoiding it is partially out of spite, but I wouldn't impair my code to do so.


*zsh ... Not "day"

dash is 10% of the size of bash. You don't need the interactive features when running shell scripts, and the few "extra" features are pretty crap anyway (arrays? yeah right).

"Thirteen Incorrect Ways and Two Awkward Ways to Use Arrays"

http://www.oilshell.org/blog/2016/11/06.html


Yeah I should clarify; I think it's well past the time of sh.

Still, sh is unrivaled for portability and brevity/velocity; I wouldn't swap virtually any of the shell scripts I've written for Python scripts. I think sh is a decent and often well-suited tool if you can manage to wield it properly (not a straightforward endeavor).

I look forward to checking out oilshell though, so thank you~

Edit: FWIW for arrays in POSIX sh I generally only use:

  set -- a b c
  echo $#
  echo "$1"
  shift
  echo "$@"
Once in a blue moon "$*" or maybe an unquoted $@. See 2.5.2 of https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V... (My go-to reference)

In config management, I often think to myself as I'm trying to figure out how to express in the DSL some simple thing, that I'd rather just write the bash. Why should I learn each new fashionable DSL just to have it translate back to the shell commands I spent so long translating into the DSL? It's a game of telephone.

This is a cool/fun project, but I've got to say, if I'm troubleshooting a production issue I'd much rather be dealing with the (shorter) generated bash equivalent than the bashible DSL in the example they have there.


My answer: so that other people can actually work with the scripts you write.

Some DSLs are definitely a waste of time (hello Jenkins!), but in the anecdotal experience of myself and literally everyone I’ve talked to, Ansible is massively easier to learn than Bash. I’ve had junior devs (who don’t even know Python!) writing and maintaining useful tasks within minutes, and that’s not even an exaggeration.

Shell scripts, IME, work great until you need a certain amount of logic, and then they become a collapsing Death Star of edge cases and tribal-knowledge pitfalls.


Not only Bash but everything that is not compiled. The amount of grief interpreted languages introduced is outrageous.

> everything that is not compiled.

Agreed. I see fixes like this in bashible that would have never happened in a compiled language with static types:

https://github.com/mig1984/bashible/commit/63fcc766d0244a6fb...


Compilation guarantees neither readability nor ease of maintenance.

Interpreted languages typically give you a very quick feedback loop; they're a great fit for scripts and utilities that you won't ever write unit tests for.


And I did not claim either of those. By the way, a typical feedback loop for many compiled languages is 100ms (for example, ReasonML).

My main problem with interpreted languages is the duality of their implementation. Python especially suffers from this: the high-performance part of the language is written in C, C++, or Fortran. When you install a package you get all of the problems of those languages on top of the Python problems (versions, environments, installed libraries, etc.).

What I would like is the Rust way: compile the code on any platform and target any other platform, even configuring which libc you want to use. OCaml also works very nicely, compiling to native or byte code.

I think the detour to interpreted languages set us back in the long run.


If Bash is asbestos, what are Windows batch scripts?

Dry mud

NIH. If you are looking for a DSL that uses shell: https://www.cdi.st/cdist-why.html

It's already mature and used in production at various places/datacenters.


"cdist is written in Python" - that's a significant difference

In my experience, Python is just as brittle, because it just as much lacks static checking. (Though of course mypy helps with the basics, and there ought to be a test or dry-run mode in every "devops" system, but I have no idea what cdist does.)

Uh, so, my thought behind that comment was that writing in bash has an advantage over Python, as no extra runtime environment is required in case of bash... sorry for not being clear enough in the comment, I now see I wrote it too vaguely...


