The trouble with even a simple, well-made tool like Ansible is its dynamic DSL. The YAML file is essentially code, but modules like "file" and "apt", written as configuration entries, give it a declarative feel. They are really just procedures that strive to be idempotent, and the YAML file is just a series of procedure calls, sprinkled with some facilities for organizing hierarchies and sharing code between them.
YAML the format is also not a great way to encode systems data. Thankfully Ansible has documented the gotchas with its space-indentation and need for explicit quoting in http://docs.ansible.com/ansible/latest/YAMLSyntax.html#gotch.... But that is just an attempt to band-aid over the unsuitability of YAML for this purpose.
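To make the gotchas concrete, here is an illustrative (hypothetical) playbook fragment, not taken from the Ansible docs, showing the kind of traps that page warns about:

```yaml
# Hypothetical fragment: unquoted scalars are the classic trap
vars:
  country: NO              # YAML 1.1 parses this as boolean false, not the string "NO"
  version: 1.10            # parsed as the float 1.1, silently dropping the trailing zero
tasks:
  - name: show the value
    debug:
      msg: "{{ country }}" # must be quoted; a bare { would start a YAML flow mapping
```

Quoting the values fixes both cases, but nothing in the format warns you when you forget.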
Chef's configuration, on the other hand, can be done in Ruby, but you're still constrained to its DSLs. The shape and structure of Chef's objects can only be gleaned by reading reams of documentation and, worse, by running the cookbooks with debug statements.
The problem is compounded when you upgrade these tools and things stop working silently. The community solution for this unsurprisingly is unit tests. I'm still grappling with that idea.
But what if we could write our own provisioning code in Haskell or OCaml/Reason, with library functions that help install packages, return errors as neat sum types we can confidently handle, and don't leave us worrying about the known unknowns we haven't enumerated and handled, and with a compiler whose exhaustive pattern matching helpfully tells us about all the record types and their structure, all within a fast edit-compile cycle, ...
"Properties are defined using Haskell in the file ~/.propellor/config.hs. There is fairly complete API documentation, which includes many built-in Properties for dealing with Apt and Apache, Cron and Commands, Dns and Docker, etc.
There is no special language as used in puppet, chef, ansible, etc.. just the full power of Haskell. Hopefully that power can be put to good use in making declarative properties that are powerful, nicely idempotent, and easy to adapt to a system's special needs!"
I'll be learning Haskell for real this time.
You just(1) have to implement update functions as well.
(1) that's the "just" that doesn't mean "just" :-)
It's hacky code, but that's exactly what I did in Clojure: https://gist.github.com/rauhs/a72cfbeef4c80f9a58480484c49e7a...
With a normal REPL this works well especially when developing plays.
Yes, JSON is very close to being a proper subset of YAML.
But you're up against a potentially changing state -- the file system. So that only gets you so far.
Idempotency is very hard to achieve. What gets me is there is an excellent tool -- rsync -- which is extremely fast and idempotent. The various config tools make far too little use of it.
YAML is not Turing complete and hence is not "code". That is a large part of what makes Ansible such a powerful tool: it enforces a strict separation of concerns between the definition of server configuration and its execution, and it keeps the former in a language that deliberately cannot be made more complex than needed, and is thus kept readable and straightforward to maintain.
By contrast, Turing-complete languages are veritable magnets for technical debt, which is why a bad Chef cookbook will always be worse than a bad Ansible playbook. Haskell, being Turing complete, would suffer the same issue, albeit not to the same degree as Ruby.
>YAML the format is also not a great way to encode systems data. Thankfully Ansible has documented the gotchas with its space-indentation and need for explicit quoting in http://docs.ansible.com/ansible/latest/YAMLSyntax.html#gotch.... But that is just an attempt to band-aid over the unsuitability of YAML for this purpose.
YAML has a few oblique gotchas but they don't affect the fundamental suitability of it for this purpose. Once you get used to it, space indentation becomes a feature, not a bug.
Ansible playbooks have variables, loops, conditionals, scoping rules, inheritance, and you can even extend it by writing custom modules in Python, which is a Turing complete language.
The problem with not using a real programming language when you're trying to essentially "program" is that you're expressing complex programs without any of the abstractions and tooling support that a real language has.
Despite all that, I like Ansible's constrained format and the resulting simplicity compared to Chef. But I think we can take those design aesthetics, apply them in a statically typed library, and maybe get something sturdier.
Actually you can write custom modules in any language, provided that on exit you emit the required JSON indicating whether the module succeeded or failed, changed something, or has a message to pass back.
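A minimal sketch of that contract (a hypothetical module body, not code from Ansible itself): do some idempotent work, then report the result as a single JSON object with at least a `changed` flag, plus `failed`/`msg` as needed.

```python
import json

def run_module(path, existing):
    """Hypothetical module body: ensure `path` is present in `existing`.

    Returns the dict that gets serialized as the module's JSON result:
    `changed` says whether anything was done, `msg` is a human-readable note.
    A real module would also set `failed: true` on error.
    """
    if path in existing:
        # Nothing to do: report changed=False so repeated runs are idempotent.
        return {"changed": False, "msg": "%s already present" % path}
    existing.add(path)
    return {"changed": True, "msg": "created %s" % path}

if __name__ == "__main__":
    state = {"/etc/motd"}
    # First run does work; second run is a no-op and reports changed=False.
    print(json.dumps(run_module("/tmp/demo", state)))
    print(json.dumps(run_module("/tmp/demo", state)))
```

The language doesn't matter to Ansible; only the JSON printed on stdout does.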
As far as I am aware, variables, scoping rules and inheritance don't imply Turing completeness, and the way it does loops and conditionals could more accurately be characterized as configuration for loops/conditionals further down the stack (e.g. as seen here http://docs.ansible.com/ansible/latest/playbooks_loops.html#... )
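That "configuration for loops" style looks roughly like this (a hypothetical task; the loop and the condition are declared as data, not written as control flow):

```yaml
# Hypothetical task: the iteration and the branch live in the executor, not the file
- name: install base packages
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - nginx
    - git
  when: ansible_os_family == "Debian"
```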
>you can even extend it by writing custom modules in Python, which is a Turing complete language.
Yes, the idea is that you have the power to break into a (good) turing complete language when it is necessary but you can keep that code - and by extension the technical debt it would attract - to an absolute minimum. I've personally never felt the need to write a custom module in python when using ansible despite using it a lot, so for me, minimum is zero. I'm sure I'm not alone either.
>The problem with not using a real programming language when you're trying to essentially "program" is that you're expressing complex programs without any of the abstractions and tooling support that a real language has.
Yes, this is a problem if you're trying to work within a problem space that is not heavily constrained. If almost every time I used Ansible I felt the need to write a Python module to get anything done, I would agree that it is fundamentally unsuitable for the task. I feel this issue when writing build scripts, which I think absolutely have to be kept Turing complete, but never with server orchestration.
Then what does imply turing completeness?
According to the Wikipedia article:
"To show that something is Turing complete, it is enough to show that it can be used to simulate some Turing complete system. For example, an imperative language is Turing complete if it has conditional branching (e.g., "if" and "goto" statements, or a "branch if zero" instruction; see one instruction set computer) and the ability to change an arbitrary amount of memory (e.g., the ability to maintain an arbitrary number of variables). Since this is almost always the case, most if not all imperative languages are Turing complete if the limitations of finite memory are ignored."
If it did arbitrary branching and if you could mutate a variable and then reuse it later on in a playbook.
> YAML is not turing complete and is hence not "code".
seems wrong to me. In fact, I've heard "code"/"coding" used to include non-Turing-complete languages (e.g. HTML), as distinct from "programs"/"programming".
IMHO you can only "code" in programming languages (not necessarily ones executed by a CPU).
A programming language does not need to be turing complete though to "code" in it. You're just limited to the problems you can solve with a turing-incomplete language.
It's interesting to consider whether using a problem-declaration language counts as coding.
You do realize that YAML is a data serialization format, and as such is not a programming language by any stretch, right? And that it's Ansible's language encoded as YAML that is the problem? And I wouldn't be so sure that the language is not Turing complete, especially since this YAML is templated with Jinja2 (or has something changed recently?), which is Turing complete.
While Ansible is technically inferior to Chef and Puppet, the simplicity of its syntax is what makes it so useful.
A gentle learning curve and preventing people from being "clever" are absolutely essential for a tool used by a team of developers.
> Inspired by YAML, Scheme, and TCL. Based on ulithp. Converts YAML to s-expressions then evaluates using ulithp.
I cannot stress enough to new developers that you should have one of the Ansible/Puppet-style frameworks under your belt to improve your ability to build software products. I'd recommend picking one that works equally well with old legacy systems, rather than just whatever is new like Kubernetes, because it will seriously broaden your career options and make you an asset to almost every software company.
EDIT: to clarify, k8s can obviously be used with legacy software applications. But the idea was that if you get hired by $some_old_PHP_shop, throwing Ansible into the mix may have significantly less friction than convincing whoever is in charge to start maintaining k8s clusters.
Wholeheartedly agree. Got brought on board and Ansible was the first tool I reached for (there was previously nothing like this in place). The new tech is great, but something more broad in scope that can work with different kinds of systems can really come in handy if you're not working on new tech or greenfield projects.
Just don't opt into the Paramiko connection plugin, which isn't used by default except on RHEL 6 anyway.
You can also set ssh_args in the config file to enable ControlMaster, etc. http://docs.ansible.com/ansible/latest/intro_configuration.h...
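For example, a sketch of those settings in ansible.cfg (the ControlPersist timeout and pipelining choice here are illustrative, not recommendations):

```ini
# ansible.cfg -- reuse one SSH connection per host instead of reconnecting per task
[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
pipelining = True
```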
Even without ControlMaster, when using higher parallelism than the default of 5, the SSH connections start to fail for no apparent reason.
E: And another reason I would love to not use Ansible is the god awful YAML. It's a horrible choice for a tool like this.
I then wrote a wrapper (also in git) that invoked ansible with all the correct/preferred settings.
Works a treat.
Ansible started as a more organized way to manage infrastructure beyond hacks and handmade scripts. As technology matured and infrastructure became more complex, the simplicity of Ansible fell short, and tools like Puppet and Chef came onto the scene.
But technology kept pushing forward, and these tools evolved to become increasingly complex, until they became behemoths with towering ecosystems around them.
Enter containers and container orchestration. Suddenly many of the problems that configuration managers were trying to solve aren't there anymore. And suddenly Ansible's simplicity, which had become its weakness, has become its strength.
I profoundly disagree with this statement. I think containers are, potentially, a huge step backward in the configuration management task domain. They are very useful in dynamic scaling orchestration tasks, but awful as a configuration management tool. And they are mostly used as a configuration management tool.
Ansible/puppet/chef describe how to install your complete server pool. It's hugely reassuring to know that this knowledge is written somewhere, in formal form, that it is version-controlled and that it is debuggable.
Containers as configuration management are the modern day equivalent of the mythical server that no one touches and that no one knows how to reinstall. (a bit exaggerated, but correct nonetheless)
* "Dockerfile" or whatever that language is called is horrible, terrible, and very bad, with extremely minimal support for anything useful and braindead implementation for the little it supports (every new RUN line cements a new layer in the image, meaning you need lots of commands on a single RUN-line continuation to keep size down).
Essentially, this means that people do all their work before Dockerization, in some undefined mechanism that is probably not very reproducible. Most often this is copying over static configurations, making changes, etc., before running docker build.
* The FROM semantics encourage passing the buck and using opaque binary bases that are not necessarily well-understood. Since Dockerfiles are very hard to do anything useful in, people will get something minimal working and then cement it in a FROM tag to iterate further, because they don't want to potentially break that base. We have short Dockerfiles where I work but they are still stacked 2-3 layers deep, and this is just slapping binaries on top of each other; unlike an Ansible playbook, it provides no information about how or why these things are being combined.
Docker image distribution does not necessitate source Dockerfile distribution. The build is not necessarily reproducible, and this semantic results in a big, fragile dependency stack when someone somewhere in the FROM chain falls over. It also creates some attack surface for injecting into an early commonly-used FROM source. People just trust that FROM images are great and good and reliable and trustworthy. We all should know how that will end up.
Making a Docker image is much closer to the old "zip up this folder and label it 'todays-good-code' and save it on your desktop Bob, we don't want to lose it." There is no information about what is in that copy. It is a raw binary blob.
This is the antithesis of configuration management.
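The RUN-layer complaint above refers to patterns like this (an illustrative Dockerfile fragment; the base image and packages are placeholders):

```dockerfile
FROM debian:stretch
# Each RUN line commits a new layer, so install and cleanup must be chained
# into one command; a separate "RUN rm -rf ..." would leave the deleted files
# baked into the earlier layer and save nothing.
RUN apt-get update && \
    apt-get install -y --no-install-recommends nginx && \
    rm -rf /var/lib/apt/lists/*
```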
Wait what? I'm pretty sure Ansible came after Puppet and Chef, not the other way around.
Salt 2011 (also for completeness)
OP's timeline is a bit off :)
I got introduced to them the other way around and didn't fact check. Sorry about that.