There are some downsides to that approach, which stem from the fact that you can't possibly declare every possible thing. For instance, if you were to run the above declaration and then later remove it, Chef wouldn't know to delete the directory; at that point Chef wouldn't know anything about the directory at all. So ideally you build infrastructure from scratch each time (Docker, etc.) rather than having to converge an existing machine.
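A minimal sketch of that convergence model, as a hypothetical resource in Rust (the names are made up): the tool can make "directory exists" true idempotently, but it keeps no record of resources you stop declaring, so removing the declaration never removes the directory.

```rust
use std::fs;
use std::path::Path;

/// Converge the "directory exists" state. Safe to run repeatedly
/// (idempotent); returns whether the system was actually changed.
fn ensure_dir(path: &Path) -> std::io::Result<bool> {
    if path.is_dir() {
        Ok(false) // already converged, nothing to do
    } else {
        fs::create_dir_all(path)?;
        Ok(true) // we changed the system
    }
}

fn main() -> std::io::Result<()> {
    let target = std::env::temp_dir().join("cm_demo_dir");
    let first = ensure_dir(&target)?;
    let second = ensure_dir(&target)?;
    println!("first run changed: {first}, second run changed: {second}");
    // If you later delete the ensure_dir call, nothing deletes the
    // directory: the tool only knows about currently declared resources.
    Ok(())
}
```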
If you are doing sysop^Wdevops work, it is the single most important thing you can learn, once you have a good grasp of your shell and your editor.
The difference between a giant git archive of shell scripts that various people have modified over the years and state changes described in a configuration management language is the difference between fixing things in the small and reasoning about integrated systems. It's something that needs to be experienced to be appreciated.
Reasoning about system state as an integrated whole is just as relevant when you are shipping applications as containers, if not more so. It's not uncommon to start using something like Kubernetes without first being able to describe global state, and the result is just as messy as before, if not worse. Something like Helm is impossible to understand unless you have complete control over your configuration.
I understand a shell script is not suitable for orchestrating a datacenter, but sometimes it is suitable for installing a couple of things.
I learned a lot about catching all the different bad things that can happen, adding prerequisite checks, rolling back changes automatically, etc. In the end, a tool like Puppet/Chef/Ansible would have taken a fraction of the time that developing those scripts did.
Containers have definitely lessened the need for mutable configuration management, but it is still useful in the 90% of environments that are still operating like it's a past decade.
Usually those cookbooks will want to have things in a particular way, which might not be the way you want.
Chef/Ansible/Puppet all have the problem of having many layers of overhead for the same thing.
This makes them slow, hard to debug, and hard to read and even write (in my opinion).
Sure, bash is hell, but then go from bash to something like Rust, instead of Chef/Ansible.
Sure, again, this might sound rather unconventional, but you get the safety, speed, and convenience of a full-fledged programming language, single-binary output, etc. And you need to test cookbooks/playbooks too, so why spend enormous effort cobbling together scripts and high-level abstractions instead of writing just what you need, with the assumptions you really have, rather than what a general cookbook/playbook has to cover (which is a vast difference)?
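A minimal sketch of what I mean in Rust, assuming the simplest possible imperative provisioning: run commands, and fail loudly on any non-zero exit. The actual package-install command is commented out as a placeholder; `true` stands in for it here.

```rust
use std::process::Command;

/// Run a command and fail loudly: a non-zero exit status or a spawn
/// failure becomes an Err instead of being silently ignored.
fn run(program: &str, args: &[&str]) -> Result<(), String> {
    let status = Command::new(program)
        .args(args)
        .status()
        .map_err(|e| format!("failed to spawn {program}: {e}"))?;
    if status.success() {
        Ok(())
    } else {
        Err(format!("{program} exited with {status}"))
    }
}

fn main() -> Result<(), String> {
    // Hypothetical provisioning steps; the real commands are placeholders.
    run("true", &[])?; // stands in for e.g. `apt-get install -y nginx`
    // run("apt-get", &["install", "-y", "nginx"])?;
    Ok(())
}
```

Compile it once and you have a single binary you can ship to the target host, with no agent or DSL runtime needed.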
Chef/Ansible/Salt could be described as rudimentary type systems for operating system states, that enable you to convert between different types/states.
It's conceivable you could build such a system in Rust, but Rust itself has no primitives that would make it easy. It certainly isn't a good starting point when you want to set up a database server.
I've used both Chef and Ansible. Their maintenance cost is pretty high unfortunately, and they are not flexible or resilient enough to be worth it.
No wonder the container/k8s boom is so big. Immutable infrastructure (docker images) with really powerful operational features is what can justify high maintenance efforts. Whereas fancy but brittle (due to the inherent problem of state discovery) CM systems are at best only useful for the initial platform setup.
Rust is just useful because you can quickly and relatively safely produce single-binary programs. (Many people use Go for this, but it's easier to skip error handling in Go, which results in runtime problems, and that is very inconvenient in a provisioning/setup system.)
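To illustrate the error-handling point, a small sketch (the config path is made up): a `Result` has to be unwrapped or propagated before you can touch the value, so a failed step can't slide by the way an ignored Go `err` can.

```rust
use std::fs;

/// Read a config file; `?` propagates the I/O error to the caller.
/// There is no way to use `contents` without first handling the Err case.
fn read_config(path: &str) -> Result<String, std::io::Error> {
    let contents = fs::read_to_string(path)?;
    Ok(contents)
}

fn main() {
    // Hypothetical path, purely for illustration.
    match read_config("/etc/hypothetical.conf") {
        Ok(c) => println!("loaded {} bytes", c.len()),
        Err(e) => eprintln!("provisioning aborted: {e}"),
    }
}
```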
And the image build should be simple and imperative: install these packages, use this config, run this command on invocation.
Nix is rather amazing with the powerful CLI stuff it provides (S3-compatible dependency store, fetching via SSH, closures, etc.).
My only problem with NixOS is that it's very much like Gentoo. It has infinite composability built in, but that means you have to rebuild everything. In Debian/Ubuntu land you can usually just install/uninstall specific feature-related packages to enable/disable a feature. (For example, the postfix and postfix-mysql packages.)
Then they simply execute other programs and parse their output. And/or fiddle with files (parse them, alter them, write them back).
Sure, fundamentally the syscalls and apt/dnf/yum will be the slow parts, but I found that development of CM scripts/plays/recipes is usually bottlenecked on the turnaround time of the CM system's own workflow. And execution time is a significant part of that. (The bootstrap, the transfer of whatever files, and so on.)
Rust would help with writing things that are relatively well error handled at compile time, and gives you single binaries.
And a simple imperative script is a lot more readable than a custom DSL with who-knows-what Ruby hooks.
Currently the only CM system that makes sense in the long run is a git repo of Terraform. (Because everything else runs in containers anyway, and to build the immutable images you don't need "state management". And where you do need it, you need active state management such as k8s and its specific operators.)
I wouldn't pick Rust for this, but there are lots of examples of using Node/JS, C#, or Python. It's a much nicer environment and benefits from full IDE support.
For a good example, look at what Pulumi is doing for modern infrastructure: https://www.pulumi.com/
Chef and friends are blocked on disk I/O. How does a type system and/or a thinner abstraction layer over disk I/O speed up the underlying expensive operation, blocking disk I/O?
It's much easier to see the failure conditions in a Rust program than in bash. Also, Rust seems easier to maintain.
How is a 100 lines of rust easier to maintain than 10 lines of shell script?
It's exactly what the other comment said: "Writing a shell script in rust would be obnoxiously difficult for no real benefit."
>> How is a 100 lines of rust easier to maintain than 10 lines of shell script?
In 10 lines of _proper_ bash you won't even be able to check whether the arguments supplied to the script exist, let alone parse anything more than simply subscripting argv.
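For comparison, a rough Rust sketch of that up-front argument checking (the usage string and tool shape are invented): every argument is validated as an existing path before any work happens.

```rust
use std::env;
use std::path::PathBuf;

/// Validate that every argument names an existing path before doing any
/// work -- the kind of up-front check that is tedious to get right in bash.
fn validate_paths(args: &[String]) -> Result<Vec<PathBuf>, String> {
    if args.is_empty() {
        return Err("usage: tool <path>...".to_string());
    }
    args.iter()
        .map(|a| {
            let p = PathBuf::from(a);
            if p.exists() {
                Ok(p)
            } else {
                Err(format!("no such path: {a}"))
            }
        })
        .collect() // first Err short-circuits the whole run
}

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    match validate_paths(&args) {
        Ok(paths) => println!("ok: {} path(s)", paths.len()),
        Err(e) => {
            eprintln!("{e}");
            std::process::exit(2);
        }
    }
}
```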
I don’t think there would be anything wrong with a rust implementation of infrastructure management, but I also don’t think it’s the silver bullet to solve what plagues the space.
But it simply is a solution in search of a problem in today's container orchestrator/platform world.
HAProxy is amazing, but a k8s Ingress service has a nice API, so I don't have to run Chef to add a new vhost. Yet I can persist the config in YAML. (Or JSON, or whatever.)
That Ingress can of course be backed by HAProxy down below, but Traefik has dynamic config, though HAProxy 2 will do that too.
You could rewrite your bash scripts in rust but while it would probably be safer/more correct than bash, the whole paradigm of writing imperative scripts to manipulate existing machines is flawed, whichever language you write it in. And executor performance is almost never the bottleneck in my experience.
I agree that manipulating existing machines is folly, but that's why people use containers and immutable images. And then you can prepare the images any way you like, but since they are very deterministic (modulo the external repositories/packages/curl-piped bash scripts), you usually use something simple (e.g. bash), because there's no need for that state management.
And actual state management should be left for the platform (k8s, swarm, or some cluster manager).
I found the chef workflow slow, not just the executor itself. (Uploading cookbooks, fiddling with dependencies, testing them, debugging them, etc.)
Ansible was even worse in my experience. Much slower execution, harder to debug, opaque python blobs, extremely confusing and fragile "YAML programming" coupled with hostile variables files. (All the things that are tenfold more intuitive in Chef.)