The golden age of configuration languages (cosminilie.ro)
46 points by iliec 12 days ago | 49 comments

This barely readable webpage notwithstanding (my advice: use Firefox's reader mode to remove the CSS), I am sad about how the devops community changed from being about practices (like "don't throw code over the wall" and "release often [for various reasons]") to seemingly being wholly about technology.

A huge proportion of people "doing devops" (whatever that is supposed to mean) are in fact really just lobbying for the likes of Terraform and Kubernetes to be adopted, preferably via some expensive Amazon/Google/Microsoft service. This article is no different. Now the answer is for the sysadmins to code JavaScript and use something called Pulumi. I don't remember this being what DevOps was when I was interested in it in 2015.


Having been an old-school sysadmin for a long time, the feeling I've always gotten from DevOps is that it is like anything else in the big ball of mud we call "computer science": a new generation re-inventing the wheel.

To be sure, the scale is larger than ever before. But history is replete with "IT in the large". Just look at mainframes, quietly ticking away for decades on loads that people are re-discovering how to handle.

Partly I blame this on the hubris of the young - "this ain't your granddad's IT." Heck, I recognize it in myself from years ago, when I was all in on Linux even though the BSDs had been around a while.

But I do believe there is a wanton lack of communication in the industry and academia. Part of this is on purpose - companies don't like "giving away" their hard won lessons, nor is it (short term) profitable to pay people to write documentation. But there's a bigger culture around "if you don't understand, you're just not smart enough." It's toxic.

We could sorely use something like the group in "Anathem" that does nothing but study history and remind others of it.

I wish we could learn the lessons of the past without having to repeat them, and without adding yet another standard to the fourteen already out there.


Yeah, the rule always seems to be "two steps forward, one step back."

I'm not convinced it's a hubris thing. Many of these "two steps forward, one step back" advancements happen because a commercial entity has intentionally blocked the tech tree from advancing in a certain direction that would undercut their position. The tree still advances, but it has to grow around the blockade, and that involves a certain amount of seemingly redundant work. It's only redundant in the sense of technological novelty, though. It's not redundant in the social context.


I think it's a hubris thing now, but it was clear that the "DevOps" movement/hype started as a cost-cutting measure masquerading as innovation. Who needs classic Ops/IT/sysadmins when developers can just automate all that in their spare time between working on other systems? It shouldn't be a surprise that once the cost-cutting is done, you realize too late that you lost a whole bunch of institutional knowledge and have simply created a new version of your old workforce in its place. The hubris comes in accepting the masquerade that it was really about innovation in the first place rather than cost-cutting; and then, rather than rehiring for the old institutional knowledge, reinventing all the wheels in the new situation, convinced that it is vastly different from what came before (even though it isn't) because it is far more innovative (even though that is unlikely, because the businesses are still the same, the red tape is the same, and the expectations are set by people still quite familiar with the old predecessors).

> you lost a whole bunch of institutional knowledge

That also sounds like an opportunity for the old guard to write down what they know before it's (they're) gone. I wonder if any of them have, and if so where.


Automating repeatable things to remove the need for some institutional knowledge is a good thing.

Definitely. I'm not claiming at all that it didn't come from a good place originally or that it hasn't had good ideas, just that the hype cycle around those kernels of good ideas became its own demon: it lost sight of the institutional-knowledge forest for some of the most easily automated milling of institutional-knowledge trees, and was surprised at what happened when the forest became a clear-cut field strewn with detritus and more had been lost than expected in the culling. (I think that metaphor works? It sounds good at least.)

This I will give to DevOps - its hyper-focus on automation, change management and repeatability are all good things. As someone with a leg in each domain (software developer by day, system administrator by night), I've really felt that both areas could learn a lot from each other. These days, with version control being as quick and easy as it is, there's no reason not to use it for everything. Configuration as code is immensely powerful. And repeatability to eliminate "well, it works on my machine" is fucking brilliant.

But there still needs to be documentation. The why must always be explained; even the most readable of code can only lay out the what and the how.


It's gone too far though. Where I work everything is automated, even the setup of things where we only have one of them. It's all a giant pile of Ansible scripts, some of which are sort of half maintained.

This kind of fanatically DevOps-y approach has a few problems:

1. It's slow and expensive to develop all these scripts.

2. It encourages "disposable infrastructure" in which people think, gee, I need a test version of {very big complicated thing}, so I'll just run these scripts and create an absurdly expensive duplicate that they then forget to shut down. You end up with a cloud filled with VMs that were used once, by one guy, for twenty minutes for a demo, and then never properly deleted.

3. There's no real discoverability or way to iteratively figure out what you want with these things. If you aren't entirely sure how to set something up, you can't really do it by coding it because you'd end up being limited by the tools. Whereas shells and vim and such are designed for interactive work. But then once you've got it set up, for many tasks you hopefully don't need to set it up again anytime soon, so at that point automation is just pointless overhead. It'd be quicker to just drop some notes in a wiki.

With respect to the article, it did very much remind me of the great wheel turning. I used to be an SRE at Google and I saw there two relevant trends.

One was that when I joined, they still had some parts of the infrastructure managed by a giant Python script called the Babysitter. The Babysitter was in theory split into configuration and implementation aspects, but somehow not really. It's been dead for perhaps 12 or 13 years now, so many Googlers will never have encountered it. The replacement system (Borg) used an intentionally limited configuration language, partly due to the experience of the Babysitter, in which people had full programmability, and used it! But the borgcfg language rapidly started sprouting various kinds of programmability features too, and in fact one day someone demonstrated that you could build Conway's Game of Life in it. This new call to use full programming languages to create configuration seems like another turn of the wheel.

The other was hitting the limits of what was automatable. One of the tasks I had to do in my job was to set up some services in new clusters, and shut them down in old ones. This was a multi-step process that was very time consuming and annoying, so there were many attempts to automate it. Unfortunately most were unconvincing, partly because the speed at which new clusters needed to be set up was somehow sort of aligned with the speed at which the general set of tasks was itself changing. So you could attempt to automate the task, but by the time you'd finished the steps involved were already changing, so the next time you wanted to run your script it was wrong. Also: setting up clusters of hundreds of machines isn't something that's easy to test virtually (there were no "mock datacenter" libraries back then).

In the end, what I noticed was the big automation wins came from the infrastructure teams when they improved the underlying systems to be more robust. Us glorified sysadmins wrote a lot of scripts, but it was often hard to prove they really saved a lot of time when the cost of implementing and changing them was taken into account.


On your points 1 and 2, absolutely, I recognize those problems. The second appears to be a user-education problem or an unbounded-task problem (can't you just have the cloud shut down VMs that aren't accessed for a specified time period?). I do like the concept of disposable infrastructure, but it is a tool, like seven-league boots, that one must use with careful thought.
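
To make the "just shut them down" idea concrete, here is a minimal sketch of a reaper that could run on a schedule, assuming a tagging convention (ephemeral=true plus a ttl-hours tag) that is my invention, not an AWS feature:

    from datetime import datetime, timezone

    import boto3

    ec2 = boto3.client("ec2")

    # Find running instances that opted in to reaping via the (assumed) tags.
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[
            {"Name": "tag:ephemeral", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                ttl_hours = float(tags.get("ttl-hours", "24"))  # default: one day
                age = datetime.now(timezone.utc) - inst["LaunchTime"]
                if age.total_seconds() > ttl_hours * 3600:
                    ec2.stop_instances(InstanceIds=[inst["InstanceId"]])

It doesn't solve the "forgot it existed" problem so much as cap its cost, but that's most of the battle.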

Addressing the first point, well, that's almost getting into business level ROI calculations, which at least on an individual developer level might be reasonably approximated by things like "is it worth the time?"[0], but that's never been a perfectly answerable easy question; for one thing, you might need to consider that the time spent is learning that will pay off in the future. I like to think of it as "technical investing" as opposed to technical debt.

As for three, that's a big problem, no matter the scale or project. I do not have the benefit of your experience with software in the large (thank you for your stories!), but even on a smaller project I've been working on recently I run into this problem: we have git repos. We have documentation in the repos. But to get to that documentation, you have to know how to get to the repos, and where do you document that? I tried imagining automating things completely, and the answer I came to was that ultimately I'd have to build an AI to onboard new developers to the project. Not ideal.

And like you said, the scaffolding surrounding projects is frustrating too. Again, same project, but this time I'm digging into the build system. Turns out it was converted to make from the previous build system that appeared to be mostly shell scripts, yet the shell scripts are still around, unused, but like little land mines, waiting for someone to tinker with them and waste a bunch of time.

[0] - https://imgs.xkcd.com/comics/is_it_worth_the_time.png


The population growth of the industry is much faster than the diffusion rate of best practices.

People become attached to their initial ideas and resist any change, regardless of relative merits.

Everett Rogers's Diffusion of Innovations [1962], marketing's seminal textbook, explains all. Everything since is rehash (at best) or worse than wrong.

https://wikipedia.org/wiki/Diffusion_of_innovations


That's not the way I interpret 're-inventing the wheel.' I don't see it as the next gen thinking they are smarter, but rather questioning the assumptions built into the last gen. Mainframes vs Distributed (cloud, kubernetes, etc) systems have different behaviors at different scales and are useful for different kinds of applications and business sizes. A perfect example is the startup economy. I can spool up a service for a few hundred people essentially for free, but still have the ability to scale if things catch on and I need to serve hundreds of thousands. That same scaling using old technologies would be a lot less smooth.

I mean, even if we do assume each new tech fad is mostly driven by hubris, if the results benefit the ecosystem (which cloud undoubtedly has, by democratizing scalability), so what?

As for the secret-keeping of industry, I don't buy that at all. I've worked for a diverse set of organizations, and every single one of them wanted either to trumpet their own practices to build clout with developers, or to use knowledge in the public domain so that talent was easier to find. If anything, companies avoid having secret sauce in their infra at all costs.


So yes, the spread of powerful computing certainly is to be lauded. I'm not denying that, but I feel it's more to do with a driving down of costs than anything. If (and I admit that's a big if) big iron vendors had paid attention and not rested on their laurels, I truly believe they could have easily morphed into "cloud" providers, and possibly even brought it on sooner, like a decade earlier.

But the activity of keeping secrets is still with us, even in this age of widespread FLOSS. Just look up how many software patents are still held. It's no good trumpeting your organization's advantages if everyone has the same advantages.


I agree that there is a lot of shilling and resume-driven-development going on, but there has also been genuine progress. I would hate to manage infrastructure without Terraform, for example.

I don't know, I kind of hate managing infrastructure with Terraform. Buggering around with its state files to move resources between them is error prone, and it comes up often because, as with any software project, you do need to refactor after your first burst of work on it.

I also dislike the patterns it encourages and the "run loads of infra!" culture that comes with it.


Cloud infrastructure is certainly a big thing nowadays, but AWS isn't really the grand solution to all problems. It's quite expensive, and in my case, since I'm not using any fancy stuff, I have transferred all of my servers from EC2 to a simpler service (Hetzner) and now pay less than half the cost.

I think most businesses really need to evaluate this before blindly putting everything on AWS, since it can really get pricey once you need good performance.

So my point is, all these AWS-specific config languages create an even tighter lock-in with the platform. Maybe it would be best to do it in a cloud-agnostic way, which of course comes with its own issues.


Well, the truth is that almost no organisation ever considers cost before getting started. That's one of the purposes of the "cloud credits" that AWS/GCP/etc. give out to new companies and projects, and they are very successful at it.

Even worse, organisations rarely do it even once they're up and running and spending large sums - and even then, I have never seen the impetus for cost control come from within the technology department. It's finance that reins in the spending later, usually by sticking with the same cloud supplier but limiting activity - so now you need someone to sign off on your infrastructure before you provision any of it, which is largely the problem the cloud was supposed to solve.


You could say that cloud companies exploit company dysfunction. That engineering is usually not incentivized (until too late) to keep costs down is the key to this whole mess.

I've seen the pendulum swing the other way, too. At former large employers, I've had to cobble together (shadow IT) servers from discarded equipment because the process of getting cloud boxes allocated was so onerous.


> You could say that cloud companies exploit company dysfunction.

Like every large software vendor, yes: it's moderately profitable to produce good software and sell it, but it is much, much more profitable to abuse bad procurement practice.

Shadow IT is a sad state of affairs but at least it builds the skills and culture to economise on computing cost.


This may be the subject of an amazing HBR article in 10-15 years.

"Businesses need to evaluate before blindly using AWS" - Why is that seemingly logical sentence proven false so many times every day in so many companies?

In my opinion, the success of AWS is due to the way it interacts with corporate hierarchy. It allows every level to externalize some annoying part of their job in exchange for a little more money. Even Finance is happy, because capital expenditures (which require more complex planning and tax handling) turn into operating expenses.

Eventually the board will wonder what happened to earnings per share, but by that time the project to migrate off amazon will be impossibly large and most people with the skills to run their own infrastructure will have retired.


Most European businesses. EC2 and traditional dedicated server pricing are much closer together for US datacenters.

I think eventually we will see a European native cloud provider spring up with prices that make sense locally the same way EC2 prices make sense in America, and the big three cloud providers' pricing for EU datacenters will be driven towards Hetzner's pricing.


This article, like so many others, assumes that Salt is the fourth-place configuration management tool. It is not; it is a Python orchestration framework that just happens to have a configuration state module.

Deployments I've written in Salt just use the Python API (https://docs.saltstack.com/en/latest/ref/clients/) to make changes to machines or infrastructure. That's the power of it, and unfortunately it's terribly underutilized.
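
For anyone who hasn't seen that style, a minimal sketch (run on the salt-master, assuming accepted minions; the targets and module calls are just examples):

    import salt.client

    local = salt.client.LocalClient()

    # Ad-hoc health check across every minion.
    print(local.cmd("*", "test.ping"))

    # Push a change to a subset: install nginx on the web tier.
    print(local.cmd("web*", "pkg.install", ["nginx"]))

    # Or apply a state under program control, same API.
    print(local.cmd("web*", "state.apply", ["webserver"]))

Because it's just Python, you can wrap calls like these in whatever deployment logic you need, which is exactly the underutilized power I mean.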

There are legitimate criticisms of Salt, of course. There are a lot of moving parts, there's a lot of half-baked stuff, the crypto has been questioned - but the core idea is honestly pretty great, and it's always been a pleasure to work with.


> It is not; it is a Python orchestration framework that just happens to have a configuration state module.

I've seen small pieces of utilization like this that looked awesome. I totally agree with the sentiment.

> There are legitimate criticisms of Salt, of course. There are a lot of moving parts, there's a lot of half-baked stuff, the crypto has been questioned - but the core idea is honestly pretty great, and it's always been a pleasure to work with.

I could deal with that, but I have had absolute misery trying to stabilize Salt minions. I had constant issues with them crashing or just hanging. Complexity is manageable, I can work around the parts that aren't totally there yet, I can live with maybe-insecure crypto, but I can't have any stage of the deployment pipeline be flaky. It's annoying for me because I have to fix it; it's annoying to the people that need the deployments because they can never tell if their pipeline will succeed or not, and they have to wait on me to fix it. It ends up building a lot of animosity towards the tool, and by association, towards the team that introduced it.

I just use Ansible now. It's not nearly as powerful (the ability to drop into Python in Salt is really nice, and I find equivalent things difficult to write in Ansible), but it pretty much always works. The abstractions are messier, but at a certain point, the less powerful but stable solution just makes more sense.

I really hope SaltStack can turn their reliability around, because I would love to use it again.


I chose salt after failing to get Puppet and Chef working. Out of the box on Debian, just no luck. Salt was just plug and go.

Haven't bothered with Ansible (yet), as it didn't officially support states when I was evaluating options, and statefulness was my number one need. I can pick up languages fairly easily (I'm a software dev by trade), so different languages don't bother me; it was more about feature set. It is annoying, though, that everyone seems to treat Salt as forgettable. I wish I could find things like https://github.com/dev-sec for Salt, but they only have Ansible, Puppet and Chef versions.


I hope that configuration language programming will die soon. The approach of generating a configuration using a "real" programming language is superior (and nothing new).

>But, I think people will try to push this further and integrate the infrastructure code inside the actual app. All of it will be managed, by the application itself, at runtime.

I would only adopt this approach if there is an abstraction layer in between, like k8s, or if it would be easy to create an equivalent implementation for a different cloud provider. Being locked into dependence on AWS isn't the worst thing, but I would like to keep my application independent of a proprietary environment.


>The need to handle application infrastructure (I see this different than managing core business services infrastructure like AD, email, finance systems, etc.)

This point goes by quickly, but it's important. One of the biggest factors in my employer's relative reasonableness is that IT's corporate network and engineering's production environment have nothing to do with each other. Separate teams, datacenters, stacks, and cultures.

We still have a group that will write a change management plan in Microsoft Word to work on the corporate finance Oracle instance during a scheduled downtime window. But I can release my new API endpoint to customers whenever I want (after code review) using a contemporary CI/CD workflow.


That sounds a bit like saying that programming itself will soon be gone.

No, it will not. Nice dream, but it's not gonna happen. DevOps is here to stay - or worse, it will eventually move into "real" development. With proper languages, not what we have right now: either real programming languages or something like Dhall.


I don't see any value in this article, to be honest.

Heroku was and still is significantly easier to use than any of the tooling or services available. And it's also sufficient for a lot of businesses.

These things are just tools. If your team is skilled enough not to need dedicated people who care about your infra, that's fine.


I too am surprised by the amount of negative feedback. This isn't a post about locking you into AWS. Pulumi, Terraform, and friends are tools that work across cloud providers, and even on your own infrastructure.

The real meat of this argument is that in the future DevOps won't be managing infrastructure to hand off to application developers. It will be creating meta-programs designed to be used by application developers, letting them build their own infra securely and reliably. This is what the `awsx` package in Pulumi is: a predefined set of best practices that application developers can build on.
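
The awsx helpers themselves are TypeScript-first, but here's a rough sketch of the same pattern with Pulumi's Python SDK (the component and its defaults are invented for illustration, not a real package):

    import pulumi
    import pulumi_aws as aws

    class StandardBucket(pulumi.ComponentResource):
        """An S3 bucket with the platform team's defaults baked in."""
        def __init__(self, name, opts=None):
            super().__init__("platform:storage:StandardBucket", name, None, opts)
            self.bucket = aws.s3.Bucket(
                name,
                acl="private",
                versioning=aws.s3.BucketVersioningArgs(enabled=True),
                opts=pulumi.ResourceOptions(parent=self),
            )
            self.register_outputs({"bucket_name": self.bucket.id})

    # The application developer's entire "infrastructure code":
    logs = StandardBucket("app-logs")
    pulumi.export("logs_bucket", logs.bucket.id)

The platform team owns the component; the app team just instantiates it.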

As someone who's worked in various sysadmin roles over the past 10 years, I see this (Pulumi-like) approach as the future. And I agree with the article in general.


Author should check out https://arc.codes and https://begin.com because I think that's what they're looking for.

I would venture to say that Pulumi is not and should not really be used for this, but I was just thinking of how annoying it is to set up some open-source projects in AWS (especially if they're made for it or any other specific env), and having the Pulumi config, or at least some basic modules, would make it so much easier to get started.


You should have a look at https://github.com/purpleidea/mgmt/ (disclosure: I'm the main author).

This article is interesting but it is nigh unreadable.

It's an article about the fact that DevOps, as we know it, might be coming to an end.

Not sure why the previous comments are all so negative. I can read this article just fine, it has nice examples, and it presents a vision I agree with - I don't want a strict Dev vs. DevOps split, I want devs to be more in charge of the infrastructure of the application, and as a primarily-dev I don't want to be dealing with a bunch of half-assed DSLs that break under any complexity. I want to write a piece of code in my language of choice and use it to generate the infra description.
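
As a trivial sketch of what I mean (the service layout is invented): build the description in plain data structures and serialize it, instead of fighting a templating DSL.

    import json

    def service(name, replicas, port):
        return {
            "name": name,
            "replicas": replicas,
            "ports": [{"containerPort": port}],
        }

    config = {
        "services": [
            service(f"api-{region}", replicas=3, port=8080)
            for region in ("eu-west-1", "us-east-1")
        ],
    }

    # Real loops, functions and types replace the DSL's string templating.
    print(json.dumps(config, indent=2))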

> I think DevOps, as we understand it today, is coming to an end. At least the Ops part of it.

No, this is backwards. Ops in DevOps is more important than it has ever been. Devs honestly have the simplest job in the world: they just have to write some business logic and check out for the day. There are many other teams of people who then take over just to make sure that the app will actually work for customers, and who will fix it when it stops working. The scope of work covered by the QE, Architecture, Data/Privacy, InfoSec, etc. teams is massive, and Ops aligns it all so the app works accordingly.

On "configuration languages": Pulumi won't ever take over because 1) some of it requires money and companies are cheapskates, 2) the licensing is not Free (part of #2), 3) there's already this other crap called Terraform that everyone is already using that doesn't have the problems of #1 & #2 and does most of what is needed for infrastructure. For the rest, there's not one thing that can manage it.

CFEngine was where I learned that Configuration Management is the 8th circle of hell. Puppet was slightly better, but now I am trying to burn down every Puppet install in the company whenever I can. Chef isn't any better. Configuration Management as a concept (such as it is up to this point, anyway) is just horrible.

It will take another 10 years, but eventually the industry will wake up to the fact that its current obsession with tooling isn't actually generating any more profit or product quality than a bunch of shell scripts would, and so maybe they should calm down a bit.

The only critical feature of orchestration tools is dependency management: knowing which changes will impact which other changes, and planning for them. Otherwise, a bunch of shell scripts really does work just as well. At that point they'll probably write some other tools - hopefully ones that separate dependency management into a standard, independent thing and let other tools interoperate with it, so that we don't have to custom-code dependency management into everything that wants to use it.
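
That dependency-ordering core really is small; here's a toy illustration in Python (the resource graph is made up) using the standard library's topological sort:

    from graphlib import TopologicalSorter

    # Map each resource to the resources it depends on.
    deps = {
        "load_balancer": {"web_vm"},
        "web_vm": {"subnet", "security_group"},
        "subnet": {"vpc"},
        "security_group": {"vpc"},
        "vpc": set(),
    }

    # A safe apply order: every resource comes after its dependencies,
    # e.g. ['vpc', 'subnet', 'security_group', 'web_vm', 'load_balancer'].
    print(list(TopologicalSorter(deps).static_order()))

The hard part shell scripts can't do is re-computing this ordering as the graph changes, which is the one place tools like Terraform genuinely earn their keep.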

I mean, Terraform is a genuinely crappy product to use, but we literally can't stop using it because of that one feature. Same goes for systems like K8s, where the design is an absolute mess, but nobody can get rid of it because they desperately need something to run a container on multiple hosts for them. And since they have that, they decide, fuck it, let's pile on a bunch of other bullshit while we're doing that and we'll lie to ourselves and say it's better now.

The important parts of modern DevOps are not technology. They are the practices, principles, values, and patterns that have proven to uplift everything else. DevOps is not about Ops, or Dev, or any other team.


Unreadable on PC.

The sad part is the page is perfectly readable if you remove the CSS. They took something that was easy to read by default, and made it less readable in an attempt to make it more readable.

https://motherfuckingwebsite.com/


"Ah, this font is a little too large. Let's just zoom out-" https://i.imgur.com/l7Gh3jX.png

Weirdly enough, zooming in actually widens the columns and makes it look almost normal.

Totally agree. The font size is defined as 3% of min(viewport width, viewport height). On my full-screen desktop browser this results in text so large that it is unbearable to read. Fiddling with the browser window size results in varying font sizes, ranging from way too large to way too small to read.

Massive accessibility fail. Font size is one of the few things that usually works fine out of the box with modern browsers across devices of all sizes - unless someone decides to do crazy stuff like defining the font size in relation to the viewport size.


Same here... I tried to zoom out to make the font size smaller, but it just shrank the column width while leaving the font size the same :(

Shame, as I'm interested in the topic; I've been trying Pulumi at the moment.


Fair warning, I’ve been in situations with Pulumi where destroying and recreating a “stack” is the only way to solve little intermittent bugs. Not sure what level of stability you’d want from a tool like this but it made me raise an eyebrow.

I understood it was basically Terraform under the hood...

I have little experience with either, do you find another tool more reliable?

I guess all of these tools are dependent on the consistency of the cloud provider APIs they're working with


We did evaluate Pulumi. Our experience was that it basically gives you an alternative, more error-prone way to create the Terraform resource graph. Also, while they claim to support Python, it definitely feels like a second-class citizen; it's not very idiomatic (more like an almost-literal rewrite of the TypeScript).

So for the time being we are sticking with the Terragrunt & Terraform combo.

But the concept of Pulumi looks interesting, and we will likely re-evaluate in a year or so to see if it has moved in the right direction.


I had to put it into Outline: https://outline.com/S8at4k

It's quite readable if you make your window size super short and wide, but it's weird that this is even necessary.

Reader mode in Firefox rescues it.

Please don't do things like "font-size: 3vmin".


