I prefer semi-automation (lucaskostka.com)
114 points by greatNespresso 32 days ago | 54 comments



Automation is not just about saving time. It’s also about saving a “recipe” for a task, about avoiding human error, and about staying in the flow.

I automated logging into my website’s admin area to edit a post. Now I type “edit” in the address bar. I must have used that shortcut a thousand times in 2022.

I have another that cds into a project and launches the dev environment. Same deal.

If you get sidetracked a lot, automation is critical. It’s our industry’s equivalent to mise en place. Deal with the drudgery in advance to focus on your work when it matters.


Worth reading about Autonomation (https://en.wikipedia.org/wiki/Autonomation), which originated in Toyota. "Autonomation aims to prevent the production of defective products, eliminate overproduction and focus attention on understanding the problems and ensuring that they do not reoccur."


Yep. I was really into the TPS a while ago, and autonomation is a concept that stuck with me.


> I have another that cds into a project and launches the dev environment.

I have this set up with direnv[1] and it's incredibly useful, especially with Emacs integration. Not only does it reduce the cognitive burden of keeping track of whether I've opened a venv/launched a nix shell/etc, but it also prevents time-wasting mistakes like launching the wrong version of an interpreter or compiler.

[1]: https://direnv.net/


I guess my solution to this problem is not very popular - I try to keep my dev environment simple enough that I don't need venv or anything like that. I find that venv or docker adds cognitive load that then needs more tools to unload, like direnv.

So if my devenv gets too complicated I try to simplify it. Seems like a logical solution to me but often I run into people who simply don't believe me that it makes things simpler. :shrug:


> I guess my solution to this problem is not very popular - I try to keep my dev environment simple enough that I don't need venv or anything like that. I find that venv or docker adds cognitive load that then needs more tools to unload, like direnv.

Good for you, but this doesn't work in most scenarios, especially if you work at a company with a build process.


At some point your code has dependencies that call for this.


I'm really interested in setting this up also. Can you briefly explain how you set this up with direnv? Looks like that's just for loading environment files, not launching processes?


Emacs launches processes on the `PATH` so using `direnv`, you can change the `PATH` to the version of `rust-analyzer`/`gopls`/`python`/`psql`/`protoc` that you're using as well as setting the correct load paths for each language.
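For reference, here's a minimal `.envrc` sketch in the spirit of that setup. The paths, venv layout, and `DATABASE_URL` value are hypothetical placeholders, but `PATH_add` and `layout` come from direnv's stdlib:

```shell
# .envrc at the project root (hypothetical paths/values).
# direnv evaluates this on `cd` and exports the resulting environment,
# so Emacs and any LSP servers it launches pick up the right binaries.

# Prepend project-local tool directories to PATH.
PATH_add ./bin
PATH_add ./node_modules/.bin

# Create/activate a project venv (direnv stdlib helper).
layout python3

# Project-specific variables for the dev environment.
export DATABASE_URL="postgres://localhost:5432/myapp_dev"
```

After editing, `direnv allow` approves the file; from then on, entering the directory sets everything up and leaving it unsets it.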


I think it’s fair to say that in the age of the internet, everyone is sidetracked all the time.

Another thing I like about automation is that it lets you think bigger thoughts. If I have to go in and tweak a report in Excel, I’m only going to have time to do the bare minimum to get the result I need. But if I load it into pandas, some additional analysis is only a single command away, so I’ll probably do it. And with the rise of AI tools like ChatGPT, I don’t even have to remember the commands; I just ask the AI, “hey, come up with a forecast for this variable”.


> It’s our industry’s equivalent to mise en place.

What an excellent comparison! Thanks!


> I automated logging into my website’s admin area to edit a post. Now I type “edit” in the address bar. I must have used that shortcut a thousand times in 2022.

Cool! Maybe one of the best illustrations of a frequent task, indeed!


What browser do you use? How do you create shortcuts which can be accessed by typing in the address bar?


In firefox you can give a bookmark a keyword that you can then write in the address bar to open it. It can also be used for adding search engines.


That bookmark can be a bookmarklet/javascript as well, so you can run commands typing in the address bar and hitting enter.


I've used both things but never considered combining them. Very clever.


You can abuse the Chrome custom Site Search feature to achieve the same. You don't have to use the %s argument substitution thing.


I do this without %s to go to dev, qa, or prod: ad/aq/ap for Airflow, kd/kq/kp for Keycloak.

I also started using custom searches for every new technology. I know I'm going to be googling "kubernetes foo" or "Elasticsearch bar" 1000 times in my life, so I just make a "k8" or "es" search engine right from the start. Also "diff %s" for "difference between".


That's exactly it. A Javascript bookmarklet with an assigned keyword ("edit").


Care to share it so I can crib off of you? Oddly enough, I write JavaScript all the time but never really got comfortable writing "bookmarklets"...


This is the code:

    javascript:void(window.open(document.querySelector('head%20%3E%20link%5Brel=edit%5D').href,%20%20'_blank'));
Decoded, that's `window.open(document.querySelector('head > link[rel=edit]').href, '_blank')`. It relies on the page having a <link> tag with the URL to the admin panel.


I would extend this idea with step-wise semi-automation. There have been large, complex data sets I had to wrangle, or sysadmin operational tasks I had to complete, that I did in several automation steps. "Bend, fold, and staple" the problem with a script, check the results, make a decision, plan the next step. Repeat until done. (PS: Keep all your scripts and record notes in a playbook.)

For problems that happen more than once, use the playbook to repeat. Learn about the corner cases and record them in the playbook. For problems that repeat often, add more automation as needed until you have a prototype for the fully automated version. If there is enough value and problem stability, decide whether a rewrite is needed to get it out of my hands and have it fully automated. From my experience this was very rare.
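The step-wise approach can be sketched as a driver script with checkpoints (step names here are hypothetical; the point is the marker files, which let a re-run resume after the last good step instead of starting from scratch):

```shell
#!/bin/sh
# "Bend, fold, and staple" driver: each step records a marker file on
# success, so a re-run skips completed steps and resumes where you left off.
set -e

done_dir=".steps-done"
mkdir -p "$done_dir"

run_step() {
    name="$1"; shift
    if [ -f "$done_dir/$name" ]; then
        echo "skip: $name"
        return 0
    fi
    echo "run:  $name"
    "$@"                          # do the work; abort on failure (set -e)
    touch "$done_dir/$name"       # checkpoint: mark this step done
}

# Hypothetical steps standing in for the real wrangling commands.
run_step extract  echo "pulling raw data"
run_step clean    echo "normalizing records"
run_step load     echo "loading into the target system"
```

Between runs you can inspect the results, make a decision, and add or edit the next step, which is exactly the check/decide/plan loop described above.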


> If there is enough value and problem stability decide if a rewrite is needed to get it out of my hands and have it fully automated. From my experience this was very rare.

Cannot agree more with this view.


I once talked with someone who was opposed to something like Let's Encrypt or even scripts for renewing a web server's TLS certificates because: "Automating things makes you forget how to do them." That was an... interesting argument.

Especially because even when I don't use Let's Encrypt or another ACME provider, such tasks are very easily automated by Ansible. Better yet, you can automate reminders and alerts towards any tasks that need to be initiated manually (e.g. provide the new certificate in some Ansible repo directory, let the automation care about actually deploying it).
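As a concrete sketch of the fully automated case (assuming certbot and an nginx-served site, which are my assumptions and not from the parent), the whole renewal loop can be a single cron entry:

```shell
# /etc/cron.d/certbot-renew  (sketch; paths and the reload hook are assumptions)
# certbot's `renew` only touches certificates nearing expiry, so running it
# daily is cheap; the deploy hook reloads the web server after a renewal.
0 3 * * * root certbot renew --quiet --deploy-hook "systemctl reload nginx"
```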

Furthermore, doing things manually is an excellent way to end up with wildly inconsistent states, be it configuration, code style, database migrations, service deployments or anything else, really: https://blog.kronis.dev/articles/my-journey-from-ad-hoc-chao...

At the same time, however, I recognize that some things just aren't easy to automate (e.g. front-end testing in some stacks, or integration tests that need a DB instance when you can't easily launch and migrate Oracle with a bunch of test data in containers, even with Oracle XE, in projects that don't use MySQL/MariaDB/PostgreSQL or something better suited for that), and that sometimes wasting time on those isn't a worthy pursuit.

> But I do not always strive to automate the task at hand entirely.

In other words, I largely agree with this - automate everything that is easy to automate (e.g. unit tests, server config deployments, CI/CD builds and so on) and only consider all of the hard stuff a must if you really need to (e.g. compliance or requirements).


> "Automating things makes you forget how to do them." That was an... interesting argument.

Let me put it differently: Automating things significantly increases the cost of changing them.

While you are doing it manually, you have the knowledge, the flexibility and the power to change and adjust if needed. When automated, you lose knowledge, flexibility and control over the task.

It can be good or bad depending on the situation. When we talk about strictly defined actions with a high risk of human error, automation is indeed necessary. Its power to resist change helps us.

When you automate fuzzy, volatile, complex decision-making logic, it becomes a curse rather than a blessing. It is almost impossible to do it correctly, with test coverage and verification and debugging and covering all of the corner cases. (How often do you see proper QA for infrastructure glue?) And even if you do invest an enormous amount of resources, you then put all that knowledge of the process into a black box and lose the key. Which leads to situations where, to change the process hidden in the box, people choose to create new layers of automation around it rather than look inside.

Refactoring a mess of a process is hard. But refactoring a mess, which has been automated, is simply impossible.

(Not saying this justifies the argument against Let's Encrypt though)


> Automating things significantly increases the cost of changing them.

I've described this as the difference between traveling by train vs car.

When you're manually driving a car, you can go anywhere your tires are capable of driving. You have full freedom, full optionality, to change direction.

When you're sitting on a train (traveling by automation), there's a set schedule, set stations and pre-built tracks. There's no option to deviate from your itinerary at all. Moving train tracks takes too much time.

By laying the tracks up front, automation reduces control/visibility and ossifies your decisions in a way that limits future options. The tradeoff is that you can sit back and read a magazine while a machine does the work.

People generally don't like to hear this - they immediately see the benefits but are very likely to ignore or brush off the additional complexities and constraints that automation adds.


> Automating things makes you forget how to do them

I understand this anxiety and have definitely felt it before. In most of those cases it turned out that once I automated the thing, it was actually a relief, because now not only do I not have to remember the specific details, I also have a script that I can pull up and read if I need to remember those details. Automation can be great documentation sometimes. (big emphasis on sometimes though)


If the automation script is commented well, it also becomes the documentation for the procedure. So in that sense it helps you remember as well.


My best heuristic to date for finding some sort of reasonable balance is that I will just turn down requests to automate tasks that can’t be described in full-ish detail in English (or whatever language you speak at work).

Explaining that this is what it would take in order to get to the point of automation is the only way I’ve found to make the a-ha lightbulbs turn on.

You’d think this was obvious, but I suppose people think AI/ML is magic.


My experience exactly. "Can you automate it?" should always be followed by a discussion of what "it" precisely means.

If you can describe in English not just the happy path but all possible failure modes, then your problem is automatable. If there are nodes in the real workflow that are missing from your mental flowchart, you need to put in more work to understand and describe your problem before it can be automated. Codifying these decisions in writing and using them as part of your _manual_ process is a mandatory prerequisite to automation.

Automation is almost entirely limited by the human mental capacity to model the problem precisely. It's not a technical problem at all. This is hard for people to hear - automation means a lot of front-loaded mental work for everyone involved.

In a manual system, you can defer that mental work to runtime and make decisions on the fly.

In an automated system, you're ceding control to a machine. You must first codify the contents of someone's brain with incredible detail such that a computer can make every decision the same way a human operator would under the same conditions. In an automated system, the criteria cannot be ambiguous in any way (if your flowchart diverges depending on whether a file is valid or not, you need a battle-tested `file_is_valid` function). It's useless to talk about automating an entire graph when you can't even describe the nodes.
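To illustrate the `file_is_valid` idea with a sketch: the fuzzy human judgment "does this file look right?" gets codified as explicit, testable checks. The specific criteria below (readable, non-empty, an expected CSV header) are hypothetical placeholders for whatever a real workflow would require:

```shell
#!/bin/sh
# Sketch: an unambiguous validity gate for an automated flowchart.
# Each fuzzy criterion becomes one explicit, testable check.

file_is_valid() {
    f="$1"
    [ -r "$f" ] || return 1                      # must exist and be readable
    [ -s "$f" ] || return 1                      # must be non-empty
    head -n 1 "$f" | grep -q '^id,name,amount$'  # must carry the expected header
}

# The automated branch is now well defined:
#   if file_is_valid input.csv; then process; else quarantine; fi
```

Every edge case someone discovers at runtime becomes another line in the function, which is exactly the front-loaded mental work the parent describes.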

A good model that I use is self-driving cars. We certainly have the technology for 100% self-driving vehicles. But considering the number of edge cases in real driving conditions, most people are very hesitant to cede control to a machine. There's an implicit realization that while automation might be convenient, the potential negative consequences of a machine error are too great and most drivers choose to retain control.

The negative consequences of bad automation in IT are usually less severe but still an enormous drain on resources to babysit the failure states of poorly-conceived automation workflows.


I mostly agree, except for the idea that instructions “cannot be ambiguous in any way”. I think this is more of a nuance than anything, however.

I think the difficulty of the nuance is that we usually don’t realize when our language is ambiguous. Handwriting recognition is a super ambiguous problem, despite being easily solved with great accuracy today: good luck formalizing how it’s actually done and explaining it to a layperson.

The difficulty in most non-trivial AI/ML tasks is in figuring out how to structure the problem (for handwriting recognition, let’s not forget that we have the tremendous benefit of hindsight), which essentially amounts to deciding that some kinds of ambiguity are okay (eg.: things that can be reduced to a statistical or classification problem), and some kinds of ambiguity need to be resolved (eg.: avoiding potholes is probably a good idea, and one way or another this has to be rewarded or encoded, by some mechanism… and great care is required to account for basically everything you’d expect to find in a really good driving manual).

There are some types of ambiguities that AI/ML are very good at… But to your point, there is a degree of “relative unambiguity” that is required.

The difficulty is compounded by human biases too: what is obviously-unclear to a careful observer might appear to be super clear to an overconfident person.


I've gone down the edge case rabbit hole of automation when I discovered AutoHotKey. I love automation, if I can get a computer to do a computer task I'm definitely not doing it manually. But I have to catch myself before I get too involved solving the problem because I want perfection at the beginning.

In my experience this rarely happens and I have to iterate repeatedly; I'm fully aware I'm not clever enough to cover all edge cases before they present themselves.

I remember automating one system (a web interface, so with AutoHotKey you're relying on tab order) that seemed to change every time you opened it! Semi-automation here was a must.


Computers are automated by their very nature.

Everything is automated by definition. If I type five commands, that's a script that can be run again. If I click an icon or choose a menu option, that triggers automation that someone wrote.

The interesting point to consider is how specific your information processing is. My last five commands are probably useless as a script, not because it isn't automated, but because it is far too specific to what I just did and will not happen again.

What we as developers spend our time on when we want to save ourselves from doing the same thing again is generalization. We pick apart a process that already exists and parameterize it. We insert conditionals and, hopefully, error handling.

That's it. Generalization. Not automation. Anyone who uses the latter word is generally not doing it, since the term requires something un-automated to distinguish it from, which in this context makes it not useful.


I'd say that's just the view from one side. If you overdo your automation without considering generalization and parameters, you lose flexibility.

But when approached from the other side, you can also have generalization without automation: if you don't create a script for your five commands even if they turn out to be used over and over again.

For example, I'm imagining deployment of software releases:

  1. If you don't even have a defined way of deploying a release, you have neither generalization nor automation
  2. If your deployment is well defined, but requires several commands to be executed manually each time, you have generalization, but no automation
  3. If you can deploy a release with a single command or two, you have both generalization and automation
  4. If your release always deploys on a git push automatically, you lost generalization again
The last one is maybe a little odd, but in terms of generalization, I think of it as "being in control of which version is deployed without working against your own scripts".
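Scenario 3 can be sketched as a single parameterized command. The server name, paths, and service are hypothetical, and the real commands are shown as comments; the point is that the version stays a parameter, so you keep control over what is deployed:

```shell
#!/bin/sh
# deploy.sh — sketch of "one command, both generalization and automation".
# Hypothetical server name, paths, and service; real actions commented out.
set -e

deploy() {
    version="$1"
    echo "building $version"
    # git checkout "$version" && make release
    echo "shipping $version"
    # scp "build/app-$version.tar.gz" app-server:/opt/app/releases/
    echo "activating $version"
    # ssh app-server "ln -sfn /opt/app/releases/$version /opt/app/current && systemctl restart app"
}

deploy "v1.2.3"
```

Contrast with scenario 4: if this ran on every git push, the `version` parameter (and with it the choice of what to deploy) would disappear.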


The very commands are in themselves a script. Your command history is a script. Take bash, for example. Is "a; b" one command or two? The answer is well defined but also useless. Every command is the automation of millions of instructions anyway. This goes for other types of command interaction as well, not just command lines.

Think about what "creating a script" actually entails. That's parameterization, not automation. That may seem like a silly definition game, but it hinders non-IT people's ability to reason about automatic data processing.

The example about software releases is a good one, which I think illustrates the limited usefulness of the concept of automation within IT. The difference between deploying software with two commands or five commands, the difference between your scenarios two and three, does not represent a difference in automation levels. Not as long as the two (or five, or ten) commands are constant and well defined!

Those five commands could for all practical purposes be regarded as one. There may well be examples where only one command is needed, but one which is more complex than five commands together! Which one would be the most "automated" scenario then?

The difference between your second and third scenario is one of granularity, not automation. The idea being that it is irrelevant how many commands a step consists of, but instead how many steps there are, where each step can be single stepped, paused, and re-run. And that is a difference in granularity, not in automation.

I hope to have shown that the idea of automation is damaging to modelling IT processes. The important scenario of your example is actually the first one. Is the release process well defined? If it is, the rest is implementation. And that implementation is generalization, not automation. Someone who does not recognize that is bound to do a lot of unnecessary work, which is something I have observed in practice many times.


I wrote a tool to bring up a microservice architecture complete with local load balancing via haproxy.

It was incredibly powerful. It would automatically clone each microservice, pull its dependencies, and build it. And deploy it using Chef or Ansible. Every microservice had the same style of operation.

It would use all the same deployment code as production. For rabbitmq clustering for example.

It was extremely useful for local testing.

https://GitHub.com/samsquire/platform-up

The problem was adoption. Most developers work on one particular microservice at a time, and changes to multiple microservices at the same time are rarer.

I think local testing is incredibly important for velocity. I want to test the entire platform as a whole so having the tooling to test the entire platform in one go is very powerful.

I think when there is automation that is used rarely, people are less fond of it.


If I’m doing a task for the second or third time, I start planning to automate it.

Automating a task, as by ansible or a script, documents the task procedure.

Documented procedures generally can be verified, iterated, extended, enhanced, and reverted.

Invoking an automated procedure should not be more complex or require more preparation than manually performing the task.

Once a procedure is automated, modifications to the procedure should be implemented via the automation, not manually.

I didn’t mean to lay out a bunch of principles off the top of my head, I’ll stop!


This is related to my interest in customizing my workflow to the point where any marginal customization only makes my life slightly ever so easier. Vim is a good example. My config files are hundreds of lines of code, but I always have the urge to make it even better and modify some settings.

I do think that 80% of my productivity boost with Vim is achieved by only 20% of the plugins and configs, but the perfectionist within me always wants that extra 20% productivity even at the cost of reducing my productivity!


First of all, keep doing that Lucas! You are more brave than I ever was or probably will be! You rawk!!!

> not really for the sake of gaining time to work on something else, but rather that I was felling [sic] in love with the solution. Designing the solution, coding it, testing for all edge cases and... Debugging. A lot.

Lucas has apparently done "QA automation."

Any test dev, the kind you'd want to hire if your managers hadn't already decided to "automate test steps", would tell you not to automate test steps. That's slow and brittle, and automated and manual tests are very different things.

Do not do manual QA automation, in case that wasn't clear.

A test dev is a frikkin' ninja who follows Wall's 3 Virtues. [1] Lucas, if you can figure out where we are (and the secret knock), you can join us. Honest.

If you join the sibling-hood, you will understand:

> multiple sides to a task and that uncovering every facet

This calls for parameterized or tabular or property-based (synonyms-ish) testing. Look those up for the Language/framework you're using.

> may not be worth it

Well . . . Larry Wall would say if you have to do it over and over we've probably already talked about it for too long . . .

> whatever dull or repetitive task

Yes, automate those. The fact that you're claiming things are not dull or repetitive enough . . . make them seem like a forum. :-P

[1] https://hackernoon.com/larry-walls-three-virtues-of-a-progra...


> if you can figure out where we are (and the secret knock), you can join us. Honest

Best hiring line, ever. Will definitely go look for the hidden circle of test devs.

> tabular or property-based testing

That's interesting, I did not know it was called that! I have followed this approach in some cases, testing combinations of inputs. It was mainly when trying to automate tracking QA, by passing different dataLayer e-commerce values.


Isn't Cypress, for example, 100% about automating test steps?


I kinda agree. I'd also suggest you don't need, let's say, infrastructure-as-code for a single VM deploy; just go click it in the provider's console and automate the important bits (VM setup, deployment tasks), etc.


Sometimes it makes sense to automate failure. This can be way quicker than automating a task and ensures quality. Then you can proceed to automate various steps of the process depending on available time, each task's automability (is this even a word?), position in the critical path, etc.


I have a similar view about self-driving cars. Cars should just be semi self-driving. Last time I was stuck in traffic I wished the car had auto acceleration and braking; I was happy to just steer. I was totally exhausted by switching my leg between brake and accelerator.


Many newer cars have this, adaptive cruise control. A lot of newer cars will steer a little, or shake the steering while if you are drifting out of your lane too.


> I was totally exhausted by switching my leg between brake and accelerator.

This one deserves an immediate nomination for this week's first world problem.


I love how many kinds of automation there are!

One kind I rarely see is "autonomation", which is basically automating so that a failure prompts a human to fix something and then resume the automatic process. I don't see many people make these, but they are great for tasks that need to run constantly. You could (for example) make an ETL process that halts on errors, allows a developer to fix something, and then resumes right where it left off (rather than starting over from scratch, which would be like throwing out half-finished goods!)

Another is the "incremental automation", where you start out with literally just a checklist of manual tasks, and you slowly automate one piece at a time over a long time. You only automate the pieces that produce the most value or have the most need, so you end up spending your time wisely. If you clock how long it takes to complete each time, you can plot the cost savings over time of your automation, justifying to the business dweebs why they should let you spend more time automating :-)
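Incremental automation can be sketched as a runbook where each step is either a command or a prompt. Steps get converted from manual to automatic one at a time, highest value first; the step names and commands below are hypothetical:

```shell
#!/bin/sh
# Sketch of an incrementally automated runbook: automated steps run a
# command, not-yet-automated steps just prompt the operator and wait.

step_auto() {
    echo "auto:   $1"
    sh -c "$2"                    # this step has been automated
}

step_manual() {
    echo "manual: $1"
    printf "press enter when done... "
    read -r _ || true             # wait for the operator (EOF also continues)
}

step_auto   "rotate logs"            "echo '(logrotate would run here)'"
step_manual "eyeball the dashboards"
step_auto   "archive reports"        "echo '(tar/rsync would run here)'"
```

Converting a step is just swapping `step_manual` for `step_auto`, and timing each run (as suggested above) tells you which conversion to do next.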

Yet another is cost-benefit automation. Basically, don't automate it unless you can prove it's going to save you more money to automate it than if you hadn't. There are a lot of different kinds of costs and benefits, so this is more complicated than it seems, but it can be used to justify a costly automation effort (such as hiring an entire team just to do automation!) or reject one. Any time someone says they're going to manage some centralized service or team for a group of other teams, I ask them if they've done the cost-benefit analysis, and if they haven't, I pray for their users...

In general, you should not try to automate things completely (which could be defined as "it not only runs by itself, but also fixes itself") unless there is a very clear cost-benefit. Automate small tasks and run those tasks individually, and if it's convenient, tie the small tasks together into larger automation chunks. About half of the more complicated one-liner shell commands that I run end up in my junk-drawer repo, and eventually I end up using them again.

But my favorite kind is self-service automation. Bottlenecks are the worst! One person or team has to do things for everybody else? Make it self-service! Start by making the cheapest, crappiest, least-reliable, most-embarrassing solution that gives users the ability to do something themselves. Even if it stops working... you just go back to doing the original task for the users. You often hear "well we can hire button-pushers overseas for cheap!" as an excuse to avoid this, but then they're discounting the cost in time from the button-pushers not being efficient enough or misinterpreting requests, etc. Cost-benefit isn't simple! But self-service is (usually) going to give you back time, which is often worth more than money. (I think, anyway.)


Thank you for the detailed list, I totally relate with the cost-benefit approach, it was my main attack angle to justify efforts spent on automation.

> You often hear "well we can hire button-pushers overseas for cheap!" as an excuse to avoid this, but then they're discounting the cost in time from the button-pushers not being efficient enough or misinterpreting requests, etc

And some button-pushing activities are just internal by design and cannot be externalized easily, in my opinion.


Automation is not only about saving time in each iteration, but about saving a lot of time and headache (downtime, money) when you would otherwise introduce a manual error. Plus it has many other benefits: other people can execute the task, auditability, etc.


I don't know if everyone's work can be automated. I hear from people who have automated close to 100% of their work. I don't even understand what kind of job that would be. Maybe someone could enlighten me.


Hi all,

We are working on a couple of the problems in this thread in terms of automation. We want to build this into a product.

Feel free to reach out, we would love to hear your perspectives: hn@aloma.io


Deadlines are the biggest killer of automation.



I prefer my automation fully erect.

Why would I want to do work I didn’t have to?



