I've spent many career years in sysadmin / process automation for manufacturing companies, and LOTO (lockout/tagout) is one of many pieces of proper procedure and process that really impresses me about that industry. I worked at a big steel recycling plant, and the zero-tolerance effort put toward safety, and generally toward making sure this type of stuff was upheld, was impressive. Not locking out a machine, or forgetting your lock on a machine, was a fireable offense with, like, one warning I think. You certainly didn't want to be the guy who was working on a machine that morning and left your lock on it when you went home while someone else was working on it. They'd call you and get you out of bed real quick if the machine couldn't start up because your lock was on there and you weren't accounted for.
SREs and operations people can pick up good habits from manufacturing gigs. A lot of the same concepts (uptime, good documentation, procedure, discipline) are really important to the business at all levels. When lives are at risk, good companies put a lot of time and money into making sure everyone is on the same page.
I was primarily a VMware admin for years in engineering shops, and everywhere I worked constantly had issues with resource allocation: which team had the most resources deployed, who had the most idle vCPU and memory running...
At one company, pretty much my sole job two days a week was to look over utilization by team, shut down idle VMs, and check for any that were provisioned with too many resources. I'd have to e-mail the team, power the VMs down, resize the resources, boot them back up, etc.
I automated all of that with scripts, from the team e-mails (with a nice HTML table report of the pending changes) to actually running the operations to resize and power off the machines. I was pretty proud of that work.
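For flavor, here's a stripped-down sketch of that kind of job using VMware's PowerCLI module. The cmdlets are real PowerCLI; the vCenter hostname, the 5% idle threshold, the report filename, and the target sizes are placeholder values, and real code would wait for the guest to reach a powered-off state before resizing.

```powershell
# Assumes the VMware.PowerCLI module is installed and you have
# rights on the (placeholder) vCenter below.
Import-Module VMware.PowerCLI
Connect-VIServer -Server vcenter.example.com

# Powered-on VMs averaging under 5% CPU over the last week
$idle = Get-VM | Where-Object PowerState -eq 'PoweredOn' | Where-Object {
    $avg = (Get-Stat -Entity $_ -Stat 'cpu.usage.average' `
                -Start (Get-Date).AddDays(-7) |
            Measure-Object -Property Value -Average).Average
    $avg -lt 5
}

# HTML table of pending changes, ready to e-mail to the owning team
$idle | Select-Object Name, NumCpu, MemoryGB |
    ConvertTo-Html -Title 'Pending VM changes' |
    Out-File pending-changes.html

# Gracefully shut down, resize, and restart each candidate
foreach ($vm in $idle) {
    Shutdown-VMGuest -VM $vm -Confirm:$false
    # (real code: poll until PowerState is PoweredOff before Set-VM)
    Set-VM -VM $vm -NumCpu 2 -MemoryGB 4 -Confirm:$false
    Start-VM -VM $vm
}
```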
I get sick of all the downplaying that PowerShell gets. I find interacting with the console a breeze, and just as usable as Bash. I can easily perform an ad-hoc API call, get back an object, explore it, and export it to a well-formatted .csv file for a report in minutes.
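As a sketch of that workflow (the URL and the property names are placeholders; Invoke-RestMethod, Get-Member, Select-Object, and Export-Csv are the actual cmdlets involved):

```powershell
# JSON comes back from the API already deserialized into objects
$items = Invoke-RestMethod -Uri 'https://api.example.com/items'

# Explore the objects' properties and methods interactively
$items | Get-Member

# Keep only the columns you care about and write a clean CSV report
$items | Select-Object name, owner, updated_at |
    Export-Csv -Path .\report.csv -NoTypeInformation
```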
We use it in tooling for tons of things that would otherwise be done in Python and have no issues, especially once you get it set up on Server Core and run the jobs on a schedule (Task Scheduler playing the role cron does on Unix).
I just don't get all the negativity towards it. There's a reason so many vendors provide great PS modules for their products: VMware, AWS, etc.
I find it horrifically verbose. Granted, there are a decent number of aliases, but some pretty basic stuff requires 3 words separated by hyphens.
Way easier to just use Bash on WSL on Windows 10 machines.
I feel like that makes it readable. Even people who don't know PS very well can read cmdlets and pretty much know what's going on if you use the full verbose cmdlets.
I think once scripts are over 1k LOC, the naming of functions can get pretty unwieldy when you try to force everything to fit the approved verb-noun mantra.
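For what it's worth, the approved list is discoverable with the built-in Get-Verb cmdlet, and the friction usually shows up when a domain action has no natural verb on it (the function below is a made-up example of the kind of contortion that results):

```powershell
# List a few of the approved verbs PowerShell wants you to use
Get-Verb | Select-Object -First 5 -ExpandProperty Verb

# "drain the connection pool" maps cleanly onto no approved verb,
# so you end up reaching for a catch-all name like this:
function Invoke-ConnectionPoolDrain {
    # placeholder body
}
```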
I think it boils down to a lot of people having a fundamental misunderstanding of what makes PS different from Bash, i.e., not understanding how significant it is to have processes communicate through piped objects rather than piped text, which is far more brittle.
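A toy contrast, runnable in any PowerShell session (WorkingSet64 and Id are real Get-Process properties; the awk line is the text-pipeline equivalent, shown as a comment):

```powershell
# Text pipeline (bash-style): position-sensitive column splitting.
#   ps aux | awk '$6 > 102400 { print $2 }'   # RSS > 100 MB -> PID

# Object pipeline: filter on a named property, no parsing anywhere.
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Select-Object -ExpandProperty Id
```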
I think it's simpler than that: it's web and Unix devops people who just don't want to learn new tooling and are upset that it isn't exactly like bash/coreutils.
Since it is impossible to force all inputs to be well-constructed, text acts as the more predictable interface. Once I learn a bunch of command-line tools for processing text, I know exactly how to compose commands or post-process text from a file or extend the usefulness of a program that doesn’t happen to produce output I want.
While there may be something “brittle” from time to time, in my experience this just means it was possible to tackle the problem with text when the alternative would have been to have no way to do it at all.
PowerShell feels like it wants everything to be structured perfectly, and “perfect is the enemy of the good” applies. Sometimes, I don’t care if there is a way to perfectly structure a particular input because many use cases just don’t have stringent requirements.
> Since it is impossible to force all inputs to be well-constructed, text acts as the more predictable interface.
Disagree. If the program changes its output formatting even slightly (say, moving from the GNU version of a tool to the BSD version, where such things happen), e.g. swapping the order of two columns, whoops, your script is broken. When you deal with objects instead of text, the textual representation a program prints can change all day long and you're not affected. The only breaking change is a change to the output object's shape (renaming or deleting properties).
Writing scripts that rely on text being output in a specific order with a specific format is very brittle in my opinion.
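To make that concrete, compare a position-based text parse with a name-based property lookup (the df line is the brittle text version; Get-PSDrive and its Free property are real, though the drive name assumes Windows):

```powershell
# Text version: free space on / by grabbing column 4 of `df -k`.
# GNU and BSD df lay out columns differently, so this can silently
# read the wrong field after a platform or version change:
#   df -k / | awk 'NR==2 { print $4 }'

# Object version: ask for the property by name; how the console
# chooses to render display columns is irrelevant to the script.
(Get-PSDrive -Name C).Free
```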
Robust scripts require diligence appropriate for the environment. If you’ve deployed systems with different default tool versions, you factor this in, e.g. by installing a common version that both can use (or just make scripts that can figure out how to work either way). Two versions of a tool isn’t really any different than two versions of a compiled-in API, except that APIs tend to be more rigid (not always, e.g. some languages allow runtime querying).
There’s a common rule in Unix-like systems, essentially Postel’s robustness principle: be liberal in what you accept, conservative in what you send. It matters to have programs that can adapt, because this makes a lot of things more practical.
Assuming the system is put together correctly, all inputs certainly ought to be correct, because whatever component is producing the corresponding output simply wouldn't have compiled if they weren't...