This kind of tool needs context awareness to be useful.
After the first 30-40 times you’re asked if you want to delete these pods, answering the next prompt becomes automatic. If you’re in an “I’m on my dev env, nothing bad can happen” mindset, that prompt won’t get you out of it; it will just be a tedious step. We’ve seen that time and time again.
It becomes a lot more interesting if the tool can ask “you’re going to delete from the prod cluster, do you really want to?” and only does so for production.
Same for “rm -f”, really: confirm before you delete a tree of thousands of files, not when it’s 3 empty directories you created 3 minutes ago.
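A context-aware check along those lines could be sketched as a shell wrapper. This is purely hypothetical (the function name and the assumption that production context names contain “prod” are mine, not from any tool discussed here):

```shell
# Hypothetical wrapper: ask for confirmation only when deleting
# while the current kubectl context looks like production.
kubectl() {
  ctx=$(command kubectl config current-context 2>/dev/null)
  case "$1:$ctx" in
    delete:*prod*)
      printf 'Deleting from context "%s" -- really? [y/N] ' "$ctx"
      read -r answer
      [ "$answer" = y ] || return 1
      ;;
  esac
  command kubectl "$@"
}
```

On a dev context this passes commands straight through; only `delete` against a prod-looking context triggers the question.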
This is the same logic as "sudo forces people to think about what they're typing", which it doesn't: it just makes you type sudo and your password really quickly and equally mindlessly.
sudo sticks what you're doing in your auth.log file, which is fired off to your syslog server, so you've got a chance of knowing how you messed up, or who else messed up.
No, it really doesn't. You just get quick at typing "sudo shutdown -r now" and typing your password quickly, and when you do that on prod it is just as mindless as "shutdown -r now". You can't make people think via extra steps.
When I used to admin Windows servers, we had production’s desktop background set to red. That worked well, especially when you had multiple remote desktop sessions open. I wonder if that’s doable for non-GUI Linux servers.
The standard way is to set PS1 (the commandline prompt text) to indicate the environment. With escape codes, you can pretty trivially get some flashing red text telling you to be careful.
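For instance, something like this in the production host's bashrc (the colors and wording are just one possible choice; 41 is a red background, 5 is the blink attribute, and terminal support for blinking varies):

```shell
# Example PS1 for a production box: a blinking red "PROD" tag before
# the usual user@host:dir prompt. The \[ \] pairs mark non-printing
# escape sequences so bash computes the prompt width correctly.
PS1='\[\e[5;41;97m\] PROD \[\e[0m\] \u@\h:\w\$ '
```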
If used properly, the -Confirm parameter in PowerShell can also let you change the impact level per environment and ask for confirmation on things that wouldn't be a concern in dev.
Also, it's underused, but the just-in-time access stuff, where you can grant users subsets of permissions (again, per environment), is pretty impressive.
A not-so-standard way is to output a banner via shrc (instead of putting it in the motd, which the ssh daemon will only output for interactive sessions) - which incidentally will also break pretty much all tools using SSH as a transport (scp, rsync, among others).
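A common workaround is to guard the banner on interactivity, so non-interactive SSH transports stay clean. A minimal sketch (the function name is just for illustration):

```shell
# In ~/.bashrc or similar: print the warning banner only when the
# shell is interactive, so scp/rsync/sftp over SSH are not broken.
print_env_banner() {
  case $- in
    *i*) printf '*** PRODUCTION: %s ***\n' "$(hostname)" ;;
  esac
}
print_env_banner
```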
That was also the case in very old Linux distros when you logged into X as root. I clearly remember something like Fedora or Mandriva doing exactly that. Note that graphically logging in as root on a Unix system is not a great idea, though; most GUI software isn't designed for it.
It’s pretty easy to insert colors into the prompt using conditional logic in things like bashrc. You can also add them to other system tools like ‘screen’ and (I assume) ‘tmux’. People can override them if they want, but it’s a good starting default (and people rarely override defaults on a fleet of servers).
My prompt only includes username if it's not the usual one, and it's on an orange-red background if it's root.
Similarly, my prompt doesn't include the host when I'm logged in normally, adds it when I'm in a shell via sudo/doas, and colors it when I'm logged in via SSH.
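Put together, that kind of conditional prompt logic is only a few lines of bashrc. A sketch (colors and conditions are of course a matter of taste):

```shell
# Show the host only over SSH, and use a red background for root,
# so unusual contexts stand out at a glance.
if [ -n "$SSH_CONNECTION" ]; then
  host_part='@\h'
else
  host_part=''
fi
if [ "$(id -u)" -eq 0 ]; then
  PS1="\[\e[41;97m\]\u${host_part}:\w#\[\e[0m\] "
else
  PS1="\u${host_part}:\w\$ "
fi
```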
I think a better approach might be to wrap destructive actions with preview functionality that is carefully tuned to summarize, succinctly, what is unique about the resources that are going to be deleted, along with the cardinality of the operation.
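As a sketch of that idea for plain file deletion (hypothetical helper; the name and the threshold of 100 entries are arbitrary choices of mine):

```shell
# Hypothetical wrapper: before deleting, report how many filesystem
# entries are affected and only ask for confirmation above a threshold.
confirm_rm() {
  target=$1
  count=$(find "$target" 2>/dev/null | wc -l)
  if [ "$count" -gt 100 ]; then
    printf 'About to delete %s entries under %s. Continue? [y/N] ' "$count" "$target"
    read -r answer
    [ "$answer" = y ] || return 1
  fi
  rm -rf -- "$target"
}
```

Small deletions go through silently; only a big tree triggers the question, which is exactly the inversion of the "confirm everything" problem described above.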
Only because the creator is on here: oops has two O's. Unless you were making a pun with ops in which case maybe remove the elongation so it doesn't seem like a typo.
I don’t think it will help in the long run, except in cases where you mistyped a command. Mistakes happen because some assumptions are wrong, e.g. wrong cwd, hostname, or account, and a captcha-like barrier cannot point that out or make you think about it more. The correct way to prevent mistakes is easy reversibility. Except for very big files/changes, we now have essentially unlimited disk space. Programs should learn to use it to manage undo.
I frequently do `git reset --hard`, knowing what I am doing, only to find a few days later that I probably need the code after all.
Perhaps I just need a tool to intercept this and back up the affected files somewhere. The files can be deleted after some time (a week? or never?). And when I regret resetting the files, I can just dig through the backup and try to find them.
My solution for this is a script that commits all my changes then tags the resulting commit with a basic descriptive name. Something like this:
git discard bad-idea-didnt-work
Which creates a tag discarded-bad-idea-didnt-work, then resets back to the previous commit.
I can still find and rescue the changes later by looking at my tags.
Others have suggested using the stash for this but I prefer to keep my stash stack free for shorter-term contexts, eg. if I'm halfway through something but need to go checkout another branch briefly.
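A minimal sketch of such a helper, assuming it's exposed as a `git-discard` script or shell function (the function name and commit message format here are my own guesses at the described behavior):

```shell
# Sketch of the "git discard" helper: commit all pending changes,
# tag the commit so it stays reachable, then reset the branch back
# so the working tree is clean but the work is recoverable.
git_discard() {
  name=$1
  git add -A &&
  git commit -m "discard: $name" &&
  git tag "discarded-$name" &&
  git reset --hard HEAD~1
}
```

Putting an executable `git-discard` script on your PATH would let git pick it up as the `git discard` subcommand.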
You've already gotten tips based on git. But if you use a JetBrains IDE, you also have the local history there. Or can use the shelve functionality, which is like git's stash but a bit smoother.
I don't know how JetBrains IDEs do 'local histories' but FWIW VSCode recently also implemented "local file history"[1]. I have not had to use it to salvage anything yet, but I'm sure that day will come.
I was surprised to learn a few months ago from https://news.ycombinator.com/item?id=28958613 that the "git reset --hard" does not actually delete the files and you can still recover references to them using "git reflog".
I'm pretty sure most git users (who know what "git reset" is) don't realize this!
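A throwaway demo makes this easy to verify (scratch repo, safe to run anywhere):

```shell
# A commit "lost" to `git reset --hard` stays reachable through the
# reflog until git eventually garbage-collects it.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "first"
echo important > notes.txt && git add notes.txt && git commit -qm "second"
git reset -q --hard HEAD~1       # notes.txt is gone from the tree...
git reset -q --hard 'HEAD@{1}'   # ...but the reflog brings it back
cat notes.txt                    # prints: important
```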
You need to commit them first. If you did not commit them, it doesn't work. But yeah, maybe I can just define an alias to add+commit+reset, which should solve this problem.
I just stopped worrying and commit everything into my own repos, even half baked or temporary, making sure that a commit message or a file name signals it clearly. It’s a versioning system for this exact purpose, not a ceremonial sanctuary.
You could just call `git stash` instead of `git reset --hard`. It'll have the same effect on your working directory, while also "backing up" the changes.
IIRC stash works more in a stack-like manner? Say you do git stash, then git rebase -i to clean up the history, proceed to write some other features, git stash again, etc. Can you recover the files from the first stash?
Yes, you can. If you save, apply or pop with git stash with no arguments, it works like a stack by default, but you can also access the other entries. For example `git stash pop stash@{n}`, where n is the index of the entry in the stash "stack", applies the nth stash entry on top of the working tree and then removes it from the stored stashes. The latest entry is at index zero, so stash@{0} would be the last saved/pushed one.
You can see a list of saved stashes and their indices with `git stash list`.
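For example, with two stash entries in a scratch repo:

```shell
# Two stashes: stash@{0} is the most recent, stash@{1} the older one.
# Popping stash@{1} applies only that entry, not everything above it.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com && git config user.name demo
echo base > file.txt && git add file.txt && git commit -qm base
echo first-change > file.txt && git stash -q    # becomes stash@{1}
echo second-change > file.txt && git stash -q   # becomes stash@{0}
git stash list
git stash pop -q 'stash@{1}'                    # restores first-change only
cat file.txt                                    # prints: first-change
```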
Ah, you can use pop with other stashes too, thanks for the tip... though that feels a bit like a semantic misnomer (unsure what term would best describe what I'm thinking, but 'pop' kind of feels like it should only be applicable to the top element in a stack; I'd be scared that popping stash@{5} would apply all of stashes 1 to 5 at once lol).
I've been using `git stash apply stash@{n}`, which works just fine, but the stash is then not removed from your list and results in clutter.
> though that feels a bit like a semantic misnomer
It does, but then, this is the git command-line interface. :)
My understanding is that the stash pop command is simply an apply plus the removal of the successfully applied stash. The man page reflects this. It does not pop everything above the element in the list.
Yeah git reflog will show you the previous commit (in a pretty confusing way tbh but it does the job). The original files will get GC'd by git at some point.
I think there is another use case for a tool like this: predictable auto completion instead of input validation.
For example, if you type this:
`mkdir test && cd test`, the tool could realize that the `test` folder was created and offer tab completion for it in the second part. I have long wanted a shell with better autocompletion, more prediction, etc.
Usually when I make these mistakes I’m running the command on purpose, but my context is somehow confused. So I would probably happily solve the challenge and break everything.
Do people really issue these risky commands (and regret it)? Or is it rather variable substitution that comes into play? E.g. a reference to an undeclared variable that expands to an empty string.
Yes, but probably not in a way that makes the program useful.
Example: I recently executed `git checkout .` (which, incidentally, would not be caught by this project) on the wrong repo. Oh, the pain. But I would have probably confirmed the "captcha" blindly, as I really wanted to execute that... just not on that repo.
Other than rm -rf / (which can happen if you manage to hit enter rather than another key), this feels not terribly useful. Who has accidentally run kubectl delete ns prod? Anecdotally, the accidental enter seems like a much more likely scenario (I once ruined a server by chmod -R a+r /-ing it; that feels like a better use case for accident protection).
I've accidentally deleted all Docker containers on a shared system (who knew - Docker has no user-level security at all). Though this command wouldn't help in that case since I didn't know the command I was running was dangerous.
That sounds like it would be a bit too easy to accidentally go through, especially since Enter was used to initiate the command. Hold down the key just a little too long and it'll be as if you weren't even challenged.
shellfirm does not execute the command.
It just intercepts the command when you run it, checks it against the patterns, and prompts you for verification. Your shell is the executor.
Not a plethora of Rust files that compile to a binary.
I would never trust this thing not to cause more issues, unless it is so small and elegant that I can audit it very easily and be very, very sure it is safe.
I don’t know rust, but the rust files in the submission look very straightforward. (Why wouldn’t it? It’s just matching a few hardcoded commands. It’d be easy in any language)
Basically, you need a hook that gets the command before your shell executes it.
When the shell receives the command, the hook intercepts it and runs shellfirm before the command is submitted to your shell.
I just find it hilarious how that guy wanted you to implement this as a shell script so he could “audit it very easily”. I mean, bash-preexec.sh isn’t the worst shell script I’ve seen (It even has a Bats test!), but anyone that thinks shell scripts are easy to audit is full of shit.
Because of the indirection and complexity it causes.
The user is in a shell. The simplest way to add something to their system is a shell script. No moving parts. You get what you see in the script.
The way the project is set up, there are multiple moving parts. The interaction between the shell, some compiler and a bunch of rust files.
Imagine if, during the moon landing, Neil Armstrong could not have communicated directly with Buzz Aldrin, but instead had to go through a translator who translates English to German, who passes the message on to a translator who translates it from German to French, who then passes it on to Buzz. BOOM!
Hey, this project is very small, and it does not execute commands! It only prompts you for verification when it detects a risky pattern from a predefined YAML file (default, or defined by you).