I have automatic light/dark mode tied to the daylight cycle on my laptop, and the number of colour-emitting applications that break when I turn on light-on-dark mode is astounding.
If you are writing a command line tool and you absolutely insist it must have colours then stick to the ANSI 16 colours or allow end users to customize the colour scheme.
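A minimal sketch of the first option (generic example, not from any particular tool; `setaf 1` maps to slot 1 of whatever 16-colour palette the user has themed, so light and dark schemes both keep working):

```shell
# Use the terminal's own 16-colour palette, and only when stdout is a TTY.
if [ -t 1 ]; then
  red=$(tput setaf 1) reset=$(tput sgr0)
else
  red='' reset=''
fi
printf '%serror:%s something went wrong\n' "$red" "$reset"
```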
When people insist that code in the database is never acceptable, these are precisely the kinds of situations where the benefits outweigh the downsides.
If I were to come up with a rule: a database should avoid relying on stored procedures to maintain invariants, but it shouldn't avoid using stored procedures for invariants it would otherwise struggle to maintain.
Most short-running scripts call into the standard library or into other packages. I think the trivial subset is too small to bother with such an optimisation.
In isolation (which is what CVSS is all about) this is not a network exploitable vulnerability, even if you can craft an attack chain which exploits it over the network.
So:
AV:N -> AV:L - reason above
AC:L - correct
PR:N -> PR:L - to exploit this you need to get cups to process a PPD file. Ignoring how it got there, writing a PPD file requires low privileges on the local machine (unless I'm wrong and you can't add a printer to cups as a local user by default, in which case this becomes PR:H with an overall score of 7.7). These might be fulfilled by another component of the attack chain, but again, you need to strictly think in terms of the vulnerability in a vacuum.
UI:N -> UI:R - that a user must perform a task after you begin exploitation in order for the exploit to complete is a classical example of required user interaction
S:C - correct, attacking cups and getting root on the whole machine is considered a scope change
C:L -> C:H - Running arbitrary code as root on a machine is a total breach of all confidentiality of the local machine, so I'm not sure why this was marked Low.
I:H - correct
A:L -> A:H - Running arbitrary code as root on a machine lets you do anything to completely disable it permanently. Availability impact is high.
In summary: a score of 8.2 (CVSS:3.1/AV:L/AC:L/PR:L/UI:R/S:C/C:H/I:H/A:H) for CVE-2024-47177 in a vacuum.
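For reference, the v3.1 base-score arithmetic behind that 8.2 can be sketched in a few lines of awk (weights taken from the CVSS 3.1 specification; scope is Changed, so the changed-scope impact formula and the adjusted PR:L weight of 0.68 apply):

```shell
awk 'BEGIN {
  iss    = 1 - (1 - 0.56) * (1 - 0.56) * (1 - 0.56)        # C:H, I:H, A:H
  impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ^ 15 # scope changed
  expl   = 8.22 * 0.55 * 0.77 * 0.68 * 0.62                # AV:L AC:L PR:L UI:R
  raw    = 1.08 * (impact + expl); if (raw > 10) raw = 10
  # CVSS "round up to one decimal place"
  printf "%.1f\n", int(raw * 10 + (raw * 10 > int(raw * 10))) / 10
}'
```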
CVSS scores are meaningless in a vacuum, and in this case it seems the Red Hat person who calculated them took the "fudge it until it looks bad" approach.
Below is my professional scoring evaluation, trying to keep to the ideas behind CVSS and the spec as much as I can. Although CVSS is used so rarely in my work (as it is usually inappropriate) that I may have made some miscalculations.
If I apply the same exact approach to scoring Heartbleed I get:
7.5 CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N
The key differences between Heartbleed and the final code-execution issue in the attack chain are that Heartbleed is exploitable directly over the network (in a vacuum), whereas the code execution is entirely local (in a vacuum, ignoring the previous elements of the attack chain and assuming they were themselves fixed). Additionally, Heartbleed requires no user interaction, which also raises the score. Conversely, the direct impact of Heartbleed (ignoring what you can do with the information) is confidentiality only (although you could argue it can lead to a crash, which would be a low availability impact, bringing the score up to 8.2).
I don't think this clarifies much about the scores but hopefully you can see why CVSS scores are meaningless without any context. You need to put them in the context of the environment. The other problem is that in an attack chain, the overall outcome might be bad even if all the individual issues score low. But CVSS doesn't apply to attack chains.
At the end of the day, this is a high-risk issue (you say many distros have cups listen on loopback, but I think this is not true: 631 tcp is indeed loopback only, but 631 udp is in fact commonly bound to 0.0.0.0) but only in the context of your laptop which you happen to connect to untrusted networks without a firewall.
In summary:
This problem as a whole primarily affects desktop systems and some servers.
Device running cups exposed to the internet: Critical
Device running cups connected to untrusted (but local/non internet routable) networks: High
Device running cups connected to trusted networks: Medium
There's no standard way of doing file watching across BSDs, Mac OS and Linux.
> it's just a fixed unconditional terminal sequence
Are you referring to the clear feature? Yes, it's fixed. It's also pretty standard in that regard. It's optional so if it breaks (probably on xterm because it's weird but that's about it) you don't have to use it and can just issue a clear command manually as part of whatever you're running in the TTY it gives you. Honestly I don't think the feature is even really needed. I highly doubt cargo-watch needs to do anything with TERM so I am not sure why you mention it (spamming colours everywhere is eye candy not a feature).
But more importantly, this is just a convenience feature and not part of the "CLI". Not supporting long options isn't indicative of a poorly designed CLI. However, adding long option support without any dependencies is only a couple of hundred lines of C.
> And cargo-watch actually parses the `cargo metadata` JSON output
Which is unnecessary and entirely cargo specific. Meanwhile you can achieve the same effect with entr by just chaining it with an appropriate jq invocation. entr is more flexible by not having this feature.
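To be concrete, a sketch of that chaining (illustrative only: the jq filter and entr flags here are my assumptions, and `src_path` lists only each target's root file, so a real invocation would want a fuller file list):

```shell
# Hypothetical: rebuild on change by feeding workspace files to entr.
# jq pulls each target's root source file out of the `cargo metadata` JSON.
cargo metadata --format-version 1 --no-deps \
  | jq -r '.packages[].targets[].src_path' \
  | entr -c cargo build
```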
> (guess what's required for parsing JSON in C)
Not really anywhere near as many lines as you seem to think.
> deals with ignore patterns which are consistent in syntax (guess what's required for doing that besides from fnmatch).
Again, entr doesn't deal with ignore patterns because it allows the end user to decide how to handle this themselves. It takes a list of filenames via stdin. This is not a design problem, it's just a design choice. It makes it more flexible. But again, if you wanted to write this in C, it's only another couple of hundred lines.
From my experience doing windows development, windows support probably isn't as painful as you seem to think.
All in all, I imagine it would take under 10k lines to have all the features you seem to care about AND nothing non-eye-candy would have to be cut (although for the eye candy, it's not exactly hideously difficult to parse terminfo: the terminfo crate for Rust is pretty small (3.2k SLOC), and it would actually be that small (or smaller) if it didn't over-engineer the fuck out of the problem by using the nom, fnv, and phf crates, given we're parsing terminfo, not genetic information, and doing it once at program startup, not 10,000 times per second).
Yes, I think trying to golf the problem is probably not appropriate. But 4M LoC is fucking ridiculous by any metric. 1M would still be ridiculous. 100k would also be ridiculous; 50k is still pretty ridiculous.
> There's no standard way of doing file watching across BSDs, Mac OS and Linux.
You are correct, but that's about the only divergence that matters in this context. As I've noted elsewhere, you can't even safely use `char*` for file names on Windows; it should be `wchar_t*` in order to avoid any encoding problem.
> Are you referring to the clear feature? Yes, it's fixed. It's also pretty standard in that regard.
At the very least it should have checked for TTY in advance. I'm not even interested in terminfo (which should go die).
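For comparison, the TTY check itself is a one-liner in shell (`-t 1` tests whether stdout is a terminal; the escape sequence is the generic ANSI clear, shown only as an illustration):

```shell
# Only emit the clear sequence when stdout is actually a terminal.
if [ -t 1 ]; then
  printf '\033[2J\033[H'   # ANSI: clear screen, move cursor home
fi
```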
> spamming colours everywhere is eye candy not a feature
Agreed that "spamming" is a real problem, provided that you don't treat any amount of color as spamming.
> Which is unnecessary and entirely cargo specific. Meanwhile you can achieve the same effect with entr by just chaining it with an appropriate jq invocation. entr is more flexible by not having this feature.
Cargo-watch was strictly designed for Cargo users, who would obviously want to watch some Cargo workspace. Entr just happens not to be designed for this use case. And jq is much larger than entr, so you should instead consider the size of entr + jq by that logic.
> Not really anywhere near as many lines as you seem to think.
Yeah, my estimate is about 300 lines of code with a carefully chosen interface. But you have to ensure that it is indeed correct yourself, and JSON is already notorious for its sloppily worded standard and varying implementations [1]. That's what is actually required.
> Yes, I think trying to golf the problem is probably not appropriate. But 4M LoC is fucking ridiculous by any metric. 1M would still be ridiculous.
And that 4M LoC figure is fucking ridiculous because it includes all `#[cfg]`-ignored lines in various crates, including most of the 2.2M LoC in the `windows` crate. That figure is just fucking incorrect and not relevant!
> 100k would also be ridiculous; 50k is still pretty ridiculous.
And for this part, you would be correct if I hadn't said "faithful" reproduction. I'm totally sure that a few thousand lines of Rust should be enough to deliver a functionally identical program, but that falls short of a faithful reproduction. This faithfulness issue actually comes up in many comparisons between Rust and C/C++; even the simple "Hello, world!" program does a different thing in Rust and in C, because Rust panics when it can't write the whole text, for example. 50K is just a safety margin for such subtle differences. (I can, for example, imagine some Unicode stuff around...)
> As I've noted elsewhere, you can't even safely use `char` for file names in Windows; it should be `wchar_t` in order to avoid any encoding problem.
Yes, this is true. But I think the overhead of writing that kind of code would not be as enormous as 30k lines or anything in that order.
> At the very least it should have checked for TTY in advance. I'm not even interested in terminfo (which should go die).
Maybe. It's an explicit option you must pass. It's often useful to be able to override isatty decisions when you want to embed terminal escapes in output to something like less. But for clear it's debatable.
I would say it's fine as it is.
Also, if isatty is "the very least" what else do you propose?
> Agreed that "spamming" is a real problem, provided that you don't treat any amount of color as spamming.
I treat any amount of color as spamming when alternative options exist. Colours are useful for: syntax highlighting, additional information from ls. Not for telling you that a new line of text is available for you to read in your terminal.
There are many places where colours are completely superfluous even when they are not over-used. I still think that colours should be the exception, not the rule.
> Cargo-watch was strictly designed for Cargo users, which would obviously want to watch some Cargo workspace. Entr just happens to be not designed for this use case. And jq is much larger than entr, so you should instead consider the size of entr + jq by that logic.
Yes jq is larger than entr. But it's not 3.9M SLOC. It also has many features that cargo-watch doesn't. If you wanted something cargo specific you could just write something specific to that in not very much code at all. The point is that the combination of jq and entr can do more than cargo-watch with less code.
> and JSON is already known for its sloppily worded standard and varying implementation [1]. That's what is actually required.
I hope you can agree that no number of millions of lines of code can fix JSON being trash. What would solve JSON being trash is if people stopped using it. But that's also not going to happen. So we are just going to have to deal with JSON being trash.
> And for this part, you would be correct if I didn't say the "faithful" reproduction. I'm totally sure that some thousand lines of Rust code should be enough to deliver a functionally identical program, but that's short of the faithful reproduction. This faithfulness issue actually occurs in many comparisons between Rust and C/C++; even the simple "Hello, world!" program does a different thing in Rust and in C because Rust panics when it couldn't write the whole text for example. 50K is just a safety margin for such subtle differences. (I can for example imagine some Unicode stuffs around...)
Regardless of all the obstacles, I put my money on 20k max in Rust with everything vendored, including writing your own Windows bindings.
> It's an explicit option you must pass. It's often useful to be able to override isatty decisions when you want to embed terminal escapes in output to something like less. But for clear it's debatable.
In terms of UX it's just moving the burden to the user, who may not be aware of that problem or even the existence of `-c`. The default matters.
> I treat any amount of color as spamming when alternative options exist. Colours are useful for: syntax highlighting, additional information from ls. Not for telling you that a new line of text is available for you to read in your terminal.
I'm a bit more lenient but agree on broad points. The bare terminal is too bad for UX, which is why I'm generous about any attempt to improve UX (but not color spamming).
I'm more cautious about emojis than colors by the way, because they are inherently colored while you can't easily customize emojis themselves. They are much more annoying than mere colors.
> It also has many features that cargo-watch doesn't. If you wanted something cargo specific you could just write something specific to that in not very much code at all. The point is that the combination of jq and entr can do more than cargo-watch with less code.
I think you have been sidetracked then, as the very starting point was that cargo-watch is apparently too large. It's too large partly because of bloated dependencies but also because dependencies are composed instead of being inlined. Your point shifted from no dependencies (or no compositions, as an extension) to minimal compositions, or at least that's how it feels. If that's your true point, I have no real objection.
> I hope you can agree that no number of millions of lines of code can fix JSON being trash. What would solve JSON being trash is if people stopped using it. But that's also not going to happen. So we are just going to have to deal with JSON being trash.
Absolutely agreed. JSON only survived because of the immense popularity of JS and good timing, and continues to thrive because of that initial momentum. It's not even hard to slightly amend JSON to make it much better... (I even designed a well-defined JSON superset many years ago!)
If the default were to clear, then I would agree that an isatty check would be necessary. But an isatty check for an explicit option here would be as weird as an isatty check for --color=always in something like ls.
> The bare terminal is too bad for UX
I think it depends on the task and the person. You wouldn't see me doing image editing, 3d modelling, audio mastering, or web browsing in a terminal. But for things which do not suffer for it (a surprising number of tasks) it's strictly better UX than a GUI equivalent.
> emojis
Yes, I dislike these. I especially remember when suddenly my terminal would colour emojis because someone felt it was a good idea to add that to some library as a default. :(
> I think you have been sidetracked then, as the very starting point was about cargo-watch being apparently too large. It's too large partly because of bloated dependencies but also because dependencies are composed instead of being inlined. Your point shifted from no dependencies (or no compositions as an extension) to minimal compositions, at least I feel so. If that's your true point I have no real objection.
Well no, I think you can build a cargo-watch equivalent (with a bit of jank) from disparate utilities running in a shell script and still have fewer total lines.
And sure, the line count is a bit inflated, with a lot of things not being compiled into the final binary. But the problem we're discussing here is whether it's worth depending on a bunch of things when all you're using is one or two functions.
As I understand it, whenever you do anything with Windows, you pull in hideous quantities of code wrapping entire swathes of the Windows API. Why can't this be split up more, so that if all I want is e.g. file watching, I get just file watching? I know Windows has some basic things you inevitably always need, but surely that isn't enough to make up 2M SLOC. I've written C code for Windows, and yes, it's painful, but it's not 2M SLOC of boilerplate painful.
Large complex dependency graphs are obviously not a problem for the compiler: it can chug away, remove unnecessary shit, and get you a binary. They're usually not a big problem for binary size either (although they can still lead to some inflation). But they are a massive issue for working on the codebase (long compilation times) and for reviewing it (huge amounts of complexity; even when code isn't called, you need to verify that it isn't).
And huge complex dependency graphs where you're doing something relatively trivial (and honestly file watching isn't re-implementing cat but it's not a web browser or an OS) should just be discouraged.
We both agree that you can get this done in under 50k lines. That's much easier to manage from an auditing point of view than 4M lines of code, even if 3.9M lines end up compiled out.
Yeah, I think we are largely on the same page. The only thing I want to correct at this point is that no OS ships Rust bindings yet, so any "system" library will necessarily come as a third-party crate. Including the `windows` crate in this context is roughly equivalent to including the fully expanded lines of `#include <windows.h>`, which is known to be so massive that there is a recommended macro, `WIN32_LEAN_AND_MEAN`, to skip a large portion of it in typical uses; but that should still count according to you, I think? [1] Thankfully for auditors, this crate does come from Microsoft, which gives a basic level of authority, but I can understand if you are still unsatisfied about the crate, and that was why I stressed the original figure was very off.
As noted in my other comments, I'm very conscious of this problem and tend to avoid excess dependencies when I can do the work myself with a bit of effort. I don't even use itertools (a popular convenience library that amends `Iterator`), because I normally want only a few of its methods (`Itertools::format` is one of the things I really miss) and I can write them without making other aspects worse. But I'm in the minority: I only take on "big" dependencies that wouldn't be sensible to write myself, while others are much more liberal, and I do think that cargo-watch is already optimal in the number of such dependencies. More responsibilities and challenges remain for library authors, whose decisions directly contribute to the problem.
[1] I haven't actually checked the number of lines under this assumption, but I recall that it exceeds at least 100K lines of code, and probably much larger.
I mean there is an issue here with inflated line counts. It makes the whole solution more complex and more difficult to troubleshoot. It makes the binary size inflated. It likely makes the solution slower. And, probably most important, it makes auditing very difficult.
I appreciate this is how a startup must run, but must every small tech company be a startup? Where do you find jobs at small companies which are just happy to exist and grow sustainably? Where you can come in at 9:00am, take an uninterrupted hour-long lunch break at 12:00pm, and stop working at 5:30pm, while also not being paid pennies? I can care about your company and be passionate about my work without sacrificing every aspect of the rest of my life. If that makes me a "B" or "C" player, then that's fine.
They exist, but they are rare. Large tech companies have the benefit of economies of scale. For what you're looking for, you really need to find a niche player, and those folks don't hire very often. OpenDental, Rogue Amoeba, Impexium, and Cronometer are a few.
This reads like what I've come to call "consultantware": software developed by security consultants who are eager to write helpful utilities but have no idea about the conventions for how command line software behaves on Linux.
It ticks so many boxes:
* Printing non-output information to stdout (usage information is not normal program output, use stderr instead)
* Using copious amounts of colours everywhere to draw attention to error messages.
* ... Because you've flooded my screen with an even larger amount of irrelevant noise which I don't care about (what is being run).
* Coming up with a completely custom and never before seen way of describing the necessary options and arguments for a program.
* Trying to auto-detect the operating system instead of just documenting the non-standard dependencies and providing a way to override them (inevitably extremely fragile and makes the end-user experience worse). If you are going to implement automatic fallbacks, at least provide a warning to the end user.
* ... All because you've tried to implement a "helpful" (but unnecessary) feature of a timeout which the person using your script could have handled themselves instead.
* pipefail when nothing is being piped (pipefail is not a "fix", it is an option; whether it is appropriate depends on the pipeline, and it's not something you should blanket-apply to your codebase)
* Spamming output in the current directory without me specifying where you should put it or expecting it to even happen.
* Using set -e without understanding how it works (and where it doesn't work).
* -z instead of actually checking how many arguments you got passed and trusting the end user if they do something weird like pass an empty string to your program
* echo instead of printf
* `print_and_execute sdk install java $DEFAULT_JAVA_VERSION` who asked you to install things?
* `grep -h "^sdk use" "./prepare_$fork.sh" | cut -d' ' -f4 | while read -r version; do` You're seriously grepping shell scripts to determine what things you should install?
* Unquoted variables all over the place.
* Not using mktemp to hold all the temporary files and an exit trap to make sure they're cleaned up in most cases.
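On the last point, the fix is only a couple of lines. A generic sketch (not code from the script under discussion):

```shell
#!/usr/bin/env bash
# Keep all temporary files in one private directory and guarantee cleanup:
# in bash, the EXIT trap also fires on errors and on Ctrl-C.
tmpdir=$(mktemp -d) || exit 1
trap 'rm -rf "$tmpdir"' EXIT

printf 'intermediate data\n' > "$tmpdir/stage1.txt"
```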
I think Python is overused, but this is exactly what Python is great for. Python3 is already installed or trivial to install on almost everything, it has an enormous library of built-ins for nearly everything you'll need to do in a script like this, and for all of its faults it has a syntax that's usually pretty hard to subtly screw up in ways that will only bite you a month or two down the road.
My general rule of thumb is that bash is fine when the equivalent Python would mostly be a whole bunch of `subprocess.run` commands. But as soon as you're trying to do a bunch of logic and you're reaching for functions and conditionals and cases... just break out Python.
I've been pretty happy with the experience of using Python as a replacement for my previous solutions of .PHONY-heavy Makefiles and the occasional 1-line wrapper batch file or shell script. It's a bit more verbose, and I do roll my eyes a bit occasionally at stuff like this:
run([options.cmake_path, '-G', 'Visual Studio 16', '-A', 'x64', '-S', '.', '-B', build_folder], check=True)
But in exchange, I never have to think about the quoting! - and, just as you say, any logic is made much more straightforward. I've got better error-checking, and there are some creature comforts for interactive use such as a --help page (thanks, argparse!) and some extra checks for destructive actions.
Golang. You build one fat binary per platform and generally don't need to worry about things like dependency bundling or setting up unit tests (for the most part it's done for you).
I use different languages for different purposes. Although bash runs everywhere, it's a walking footgun, so I only use it for small, sub-100-line scripts with zero or one options.
The rest goes to Python (which nowadays runs almost everywhere), Julia, or a compiled language for the larger stuff.
If you just want to move some files around and do basic text substitution, turning to Python or another "full-fledged programming language" is a mistake. There is so much boilerplate involved just to do something simple like rename a file.
I have a lot of scripts that started as me automating/documenting a manual process I would have executed interactively. The script format is more amenable to putting up guardrails. A few even did get complex enough that I either rewrote them from the ground up or translated them to a different language.
For me, the "line in the sand" is not so much whether something is "safer" in a different language. I often find this to be a bit of a straw-man that stands in for skill issues - though I won't argue that shell does have a deceptively higher barrier to entry. For me, it is whether or not I find myself wanting to write a more robust test suite, since that might be easier to accomplish with Ginkgo or pytest or `#include <yourFavorateTestLibrary.h>`.
Is it really so bad? A bit more verbose but also more readable, can be plenty short and sweet for me. I probably wouldn't even choose Python here myself and it's the kind of thing shell scripting is tailor-made for, but I'd at least be more comfortable maintaining or extending this version over that:
from subprocess import Popen, PIPE

CMD = ("printf", "x:hello:67:ugly!\nyy$:bye:5:ugly.\n")
OUT = "something.report"
ERR = "err.log"

def beautify(str_bytes):
    return str_bytes.decode().replace("ugly", "beautiful")

def filter(str, *index):
    parts = str.split(":")
    return " ".join([parts[i-1] for i in index])

with open(OUT, "w") as out, open(ERR, "w") as err:
    proc = Popen(CMD, stdout=PIPE, stderr=err)
    for line_bytes in proc.stdout:
        out.write(filter(beautify(line_bytes), 2, 4))
I would agree though if this is a one-off need where you have a specific dataset to chop up and aren't concerned with recreating or tweaking the process bash can likely get it done faster.
Edit: this is proving very difficult to format on mobile, sorry if it's not perfect.
That way, if something is easier in Ruby you do it in Ruby, and if something is easier in shell, you can just pull its output into a variable. I avoid 99% of shell scripting this way.
But if all I need to do is generate the report I proposed...why would I embed that in a Ruby script (or a Python script, or a Perl script, etc.) when I could just use a bash script?
Bash scripts tend to grow to check on file presence, conditionally run commands based on the results of other commands, or loop through arrays. When it is a nice pipelined command, yes, bash is simpler, but once the script grows to have conditions, loops, and non-string data types, bash drifts into unreadability.
I don’t think it’s fair to compare a workflow that is designed for sed/awk. It’s about 10 lines of python to run my command and capture stdout/stderr - the benefit of which is that I can actually read it. What happens if you want to retry a line if it fails?
> I don’t think it’s fair to compare a workflow that is designed for sed/awk.
If your position is that we should not be writing bash but instead Python, then yes, it is absolutely fair.
> the benefit of which is that I can actually read it.
And you couldn't read the command pipeline I put together?
> What happens if you want to retry a line if it fails?
Put the thing you want to do in a function, execute it on a line, if the sub-shell returns a failure status, execute it again. It isn't like bash does not have if-statements or while-loops.
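A minimal sketch of that pattern in bash (`process_line` here is a hypothetical stand-in for the real per-line work):

```shell
# Retry each line once before reporting failure.
process_line() {
  grep -q 'ok' <<<"$1"    # placeholder for the real command (bash here-string)
}

while IFS= read -r line; do
  process_line "$line" || process_line "$line" || echo "failed: $line" >&2
done
```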
My point is that if you take a snippet designed to be terse in bash, it's an unfair advantage to bash. There are countless examples in Python which would show the opposite.
> And you couldn't read the command pipeline I put together?
It took me multiple goes, but the equivalent in python I can understand in one go.
> Put the thing you want to do in a function, execute it on a line, if the sub-shell returns a failure status, execute it again. It isn't like bash does not have if-statements or while-loops.
But when you do that, it all of a sudden looks a lot more like the python code
I have not really been a fan of ChatGPT quality. But even if that were not an issue, it is kinda hard to ask ChatGPT to write a script and a test suite for something that falls under export control and/or ITAR, or even just plain old commercial restrictions.
XONSH is a Python-powered shell
Xonsh is a modern, full-featured and cross-platform shell. The language is a
superset of Python 3.6+ with additional shell primitives that you are used to
from Bash and IPython. It works on all major systems including Linux, OSX, and
Windows. Xonsh is meant for the daily use of experts and novices.
Haven't heard of it before personally, and it looks like it might be interesting to try out.
I stopped caring about POSIX shell when I ported the last bit of software off HP-UX, Sun OS, and AIX at work. All compute nodes have been running Linux for a good long while now.
What good is trading away the benefits of bash extensions just to run the script on a homogeneous cluster anyways?
The only remotely relevant alternative operating systems all have the ability to install a modern distribution of bash. Leave POSIX shell in the 1980s where it belongs.
Except that'll pick up an old (2006!) (unsupported, I'm guessing) version of bash (3.2.57) on my macbook rather than the useful version (5.2.26) installed by homebrew.
> -z instead of actually checking how many arguments you got
I think that's fine here, though? It's specifically wanting the first argument to be a non-empty string to be interpolated into a filename later. Allowing the user to pass an empty string for a name that has to be non-empty is nonsense in this situation.
> You're seriously grepping shell scripts to determine what things you should install?
How would you arrange it? You have a `prepare_X.sh` script which may need to activate a specific Java SDK (some of them don't) for the test in question and obviously that needs to be installed before the prepare script can be run. I suppose you could centralise it into a JSON file and extract it using something like `jq` but then you lose the "drop the files into the directory to be picked up" convenience (and probably get merge conflicts when two people add their own information to the same file...)
> Except that'll pick up an old (2006!) (unsupported, I'm guessing) version of bash (3.2.57) on my macbook rather than the useful version (5.2.26) installed by homebrew.
Could you change that by amending your $PATH so that your preferred version is chosen ahead of the default?
I think the `#!/bin/bash` will always invoke that direct file without searching your $PATH. People say you can do `#!bash` to do a $PATH search but I've just tried that on macOS 15 and an Arch box running a 6.10.3 kernel and neither worked.
They're definitely both critiquing the script in the OP for the same thing in the same way. They're in agreement with each other, not with the script in TFA
The 1brc shell script uses `#!/bin/bash` instead of `#!/usr/bin/env bash`. Using `#!/usr/bin/env bash` is the only safe way to pick up a `bash` that’s in your $PATH before `/usr/bin`. (You could do `#! bash`, but that way lies madness.)
As far as quick and dirty scripts go, I wouldn’t care about most of the minor detail. It’s no different to something you’d slap together in Ruby, Python, or JS for a bit of automation.
It’s only when things are intended to be reused or have a more generic purpose as a tool that you need them to behave better and in a more standard way.
I had some similar thoughts when seeing the script.
For better user friendliness, I prefer to have the logging level determined by the value of a variable (e.g. LOG_LEVEL) and then the user can decide whether they want to see every single variable assignment or just a broad outline of what the script is doing.
I was taken aback by the "print_and_execute" function - if you want to make a wrapper like that, then maybe a shorter name would be better? (Also, the use of "echo" sets off alarm bells.)
Most of the time, "echo" works as you'd expect, but it doesn't accept "--" to signify the end of options (which is worth using wherever you can in scripts), so it'll have problems with values that start with a dash, interpreting them as options to "echo" itself.
It's a niche problem, but replacing it with "printf" is so much more flexible, useful and robust. (My favourite trick is using "printf" to also replace the "date" command).
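Both points in a nutshell (the time-format trick needs bash 4.2 or later, since `%(fmt)T` is a bash `printf` builtin feature):

```shell
# Why printf is safer: bash's echo has no "--" end-of-options marker, so a
# value that happens to be an option string gets swallowed.
var="-n"

echo "$var"              # prints nothing: echo consumes -n as "no newline"
printf '%s\n' "$var"     # prints -n verbatim

# The printf-instead-of-date trick: %(fmt)T formats a time, and the
# argument -1 means "now", with no external `date` process forked.
printf 'today is %(%Y-%m-%d)T\n' -1
```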
This one becomes very apparent when using NixOS where /bin/bash doesn’t exist. The vast majority of bash scripts in the wild won’t run on NixOS out of the box.
BOFH much? It’s not as if this script is going to be used by people that have no idea what is going to happen. It’s a script, not a command.
Your tone is very dismissive. Instead of criticism all of these could be phrased as suggestions instead. It’s like criticising your junior for being enthusiastic about everything they learned today.
I know, but honestly when I see a post on the front page of HN with recommendations on how to do something and the recommendations (and resulting code) are just bad then I can't help myself.
The issue is that trying to phrase things nicely takes more effort than I could genuinely be bothered to put in (never mind the fact I read the whole script).
So instead my aim was to be as neutral sounding as possible, although I agree that the end result was still more dismissive than I would have hoped to achieve.
"I can't help myself" is not a valid excuse. If you seriously cannot bother to phrase things less dismissively, then you shouldn't comment in the first place.
One of the best guidelines established for HN is that you should always be kind. It's corny and obvious, and brings to mind the over-said platitude my mom, and a million other moms, used to say: "if you don't have anything nice to say, don't say anything at all."
Your concession was admirable, but your explanation leads me to think that you misunderstand the role you play in the comments. You are not supposed to be a reaction bot; HN is not the journal for your unfiltered thoughts and opinions.
Despite how easy it would be, you cannot and must not simply write replies. Absolutely everything (yes, everything) written here should assume the best, and be in good faith. Authors and the community deserve that much.
This goes for other sites as well, but especially for a community that strives for intellectual growth, like Hacker News.
I don't agree that I have any responsibilities on the internet. (edit: Outside of ones I come up with.)
> One of the best guidelines established for HN, is that you should always be kind.
Kindness is subjective, I was not trying to be actively unkind. It's just that the more you attempt to appear kind across every possible metric the more difficult and time consuming it is to write something. I had already put in a lot of effort to read the article, analyse the code within it, and analyse the code behind it. You have to stop at some point, and inevitably someone out there will still find what you wrote to be unkind. I just decided to stop earlier than I would if I was writing a blog post.
> "if you don't have anything nice to say, don't say anything at all."
This is not a useful adage to live by. If you pay someone to fix your plumbing and they make it worse, certainly this won't help you. Likewise, if people post bad advice on a website lots of people frequent and nobody challenges it, lots of people without the experience necessary to know better will read it and be influenced by it.
> You are not supposed to be a reaction bot; HN is not the journal for your unfiltered thoughts and opinions.
I think it's unkind to call what I wrote an unfiltered thought/opinion/reaction. You should respect that it:
* Takes a lot of time and experience before you can make these kinds of remarks
* Takes effort to read the post, evaluate what is written in it, write a response, and verify you are being fair and accurate.
* Takes even more effort to then read the entire script, and perform a code review.
If I had looked at the title and headlines and written "This is shit, please don't read it." then I think you would have a point but I didn't do that.
More to the point, a substantial number of people seem to have felt this was useful information and upvoted both comments.
> Despite how easy it would be, you cannot and must not simply write replies. Absolutely everything (yes, everything) written here should assume the best, and be in good faith. Authors and the community deserve that much.
I prefaced my first comment by pointing out that the people who make the mistakes I outlined are usually well meaning. My critique was concise and could be seen as cold but it was not written in bad faith.
> pipefail when nothing is being piped (pipefail is not a "fix" it is an option
I think it’s pretty good hygiene to set pipefail in the beginning of every script, even if you end up not using any pipes. And at that point is it that important to go back and remove it only to then have to remember that you removed it once you add a pipe?
Pipefail is not a fix. It is an option. It makes sense sometimes, it does not make sense other times. When you are using a pipeline in a script where you care about error handling then you should be asking yourself exactly what kind of error handling semantics you expect the pipeline to have and set pipefail accordingly.
Sometimes you should even be using PIPESTATUS instead.
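The three behaviours side by side, as a small bash demonstration:

```shell
#!/usr/bin/env bash
# How pipefail changes a pipeline's exit status, and how PIPESTATUS lets
# you inspect every stage individually instead.

false | true
echo "without pipefail: $?"    # 0 - only the last command counts

set -o pipefail
false | true
echo "with pipefail:    $?"    # 1 - any failing stage fails the pipeline
set +o pipefail

false | true
echo "per-stage: ${PIPESTATUS[0]} ${PIPESTATUS[1]}"   # 1 0
```

`PIPESTATUS` is the finer-grained tool: with pipefail you only learn that *some* stage failed, while `PIPESTATUS` tells you which one, so you can, say, tolerate a `grep` finding nothing while still treating an upstream failure as fatal.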
They taste horrible compared to a good coffee, but they do taste better than crappy coffee shop espresso served in any country where espresso is not a common drink.
Plenty of people find them to be quite tasty and many people don't particularly like what you are calling good coffee. Coffee snobbery is like most snobbery: it says more about the person being snobby.
I don't disagree about snobbery, but there's a huge difference between drinks such as instant coffee, drip-machine coffee and correctly brewed, fresh coffee.
I can't bring myself to drink instant coffee any more and will choose any kind of tea in preference to having to suffer the insult of instant "coffee". Drip-machine coffee at least uses real coffee, but in most people's hands leads to a horribly over-extracted brew and typically it's using supermarket-bought stale pre-ground coffee.
If you want to taste nice/good coffee, it can be made relatively cheaply. Buy some whole roasted beans (pre-ground coffee goes stale before it even hits the supermarket shelves), a decent hand burr grinder and an Aeropress device.
If you want to go for an electric bean grinder, ensure that it's a burr grinder as blade grinders are not suitable - they produce a wide variety of particle sizes which means that the small/dust bits get over-extracted and the larger particles are under-extracted. You could try sieving out the unwanted larger and smaller bits, but it's easier to just get a good grinder. For using an Aeropress, you can get away with a "cheaper" burr grinder, but if you want to make home espressos, then you probably want to be spending a LOT on a decent grinder as that will make the most difference to the quality of the espresso.
For a lot of people it is a difference without a distinction. Different strokes for different folks.
I myself make my morning coffee in a moka pot and prefer it over, for example, pour-over coffee. I recently got a Ninja Luxe Cafe machine which I use for espresso and fast cold brew coffee drinks. It makes decent espresso, and the money, time and mental energy of getting a slightly better espresso through a more expensive machine is just not worth it to me. At my girlfriend's I drink Nespresso because that is what she has. It tastes fine. It is definitely not horrible, as many above have claimed.
I don't have a problem with people choosing to drink instant "coffee", but there's such a clear difference between that and actual coffee, no matter the preparation method.
I'd agree that getting better quality espressos can get expensive very quickly, and moka pots make very nice coffee for the price (definitely diminishing returns for high end equipment).
To me, the best bang for your buck comes with grinding beans to order, as that means that your coffee isn't oxidising so much before you even brew it. Personally, I'm a fan of immersion brewing techniques, so an Aeropress is my weapon of choice - it's relatively cheap and can make outstanding coffee (also highly portable - I take it along with a hand grinder when camping).
And sorry, but I do find Nespresso to taste very flat and stale the few times that I've tried it. I'd rather just have a good cup of tea than a stale cup of coffee.
You don't know what I am calling a good coffee. I appreciate some people don't like particularly sour coffee because it's unusual for them but you can do a lot better than nespresso in the realm of that style of darker roast coffee too.
It's really simple: people are happy with it because actually good coffee is so rare that they have nothing to compare against. I also doubt most of these people have ever drunk a well brewed light roast espresso anyway, so I think your blanket statement that "many people don't particularly like what you are calling good coffee" is bullshit.
> I appreciate some people don't like particularly sour coffee
Is that normal in 'good' espresso? I've had some coffee from shops with a particularly sour taste and I don't like it. It's not really because it's 'unusual' to me, I just really don't like it.
PS: I don't drink it with milk nor sugar. Just plain espresso with nothing (just in the morning I do like it americano with water). I like the strong taste and I hate milk :)
Some level of acidity is normal in any coffee, an increased level is common to lighter roasts. You can get a good darker roast espresso which won't be very noticeably acidic or you can get a good lighter roast espresso which tastes like fruit juice. At that point it's a personal preference.
Or maybe they have tried it and don't like it. Maybe it is your taste that is screwed up and that is why you like light roast espresso. But either way, even if Nespresso isn't the best coffee, it isn't horrible.
The vast majority of people have not tried well brewed coffee. Of those who have tried well brewed coffee, the vast majority have not tried well brewed light roast. The main reason is similar to how most people have not tried well prepared food. The reason is not, as you seem to insinuate, because I am a coffee snob who is incapable of conceding that different people have different tastes. Another reason is because in most of the world people drink coffee with various inclusions like heaps of sugar (Italy) and heaps of milk (the rest of the world) and because, as a result, almost no cafe out there optimises for black coffee outside of countries like Italy.
Most people who drink coffee black (even in Italy) seem to do it either because they got used to how bad it tastes, or because of dietary reasons, or because they feel it makes them look more manly.
> Maybe it is your taste that is screwed up and that is why you like light roast espresso.
Again, you are assuming completely incorrectly and baselessly that I am claiming that light roast espresso is better than nespresso pods. I am not claiming this in the slightest. I am claiming that any good dark roast espresso is miles ahead of anything you can get out of a nespresso pod. Both in terms of not tasting like wood and in terms of not tasting like light roast.
Even petrol station cafes in Italy can produce much better results than a Nespresso machine and Italy continues to love drinking dark roasted Robusta/Arabica blends with heaps of sugar.
Likewise, any petrol station cafe in Italy can produce espresso which is so much better than what you can get in a Starbucks that it's difficult to conceive of why Starbucks even offers espresso any more. (And again, this is me talking about coffee in a country where people are used to drinking dark roast and would also likely be at least weirded out by the taste of light roast coffee.)
> But either way even if Nespresso isn't the best coffee it isn't horrible.
It's horrible in comparison to a high quality brew in the same way that McDonalds is horrible compared to anything you would get at a well respected high end restaurant.
You only think it's not horrible because you have limited experience. Yes, even in the realm of dark roast espresso.
Edit: I would also like you to consider the possibility that you personally (and maybe most people) simply do not have a sense of taste which is discerning enough to taste the difference between what someone with a more discerning taste would consider "good" coffee versus "horrible" coffee. This doesn't mean that we can't make quality judgements about coffees once they're above a certain level of awfully bitter/sour/astringent and unpleasant, but it does mean that maybe for you and for the vast majority of people, you shouldn't worry about "excellent" coffee and should instead just get on with your life.
I would like to just state that I am overjoyed to hear that you have found coffee you like to drink every day. There are lots of people out there whose experience of coffee has always been terrible (and it is my belief that most of them, with some help to explore, could probably find something they find inoffensive or even tasty). But rather than telling people who seem to care more about coffee than you that they are snobs and their opinions of coffee are wrong, maybe you should also accept that not everyone has the same sense of taste as you.
I know the above sounds contradictory when I called Starbucks and Nespresso "horrible" but I would like to clarify things by saying that while I find Nespresso horrible in the grand scheme of things, if it makes you happy, you shouldn't listen to my opinion of it and instead enjoy it.
On the other hand, if you do find coffee to be generally unpleasant, horrible, or even sub-par, then I encourage you to explore and consider opening your mind to the possibility that there is a coffee out there that you would enjoy.
Every morning I make a coffee for someone I love, which I would personally not enjoy drinking very much, but which she enjoys immensely. I offer her a taste of the coffee I like, and on most occasions she is either indifferent about it or hates it, and this is fine. I do not try to explain to her that she is wrong; I just try to cater to her particular tastes with a high quality coffee and a preparation method that she likes.
> Most people who drink coffee black (even in Italy) seem to do it either because they got used to how bad it tastes, or because of dietary reasons, or because they feel it makes them look more manly.
That's a pretty big generalisation IMO.
I just like coffee black because I tried it one time (there was no sugar) and I was surprised how nice it tasted. The only time I drink it with sugar now is with the typical Spanish "cafe con hielo" in summer because the cold brings out the bitterness more. I've never taken milk because I hate it in general.
But I truly like it. I've always hated the starbucks "milkshake" idea of coffee and I like it strongly tasting.
It's not a generalisation, it's an observation: most people do not drink black coffee, even in Italy. And most of those that do, when asked, seem to provide a justification not based in flavour.
After paying a monthly premium for my coffee lifestyle of, I'd say, 100-200 EUR just by buying the most premium sorts of beans for ages - not including drinking at cliché hip coffee bars of competition-winning baristas (face-tattooed moustaches are to be expected) and having endless discussions about taste and techniques with the average tech co-worker - I happily reverted to the one and only, un-hipsterable, supermarket instant.
I do know what good coffee tastes like and would say: it's not worth it.
Please add that last bit, as it makes the difference between a statement and an opinion.
I am happy for you that you tried things and decided they were not worth it for you. At least you took it a step further than a lot of people who hold such opinions about speciality coffee. But I think you should open your mind to the idea that the people who think it IS worth it are not just pretending.
Or maybe it is you who likes to feel superior to people with an interest in coffee?
e.g. "It makes decent espresso and the money, time and mental energy for getting a slightly better espresso through a more expensive machine is just not worth it to me."
If you think that time, money and experience can only produce a "slightly better espresso" then I will just say you are not experienced enough to make such a claim. Rather than accepting that maybe to some there is a big difference, you call them snobs.
I think I would agree that most people don't like their experience of black coffee.
Whether that's because there is no coffee or brewing method out there that, when served black, would satisfy them, or because they've just not had the chance to taste what they would consider good coffee, of that I am not sure.