I sometimes wonder why we don't see ruby used for shell stuff more often. It inherited most of the good stuff for shell scripting from Perl, and Perl took a lot of its syntax from sh and sed and awk, so almost anything you can do in shell script you can do in ruby, but with the option of making it gradually less terse and more readable, while having sane variables and data handling from the start.
Also ruby is great at allowing complexity to grow smoothly, no sudden hiccups. You start with just one line (everything goes into module main implicitly), extend it to a single-file script, require some built-in libraries, then add a module or helper class in the same file, and only then maybe extract those files to required files, add gems, whatever. No boilerplate whatsoever, no jumps, no big rewrites.
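For illustration (my own made-up sketch, not anything from the parent comment), the same task at two stages of that growth:

```ruby
# Stage 1: a sed/awk-style one-liner, run straight from the shell:
#   ruby -ne 'puts $_.split(":").first' /etc/passwd
#
# Stage 2: the same logic a little later, as a small script with a helper.
def usernames(path)
  File.readlines(path).map { |line| line.split(':').first }
end

puts usernames('/etc/passwd')
```

No module or class boilerplate was needed at either stage; a module or class can be wrapped around `usernames` later without rewriting anything.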
meanwhile, a lot of tooling nowadays is written in Go, and I have no idea why, it's not friendly for os manipulation at all, and number crunching power is not needed in many, many tasks of that sort.
I think the quality of a language for shell scripting is often secondary. What’s of greater significance is availability: is it already installed? The answer with Linux and Bash is almost always “yes”. Not so with ruby.
The moment you start asking the user to install things, you’ve opened up the possibility of writing a program rather than a shell script. The lifecycle of a piece of software is almost always one of growing responsibility. This cycle is devastating when it happens to shell scripts. What was once a simple script slowly becomes a creaking mass of untestable, poorly understood code playing in the traffic of shifting environments (which grep you got, buddy?).
I guess I’m saying that once you open up the possibility of writing a program, you generally take that option and are usually happier for it. In the “write a program” world, ruby is still good, but it becomes a far harder question to answer whether ruby is still the right choice. There are a lot of languages with a lot of features engineers like.
That's true of Python and Perl as long as you keep using only the features built into the core language (the standard library, or whatever they call it). The same applies to Ruby.
My scripting language is bash in at least 99% of cases. I used to program in Perl when I needed some complex logic. I stopped using it some 10 or 15 years ago when I switched to Ruby, for two reasons: I became more familiar with it than with Perl, and it's easier to manage data structures and classes whenever I need something complex. That doesn't happen often in scripts, but as I wrote, I use bash for all the normal stuff.
I use Python for the scripts that start an HTTP server because it has the http.server module in the standard lib, and it's very simple to write handlers for GET, POST, and all the other HTTP verbs. The last example was a script to test callbacks from an API. I just implemented POST and PUT methods that print the request data and return 200 and an empty JSON object ({}). I think that to do the same in Ruby I would need to install the webrick gem.
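For comparison, a stdlib-only Ruby version of such a stub is possible without webrick, just more manual. A sketch (`serve_once` and the port number are made-up names for illustration):

```ruby
require 'socket'

# Minimal single-request stub: answers whatever request arrives with
# 200 and an empty JSON object, then returns the request line.
def serve_once(port)
  server = TCPServer.new('127.0.0.1', port)
  client = server.accept
  request = client.readpartial(4096)
  client.write("HTTP/1.1 200 OK\r\n" \
               "Content-Type: application/json\r\n" \
               "Content-Length: 2\r\n\r\n{}")
  client.close
  server.close
  request.lines.first&.strip
end
```

Run `serve_once(8000)` and point the callback at `http://localhost:8000/`; each invocation handles exactly one request, which is usually enough for a callback smoke test.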
With a big difference -- Perl and Python will always be installed on these machines, whereas Ruby might need two deployment steps: (1) copy file, (2) install Ruby!
Perl is deeply underappreciated and needs a lot more love. One of the keynotes at the polyglot conference that I run is going to be a Perl talk, and I'm really looking forward to it.
That was the reason Perl was what I switched to from bash when I was working on Solaris boxes; it was miles ahead of what was possible with bash AND it was already present. If I remember, an older version of Python was also installed, but by then Perl had already got me reeled in and I felt Python to be too "verbose" compared to Perl (I eventually changed my opinion when I got a bit more experience under my belt).
Ha - I actually haven't changed my opinion about verbosity, Python is still more verbose and I will choose Perl for throwaway scripts even today; I just have a greater appreciation of the readability of Python code compared to the free-for-all style-fest of Perl code (admittedly written by a bunch of devs with little code style enforcement). Perl is great for smaller scripts, but I'm talking about many thousands of lines of code, and the lack of native object orientation, messy error handling, lack of a decent repl etc start to take their toll.
One usually needs modules to easily do something more advanced, but yes, Perl is almost always installed. Although I find Ruby much more ergonomic, I still reach for Perl as well because I know it better and don’t have to open the documentation so often.
Sure, especially every bash script that goes over 200-250 lines is super readable. /s
Or when you have to start using all the combinations of characters to achieve, for example, proper iteration through an array without word splitting. Etc., to infinity.
I've danced this dance hundreds of times and got sick of it. Gradually moving away from scripts and to Golang programs and so far it has been an improvement in almost every way, I'd say easily in 90% of the cases.
It's tongue in cheek, and he's right. I am an old SunOS/VMS/Linux admin. Having root used to be my god-given right.
However I haven't worked at a company in years that gives anyone access to root anywhere except your own local machine or maybe in rare cases a dev box that is destroyed and rebuilt at will.
Yeah, as soon as I read through the post, I ssh'd into one of my many Ubuntu servers, ran `ruby -v`, and then noped out. From past experience I want nothing to do with trying to wrangle RVM or rbenv and then making sure the paths work properly.
Nowadays `apt install ruby` on an Ubuntu box will give you a reasonably up-to-date Ruby that's more than adequate to run scripts. This is not like the old days, where a script written on Ruby 1.8.7 would break on 1.9.
> I sometimes wonder why we don't see ruby used for shell stuff more often.
The reason we don't see Ruby used more for shell stuff is because Python won this particular war. It's already installed on basically every Linux distribution out there, and this simple fact outweighs all other language considerations for probably >95% of people who are writing shell scripts in something that isn't Bash.
Personally, I don't much like Python, and even though Ruby is not my favorite language either, I find it much better than Python for this kind of work. But I don't get to decide how Debian builds their infrastructure, so in the end, I tend to use Python.
Yes, Python won the war, which is a pity. Linux distributions started getting bloated at the same time they switched to Python for everything. Yum hanging inexplicably and such things never occurred before.
The BSDs do not have this problem (yet!). I hope they stay sane and keep using Perl/sh.
This whole argument is silly. In my time on this site, I have seen someone suggest that every language is good for shell scripting, including C.
Python and bash are used in the real world most often because convincing your sysadmin/infra/boss guy to install ruby for one script is a hard sell when you already have good-enough tools built into the system that don't add risk/complexity.
If a client has certified a specific Linux distro as an approved platform, that's what we use.
We can either deliver a single executable (Go) or a Python script, as python is preinstalled on their distro.
If we wanted to use Ruby, it'd be a huge hassle of re-certifying crap and bureaucracy and approvals, and in that time we'd have the Python solution already running.
Without a root account or inclusion in the sudoers list, quite hard. There are millions of people who don't control the machines they work on and spend most of their time with.
Depends on you, your team, your target hardware/os, your project, and many other factors. Considering all of those things, the hurdle of installation might just be too large for it to be worth it.
It's not. This is a non-issue. Every web shop is writing bash to twiddle a build script on servers they also manage, which includes the ability to install any package they want.
Mitigating the risk of downloading a script from the internet and executing it
-- even from a "trusted" website or package manager -- is absolutely a good reason not to use it.
Any decent distro has it. So you don't need to execute any random scripts, just install it or prepare the image with it for your OS install. That's it.
I don't really get this whole defaults being a blocker for tools choice.
I never started using python, ruby or node because all of them were a pain to use for me - this was 7-8 years ago, so maybe it has changed a lot. But even 2-3 years ago I had lots of issues just running one python project. Module, not module, pip or not...
Way too confusing, compared to go for example. Or hell, even Java/Kotlin when you use an IDE and it autoconfigures most things.
You'd need to introduce a build and release process to do this, which still detracts from it being simple, or from the selling point that it's already installed.
Python ships with venv support. It’s not that difficult to bootstrap a venv before running your script, and that’s only if you actually need tooling other than stdlib, which you probably don’t.
There are plenty of ways to have the venv automatically activate (and de-activate) when you enter/leave the directory for the project. direnv [0], mise [1], or various shell hooks.
There are useful libraries, I’m not saying there aren’t. I just dislike it when people include one as a dependency when they really didn’t need it.
I think golang is used because you can easily create a single static binary, which is incredibly easy to distribute. I often find non-trivial CLI tools written in Python cumbersome because of the dependency wrangling necessary.
Plus it can be run on any machine, while golang needs to be compiled for the specific architecture you'll be running it on. No messing about trying to get the right build.
I actually think it's less of a problem than many imagine. If you have different architectures, it's actually better and more predictable because it's compiled; also, it's incredibly easy to cross-compile, even for noobs.
I spent a weekend going through all my old python scripts with Gemini and ChatGPT, rewriting them to Go just because of this.
Most of them were so old that I would have had to skip like 3 generations of package managers to get to the one that's used this year (dunno about next year) if I wanted to upgrade or add dependencies.
With Go I can just develop on my own computer, (cross)compile and scp to the destination and it'll keep working.
> I often find non-trivial CLI tools written in Python cumbersome because of the dependency wrangling necessary.
I'm thinking of trying out Mojo in large part because they say they're aiming for Python compatibility, and they produce single-file executables.
Before that I was using PyInstaller, but it was always a little fragile (I had to run the build script a couple of times before it would successfully complete).
Currently I'm using pipx and Poetry, which seems pretty good (100% success rate on builds, and when my 5-line build script fails it's because of an actual error on my part).
Which is a round-about way of asking everyone:
Does anyone have any other good way(s) to build single-file executables with Python?
Fun fact: you can use the D language for compiled scripting via rdmd, with powerful and modern programming features, and it has much faster compilation than comparable C++ and Rust [1]. The default GC makes it intuitive and Pythonic for quick scripting, more so than Go. Its recent native support for C, the OS lingua franca, is the icing on the cake [2].
From the website, "D's blazingly fast compilation allows it to be used as a high level, productive scripting language, but with the advantages of static type checking" [3].
[1] Why I use the D programming language for scripting (2021)
Bootstrapping and different behavior for different versions and not being able to use the dependency ecosystem really make it a lot more difficult than people realize if you’re trying to “script” at scale.
I’ve used rust for this task but people get mad that I’m calling it a “script”. “That’s not a script that’s a program” which…sure. But so maybe we need another term for it? “Production-scripts” or something.
My experience is rewriting Ruby and bash buildpacks for the open spec CNCF Cloud Native Buildpack project (CNB) https://github.com/heroku/buildpacks
I agree that Ruby is easier to start and grow complexity, that would be a good place to start.
This complaint comes up enough that I'm surprised nobody's created the Ruby equivalent of GraalVM, to compile a Ruby script, all its deps, and a WPOed subset of the Ruby runtime, into a native executable.
Given Ruby's lackluster type system, the number of assumptions a compiler can make is significantly reduced. Moreover, analyzing that at a simple IR level would prevent it from understanding what is actually referenced vs. what isn't, without making compile times take an eternity and performing complex flow analysis with simulation.
Even with a good type system, a trimmer/linker has to be aware of many special idioms and patterns and perform flow analysis, and in the case of dynamically typed languages or languages with reflection, it has to analyze reachability of reflectable members and trim otherwise-spacious metadata. It took a significant amount of work in .NET's ILLink for it to produce binaries/libraries as compact as it does today with .NET's AOT build target, and it still required a degree of metadata compression and dehydration of pointer-rich data structures (something which, funnily enough, Go doesn't do, resulting in worse binary size).
Unlike GraalVM Java, as far as I can tell TruffleRuby doesn't provide a bundler that can create a single executable out of everything, but in principle I don't see why it couldn't.
I'm not sure I'd try replacing shell scripts with natively compiled Python binaries. That said, I use a Kotlin Scripting based bash replacement in my own work that has many useful features for shell scripting and is generally much more pleasant. You have to "install" it in the sense of having it extracted somewhere, but it runs on Win/Mac/Linux and can be used without root etc.
I wasn't so much imagining each shell script being replaced with a binary, as I was imagining deploying a single static binary "sysadmin-DSL interpreter binary" — where that "interpreter" is just "Ruby with a custom prelude, all packed together" — such that I could then name that interpreter in the shebang line for my admin scripts written in that DSL.
The ability to type check and unit test your code is also valuable. This is possible with many languages but with Go it requires basically zero configuration.
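To be fair, Ruby scripts are also unit-testable with near-zero setup via the bundled minitest gem. A sketch, where `slugify` is a made-up helper standing in for real script logic:

```ruby
require 'minitest/autorun'

# Hypothetical helper from a script, plus an inline test for it.
# Running the file runs the tests; no configuration or test runner needed.
def slugify(s)
  s.downcase.strip.gsub(/[^a-z0-9]+/, '-').gsub(/\A-|-\z/, '')
end

class TestSlugify < Minitest::Test
  def test_strips_and_dashes
    assert_equal 'hello-world', slugify('  Hello, World! ')
  end
end
```

Type checking is admittedly a different story: Ruby's options (RBS, Sorbet) do require setup, whereas Go gives you it for free.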
Pex was also the solution I landed on after evaluating several non-container options for distributing a Python project to arbitrary Linux hosts.
It works well but with one huge caveat: although you bring the stuff required to reconstitute the venv with you, you’re actually still using the system’s python executable and stdlib!! So for example if you want to make a project targeting all supported Ubuntu LTS versions, you have to include the wheels for every possible python version you might hit.
Ultimately this boils down to there not really being a story for statically compiled python, so in most normal cases you end up wanting a chroot and at that point you’re in a container anyway.
Nuitka has worked for me for everything I've tried (in-house dev tools). I didn't end up using it for work because I can rely on a pristine system Python with the right version, so pex makes more sense.
There are other options I didn't look too much into, e.g. Beeware.
I wish an easy cross-platform PEX or shiv [1] were a thing.
Binary dependencies are the biggest reason I prefer the new inline script metadata spec (https://packaging.python.org/en/latest/specifications/inline...) and `pipx run`.
Luckily, they're pretty great.
They have changed how I write Python scripts.
The way inline script metadata works is that your script declares arbitrary dependencies in a structured top comment, and a compliant script runner must provide them.
Here is an example from a real script:
pipx implements the spec with cached per-script virtual environments.
It will download the dependencies, create a venv for your script, and install the dependencies in the venv the first time you invoke the script.
The idea isn't new: you could do more or less the same with https://github.com/PyAr/fades (2014) and https://github.com/jaraco/pip-run (2015).
However, I only adopted it after I saw https://peps.python.org/pep-0722/, which PEP 723 replaced and became the current standard.
It is nice to have it standardized and part of pipx.
For really arbitrary hosts with no guarantee of recent pipx,
there is https://pip.wtf and my venv version https://github.com/dbohdan/pip-wtenv.
Personally, I'd go with `pipx run` instead whenever possible.
You can also combine the two.
Something I have done is script dependencies in inline script metadata and dev dependencies (Pyright and Ruff) managed by Poetry.
Tons of scripts rely on coreutils (sed, awk, grep, head) to manipulate data.
All of those have wildly different behavior depending on their "flavors" (GNU vs Busybox vs BSD) and almost all of them depend on libc being installed.
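(In the spirit of the article, one way to sidestep the flavor problem is a Ruby one-liner, which behaves the same against GNU, BSD, and busybox userlands. A sketch of a hypothetical in-place edit, demoed on a temp file:)

```ruby
require 'tmpdir'

# Equivalent of `sed -i 's/foo/bar/'` without the GNU-vs-BSD `-i` differences:
#   ruby -i.bak -pe 'gsub("foo", "bar")' config.txt
# The same edit written out as a tiny script:
path = File.join(Dir.mktmpdir, 'config.txt')
File.write(path, "foo baz\n")
File.write(path, File.read(path).gsub('foo', 'bar'))
puts File.read(path)   # prints "bar baz"
```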
That's not my experience at all. Shell is often glue between different utilities and unless it's being run in a controlled environment like a docker container, you have no idea what's on the base machine.
Every compiled language can do it until you run into issues with glibc vs musl or openssl version or network stack defaults and remember you weren't as static as you thought.
> ... maybe extract those files to required files, add gems, whatever.
CPAN is the killer feature of Perl. It just works. First off, most of the time I don't need a CPAN module for doing shell scripting in perl. Perl itself is rich enough with the file manipulations that are needed for any script of less than 100 lines.
My experiences with Ruby and installing gems have been less pleasant. Different implementations of Ruby. Gems that don't compile / work on certain architectures. Breaking changes going forward, where a script that was written 2 years ago doesn't work anymore. Sometimes it's that someone was doing something clever in the language that doesn't work anymore. Other times it's that some gem got updated and can't be used that way anymore. ... which brings us to ...
I believe that Go's advantages come into play when the program gets more complex than that 100-line size and it becomes a "program" rather than a "script", with complexity to deal with. Furthermore, executables built in Go are most often statically linked, which means that someone upgrading the libraries doesn't break what is already working.
Instability. Ruby has not been a stable language for very long. Migrating to 1.9 was a huge hassle for many firms. This may seem like a long time ago in tech years; but then there was Ruby 2.0; and shell scripts, meanwhile, have stayed the same the whole time.
A secondary reason is that Ruby has been very slow for much of its life, which means that for situations where you need to run a huge stack of scripts -- init systems, for example -- it would be punishing.
Ruby does have a terse and intuitive syntax that would make for a good system shell. Although it has some magic, it is less magical and confusing than shell itself. Ruby provides many basic data types that experience has proven are useful for shell scripting -- like arrays and dictionaries -- and they are integrated in a much cleaner and clearer way than they are integrated into widely used shells like Bash.
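A small illustration of that point (made-up data):

```ruby
# Nested hashes and arrays with none of bash's quoting or
# word-splitting pitfalls -- values with spaces are just values.
hosts = { 'web' => ['10.0.0.1', '10.0.0.2'], 'db' => ['10.0.0.9'] }
lines = hosts.flat_map { |role, ips| ips.map { |ip| "#{role}: #{ip}" } }
puts lines
```

The bash equivalent needs associative arrays, `"${array[@]}"` quoting, and a loop over keys; none of it survives contact with older shells.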
System tools that are written in Go may still make sense to write in Go, though. Go, it is true, does not have a nice terse syntax for short scripts and one liners; and it doesn't have a default execution model where everything is in main and so on; but that is because it is not a scripting language. Other languages used to write system tools and system services -- like C, C++, Java and Rust -- don't have those things either.
> Migrating to 1.9 was a huge hassle for many firms.
This seems contrary to my experience. We took a large project from 1.8 to 1.9 to 2.0 to 3.0, and it was much easier than we expected. It was a lot easier than our Python 2 to 3 conversions were.
> It was a lot easier than our Python 2 to 3 conversions were.
Python's is (present tense very much intended) notoriously one of the worst-managed transitions in programming language history, so that's not exactly a ringing endorsement.
Your first two points don’t seem valid, in my experience.
The Ruby 2.0 migration wasn’t that interesting from a compatibility perspective; it certainly wasn’t anything like Python 2 -> 3.
And Ruby is __not__ slow compared to bash. I don’t know where these myths get started, but someone needs to justify the Ruby-is-slow thing with actual data.
> I don’t know where these myths get started, but someone needs to justify the Ruby-is-slow thing with actual data.
As an outside observer of the Ruby world,
I have an impression that it was Ruby MRI that was slow.
CPU-bound synthetic benchmarks like the much-criticized Benchmarks Game showed Ruby ≤ 1.8 a good deal slower than CPython 2.
Here is an illustrative comment from that time: https://news.ycombinator.com/item?id=253310.
People also complained about early Rails,
and the perception of Ruby's performance got mixed up with that of Rails.
Then YARV came out, and Ruby became several times faster than its MRI former self on different benchmarks (https://en.wikipedia.org/wiki/YARV#Performance).
With YARV, Ruby gradually caught up to "fast" interpreted languages.
Now interpreted Ruby seems as fast as CPython 3 or faster in synthetic benchmarks
(for example, https://github.com/kostya/benchmarks/blob/7bf440499e2b1e81fb...), though still behind the fastest mainstream interpreters like Lua's.
Ruby is even faster with YJIT.
Alternatively, you can use Crystal instead of Go. Its syntax is almost Ruby's, except mostly for some typing. The standard library is also similar. Binary size and speed are also Go-like.
Mentioning 1.9 migration and ruby being slow? Python 2 to 3 was waaaaay worse and more negatively impactful, and equally slow (slower in most cases).
Ruby never had US market penetration like perl or python, which were basically invented in the US and congregated people from the academic realm. These things aren't decided based on meritocracy (nothing ever is).
That's disingenuous. Python 3 was released around 2008, when Rails' popularity was still rising. The community refused to upgrade for at least 10 years, and several prominent libraries took as long to provide first-class Python 3 support.
Ruby's 1.8 to 1.9 migration was by contrast way milder. It took 6 or 7 patch releases and around five years to release 1.9.3, the first release from the 1.9 series people actually considered stable, but after that the community migrated because it was *significantly* faster than 1.8. Python 3, on the other hand, was slower overall than Python 2 at least until 3.6. The fact that the community stuck with Python through it all does say a lot about human psychology and the sunk-cost fallacy.
> I sometimes wonder why we don't see ruby used for shell stuff more often
The best piece of code that I worked on was an ETL in pure Ruby. Everything in modules, simple to read, no crazy abstractions, no strange things like __main__, abstract classes, or whatever.
Maybe others can chime in, but the main difference I've found in Ruby developers is that they really have fun with the language, doing everything with a higher level of software craftsmanship than other folks in the data space, e.g. Python or Julia.
I'd argue that writing Chef in Ruby (and Erlang) was absolutely to its detriment. Yeah, it was popular. It was also a debugging and scaling nightmare (not that Opscode helped that any).
In fact one of the reasons I rage quit megacorp for a second time was that I was required to use an Enterprise Chef instance that would log people out at random every 0-3600 seconds. I could throw plenty of deserved shade at my coworkers but Opscode didn't understand their product any better and I wasted more than enough time on conference calls with them.
I love ruby, and I've been using it for 18 years, but I spent half a year on Chef a decade ago and it was one of the worst wastes of time I ever had. Nothing to do with the language, everything to do with the architecture of the thing.
Yes. Shopify, GitHub, Stripe, GitLab and more, but in large part due to Rails, not Ruby specifically (although Ruby is one of the great things about Rails).
It still is a slow language that does not offer anything over competitors that are an order of magnitude faster to justify its performance characteristics.
You can kind of figure it out by skimming the comments here. Most mainstream languages have decent-to-great tools built in for scripting, so the difference isn't that huge. So people just prefer to script in the language they already prefer in general, or that the project is written in.
Docker images fill a different role. They shouldn't have everything installed on them as that broadens the attack footprint. They should be doing one thing, and one thing only. If it's a "run this executable that was built" - then only what is needed should be there.
Installing python and other general purpose tools gives any attacker that gets into a docker container many more tools to work with for getting out.
For docker, the trend isn't "build a general purpose machine" but rather "what can we slim this down to that only has the bare minimum in it?" This can be taken all the way to the distroless images ( https://github.com/GoogleContainerTools/distroless ) and means that the security team won't be asking you to fix that CVE that's in Python that you don't use.
If, however, you do need python in an image because that image's purpose is to do some python, then you can pull a python image that has the proper release.
> meanwhile, a lot of tooling nowadays is written in Go, and I have no idea why, it's not friendly for os manipulation at all
I'm not sure where you're going with this: My experience of Ruby and Go is that:
1. Go is a lot easier to do OS manipulation type stuff.
2. Go is a lot easier to modify down the line.
TBH, #2 is not really a consideration for shell-scripts - the majority of the time the shell script is used to kick off and monitor other programs, transforming an exact input into an exact output.
It's glue, basically, and most uses of glue aren't going to require maintenance. If it breaks, it's because the input or the environment changed, and for what shell is used for, the input and the environment change very rarely.
Lots of CLIs are written in Python, absolutely, but many started more recently are almost exclusively Go unless there is serious interest in using Rust. It's almost certainly the ease of cross compilation plus the ability for users to run it without changes to their system.
Just because you don’t see it doesn’t mean it’s not the most-used shell scripting language. For example, when I was at AWS it was used for templating in something like 90% of all pipeline tooling
> meanwhile, a lot of tooling nowadays is written in Go, and I have no idea why
What? Go is used because distributing a static binary without any dependencies is way better than asking each and every user to download an interpreter + libraries.
So stop using 3rd party libraries. Seriously, the number of times I’ve seen people importing requests to do a single HTTP GET, or numpy to use a tiny portion of its library is absurd. You can do a hell of a lot with the stdlib if you bother to read the docs.
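The same holds on the Ruby side of this thread: a one-off GET needs nothing beyond the stdlib's net/http. A sketch, where `http_get` is a made-up helper name:

```ruby
require 'net/http'

# One-off GET with only the stdlib -- no third-party HTTP client.
# Returns [status_code, body].
def http_get(url)
  res = Net::HTTP.get_response(URI(url))
  [res.code.to_i, res.body]
end
```

For anything fancier (retries, connection pooling, auth flows) a gem may earn its keep, but for a single GET it rarely does.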
Not using third party libraries does not help against py2->py3 and changes between 3.x point versions.
It's only relatively recently that I could really expect that the target system would have python3, and then I'd also have to deal with some really annoying errors (like python3 barfing on non-ASCII comments when reading a source file with "C" locale, something that used to work with python2 IIRC, and definitely was an issue with "works on my machine" devs).
venvs are horrible, even compared to bundler.
But the python2 era left an imprint on many who think it's just going to be there and work fine.
a) put a `binding.irb` (or `binding.pry`) in any rescue block you may have in your script - it'll allow you to jump in and see what went wrong in an interactive way. (You'll need a `require 'irb'` in your script too, ofc)
b) I always use `Pathname` instead of `File` - it's part of the standard library, is a drop-in replacement for `File` (and `Dir`) and generally has a much more natural API.
c) Often backticks are all you need, but when you need something a little stronger (e.g. when handling filenames with spaces in them, or potentially hostile user input, etc), Ruby has a plethora of tools in its stdlib to handle any scenario. First step would be `system` which escapes inputs for you (but doesn't return stdout).
d) Threads in Ruby are super easy, but using `Parallel` (not part of the stdlib) can make it even easier! A contrived example: `Parallel.map(url_list) { |url| Benchmark.measure { system('wget', url) }.real }.sum` to download a bunch of files in parallel and get the total time.
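A quick sketch combining (b), (c), and plain stdlib threads as a gem-free stand-in for (d); the directory and file names are made up:

```ruby
require 'tmpdir'
require 'pathname'
require 'benchmark'

# (b) Pathname as a drop-in, more ergonomic File/Dir replacement.
log_dir = Pathname.new(Dir.mktmpdir)   # hypothetical scratch directory
log = log_dir / 'run.log'
log.write("started\n")

# (c) system() escapes its arguments for you, spaces and all.
ok = system('ls', log_dir.to_s, out: File::NULL)

# (d) plain stdlib threads, if you'd rather skip the parallel gem.
elapsed = Benchmark.measure do
  [1, 2, 3].map { |n| Thread.new { Math.sqrt(n) } }.each(&:join)
end.real

puts ok
```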
macOS has Ruby 2.6 installed by default, which is perfectly serviceable, but it's EOL and there are plenty of features in 3+ that make the jump more than worthwhile.
Ruby's a great language- I've always enjoyed its ergonomics and clarity. But its editor tooling hasn't kept up with its one-time competitor, Python.
I've mostly been in the Python ecosystem for the past few years and the LSP investment from Microsoft has really shown. Rich Python support in VSCode is seamless and simple. Coming back to Ruby after that caught me off guard - it feels like I'm writing syntax-highlighted plain text. There's an LSP extension from Shopify, but it's temperamental and I have trouble getting it working.
Editor support isn't everything (the actual language design is still the most important), but it definitely affects how eager I am to use it. I basically never choose Ruby over Python given the option, which is too bad. Ruby's a cool language!
Funny, I have the complete opposite experience with Python. I constantly turned off inline errors and warnings because they were almost always wrong - packages I installed “could not be found” by the LSP, it kept flagging type issues that had already been fixed, it didn’t pick up function changes across files, etc etc.
Then you think “maybe I just have the wrong lsp” only to realize there are half a dozen that all behave differently and nobody can agree on.
I tried them all; they all turned even my simplest scripts 10-50% red. I think half of my ire towards python came from the fact that the LSP situation was so awful, I just had to get used to reading code with a bunch of “errors” in my face, or turn them off completely… I could never decide which was worse
Hmm... I use python-lsp-server and it just works almost every time. The main thing is your editor being aware of your venv. Which editor do you use? I use direnv and the direnv support in Emacs so each project has its own venv, then install the project plus all dev dependencies (like type stubs) in that venv. The LSP server itself is installed globally, though.
I tried last month and it was still a mess. The old Ruby extension used to work fine and the new LSP one from Shopify doesn't want to work for whatever reason.
> ... the new LSP one from Shopify doesn't want to work for whatever reason.
Sorry, but calling it "a mess" simply because you can't get it to work is quite unfair.
I've been using the LSP from Shopify since it came out, it works great, is very stable and updates come in on a regular basis.
Love the Shopify effort on the LSP, I even reported issues. But no, it's not stable or easy to set up unless you use it in specific scenarios. Also I get very high CPU usage and crashes every day.
I would say it's quite fair. It's not just me but several coworkers, other people in this thread, and reviews on the actual VSCode extension itself. I sank several hours trying to fix whatever issue it has with my system and continued to run into problems. I'll give it another shot when I'm back on Ruby projects.
I mean, I was trying to set up editor support for a Ruby script last week. So unless it was improved really recently, it's not where I'd like it to be.
> but it's temperamental and I have trouble getting it working.
> I was trying to set up editor support
Not sure what problems you had exactly, but saying that editor tooling is bad, simply because you can't get it to work, is not fair.
I've been using the LSP from Shopify since it came out, it works great, is very stable and updates come in on a regular basis.
For both Ruby and Python most of time I use Emacs without an LSP, and it works better with Ruby than Python.
I never researched _exactly_ why Ruby became so much less popular than Python, but I think at least two things contributed:
- Having its popularity too dependent on Rails;
- Not having any other killer app (e.g. Python is not only Django, but NumPy, TensorFlow, Pandas, Flask and so on);
So, even if I love Ruby, Python is still my main language for most things, as it has a huge ecosystem and is easy to use. But for writing shell scripts, where the advantages that I mentioned in the article apply, I generally don't care too much about those things.
Even from a systems perspective there are more tools available in Python. Pyroute2 vs netlinkrb which was last updated 8 years ago and only does a few things. It’s sad because ruby had the potential, but ruby developers prefer to focus on the Rails ecosystem.
I couldn't spot any way to make something like `ls -j` (j is an illegal option) throw an exception in ruby (as opposed to simply outputting the system error message).
The closest I could find is what you suggest (checking $?), or using something like this [1], which would require changing syntax:
system('ls -j', exception: true)
Would be great to know if there's some easy callback or, ideally, a global setting one can make so a ruby exception is thrown if there's an error running system commands using the backtick syntax.
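To make the two options concrete, here's a small sketch (the `exception:` keyword was added to Kernel#system in Ruby 2.6):

```ruby
# Backticks never raise on a non-zero exit; the status only lands in $?
`false`
puts $?.success?   # false

# system(..., exception: true) raises a RuntimeError instead of returning false
begin
  system('false', exception: true)
rescue RuntimeError => e
  puts "command failed: #{e.message}"
end
```

As far as I know there is no global switch to make the backtick syntax itself raise, so checking $? (or wrapping backticks in a helper) remains the way to go there.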
It's not required to check $? after each command. It's only when calling out to the shell. How would you propose to handle that? Throw an exception on a shell command which fails?
Some would say calling out to the shell is an anti-pattern by itself. Others would say exceptions are an anti-pattern. (Just use an appropriate return type and there's no need for exceptions, ever!)
This is only true for backticks, which are somewhat intended for non-serious use. If you want exceptions for subprocess failure, `system` does the trick.
There’s really no need for this kind of over-the-top response.
There are lots of techniques to make it obvious that errors can happen and need to be handled.
1. Make it difficult to ignore that a function can return an error. This is the golang approach. Errors are part of the return values and unused return values are compiler errors.
2. Make it impossible to use parts of the return value having an error state. Rust does this with the Result sum type and pattern matching.
3. Tailor for the happy-path and throw exceptions in case there are any errors. With optional means to catch them if recovering errors is desired. This is how most other languages function.
Hiding the error status in another variable that is super easy to overlook, and that the programmer might not even know exists, then continuing despite this variable not being checked, will inevitably introduce bugs and allow faulty data into your system.
> It seems like a waste of precious syntax to dedicate backticks to running shell commands.
Apart from Bash (where backticks serve the same purpose as in Ruby), I only remember seeing backticks in:
- Kotlin, for naming functions with phrases. The only use case I remember for it was creating more meaningful names for test functions. I don't think it was that useful...
- Lisp dialects for quasiquotes, which is meaningless in Ruby
- Haskell, for making functions infix, and I can't see why would it be useful in Ruby (Ruby has it own way to make infix methods)
- JS, for creating templates. In Ruby we can use double quotes for that
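For what it's worth, a quick sketch of how Ruby covers the JS use case while keeping backticks for commands:

```ruby
name = "world"
greeting = "hello #{name}"    # double-quoted interpolation, same job as JS template literals
shell_out = `echo hi`.strip   # backticks stay dedicated to command substitution
```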
> "thou shalt not parse the output of 'ls'"
Yes, but it was only an example associating backticks with the language's features. Of course it is not ideal, but it is an example that everyone will understand. You probably won't want to use it (especially since you can do the same with Dir.glob('*').map { |f| f.length }).
The same happened later in the text when I used a regex to find the Git branch.
> You really want to read the directory and map over the stat function (or equivalent) to get the length.
Yay I’m glad someone else knows about this and thinks it’s awesome too :) It’s crazy I didn’t find it until I was digging through bundler docs looking for something completely unrelated
I work for a company that has a large Rails monolith. Although we use many more languages than just ruby these days, we still have a ton of scripting, config, and tooling that is all in Ruby. It's a joy to work with IMO.
Another common pattern I see is people using embedded ruby in a shell script. It does make it a little harder to read/understand at a glance, but it's nice for being able to do most simple things in sh but drop into ruby for things that sh/bash suck at.
That said, I get a feeling that the people that joined once we'd added stuff outside of the Rails monolith and don't know/use Ruby are...not big fans.
I had the same experience. Somebody in our company inherited a Ruby script and was trying to modify it and was stuck. They came to me exasperated. The error message was something really trivial like addition is not defined for some object type. If you don’t understand the base level concepts of the language it’s going to be a very bad time. Sadly people are not that interested in learning about Ruby nowadays and look at it as a huge imposition to deal with. I love it still.
Ruby is a wonderful language worth learning (and it's really not difficult to pick up), but I see way more people push back against learning it than other things (e.g. Python).
Ruby is an amazing language. I've seen some systems it's not already installed on and hop over to something like perl/python in those cases, but Ruby is by far my preferred hammer for small scripts. The code is beautiful.
Small nit: your note in Feature 4 is actually supposed to be in Feature 5, I assume.
Ruby is my favorite shell scripting language. I used it last year for a complex ffmpeg automation script (using blackdetect to find long stretches of power-off and split a video file into much smaller components), and Ruby made it a breeze when I know it would have been a real struggle to get working in bash or PowerShell.
I have what probably sounds like a niche use case, where most of the boxes I work on don't have access to the Internet.
So for me, "is it installed in the base distribution" is the difference between being able to start immediately, no matter which box I'm concerned with, and spending months working with our OS image team to get a new program into the image.
I took a look around a vanilla Debian 12 box, and didn't see Ruby in there [1]. So, sadly, although I really like the way Ruby looks, I'm going to have to stick with Bash and Python 3 for the hard stuff.
These are great points, if you already have Ruby in your stack.
For me, when Bash ain't enough I upgrade to the project language (PHP, Python, JS, etc). For compiled projects I reach for an LSB language (Perl, Python) before introducing a new dependency.
Ruby's fine for small to medium-sized projects. It's pretty great for small DSLs.
In my professional experience on medium to large-sized projects, its lack of explicit typing, the habit of Rubyists to use its fairly-substantial metaprogramming capabilities at the drop of a hat, and (for projects that include pieces or all of Rails) the system's implicit library loading make such projects a nightmare to reason about and maintain.
I'm sure these aspects are manageable with a small group, or a very, very disciplined group, but when you have large-sized projects that have been continually worked on for ten, fifteen, or more years, you're just not going to be able to guarantee that everyone involved will have the combination of knowledge and discipline required to ensure the thing remains easy to understand, extend, and maintain.
This has been my experience too reading the Gitlab code. It's absolutely impossible to follow - you can't use static typing to follow flow because there isn't any, and you can't even grep for identifiers because half of them are dynamically generated. Every time I've wanted to understand something I've been unable to even find the relevant code.
Contrast that with gitlab-runner (Go) or VSCode (Typescript) both of which I have been able to not only read easily but contribute features to. VSCode is at least as big a codebase as Gitlab.
That experience has made me want to avoid Ruby like the plague!
Years ago, this was almost exactly my experience at my first software job, a large Rails monolith in the Ruby-on-Rails heyday. Obviously I was inexperienced, but aside from that, the application code itself was so hard to understand that I was seriously considering whether I was cut out for professional software development. It was just impossible to tell where things were defined, where data was coming from, when things were actually executing, etc.
It was only after working on other projects in other languages (Clojure, Java, Scala, Nodejs, Elixir, Rust) that I started to realize that maybe not all languages lead to teams writing code that is this difficult to follow.
People always say "oh, you can write terrible code in any language", and while this is true, it's a tautology. It doesn't actually tell you anything useful. I now think there is actually a pretty large spread in what kinds of code the various languages/frameworks encourage people to write. I'm not saying it guarantees what kind of code people will write, but, just for example, there absolutely is a difference between what the average Clojure programmer will write and what the average Java programmer will write.
If all of the various languages and frameworks and libraries all just ended up with the same effort producing the same results, no one would ever make anything new, because it would be pointless to do so.
> [A]side from that, the application code itself was so hard to understand that I was seriously considering whether I was cut out for professional software development. It was just impossible to tell where things were defined, where data was coming from, when things were actually executing, etc.
If the enormous pile of Ruby projects that I and the folks I work with at $DAYJOB had been the things I was required to work on and maintain at my first dayjob, I would have (no joke) left the field to go become a lawyer.
I'm so, so, so glad that I started off with C++ and got to work at a couple of C++ shops which got me to understand in my bones the importance of good tests and comprehensible-to-a-mere-mortal code.
> ...there absolutely is a difference between what the average Clojure programmer will write and what the average Java programmer will write.
Oh yeah, definitely. After being worn down by the many years I've burned at my current position, I've become VERY skeptical of the "Let's use $LANGUAGE_OR_TOOL because it will make it easy to hire!" argument. The quality of the stuff that the average user of that tool produces matters A TON.
Implicit library loading isn’t even the real problem, it’s the global namespace. I think it, more than anything else, makes large projects difficult to manage due to the inability to grok the dependency graph.
> Implicit library loading isn’t even the real problem, it’s the global namespace.
I'm not sure what you mean by this? Ruby gives you pretty much the same namespacing powers that most mainstream languages do.
If your argument is that noone should EVER be able to hoist something into the global namespace, then I'm going to have to pretty strenuously disagree with you. Power tools are good to have for when you need them... but it's very important to have the restraint to only use them when you need them and leave them on the shelf when you don't.
Ruby is unsuitable for large projects for the same reason python is, but it also lacks the huge ecosystem and labor pool that python has. When I'm interviewing and a company tells me ruby is the main language, I end the call.
“X is unsuitable for large projects” when there are many readily discovered existence proofs to the contrary (including but not limited to $1B+ businesses, massive communities, deep thoughtful and disciplined engineering spokespeople, etc.) strikes me as a common trope here on HN.
(Which is not to say X is flawless even at scale or a clear best fit along all axes. That is true for no X that I know of.)
The fact that a language is used for large, successful businesses/projects doesn't mean another language wouldn't be better. It's just terribly difficult to measure such things.
I think JavaScript as a backend language was a mistake, and that Node has single-handedly caused more damage to the entire tech industry than any other aspect.
That hasn’t stopped billions of dollars of revenue from being created with it.
At least Ruby is unpopular enough (compared to Node) that people who know it are probably decent at their job.
Nobody actually follows the LSB as written. But it remains useful as an idea of "stuff that was installed in old distros so usually has some newer version available".
Assuming you can survive all the incompatible interpreter changes for Python etc., the main annoyance with LSB proper is shared library versions.
Many distros come with Ruby standard, it's not very big and I think it has a lot to recommend it over Python or Perl when it comes to lightweight scripting for sysadmins and day to day automation. I wouldn't necessarily pull it into an open source project where they weren't already present, but I would definitely choose it for personal use well before Python or Perl
Ubuntu, Debian, Arch, CentOS - none of these, as far as I know, ship with a Ruby interpreter by default. I’d like to be wrong, but I don’t think many do.
Sadly, 9 out of 10 environments lack a Ruby interpreter out of the box. Are you going to add 5 minutes to your docker build to compile Ruby? Probably not.
Luckily, I've found that Perl has most of the best features of Ruby, and it's installed everywhere. It's time to Make Perl Great Again.
Why on earth would you add the compilation of a Ruby interpreter to your docker build? Just install it through the package manager of whatever distro your image is built on.
Overall a nice light write-up! Bash is great, but it occasionally becomes untenable, usually around the time where HTTP and whatnot becomes involved. Same goes for shell exec exit codes; you can use an API like popen3 for this: https://ruby-doc.org/stdlib-2.4.1/libdoc/open3/rdoc/Open3.ht...
You mention using threads and regex match global variables in the same write up. Please use the regex match method response instead of the $1 variables to save yourself the potential awful debugging session. It even lets you access named capture groups in the match response using the already familiar Hash access API. Example: https://stackoverflow.com/a/18825787
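A sketch of the difference (the regex here is my own, loosely modeled on the article's git-branch example):

```ruby
line = "ref: refs/heads/main"

# Fragile: $1 is a global that the next match anywhere in the program clobbers
line =~ %r{refs/heads/(\S+)}
$1                 # => "main"

# Safer: keep the MatchData object around, ideally with named captures
m = line.match(%r{refs/heads/(?<branch>\S+)})
m[:branch]         # => "main"
```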
In general, just don’t use global variables in Ruby. It’s already SO easy to move “but it has to be global” functionality to static class methods or constants that I’ve encountered exactly zero cases where I have NEEDED global variables. Even if you need a “stateful constant”, Ruby has a very convenient Singleton mixin that provides a quick and easy solution.
Besides, if you actually WERE to take advantage of them being global VARIABLES (reassigning the value) I would confidently bet that your downstream code would break, because I’m guessing said downstream code assumes the global value is constant. Just avoid them, there’s no point, use constants. This applies to any language TBH, but here we’re talking about Ruby :)
No problem at all, and you don’t need to be a specialist to be excited and write something up :) I hope this didn’t come across as judgy, I’ve just lived through many a Ruby bug heh
The things which make Ruby good for shell scripts are, to a large degree, things it inherited from Perl. Which was, and is, a great language for scripting.
People use it a lot less these days, for a lot of reasons, some better than others. I myself do simple stuff in bash, and pull out some Python for more complex scripting. My Perl chops have withered, last time I was paid to use it was 21 years ago, but it really is a great scripting language, and you'll find it installed on a great deal more systems than Ruby.
One of these days I'll give Raku a spin, just for old time's sake.
My first application of Ruby was to use it for shell auto-completions. I'm so grateful that I learnt Ruby first, and then Rails. Ruby is a great language to get some utility working out real fast. Rails is great for MVP. I fail to understand why people bitch about Ruby/Rails by comparing them to other languages/frameworks.
I love using Ruby for shell scripting, but there are also a ton of little nits I have to fix whenever I'm doing it.
For example: Ruby has no built-in for "call a subprocess and convert a nonzero exit status into an exception", ala bash `set -e`. So in many of my Ruby scripts there lives this little helper:
def system!(*args, **kwargs)
  r = system(*args, **kwargs)
  # system returns false on a non-zero exit and nil when the command cannot
  # be run at all, so checking the return value covers both cases
  fail "subprocess failed: #{args.inspect}" unless r
  r
end
And I can't ask "is this command installed" in an efficient built-in way, so I end up throwing this one in frequently too (in this instance, whimsically attached to the metaclass of the ENV object):
require 'pathname'

class << ENV
  def path
    @path ||= self['PATH'].split(':').map { |d| Pathname.new(d) }
  end

  def which(cmd)
    cmd = cmd.to_s
    path.lazy.map { |d| d + cmd }.find { |e| e.file? && e.executable? }
  end
end
• High-level wrapper methods like `Pathname#readable_when_elevated?` that run elevated through IO.popen(['sudo', ...]) — the same way you'd use `sudo` in a bash script for least-privilege purposes
• Recursive path helpers, e.g. a `Pathname#collapse_tree` method that recursively deletes empty subdirectories, with the option to consider directories that only contain OS errata files like `.DS_Store` "empty" (in other words, what you'd get back from a git checkout, if you checked in the directory with a sensible .gitignore in play)
...and so forth. It really does end up adding up, to the point that I feel like what I really want is a Ruby-like language or a Ruby-based standalone DSL processor that's been optimized for sysadmin tasks.
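As an illustration, here's a minimal sketch of the `collapse_tree` idea mentioned above. The name and behavior come from the comment; the implementation (and the errata list) is my guess, and note it also removes the root directory if it ends up empty:

```ruby
require 'pathname'

OS_ERRATA = %w[.DS_Store Thumbs.db].freeze

# Recursively delete directories that are empty or contain only OS errata files
def collapse_tree(dir, ignore: OS_ERRATA)
  dir.children.select(&:directory?).each { |d| collapse_tree(d, ignore: ignore) }
  leftovers = dir.children
  return unless leftovers.all? { |c| c.file? && ignore.include?(c.basename.to_s) }
  leftovers.each(&:delete)
  dir.rmdir
end
```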
Didn't realize that! That's one snippet I can maybe eliminate now. (As to why I didn't know: the first thing in the RDoc for Kernel#system is still "see the docs for Kernel#spawn for options" — and then Kernel#spawn doesn't actually have that one, because it doesn't block until the process quits, and so returns you a pid, not a Process::Status. I stopped looking at the docs for Kernel#system itself a long time ago, just jumping directly to Kernel#spawn...)
But come to think of it, if Kernel#system is just doing a blocking version of Kernel#spawn → Process#wait, then shouldn't Process#wait also take an exception: kwarg now?
And also-also, sadly IO.popen doesn't take this kwarg. (And IO.popen is what I'm actually using most of the time. The system! function above is greatly simplified from the version of the snippet I actually use these days — which involves a DSL for hierarchical serial task execution that logs steps with nesting, and reflects command output from an isolated PTY.)
I agree. Ruby is a _fantastic_ language for getting things done quickly whose credibility was unfairly maligned by Rails.
Unbelievably easy to read, and, with rspec, it is stupid easy to write tests for. No need to fuss with interfaces like you do with Golang; yes, that is the right thing to do, but when you need to ship _now_, it becomes a pain and generates serious boilerplate quickly.
I've switched to Golang for most things these days, as it is a much safer language overall, but when shell scripts get too hard, Ruby's a great language to turn to.
~ % irb
WARNING: This version of ruby is included in macOS for compatibility with legacy software.
In future versions of macOS the ruby runtime will not be available by
default, and may require you to install an additional package.
irb(main):001:0>
Are there distributions where this doesn't happen? I feel like I'm often installing or compiling new versions of packages and languages because they're outdated on Ubuntu
If you've reached for something with more structure than the string-soup that is bash, why would you then embrace the string-soup that is invoking subprocesses with backticks?
Involving an entirely separate parser just to split your args up feels like a footgun to me.
IMO that feature snatches defeat from the jaws of victory because if the command fails (exit code >0) it doesn't raise an exception. Same issue with the system(...) method by default.
I spend most of my time writing Rails or other backend Ruby, and I prefer my system-level scripts in bash. Philosophically I don't want to have to manage dependencies or broken gems (though inline deps obviate that, and it's not like I've never had to wrestle with apt)
Either dependency issues with other gems, or gems that break due to some sort of library in the OS. If you pin all of your versions, and use it in one place, that's less of an issue, but many scripts are designed to have some level of portability (even if it's to a new instance of the server)
In my experience, Bundler has improved a lot with regard to resolving dependency issues over the years.
And OS libs are only really depended on by a few gems, no? 99% of them don't use FFI or call OS libs.
Moreover, how often do you really move a script to a completely different OS, where you don't know which OS libs are installed?
And wouldn't those missing OS libs also be a problem when writing the script in Bash or any other language?
Side note: Firefox is offering to translate this blog post from Portuguese for me though the content is clearly in English. I noticed the `<html>` element has a `lang="pt"` attribute. The site is generated by Jekyll, which I have not used in years, so I'm wondering if this is a site-level setting or could be overridden in frontmatter...
I need to check it, because my mother language is Portuguese and my personal page was initially only in Portuguese (everything there also has a version in Portuguese if you click the Brazilian flag). Some years ago I started to write in English, but I didn't want to delete my older posts.
Personally, I find Ruby's syntax more natural (I'm going to be heavily biased though, having written Ruby for 10+ years). But for example, let's say I wanted to make a hash (dict) of files, keyed by their size (for some unknown reason); in Python it would look like:
result = {}
for file_path in glob.glob('*'):
if os.path.isfile(file_path):
result[os.path.getsize(file_path)] = file_path
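For comparison, the Ruby version (my sketch, not the commenter's original code) would be roughly:

```ruby
# Hash of files keyed by their size, equivalent to the Python loop above
result = Dir.glob('*')
            .select { |f| File.file?(f) }
            .to_h { |f| [File.size(f), f] }
```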
Or capitalizing a string in Ruby:
string.split(' ').map(&:capitalize).join(' ')
And in Python:
words = string.split(' ')
capitalized_words = [word.capitalize() for word in words]
result = ' '.join(capitalized_words)
Python seems to be more convoluted and verbose to me, and requires more explicit variable declarations too. With the Ruby you can literally read left to right and know what it does, but with the Python, I find I have to jump about a bit to grok what's going on. But maybe that's just my lack of Python experience showing.
result = {os.path.getsize(f): f for f in os.listdir() if os.path.isfile(f)}
result = ' '.join(word.capitalize() for word in string.split(' '))
result = ' '.join(map(str.capitalize, string.split(' ')))
result = string.title()
Can you give an example?
I can't think of a single situation where whitespace matters in Ruby (unless of course you forget to put a space between two commands or something silly).
`foo + bar` and `foo+bar` are `foo()+bar`, but `foo +bar` is `foo(+bar)`
The ternary ? : also has some interesting whitespace-dependent mixups with symbols, but I cannot remember the details. I think the parser has many gotchas like that, but they really rarely bite you, because ruby's magic follows human intuition as much as possible.
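A concrete sketch of the first gotcha (the method and variable names here are mine):

```ruby
def foo(x = 10)
  x
end

bar = 2

foo + bar   # => 12, parsed as foo() + bar
foo+bar     # => 12, same thing
foo +bar    # => 2, parsed as foo(+bar)
```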
To simulate “types” in complex shell scripting I typically involve a lot of json objects acting as my data structures and the file system for a rudimentary database. They’re horrifying ugly, but they tend to work pretty reliably.
The `ip` network utility has supported JSON output via the `ip --json` invocation since a while back. I would love to see more tools implementing this as an option!
A scripting language needs a way to declare dependencies in a locked-down way, inside of the script that requires them. They must be portable across platforms.
Can you easily chain these, though? (gzcat some.txt|grep foo|sort -u|head -10 etc?). Especially lazily, if the uncompressed stream is of modest size, like a couple of gigabytes?
I'm not sure what you mean by lazily here, but internally[0] it creates real anonymous pipes[1] between the spawned processes, so the data does not go through the ruby process at all.
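For instance, a sketch with Open3.pipeline_r, where sort reads printf's output directly over an OS pipe:

```ruby
require 'open3'

output = nil
# Each array is one process; the processes are connected by anonymous pipes,
# so only the final stream is read into the Ruby process.
Open3.pipeline_r(['printf', 'b\na\nb\na\n'], %w[sort -u]) do |out, _wait_threads|
  output = out.read
end
output  # => "a\nb\n"
```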
I'm currently working with 150MB worth of gzipped JSON; marshalling the full file from JSON to a Ruby hash eats up a lot of memory. One tweak that allows for easier lazy iteration over the file (while keeping temporary disk IO reasonable) is to pipe it through zcat, then jq in stream mode to convert to ndjson, then gzip again, giving a temp file that Ruby's zlib can wrap in a stream convenient for lazy per-line iteration...
Generally marshalling a gig or more of JSON (non-lazily) takes a lot of resources in ruby.
Some do, some don't. JSON is a special case, as a valid JSON file needs to be a single array or object literal, so event-driven (SAX-style) parsing ends up being a hack (like jq's stream mode). In theory json_streamer or yajl should help, but I couldn't get a combination to return a proper lazy iterator.
With the file as ndjson it was easier, if a little sparsely documented (Zlib::GzipReader.new or .wrap?):
reader = Zlib::GzipReader.wrap(some_ndfile)
obs = reader.each_line.lazy.map do |line|
  JSON.parse(line)
end.first(4)
When we can get a line at a time, marshalling the whole line isn't an issue.
My issue is more that it is tricky to nest Ruby IO objects and return a lazy iterator (especially when nesting custom filters along the way), at least trickier than it should be.
Apparently there's a third-party framework that does seem promising:
Hell yeah! I’ll never forget being fooled by HN into thinking you have to use Perl if you want portable scripts that aren’t bash, writing a whole script with it, and having a coworker politely, yet firmly, tell me that I am dumb and it should just be Ruby… and it was a script I was checking into a Rails app! It’s even trivial to include dependencies with bundler inline https://bundler.io/guides/bundler_in_a_single_file_ruby_scri...
You can pin a gem to a specific version, of course.
`gem "mygem"` installs the latest version.
`gem "mygem", "~> 4.0.0"` installs >= 4.0.0 but < 4.1.0, which is what you probably want when using Semantic Versioning, which most gems adhere to, to get the latest patch version.
`gem "mygem", "4.0.10"` installs exactly that version.
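The pessimistic operator can be checked directly against RubyGems' own classes, if you want to see what a constraint will accept:

```ruby
require 'rubygems'  # Gem::Requirement and Gem::Version ship with Ruby

req = Gem::Requirement.new('~> 4.0.0')
req.satisfied_by?(Gem::Version.new('4.0.10'))  # => true
req.satisfied_by?(Gem::Version.new('4.1.0'))   # => false
```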
You can also use %x(ls some/dir), which is a syntax used in other places too, like %w[word word word] (array of word strings, notice you can choose the delimiter).
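Both literals in action:

```ruby
words = %w[one two three]    # => ["one", "two", "three"]
out   = %x(echo hello).strip # => "hello", same as backticks
more  = %w{alpha beta}       # the delimiter is your choice
```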
While I write most of my scripts in Ruby and enjoy doing so, there is one gripe I have with it: its slow start-up time.
On my machine, running an empty Ruby script takes about 100ms, compared to <10ms for Python, Perl, Lua, Bash.
One can mitigate the problem somewhat using the `--disable-gems` flag, but that's not a good general solution.
Yeah.. old Ruby has an interesting build system. It first builds miniruby, which is a single self-contained binary having all core Ruby functionality. Then it is used to run some more .rb build scripts to finally make a fully functional ruby.
Actually that miniruby is pretty useful for basic stuff. Also, it's very easy to build a static ruby containing all the extra stuff you care about, like the Win32 API for example.
I myself have a custom ruby binary on Win32 (Cygwin) with the GRX library added in, so I can do basic graphics stuff directly from ruby :) That stuff is written in it:
> It first builds miniruby, which is a single self-contained binary having all core Ruby functionality.
> For basic stuff. Also, it's very easy to build a static ruby containing all the extra stuff you care about, like the Win32 API for example.
Wow, this reminds me a bit of MRuby. So I could basically ship a script with the self-contained ruby executable instead of having to force users to install Ruby on their machine?
How can I get ahold of miniruby? Or is there any resource somewhere that I can dig into?
Hmm, I think this is not supported, but it's probably not very hard to do, at least for Windows (PE executables).
Hard to say about newer Ruby versions like 3.x or even 2.x. 1.8.x has a pretty simple build process; just grab the old source. You can use --enable-static builds there. Miniruby is built by default, always, because it's part of the build process.
Also, what platform are you targeting? Win32 only?
I'm not sure what you mean by this... Anyway, if you want to peek at my stuff, take a look at the 2 URLs below. One contains various Ruby 1.8.7 Win32 builds (compiled using an older MinGW). The other one is my gperf. All this will work on Win2000 and up. In the case of gperf, Windows needs to be an en_US build. Anything else will simply not work, because Microsoft, in their wisdom, localized perf counter names instead of aliasing them! Not smart, heh..
What I meant is that compiling cosmopolitan with miniruby would maybe make the miniruby executable APE-format (actually portable executable). That would allow me to run the same executable on all these mentioned platforms.
But Idk, I’m just thinking out loud and probably missing a huge detail which would make the idea unworkable.
Also, thanks for the linked resource, I’ll definitely give it a go!
Ahh, this stuff.. never done that :) I'm happy enough to provide just .rb + ruby.exe (miniruby) for Win32. UNIX/Linux is no issue, since it can be easily installed or compiled. In the worst case I can provide a static ruby there too.
For me, the problem with shell scripting is that I do it only occasionally. I’ve always been tempted by higher-level languages. The arrival of tools like ChatGPT has made me much more comfortable writing scripts of intermediate complexity. I find shell scripting more interesting now.
Similar case for me, but I think the author says it well:
> That is, most of the cases Bash for me is enough, but if the script starts to become complex, I switch to Ruby.
Even if ChatGPT lets you bang out more complex shell scripts easily, if you have to come back to it later on to fix an error or add a new feature, it's really hard to understand it (if you don't deal with such scripts on a daily basis).
If you start with Ruby (or Python or similar) from the beginning, it's much easier to understand and extend later on.
Cool, I'm sold, added Ruby to my to learn list! I use Python a lot but I don't think it's as good for writing scripts as bash or Perl. Ruby looks like it fits that "better but not much harder" category much better.
Ruby is packaged for all significant Linux distros. It isn't installed by default in many, but installing it is a trivial one-time command, and it takes relatively little space.
You can say similar things for Bash or Python (or Perl, JS, etc):
> Too many ways to do the same thing.
In Bash you can use backticks, $(), or even bash -c to run the same command. And for conditionals there's [ ], [[ ]], and test. And so on.
> Not packaged by default on most Linux.
You can install it. Some distros (e.g. Alpine) don't even ship Bash. If you are restricted to only what comes by default in an operating system, then a simple script written in plain sh is probably the answer anyway.
> Monkey patching makes things even harder to debug.
Just because you can doesn't mean that you should. I won't label something "hard to debug" because of features I don't need to use; I label it "hard to debug" for features that I _need_ to use. By that measure I can say: Bash is hard to debug.
Hearing people still mention "monkey patching" always makes me chuckle...
I haven't "monkey patched" anything in Ruby in > 5 years, and I don't see it in any of the popular libraries/gems anymore either.
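For what it's worth, the language itself steered people away from global monkey patching: refinements (available since Ruby 2.0) scope a patch to the files that explicitly opt in. A minimal sketch:

```ruby
# A patch on String that is only visible after `using` is called in a file,
# instead of leaking into every caller globally.
module Shouty
  refine String do
    def shout
      upcase + "!"
    end
  end
end

using Shouty
shouted = "hello".shout   # visible here because of `using Shouty`
puts shouted
```

Code in other files that never calls `using Shouty` sees a plain, unpatched String, which is exactly the debuggability complaint refinements were designed to address.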
If Rails could be written in Python, I would probably prefer that. Ruby has too much of Perl in it. Had I not chosen to write a critical piece of software in Perl (which cost me numerous sleepless nights hunting for a missing quote or some other stray character), I might have thought it was cool. But my experience with PHP and Perl is why I prefer Python. Clever is not what I ever want a language to be.
Those are different languages, with different characteristics. Perhaps the only two that can be directly compared as a 100% replacement to another are Ruby and Python.
Honestly same, but only because Python has types relatively sorted. While I actually enjoy the language, Ruby is… struggling… and after supporting multiple Rails apps in the wild for close to a decade, what my team and I miss the most is the ability to hint types easily.
Ruby is slow and encourages an esoteric convention over configuration style of OO code.
If you enjoy it, more power to you. However, Python is everyone's second favorite or least favorite language, and it runs laps around Ruby any day. Then there's Go if you need some extra oomph!
Ruby performance has improved a lot in the last few years, especially with the introduction of the JIT (YJIT), and it keeps improving with every new release. I think the notion of Python being the faster of the two may be outdated. You can see some comparisons in the Benchmarks Game [1] (ignore the reference to PHP and Erlang in the URL, seems to be a typo by the website's maintainer, although my link will break if they fix it).
Also I think the convention over configuration mantra belongs in the Ruby on Rails world rather than Ruby itself. Ruby as a language is far less opinionated about how you do things, especially for small shell scripts.
> Ruby is slow and encourages an esoteric convention over configuration style of OO code.
I think you are commenting on Rails, not Ruby when you refer to "convention over configuration". Even in Rails I'm not sure what "esoteric" means in your comment and that approach isn't even related to the object-oriented concepts.
I was arguing with myself about adding additional context. It looks like I should have put it in.
In my professional experience, Go cannot consistently avoid producing binaries that fail when run against an incompatibly different version of glibc than the one they were built and linked against.
Also, for most deployment situations, Go's "single, enormous binary" doesn't matter.
* In The Cloud(TM), you have full control over what you deploy. So, so, so often you have a custom VM or Docker image that you just squirt out there and automation to keep it updated and rebuilt, which makes deployment just trivial.
* On Windows, you have Windows Installer, which you can instruct to check for and install any prereqs you require. (It's incredible how all of the pieces to build a proper package manager have been in Windows since the early 2000s, and yet no package manager came out of MSFT.)
* On OSX, you either use the App Store or one of the handful of package managers... but I expect nearly zero non-Power-Users use the package managers. I guess there are also the nutballs who do 'curl $INSTALL_SCRIPT | sudo bash'.
* On Linux... well nearly zero non-Power-Users run desktop Linux, and those that do likely already have Ruby, Python, Perl, and GCC installed.
We're talking about code that calls external commands here. If one wants performance, calling external commands is already the wrong approach. I don't think it's relevant here.
> encourages an esoteric convention over configuration style of OO code.
I can't see that. Its OO is not so different from other languages. Perhaps you are thinking about Rails (as I said in the first paragraph).
Java for the win. I've used Java for any kind of programming, running shell commands, literally anything, for the last 5 years, both professionally and personally.
Language is just a tool, anything and everything works if you love it enough
In fairness, Java is still pretty slow to start. If you're looking for non-traditional "scripting" languages, you might look into Erlang. 'erl_call' is a particularly interesting little helper script that's packed into the standard system.