As always, Golang is overly opinionated regarding where modules and binaries are stored. I don't like that, and I've blown my local development environment to pieces because of it (looking at you, gRPC, yikes).
But also, imagine that you, like me, need to test Python, Java+Kotlin+Gradle and NodeJS+Angular stuff. Do you really want to install _all that_ natively, just for a couple of merge reviews? And even if it's not just for reviews, do you _really_ want to install all that natively? The answer is always, IMHO, a resounding and clear no.
> It was a Visual Basic 6 program, so I just took two half days going through every EXE & DLL related to Windows and VB, eventually finding the difference. Tedious but not rocket science. Is it to avoid these cases?
That's one example, but it gets much worse: as mentioned in the OP, it's to prevent the very real possibility of crippling your OS's language runtimes, and also to stay productive.
IMHO the solution to the problem of devs in the same company having different environments is not asdf and its competitors like Mise, but things like Nix/Guix and Docker.
At work, because everyone uses Mac, we ended up using Kandji to achieve the same thing: everyone has the same tools and environments. But that only makes sense if you already have to use it for security audits and the like.
If I had a small company myself I would probably set everything up with Guix, as I really like the way it works, more than Nix (though only because I prefer Lisp config files and because Guix doesn't suffer from any polemics like the flake soap opera).
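The appeal of all of those is a throwaway, per-review environment that never touches the host. A minimal sketch of what I mean, with any one of these lines doing the job (package names are illustrative; check your channel/nixpkgs for the exact ones):

guix shell python node openjdk -- bash                    # Guix
nix-shell -p python311 nodejs_20 jdk17                    # Nix
docker run --rm -it -v "$PWD":/app -w /app node:20 bash   # Docker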
So that in any directory you can type `<command name>`? asdf Node installs aren't like Python virtual environments; they're centrally installed and one Node version (and its packages) is shared across all directories that want it + global tools.
> So that in any directory you can type `<command name>`?
Any tool installed via asdf is available in any directory, as long as you access that directory from a shell spawned with a `.profile`/`.bashrc` (or similar) that contains your asdf configuration.
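For example, a minimal sketch assuming a git install of classic asdf under bash (zsh is analogous):

# in ~/.profile or ~/.bashrc, so every spawned shell picks up the asdf shims
. "$HOME/.asdf/asdf.sh"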
> asdf Node installs aren't like Python virtual environments
Correct, nor should they be.
> they're centrally installed and one Node version (and its packages) is shared across all directories
Sure they are, and that's by design. You're conflating a runtime manager with a package manager. Venvs are _not_ runtime managers; the moment you need another Python version you're done for. asdf.vm is _not_ a package manager; the moment you want package isolation on top of an asdf install is the moment you install pipenv, Poetry or PDM yourself, or use Python venvs for that.
> that want it + global tools.
Which is achieved as I've shown below. Still, there was no reason in your use case to modify or play with globally installed tools besides asdf, through which you can define global runtime versions; those global versions hold your global tools, usable anywhere.
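To make the split concrete, a rough sketch using the classic asdf CLI (versions and directory illustrative):

asdf plugin add python
asdf install python 3.11.9
asdf global python 3.11.9    # the global runtime; your global tools live here
cd ~/work/some-project       # hypothetical project directory
asdf local python 3.12.3     # writes a .tool-versions file for this directory
python -m venv .venv         # package isolation stays venv/pipenv/poetry territory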
OP here. Although I hoped the example I picked was relatable, it seems it wasn't as relatable as I expected.
> Is it just me that never even wants to get to the problems that asdf attempts to solve?
You aren't alone, and the scenario isn't ideal. However, brew's Python installation on macOS, like Debian's and Ubuntu's, is _extremely_ brittle. You are one cask, formula or apt package away from needing a weekend-long spelunking session, or a full-blown system re-install if you have deadlines.
PyEnv, which is what I used in the years before and after Python 2 was deprecated and projects slowly started migrating to newer Python versions, is a pain to set up and maintain.
> That example in the article of managing multiple python 2.7 versions sounds like a horror story.
It is a horror story, but a very common one.
Have you tried to install and maintain Java, Kotlin and Gradle installations for a given project when your machine is not primarily a Java/Kotlin/Gradle box? That is a real nightmare; not so much with asdf.vm.
You shouldn't need asdf to work with JVM stuff.
I would suggest learning how to use SDKMAN: https://sdkman.io/
It will manage the JDK for you. Usage is basically this:
# Install a JDK, that version is now default
sdk install java <version>
# Another one, it asks if you want to change the default
sdk install java <another-version>
# List available and installed versions
sdk list java
# Change which one you're using in this shell
sdk use java <version>
That's all.
You can also manage Gradle/Maven installations with SDKMAN, but that's usually not necessary because most JVM projects include a "wrapper" script which downloads the needed Maven/Gradle version for you.
This works regardless of whether your project also needs Kotlin/Groovy etc., as those are just managed by Gradle/Maven (the only exception I can think of is Kotlin Multiplatform, as that depends on the platform's dependencies as well).
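For completeness: the wrapper is generated once by a maintainer and committed, roughly like this (version illustrative):

# run once; then commit gradlew, gradlew.bat and gradle/wrapper/
gradle wrapper --gradle-version 8.5

After that, contributors never need their own Gradle install at all.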
So once you know SDKMAN, you can manage any JVM-based project with just this:
sdk use java <jdk-version-used-by-project>
./gradlew build # or ./mvnw package
If you need to do anything else, you should complain to the project authors as this is really all you should need!
> brew's Python installation on macOS, like Debian's and Ubuntu's, is _extremely_ brittle
I've been using Python installed via Homebrew and haven't run into any issues.
In Homebrew you can install a specific Python version, like python@3.11, and using venvs avoids most of the issues (I think you can't install packages outside of a venv in Python 3.12 or higher anyway).
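For instance, a sketch of that workflow (version illustrative):

brew install python@3.11
python3.11 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt   # refused outside a venv on PEP 668-managed pythons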
My experience with Poetry has been mixed the last couple of times I've tried it: it attempts to do way too many things but often fails to do them properly. Resolving dependencies, and proper packaging and upload to PyPI, are the ones that come to mind.
asdf.vm together with pipenv is my go-to for Python environment management.
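Roughly like this, letting asdf pin the runtime and pipenv handle the packages (version illustrative):

asdf local python 3.11.9                          # pin the runtime for this repo
pipenv install --python "$(asdf which python)"    # venv built on top of the asdf runtime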
As for Poetry, it is constantly improving and has gotten very popular. It should not be dismissed, especially for larger projects since its dependency management is so much better than pipenv. This is a good primer: https://diegoquintanav.github.io/poetry-primer.html
pyenv + pip-tools is all one needs. Supposedly uv is gunning to be a drop-in for both. I think there’s a good chance uv pulls it off and becomes the de facto choice for this use case.
I think it’s fair to see appeal in poetry, but ultimately the maintainers have created a culture that feels a bit too non-collaborative toward outside ideas, or use cases beyond what they narrowly envisage. That said, my perspective may just be tainted by multiple poor experiences interacting with the maintainers.
I can’t speak to rbenv or npm, but IMO it’s better to use well-known, canonical tools for each than a more unknown mega-tool that wraps or replaces them.
pyenv isn’t perfect, and isn’t what I’d use for prod images, but for dev setup it’s relatively bulletproof and less issue-prone than any alternative I’ve seen.
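A sketch of that setup, with the filenames conventionally used by pip-tools:

pyenv install 3.11.9 && pyenv local 3.11.9   # writes .python-version
python -m venv .venv && . .venv/bin/activate
pip install pip-tools
pip-compile requirements.in                  # pins everything into requirements.txt
pip-sync requirements.txt                    # makes the venv match the pins exactly

uv's `uv pip compile` and `uv pip sync` are intended as drop-ins for those last two steps.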
This makes me itchy: pulling the whole internet onto your development machine without looking at what you're running sounds like a very bad idea. It's the equivalent of an IDE's "do you trust this project?" prompt, but on steroids.
I haven't tried pipenv; I'm a bit tied to what the company uses.
I did try using something similar to asdf (can't remember the name; I think it changed), but it still didn't really solve the problem of OS dependencies and things needing to be compiled, or the problems arising from me not running the same OS the application would run on. A Dockerfile solves that: my system is a carbon copy of the prod environment.
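Something like this, assuming the repo ships a Dockerfile mirroring prod (image/tag names hypothetical):

docker build -t app-dev .                                 # same base image as prod
docker run --rm -it -v "$PWD":/app -w /app app-dev bash   # work inside the prod-like box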
For as much improvement as there has been in what can be distributed via PyPI, there are still some domains with gnarlier dependencies than wheels can happily handle alone, and you either need to reach for the system package manager (and lose the ability to really control the dependency environment, given that mismatch) or take advantage of the Conda ecosystem.
My org does a lot of work combining machine learning with oceanographic and climate modeling, both domains with deep dependency chains that don't always mesh well, especially as our researchers mix in R and other languages at the same time. The Conda ecosystem helps us a ton with that, but there are issues that `conda` and `mamba` don't help us with.
Pixi takes a swing at some of what the Conda ecosystem hasn't been great at (at least not without a lot of manual, custom ceremony) but which Cargo, Poetry, Pipenv, PDM, and other dependency and workflow management tools have shown can be done: lock files, cross-platform dependency management, task running, and defining multiple related environments.
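A rough sketch of the Pixi workflow (package and task names illustrative):

pixi init                               # creates pixi.toml for the project
pixi add python=3.11 r-base             # conda-ecosystem deps, pinned in pixi.lock
pixi task add train "python train.py"   # named task, shared with the whole team
pixi run train                          # runs in the project's locked environment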
_COME ON TARS!_