It's so weird to me that anyone would prefer their project's dependencies be at some arbitrary location on disk rather than simply inside the current directory. But sure, whatever, do what you like. What's even weirder to me is that people would advocate stopping anyone else from doing this. There are comments on that thread even denigrating users who don't prefer this as "not understanding directories." Are you kidding me?!
The devs in this thread are all arguing from the perspective of "not making things more confusing," but it's like they haven't even considered that things are so confusing at this point that people are just abandoning Python altogether. And how can anyone in good conscience recommend it as a first programming language to some poor noob who's going to spend hours dealing with the packaging mess every time they try to code a new project or return to their last one?
I have a bunch of small python projects that I come back to once every six months, and every time, it's a nightmare to remember how to activate the venv for them, and then also to remember which venv it's actually using. Not to mention I need to search the internet to remember the difference between venv, pyenv, virtualenv, pyenv-virtualenv, and at one point even "virtualenv-burrito." Then it's a constant struggle with ~/.bash_profile and ~/.bashrc to figure out why the activate script isn't working. Do I need to activate it? Do I need to exec it in a subshell? Once I finally figure it out, should I put this in my .bash_profile, or is it a waste of time to check whether to activate a venv on every command I type? Two hours later, if I'm lucky, I can actually start coding on the project where I left it six months ago.
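For what it's worth, the part I always end up re-deriving boils down to something like this (a minimal sketch, assuming `python3` is on PATH, the project has a `requirements.txt`, and `my_script.py` stands in for whatever the entry point is):

```sh
# create the env inside the project; ".venv" is just a common convention
python3 -m venv .venv

# "activation" only tweaks PATH (and the prompt) in the current shell,
# so every new tmux pane or terminal window needs it again
source .venv/bin/activate
pip install -r requirements.txt
python my_script.py   # my_script.py is a placeholder for the project's entry point
deactivate

# or skip activation entirely and call the env's interpreter by path
.venv/bin/python my_script.py
```

And yet remembering even that much after six months away is exactly the problem.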
It's a total nightmare and ultimately the main contributor to what turned me off of Python. TypeScript has its problems, but at least I've got a yarn workflow with immutable lockfiles that pretty much guarantees I can jump back into a project how I left it, and I don't need to also worry about some state in my home directory, or environment variables missing in my shell, or activating some opaque venv in every new tmux pane. I can see the dependencies in node_modules. They're right there. Nothing outside of my current directory is relevant to the project at all.
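The "cold jump back" there is basically one command (a sketch, assuming Yarn 2+; on Yarn 1 the flag is `--frozen-lockfile`):

```sh
# reinstall exactly what yarn.lock describes; errors out instead of silently updating it
yarn install --immutable

# everything the project needs now sits under ./node_modules, right in the working tree
ls node_modules
```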
This same criticism applies to go and their $GOPATH. Rust is slightly better. I've started keeping a note in Obsidian for each of these languages, called e.g. "cold jumping back into Rust," which lessens the pain each time, but I haven't made these notes for Python yet and I'm not exactly thrilled every time I need to jump back into a Python project.
Often I use Python to write systems of CLI scripts to do data analysis or synthesis.
Notably, this is not a Jupyter notebook where the code and data are mashed together; rather, I have the code in one directory, activate a venv, and then cd to the directory where the data files I'm working with are kept. The Node or Maven model, where your scripts run relative to the current directory, is not what I want at all.
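Roughly, the shape of it is this (paths and script name are made up for illustration):

```sh
# the tools live with the code...
source ~/code/analysis/.venv/bin/activate

# ...but they run against whatever data directory I'm standing in
cd ~/data/2023-q4-run
python ~/code/analysis/summarize.py *.csv
```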
That's a good point. I guess if I were doing the equivalent in TypeScript (which I would never do, because I'd rather write Bash for that... but this is a separate mental issue of mine), I would `npm install -g` to install packages globally. That's roughly equivalent to using the system python environment. And actually this specific use case you mention is one that doesn't have a great story with node. Currently the best solution IMO is to use Zx [0] which you could reasonably install globally, and which pre-packages some common scripting dependencies.
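Something like this, to sketch what I mean (the script name is hypothetical; zx really does ship a global CLI plus bundled helpers like globby and fs-extra):

```sh
npm install -g zx      # one global install, roughly like relying on the system python
zx ./burn-report.mjs   # hypothetical script; zx gives it $`...`, fetch, fs, glob out of the box
```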
I think the major difference between Python and JS "environments" is that with Python virtualenvs, the `python` binary itself is (usually?) part of the environment. But with JS, you generally have one `node` installed on your system, and then multiple "environments" aka projects each with their own yarn.lock and package.json. Of course you can have multiple `node` versions using something like `nvm`, but it's definitely less coupled to the per-project environments than `python` is to whatever virtualenv dependencies you have.
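You can see the difference right in the shell (paths here are illustrative):

```sh
$ which python3
/usr/bin/python3
$ source .venv/bin/activate
(.venv) $ which python3
/home/me/project/.venv/bin/python3   # the interpreter itself lives inside the environment

$ which node
/usr/local/bin/node                  # one node for everything; the "environment" is just ./node_modules
```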
I write some Python scripts that are just a single file and import only from the standard library. If you're competing with bash, this is fine. For instance, I have a Python script that burns compact discs: it unpacks an archive, validates the content with sox, rewrites the .cue file, uses wodim to do the writing, etc. I have other scripts that do "IoT" sorts of things, and those really do need to import libraries to access a message queue or other APIs.
With Python it is also common to need a particular Python implementation: for instance, I had a script that ran too slow with CPython but was fine with PyPy, yet I can't use PyPy for everything because PyTorch doesn't work with it.
> This same criticism applies to go and their $GOPATH.
FYI, $GOPATH hasn't been a thing in years. Nowadays, the experience of coming back to old projects should be on par with what you've described for TypeScript.
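With modules (the default since Go 1.16), a project is self-describing via go.mod and go.sum, much like a lockfile. A rough sketch, with a made-up module path and dependency:

```sh
go mod init example.com/myproject   # writes go.mod in the current directory, no $GOPATH setup required
go get github.com/some/dep@v1.2.3   # hypothetical dep; the exact version lands in go.mod/go.sum
go build ./...                      # later builds resolve against those pinned versions
```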
I've never actually coded anything in Go, but I've had a few occasions where I had to build a binary from source, and I remember having headaches with it then. There is still some $GOPATH stuff in my ~/.bashrc for some reason.
Incidentally, and perhaps ironically given the context of this discussion, the last time I interfaced with Go, about six months ago, was when I tried to globally install a binary and ran into some issues [0], which I assume were caused by the same $GOPATH change you're talking about.