The biggest issue, in my opinion, is dependency management. Python has a horrible dependency management story, from top to bottom.
Why do I need to make a "virtual environment" to have separate dependencies, and then source it in my shell?
Why do I need to manually add version numbers to a file?
Why isn't there any builtin way to automatically generate a lock file? (Currently, most Python projects don't even pin indirect dependency versions; many Python developers probably don't even realize this is an issue!)
Why can't I parallelize dependency installation?
Why isn't there a builtin way to create a redistributable executable with all my dependencies?
Why do I need to have fresh copies of my dependencies, even if they are the same versions, in each virtual environment?
There is so much chaos, I've seen very few projects that actually have reproducible builds. Most people just cross their fingers and hope dependencies don't change, and they just "deal with" the horrible kludge that is a virtual environment.
We need official support for a modern package management system, from the Python org itself. Third party solutions don't cut it, because they just end up being incompatible with each other.
Example: if the Python interpreter knew just a little bit about dependencies, it could pull in the correct version from a global cache - no need to reinstall the same module over and over again, just use the shared copy. Imagine how many CPU cycles would be saved. No more need for special wrapper tools like "tox".
In the end, it needs to find the import in the PYTHONPATH, so there's no magic involved, and there are multiple robust options to choose from.
So instead of bashing Python for not shoving an opinion down your throat, it's up to developers to choose which tools they want to use.
If they don't choose one and are unable to freeze their dependencies, it's not a Python problem, but IMO a lack of skill and seniority.
The reason Python gets extra criticism for this is that it likes to tell people there should be one obvious way to do it, and that it comes with batteries included, yet its dependency management is just crap and doesn't follow that at all.
Are you saying “There’s more than one way to do it”?
Nothing to complain about, as every language has its own set of good and bad. This is what makes it interesting: there is always room to improve and make things better.
Python has always stood out for me as having a particularly odd way of doing it. It feels a bit more like C, but with a package manager that's not quite as nice as other scripting languages have.
It's good. Projects should use it.
868 lines, including shutil.rmtree calls and stuff.
Also installable via pip, but... "not recommended", and:
Poetry was not installed with the recommended installer.
Cannot update automatically.
It is not that long ago that PyPI hosted malicious (typo-squatting) packages: https://news.ycombinator.com/item?id=15256121
"curl | bash" is a bad habit to get into. It works under certain circumstances, like making sure it's an SSL connection from a source you trust. But it's just a bad habit for the average person to get into.
(Poetry have done this, for what it's worth)
The point is that in reality you're orders of magnitude more likely to be compromised by ads in your browser, an undetected flaw in legitimate code, or a compromised maintainer than by GitHub having deployed custom infrastructure to target you. If you're being targeted by a government, why would they do this instead of using the same TLS exploit to serve you a dodgy Chrome or OS update, which is harder to detect and will work against 100% of targets?
How about this for a reason: where are the checksums when I'm curling and piping? How do I validate, in an automated fashion, the validity of this file I'm piping into an interpreter? When installing a package, it's quite easy to have redundant copies of an index with checksums pointing to a repository hosting the actual code. The attack surface is much smaller vs. a curl | python.
This is bad practice; stop promoting it or downplaying its security issues.
Edit: smaller instead of larger
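For contrast, a rough sketch of the safer pattern, assuming the project publishes a checksum alongside the installer (the URL and hash here are placeholders):

$ curl -sSL https://example.com/get-tool.py -o get-tool.py
$ echo "<published-sha256>  get-tool.py" | sha256sum -c -
$ python get-tool.py

The download only gets run if the checksum verification passes, so a compromised server can't silently swap the file.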
> This is bad practice, stop promoting it or downplaying it’s security issues.
I’m trying to get you to do some security analysis focused on threats which are possible in this model but not the real alternatives (download and install, install from a registry like PyPI or NPM, etc.). So far we have “GitHub could choose to destroy their business”, which seems like an acceptable risk and about the same as “NPM could destroy their business”.
I am doing security analysis. If this file changes and I'm using it in built server images, then I have no way of automatically validating that the changes are good without doing the checksumming myself and managing that data. What we have is a server that can be hacked, and files that can't be verified by checksum.
If you install it via pip you need to update it via pip, the alternative would be insane. And the reason it's not recommended is that it doesn't let you use multiple Python versions, but if you're only using one version then installing by pip works fine.
And if you need "to create a redistributable executable with all your dependencies", you can use either PyInstaller or Nuitka, both of which are very actively maintained and continually improving.
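For instance, a minimal PyInstaller invocation (the script name is hypothetical):

$ pip install pyinstaller
$ pyinstaller --onefile myscript.py
# the bundled executable lands in dist/myscript (dist\myscript.exe on Windows)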
Frankly I’ve been burned enough that I won’t use any new packaging technology for Python because everyone thinks they’ve solved it, but once you’re invested you run into issues.
Anyone considering it for production usage should note that package installs in the current versions are much slower than pip or Pipenv. This might affect your CI/CD.
Looking at the home page it's not immediately obvious to me. For example, the lock file it creates seems to be the equivalent of writing `pip freeze` to the requirements file. I see a quick mention of isolation at the end, it seems to use virtual environments, does it make it more seamless? What's the advantage over using virtualenv for example?
So `poetry add` (its version of pip install) doesn't require you to have the virtualenv active. It will activate it, run the install, and update your dependency specifications in pyproject.toml. You can also do `poetry run` and it will activate the virtualenv before running whatever shell command comes after. Or you can do `poetry shell` to run a shell inside the virtualenv.
I like the seamless integration, personally.
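For example (the package and script names are made up):

$ poetry add requests         # installs into the project venv, updates pyproject.toml and poetry.lock
$ poetry run python app.py    # runs inside the venv without activating it first
$ poetry shell                # spawns a shell with the venv activated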
Is there anything wrong with pip freeze > requirements.txt and then pip install -r requirements.txt? This would install the exact versions.
pip-sync was then called to install it in given environment, any promotion from devint -> qa -> staging -> prod, was just copying the requirements.txt from environment earlier and calling pip-sync.
Say your requirements pin bar 0.2.1, and foo is in there unpinned. Then bar releases version 0.2.2. So your deps want bar 0.2.1, but foo now wants bar 0.2.2. This breaks your pip install.
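A hypothetical illustration of that failure mode:

# requirements.txt, frozen a while ago
bar==0.2.1
foo            # unpinned; foo's newest release now requires bar>=0.2.2,
               # so pip can no longer satisfy both lines at once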
EDIT: there are a few other gotchas (please respond to this post if you know of any more)
e.g. from https://medium.com/knerd/the-nine-circles-of-python-dependen...
"If two of your dependencies are demanding overlapping versions of a library, pip will not necessarily install a version of this library that satisfies both requirements" e.g. https://github.com/pypa/pip/issues/2775
All of a sudden, a numpy release pulls in a new version for a pandas build (which incidentally breaks on py2).
This happens without involving a "~=": pandas needs to build from source, and chooses the latest numpy build to do so.
No they are not.
pip freeze does not resolve transitive dependencies, nor does pip know what to do if your transitive dependencies conflict with one another.
I don't think this is correct:
$ python3 -m venv /tmp/v
$ /tmp/v/bin/pip install flask
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10.1->flask)
$ /tmp/v/bin/pip freeze | grep MarkupSafe
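which prints something like (the exact version will vary):

MarkupSafe==1.1.1

i.e. the transitive dependency does show up in the freeze output.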
This is true, but because Python exposes all libraries in a single namespace at runtime, there isn't actually anything reasonable to do if they genuinely conflict. You can't have both, say, MarkupSafe 1.1.1 and MarkupSafe 1.1.0 in PYTHONPATH and expect them to be both accessible. There's no way in an import statement to say which one you want.
However, it's notable that pip runs into trouble in cases where transitive dependencies don't genuinely conflict, too. See https://github.com/pypa/pip/issues/988 - this is a bug / acknowledged deficiency, and there is work in progress towards fixing it.
This would be fixable with a sys path hook, were pip so inclined
(Also it's not clear what those changed semantics would be.)
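For reference, a minimal sketch of what such a hook could look like, redirecting a top-level package to a pinned location via sys.meta_path (the mapping and paths are hypothetical, and per the comment above this still gives you only one version per process):

import sys
from importlib.machinery import PathFinder

# hypothetical: each top-level package resolves against its own directory
VERSIONED_PATHS = {"markupsafe": "/deps/markupsafe-1.1.1"}

class VersionedFinder(PathFinder):
    @classmethod
    def find_spec(cls, fullname, path=None, target=None):
        # only intercept top-level imports; submodules are handled by
        # the package's own __path__ once it has been found here
        if "." not in fullname and fullname in VERSIONED_PATHS:
            return super().find_spec(fullname, [VERSIONED_PATHS[fullname]], target)
        return None  # fall through to the regular finders

sys.meta_path.insert(0, VersionedFinder)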
> remainder of the file as Ruby and not Python
That's a little excessive
Getting this right and reliable would be a) a considerable language design project in its own right and b) confusing to users of Python as it is documented, and in particular to people testing their modules locally without pip. It wouldn't be as drastically different a language as Ruby, but it would certainly be a different language.
How? Doesn't pip freeze literally list all packages that are installed in the current environment, besides basic tooling such as setuptools (and you can even instruct it to list those as well)?
Ruby does the same thing with Gemfile.lock. npm does the same thing with package-lock.json.
pip isn't actually part of Python proper.
there is no language that is devoid of shortcomings - so to any new (<3 yrs exp) python users, please ignore the above comment entirely as it has no bearing on anything practical that you are doing/will do. and all experienced python users know that there are ways to work around the shortcomings listed here and beyond.
this is my psa for the python community!
I personally would consider this to be a strictly worse comment because it does not go into detail about what’s lacking like the parent comment does.
>there is no language that is devoid of shortcomings
So we should just silence all criticism like a dictatorship?
>as it has no bearing on anything practical that you are doing/will do.
Are you saying no Python user ever has to deploy their application? If that's what you really mean, then your comment is just pure trolling.
What "personal issues" do you think the author has? The frustrated tone comes from the frustrations the author explicitly outlines; unless you think this shouldn't be so, you are turning this into an ad-hom.
> the entire comment could have just been 1 line "We need official support for a modern package management system, from the Python org itself."
Why? because you don't appreciate the detail on why we need such a thing? These issues certainly get in the way of producing production apps; not in the sense that they make it impossible, but they make the process harder and slower than it needs to be.
Node's dependency managers npm/yarn just copy the versioned dependencies from their cache folder into the local node_modules folder and remove transitive dependencies duplicates when possible by flattening them into node_modules.
So I spent 2 hours and rewrote it in Java 8 with Maven.
Issues all gone. Node has some work to do before I’ll consider touching it again.
Once your project is set up, dependency management is something you do maybe once every two weeks. The rest is just writing code.
The same issues exist in C, C++, Java and nobody seems to be complaining about those at the same volume.
This is why it doesn't seem so bad if you're programming in those languages.
Poetry is not the most used (or known) dependency management in Python.
What needs to happen is standardization - this has happened in Java because of its maturity. There's almost no Java project that isn't using the standard Maven dependency management (even projects that don't use Maven, such as Gradle projects, use Maven dependency management and export themselves as an artifact usable via Maven).
First of all, Poetry locks every dependency (even transitive ones) to the version you know works. This solves the problem of the project not using dependency management properly.
Secondly, setup.py allows you to specify your dependencies, and most libraries do, which means this isn't much of a problem in Python. Sure, sometimes it is, but I haven't run into that particular problem very often.
(To be fair, modifying it to use venv is... non-trivial).
If you look at the issues section in its GitHub repo, you'll see some pretty basic bugs that are very annoying or disruptive. Moreover, it seems the author has all but abandoned ship, and a handful of contributors have to tidy things up.
Just to illustrate my point: I think a package manager that takes a few minutes to install a single tiny package, or doesn't prevent you from adding non-existent packages (e.g. a spelling mistake), or doesn't let you install a new package without trying to upgrade all other packages, isn't really production-ready. These have been known issues since November last year.
Let's take an example:
pipenv install oslo.utils==1.4.0
Could not find a version that matches pbr!=0.7,!=2.1.0,<1.0,>=0.6,>=2.0.0
Dependency hell is everywhere. (Note that the merged constraint is impossible to satisfy: pbr<1.0 together with pbr>=2.0.0, contributed by different packages.)
Excuse me, but as a long-time Python user I have to disagree. I started using Rust two years ago, and Rust's dependency management is easily the best thing I ever saw (keep in mind that I haven't seen everything, so there is a chance there are better things out there).
The project-/dependency-manager Cargo¹ is more “pythonic” than anything Python ever came up with and where others mumbled ”dependency hell is everywhere” the Rust people seem to have thought something like: ”there must be a way to do this properly”.
The whole thing gives me hope for Python, but angers me everytime I start a new python project. Poetry is good, but there should be an official way to do this.
It saddens me to see that some people seem to have just given up on dependency management altogether and declared it unsolvable (which it is not, and probably never was).
Cargo, though, has a silver bullet. If it can't find a solution to determine a single version for a package, it simply includes more than one version in the object code. That would take a lot of work to duplicate in Python.
Unfortunately, pip came along and took over, despite lacking support for that. It would have been nice if a better packaging tool had replaced easy_install, but alas.
- module A uses v1 and uses trait v1.A
- module B uses v2 and uses trait v2.A
- the mismatch is then reported as "A does not implement trait A"
This is the other frustrating thing: there is this Stockholm-syndrome effect. Because people are so used to dependency management being horrible, they think there are just no good dependency management systems, and they give up.
* From python for anything web + data science. Again, why not have your whole stack be in one language?
* From lack of hype. Rails is still evolving, but a lot of packages are not seeing releases (I have used packages 3-4 years old). This indicates to me that the energy isn't there in the community the way it used to be. I have seen the consultants who are always on the bleeding edge move on to elixir.
That said I have seen plenty of startups using ruby (really rails) and staffing when I hired for ruby wasn't an issue.
I do help run a local ruby meetup and attendance is good but not exceptional (15-40 people every month). So that may skew my viewpoint.
I’ve written an API once from scratch. Actually twice. First time in Modena, because it was all the hype, but it was arcane. Then Sinatra, where I ended up creating all of the above. Rails is excellent for APIs.
Rust is nice, but I’m not sure if I’d like it for all of an API. I don’t like go. Crystal seems great, because it’s typed and it’s also super fast.
With the rise in ML and data science over the past years, Python finally has a killer app that no other scripting languages come close to touching. I migrated completely from Ruby when I started dabbling in ML, Pandas, etc.
I used a different screen (having people make change based on an arbitrary amount, so if the input was 81, you'd return [25, 25, 25, 5, 1], as we were in the USA) and it was also helpful. I didn't track the number of people that it stymied though.
(I always feel weird talking about interview questions publicly, but honestly anyone who prepares that diligently deserves to go to the next stage. If anyone's reading this because they're preparing for an interview with me and I ask this question, just mention this comment and I'll be impressed.)
Wish there were more useful projects written in Ruby :)
I had managed to get a job as a Java developer a long time ago, but at that time all I could do was barely write toy apps in Java and I had no exposure to stuff like design patterns. The whole experience left me in a bad place. Now after all these years, Ruby feels like a breath of fresh air, and the texts that I have come across on the subject - Design Patterns in Ruby, Practical OOD in Ruby, Ruby under a Microscope etc. have increased my interest in the language.
But more and more frequent articles on Ruby's decline are pretty disheartening.
In JS, which also has a single interpreter installed across the system (or multiple if you use nvm), the packages aren't installed "directly" into the interpreter, which removes the need for things like virtual-envs, thus making life a lot easier. I wish Python did something like this.
That being said, pipenv is making things easier. However, I think pipenv is a workaround more fundamental problems.
pip install -r requirements.txt --target=python_modules
PYTHONPATH=python_modules python myscript.py
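(Effectively node_modules for Python: dependencies live in a project-local directory and are resolved via PYTHONPATH, no virtualenv involved.)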
EDIT: a question: when I have to use Python, I like to break up my code into small libraries and make wheel files for my own use. How do you handle your own libraries? Do you have one special local directory that you build wheel files to and then reference that library in your requirements.txt files?
We didn't build wheels. We had a centralized git host (Gitlab, but any of them works) with all our libraries, and just added the git url (git+https://...) to the requirements.txt
I have found that I would rather code my own versions of some libraries so I have control over it. Even if there is some extra long term maintenance and some up front dev costs, it's paid off already a number of times.
Why should Python have some "official" method to do this? Flexibility is a strength, not a weakness. Nobody ever suggests that C should have some official package manager. Instead the developers build a build system for their project. After a while every project seems to get its own unique requirements so trying to use a cookie-cutter system seems pointless.
"There should be one-- and preferably only one --obvious way to do it.": https://www.python.org/dev/peps/pep-0020/
Sometimes a 'benevolent dictator' single approach has benefits...
> nix-shell -p python3Packages.numpy python3Packages.my_important_package
It solves every problem you quoted before.
Poetry is python specific and does not solve the problems that pip/pypi has with native C/C++/Rust/etc modules.
Nix/guix solves all of that
The dependencies of such projects are easy to specify in Nix. Moreover, it's easy to reproduce the environment across machines by pinning nixpkgs to a specific version.
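Something like this, for example (the revision is a placeholder for whatever commit you pin to):

$ nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz \
    -p python3Packages.numpy

Every machine that uses the same pinned revision gets the same package set.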
However a lot of the issues that you mentioned (such as lock file and transitive dependencies) can be handled by pipenv, which should be the default package manager
Yup, I love Python over my current language at my job (JS/TS).
But I really dislike handling conflicts using pip, requirements.txt and virtualenv.
So much so that I will take JS node_modules over it.
It seems to have some neat functionality wrt dep handling (and I'd never really heard of it before).
There is, and it's called Docker.
The other issues could indeed be fixed with something like poetry.
I agree, although a lot of it has to do with there being so much misinformation on the web, and many articles recommending bad solutions. This is because Python went through many packaging solutions. IMO the setuptools one is the most common and available by default. It has a weakness, though: it started with people writing a setup.py file and defining all parameters there. Because setup.py is actually a Python program, it encourages you to write it as a program, and that creates issues. setuptools, though, has for a while had a declarative way to define packages using a setup.cfg file; you should use that, and your setup.py should contain nothing more than a call to setup().
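A minimal sketch of that layout (the package name and dependency are made up):

# setup.py
from setuptools import setup
setup()

# setup.cfg
[metadata]
name = myapp
version = 0.1.0

[options]
packages = find:
install_requires =
    requests >=2.20, <3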
> Why do I need to make a "virtual environment" to have separate dependencies, and then source it my shell?
Because chances are that your application A uses different versions than application B. Yes, this could be solved by allowing Python to keep multiple versions of the same packages, but if virtualenv bothers you, you'd presumably want to count on the system package manager to take care of that, and rpm and deb don't offer this functionality by default. So you would once again have to use some kind of virtualenv-like environment that's disconnected from the system packages.
> Why do I need to manually add version numbers to a file?
You don't have to; this is one of the things there's a lot of misinformation about. You should create a setup.py/setup.cfg and declare your immediate dependencies there; you can optionally provide version _ranges_ that are acceptable.
I highly recommend installing pip-tools and using pip-compile to generate requirements.txt; that file then works like a lock file, essentially pinning the latest versions allowed by the restrictions in setup.cfg.
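For example (assuming your loose ranges live in setup.py/setup.cfg):

$ pip install pip-tools
$ pip-compile setup.py         # resolves declared ranges into a fully pinned requirements.txt
$ pip-sync requirements.txt    # makes the environment match the pinned file exactly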
> Why isn't there any builtin way to automatically define a lock file (currently, most Python projects just don't even specify indirect dependency versions, many Python developers probably don't even realize this is an issue!!!!!)?
Because Python is old (it's older than Java), and lock files simply weren't a thing back then.
> Why can't I parallelize dependency installation?
Not sure I understand this one. yum, apt-get, etc. don't parallelize either, presumably because it's prone to errors? TBH I never thought of this as an issue, because Python packages are relatively small and install quickly. The longest part was always downloading dependencies, but caching solves that.
> Why isn't there a builtin way to create a redistributable executable with all my dependencies?
Some people claim that Python has a kitchen sink and that this made it more complex; you're claiming it should have even more built in. I don't see a problem: there are several solutions for packaging your app as an executable. It's also a difficult problem to solve, because Python runs on almost all platforms, including Windows and OS X.
> Why do I need to have fresh copies of my dependencies, even if they are the same versions, in each virtual environment?
You don't; you can install your dependencies in the system directory and configure the virtualenv to see those packages as well. I prefer, though, to have it completely isolated from the system.
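e.g. a venv created with access to the system packages:

$ python3 -m venv --system-site-packages /tmp/venv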
> There is so much chaos, I've seen very few projects that actually have reproducible builds. Most people just cross their fingers and hope dependencies don't change, and they just "deal with" the horrible kludge that is a virtual environment.
Not sure what to say; it works predictably for me, and I actually really like virtualenv.
> We need official support for a modern package management system, from the Python org itself. Third party solutions don't cut it, because they just end up being incompatible with each other.
setuptools with declarative setup.cfg is IMO very close there.
> Example: if the Python interpreter knew just a little bit about dependencies, it could pull in the correct version from a global cache - no need to reinstall the same module over and over again, just use the shared copy. Imagine how many CPU cycles would be saved. No more need for special wrapper tools like "tox".
There is a global download cache already, and pip utilizes it even within a virtualenv. I actually never needed to use tox myself. I think the root of most of your problems is that there's a lot of bad information out there about how to package a Python app. Sadly, even the page from the PyPA belongs there.
I think people should start with this: https://setuptools.readthedocs.io/en/latest/setuptools.html#...
Yes it still has some of the problems you mentioned, but it fixes some others.
Facebook has an interesting talk about how much electricity 1% performance improvement saves.
If it is trying to process ML data, or running in some cloud provider, or deployed in some IoT device supposed to run for years without maintenance, then maybe yes.
And precisely: for ML code, the Python libraries all run extremely optimized, natively compiled code, so the language overhead is a minimal consideration. And for business domain code, language performance is rarely the limiting factor.
Are you suggesting that accounting only cares about the AWS bill but not at all about the salary of developers?
Including a couple that are as dynamic as Python.
1. What is the impact of a continuous long-running process? That is, if instead of trying to calculate a result and then shut down, I'm running a web server 24/7, what's the impact of an interpreted language over a compiled language? (Assume requests are few and I'm happy with performance with either.) This models not just web servers but things like data science workloads, where one wants to conduct as much research as possible, so a faster language will just encourage a researcher to submit more jobs.
2. According to https://www.epa.gov/energy/greenhouse-gases-equivalencies-ca... , 1 megawatt-hour of fossil fuels is 1559 pounds of carbon dioxide. The site you link calculates an excess of 2245 joules for running their test programs, which is approximately .001 pounds of carbon dioxide, or roughly what a human exhales in half a minute. (Put another way, if using the interpreted language saved even one minute of developer time, it was a net win for the carbon emissions of the program.)
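Spelling out the arithmetic:

2245 J ÷ 3.6e9 J/MWh ≈ 6.2e-7 MWh
6.2e-7 MWh × 1559 lb CO2/MWh ≈ 0.00097 lb CO2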
OK, so you're asking about the steady-state electricity consumption of a process that's idling? I would bet that it's still lower for a more energy-efficient language, but let's say purely for the sake of argument that they're both at parity (e.g., 0). Now what happens when they both do one unit of work, e.g. one data science job? Suppose you're comparing C and Python. C is indexed at 1 by Table 4, and Python at 75.88. So even ignoring the idle baseline, the Python version is 75 times more power-hungry than the baseline C. And this is for any given job.
> a faster language will just encourage a researcher to submit more jobs.
Sure, that's a behavioural issue. It's not a technical issue so I can't give you a technical solution to that one. Wider roads will lead to more traffic over time. What people will need to realize is that if they're doing science, shooting jobs at the server and 'seeing what sticks' is not a great way to do it. Ideally they should put in place processes that require an experimental design–hypothesis, test criteria, acceptance/rejection level, etc.–to be able to run these kinds of jobs.
> if using the interpreted language saved even one minute of developer time, it was a net win for the carbon emissions of the program
I don't understand, what does a developer's time/carbon emission have to do with the runtime energy efficiency of a program? They are two different things.
Sure, but they don't, and perhaps that's a much bigger issue than interpreted vs. compiled languages - either for research workloads or for commercial workloads. People start startups all the time that end up failing, traveling to attract investors, flying people out to interview them, keeping the lights on all night, heading to an air-conditioned home and getting some sleep as the sun is rising, etc. instead of working quietly at a 40-hour-a-week job. What's the emissions cost of that?
> I don't understand, what does a developer's time/carbon emission have to do with the runtime energy efficiency of a program? They are two different things.
This matters most obviously for research workloads. If the goal of your project is "Figure out whether this protein works in this way" or "Find the correlation between these two stocks" or "See which demographic responded to our ads most often," then the cost of that project (in any sense - time, money, energy emissions) is both the cost of developing the program you're going to run and actually running it. This is probably most obvious with time: it is absolutely not worth switching from an O(n^2) algorithm to an O(n) one if that shaves two hours off the execution time and it takes you three hours to write the better algorithm (assuming the code doesn't get reused, of course, but in many real-world scenarios, the better algorithm takes days or weeks and it shaves seconds or minutes off the execution time). Development time and runtime are two different things - for instance, you can't measure development time in big-O notation in a sensible way - but they're definitely both time.
Developers continue breathing even when they aren't programming.
When talking about the footprint of a company or a project, you need to restrict the calculations to the resources they actually use. So if a project uses tools to get a product out quicker, that means they've spent fewer human-hours, which have a CO2 cost associated with them. Then you can weigh the cost of that tool against the human resources, both in a financial sense and with respect to emissions.