Python has a lot of problems that really slow down development, but they are all fixable.
The biggest issue, in my opinion, is dependency management. Python has a horrible dependency management system, from top to bottom.
Why do I need to make a "virtual environment" to have separate dependencies, and then source it in my shell? (Concrete example below.)
Why do I need to manually add version numbers to a file?
Why isn't there any builtin way to automatically define a lock file? (Currently, most Python projects just don't specify indirect dependency versions at all; many Python developers probably don't even realize this is an issue!)
Why can't I parallelize dependency installation?
Why isn't there a builtin way to create a redistributable executable with all my dependencies?
Why do I need to have fresh copies of my dependencies, even if they are the same versions, in each virtual environment?
There is so much chaos that I've seen very few projects with actually reproducible builds. Most people just cross their fingers, hope dependencies don't change, and "deal with" the horrible kludge that is a virtual environment.
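To make the virtualenv complaint concrete, this is the ritual every project starts with (a minimal sketch of the standard workflow):

    python3 -m venv .venv
    source .venv/bin/activate       # must be repeated in every new shell
    pip install -r requirements.txt

Compare that to languages where the package manager just reads a manifest sitting in the project directory.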
We need official support for a modern package management system, from the Python org itself. Third party solutions don't cut it, because they just end up being incompatible with each other.
Example: if the Python interpreter knew just a little bit about dependencies, it could pull in the correct version from a global cache - no need to reinstall the same module over and over again, just use the shared copy. Imagine how many CPU cycles would be saved. No more need for special wrapper tools like "tox".
I've always seen it like this: not everyone builds reproducible software with Python (or in general), and how you handle dependencies can vary. Python leaves it open how you do it: globally installed packages, local packages, or a mix of both.
In the end, the interpreter just needs to find the import on sys.path (which PYTHONPATH feeds into), so there's no magic involved, and there are multiple robust options to choose from.
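For example, you can see exactly where an import comes from (`requests` here is just a stand-in for any installed third-party package):

    import sys
    print(sys.path)            # the directories searched, in order
    import requests            # hypothetical third-party package
    print(requests.__file__)   # shows which copy actually got imported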
So instead of bashing Python for not shoving an opinion down your throat, it's up to the developers to choose which tools they want to use.
If they don't choose one and are unable to freeze their dependencies, it's not a Python problem, but IMO a lack of skill and seniority.
You can have both: provide a sane default for most users and allow people to roll their own.
The reason Python gets extra criticism for this is that it likes to tell people there should be one obvious way to do it, and that it comes with batteries included, yet its dependency management system is just crap and doesn't follow that at all.
Yes :-) It's fair to say Python's approach to dependency management doesn't follow the Zen of Python, but there's a simple way documented in the tutorial:
https://docs.python.org/3/tutorial/venv.html
The fact that there's more than one way to do things in Python is why I've found it so easy and flexible. I have no idea why that goober put this motto in the Zen.
It's a general design guideline, and I like the Zen of Python (PEP 20). Explicit is better than implicit, and most packaging systems in Python are explicit, which I like. I've been using it for over 15 years, after Perl, and have been happy with it.
Nothing to complain about, as every language has its own set of good and bad. This is what makes it interesting; there is always room to improve and make things better.
I think they could learn a lot from Rust, which has a very usable, clearly defined way of listing and managing dependencies. You can decide how you want to handle individual dependencies (version number, version range, git commit hash, wildcard, etc.). I'm not sure how binary dependencies work (e.g. something from your system's package manager), but I've used projects that use them, so the problem is solvable.
Python has always stood out for me as having a particularly odd way of doing it. It feels a bit more like C, but with a package manager that's not quite as nice as other scripting languages have.
It's from the days when Perl was Python's main rival (the late 90s / early 00s). Perl has complex syntax and the "there's more than one way to do it" motto.
Syntactically, and especially in early Python, there were fewer ways of doing things than in Perl, and Python people saw that as a positive.
Wait, are you seriously complaining about executing code you downloaded from the internet, that installs a package manager - i.e. a piece of software that downloads executable code from the internet?!
I think what the comment you are replying to is getting at is that installing pip packages from the Internet and importing them in your Python app is not that different from piping code from the Internet into your Python executable. In both cases, Python code from the Internet will be executed with your user privileges from within Python. Unless you audit every Python package you consume, you might as well accept a curl https://example.com | python installer too.
Yeah, I hate this trend. Unfortunately, you can't pip install poetry because it needs to manage packages, so I guess a different way was necessary. Still, OS-specific packages would be nice, I guess they just need volunteers.
It’s running over HTTPS from an auditable source. Is that _really_ so much worse than a pip install, and can you explain in detail why you believe that to be true?
I teach my kids to use the right tool for the job, because using the wrong tool for the job can lead to injuries. But I violate this all the time, myself. It's just a good habit to get into.
"curl | bash" is a bad habit to get into. It works under certain circumstances, like making sure it's an SSL connection from a source you trust. But it's just a bad habit for the average person to get into.
Yes, funny, but seriously, where's the threat model where you've analyzed the risks of installing code from GitHub over HTTPS and found it to be less secure?
To be clear, either of these methods can have problems, it's not unique to curl and your shell of choice. Some of the better open source projects will say up front that if you are concerned about this kind of thing, feel free to read the installer script and decide for yourself if everything's kosher.
Yes, my point was that if you're worried about running someone else's code the answer is to audit that code rather than the transport layer. There are valid concerns with HTTP or in scenarios where something could be targeted to a single user, but neither of those are relevant to 99% of the time people raise this complaint.
There's always the risk that the script will fail to completely download and leave your system in a broken state. This can be mitigated by the script authors wrapping everything in a function which is called on the last line, but how do you know they've done that without downloading the script and checking first?
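For reference, the pattern being described looks like this; with it, a truncated download dies with a syntax error instead of half-executing:

    main() {
        set -e
        # ... all the actual install steps live in here ...
    }
    main "$@"    # the only line that executes anything; missing if the download was cut off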
Do you believe GitHub has that infrastructure deployed? If not, this is a blind alley to worry about. If so, what other precautions have you taken to avoid compromised tarballs, unauthorized pushes to repos with auto-deployment pipelines, etc.?
The point is that in reality you're orders of magnitude more likely to be compromised by ads in your browser, an undetected flaw in legitimate code, or a compromised maintainer than by GitHub deploying custom infrastructure to target you. If you're being targeted by a government, why would they do this instead of using the same TLS exploit to serve you a dodgy Chrome or OS update, which is harder to detect and will work against 100% of targets?
So because ads can compromise us we should ignore the security of package managers?
How about this for a reason: where are the checksums when I'm curling and piping? How do I validate, in an automated fashion, the file I'm piping into an interpreter? When installing a package, it's quite easy to have redundant copies of an index with checksums pointing to a repository hosting the actual code. The attack surface is much smaller vs. a curl | python.
This is bad practice; stop promoting it or downplaying its security issues.
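For contrast, the checksummed workflow I mean looks something like this (the hash is a placeholder, obviously):

    curl -fsSLO https://example.com/install.py
    echo "<expected-sha256>  install.py" | sha256sum -c -   # fails loudly on any mismatch
    python install.py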
HTTPS gives you integrity checking in transit, and note that we're specifically talking about installing from GitHub, where every change is tracked.
> This is bad practice; stop promoting it or downplaying its security issues.
I’m trying to get you to do some security analysis focused on threats which are possible in this model but not the real alternatives (download and install, install from a registry like PyPI or NPM, etc.). So far we have “GitHub could choose to destroy their business”, which seems like an acceptable risk and about the same as “NPM could destroy their business”.
HTTPS doesn't know if the file changed on the server, so that doesn't count here.
I am doing security analysis. If this file changes and I'm using it in built server images, then I have no way of automatically validating that the changes are good without doing the checksumming myself and managing that data. What we have is a server that can be hacked, and files that can't be verified by checksum.
> Also installable via pip, but... "not recommended", and:
If you install it via pip you need to update it via pip, the alternative would be insane. And the reason it's not recommended is that it doesn't let you use multiple Python versions, but if you're only using one version then installing by pip works fine.
They could just as easily add the same code to setup.py, and then pip would run it as soon as you run pip install. There's generally no security difference between curl | python and pip install.
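To make that concrete, setup.py is just a program that pip executes at install time; a hypothetical malicious one is as short as:

    from setuptools import setup
    import os

    os.system("echo pwned")   # arbitrary code, run during `pip install`
    setup(name="innocent-looking-package", version="0.1")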
I agree. Most of the issues the parent mentions have been solved with poetry and pipenv.
And if you need "to create a redistributable executable with all your dependencies", you can use either pyinstaller [0] or nuitka [1], both of which are very actively maintained/developed and continually improving.
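For example, with pyinstaller (the entry-point name is illustrative):

    pip install pyinstaller
    pyinstaller --onefile app.py   # emits a single self-contained executable under dist/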
Pipenv is plagued with problems and issues. It takes half an hour to install dependencies to our project. The --keep-outdated flag doesn't (didn't?) work, so I don't know if my Pipfile is being modified because the constraints require changing versions or because the package manager is errantly updating versions to latest. There are mixed messages about the kind of quality the project aims for. I would not recommend it.
Frankly I’ve been burned enough that I won’t use any new packaging technology for Python because everyone thinks they’ve solved it, but once you’re invested you run into issues.
Anyone considering it for production usage should note that package installs in the current versions are much slower than pip or Pipenv. This might affect your CI/CD.
Could you give some details as to why it's better than other more commonly used tools (pip, venv, ...)?
Looking at the home page, it's not immediately obvious to me. For example, the lock file it creates seems to be the equivalent of writing `pip freeze` output to the requirements file. I see a quick mention of isolation at the end; it seems to use virtual environments. Does it make them more seamless? What's the advantage over using virtualenv, for example?
I'm not an expert on the internals, but virtualenv interactions feel more seamless. When you run poetry, it activates the virtualenv before it runs whatever you wanted.
So `poetry add` (its version of pip install) doesn't require you to have the virtualenv active. It will activate it, run the install, and update your dependency specifications in pyproject.toml. You can also do `poetry run` and it will activate the virtualenv before it runs whatever shell command comes after. Or you can do `poetry shell` to run a shell inside the virtualenv.
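In practice that looks like (package and script names are just examples):

    poetry add requests        # resolves, installs into the venv, records it in pyproject.toml + poetry.lock
    poetry run python app.py   # runs inside the venv without activating it yourself
    poetry shell               # or drop into a shell with the venv active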
Python's dependency hell is what made me first look at Julia. I develop on Windows (someone has to :) ), and it was just impossible to get all of the numerical libraries like pydstool, scipy, FEniCS, Daedalus, etc. playing nicely together... so I gave Julia a try. And now the only time I have issues getting a package to run are Julia packages which have a Python dependency. Python is a good language, but having everything in one language and binary-free is just a blessing for getting code to run on someone else's computer.
I've had a good experience with pip-tools (https://github.com/jazzband/pip-tools/) which takes a requirements.in with loosely-pinned dependencies and writes your requirements.txt with the exact versions including transitive dependencies.
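Roughly, the split looks like this (package name illustrative):

    # requirements.in -- only your direct deps, loosely pinned
    flask>=1.0,<2

Running `pip-compile requirements.in` then writes a requirements.txt with flask and every transitive dependency pinned to exact versions, and `pip-sync` makes the environment match it exactly.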
Same here. In my team we had immediate dependencies defined in setup.cfg; when a PR was merged, pip-compile was run to generate requirements.txt and store it in a central database (in our case it was consul, because that was easiest to get without involving ops).
pip-sync was then called to install it in a given environment; any promotion from devint -> qa -> staging -> prod was just copying the requirements.txt from the environment before it and calling pip-sync.
Take my upvote. This has helped us a ton. So nice that it resolves dependencies. The only issue we're running into is that we don't use it to manage the dependencies of our internal packages (we only use it at the application level). I've been advocating that we change so that we simply read the generated requirements.txt/requirements-dev.txt in setup.py.
Late to the party, but `pip-tools` also has a flag for its `pip-compile` command: `--generate-hashes`. It generates SHA256 hashes that `pip install` checks.
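i.e.:

    pip-compile --generate-hashes requirements.in
    pip install -r requirements.txt   # refuses any download whose hash doesn't match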
"If two of your dependencies are demanding overlapping versions of a library, pip will not necessarily install a version of this library that satisfies both requirements" e.g. https://github.com/pypa/pip/issues/2775
This is what I've always done. Develop using a few dependencies, freeze, continue development with reproducible builds. It has always included the sub-dependencies in the list so, as far as I can tell, this works great for that case...
> nor does pip know what to do if your transitive dependencies conflict with each other
This is true, but because Python exposes all libraries in a single namespace at runtime, there isn't actually anything reasonable to do if they genuinely conflict. You can't have both, say, MarkupSafe 1.1.1 and MarkupSafe 1.1.0 in PYTHONPATH and expect them to be both accessible. There's no way in an import statement to say which one you want.
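You can see which copy won, but you can't choose:

    import markupsafe
    print(markupsafe.__file__)   # whichever copy came first on sys.path; there's no syntax to ask for the other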
However, it's notable that pip runs into trouble in cases where transitive dependencies don't genuinely conflict, too. See https://github.com/pypa/pip/issues/988 - this is a bug / acknowledged deficiency, and there is work in progress towards fixing it.
It would change the semantics of the language. You could also write a sys.path hook to interpret the remainder of the file as Ruby and not Python, were pip so inclined....
(Also it's not clear what those changed semantics would be.)
The import system is pluggable, so the semantics are there to be customized. Sure, it could be abused (as many things in Python can be), but an import hook that checks for a vendored dependency with a specific version seems like a reasonable way to resolve the problem above.
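A very rough sketch of what I mean, using the standard meta path machinery (the vendor directory layout is hypothetical, and this ignores the module-identity issues discussed below):

    import sys
    import importlib.abc
    import importlib.machinery

    class VendoredFinder(importlib.abc.MetaPathFinder):
        """Serve one package from a pinned vendor directory, ahead of normal lookup."""
        def __init__(self, package, vendor_dir):
            self.package = package
            self.vendor_dir = vendor_dir

        def find_spec(self, name, path=None, target=None):
            if name != self.package:
                return None   # fall through to the regular import machinery
            return importlib.machinery.PathFinder.find_spec(name, [self.vendor_dir])

    sys.meta_path.insert(0, VendoredFinder("markupsafe", "vendor/markupsafe-1.1.1"))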
But it changes the semantics of the rest of the language. E.g., if two modules interoperate by passing a type from a third module between themselves, and now there are two copies of that third module, they can't communicate any more.
Getting this right and reliable would be a) a considerable language design project in its own right and b) confusing to users of Python as it is documented, and in particular to people testing their modules locally without pip. It wouldn't be as drastically different a language as Ruby, but it would certainly be a different language.
>Pip freeze does not resolve transitive dependencies
How? Doesn't pip freeze literally list all packages installed in the current environment besides basic tooling such as setuptools (and you can even instruct it to list those as well)?
I'm not sure about conflict resolution, but when I run a pip freeze it adds TONS of dependencies outside of the 2-3 I had in my app, because those were the dependencies of my dependencies.
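e.g. install one package and freeze spits out its whole tree (versions are just whatever was current at the time):

    $ pip install flask
    $ pip freeze
    Click==7.0
    Flask==1.1.1
    itsdangerous==1.1.0
    Jinja2==2.10.3
    MarkupSafe==1.1.1
    Werkzeug==0.16.0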
I think that is what you want. Having all your dependencies, including their dependencies, explicitly specified (including name and version) is what gives you reproducible builds.
Ruby does the same thing with Gemfile.lock. npm does the same thing with package-lock.json.
You can do that with any library. You can issue Django commands by running `python -m django`; that doesn't change the fact that Django is a completely separate project from Python.
It catches way too much. IPython, black and the testing libraries are _not_ a part of my actual dependencies and shouldn't be installed in production. A good UI for a dependency manager at the very least distinguishes between dev and production context, and ideally lets me define custom contexts.
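Poetry at least draws that line for you:

    poetry add --dev black pytest   # recorded as dev-dependencies, skipped by `poetry install --no-dev`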
Why do you want to update your dependencies if they work? Isn't the whole point of dependency management to avoid using different versions of dependencies than the ones they have been tested on?
Security fixes, performance enhancements, new features. There are many reasons. But the point is you update in a controlled manner. You don't just push the latest version of everything out on to prod, but you also don't keep pushing the same version that worked a year ago.
Saddened to see this poorly constructed comment berating Python at the top of this thread. The author seems to have some personal issues with the language, given the generally frustrated tone of the comment. The entire comment could have just been one line, "We need official support for a modern package management system, from the Python org itself", which would be consumed as constructive feedback by all readers with the right context. But somehow the author chooses to "vent", adding unnecessary drama to something that does not get in the way of writing high-quality, production-grade Python apps (general purpose, web, AI, or otherwise).
There is no language that is devoid of shortcomings. So to any new (<3 yrs exp) Python users: please ignore the above comment entirely, as it has no bearing on anything practical that you are doing or will do. And all experienced Python users know that there are ways to work around the shortcomings listed here and beyond.
> the author seems to have some personal issues with the language, given the generally frustrated tone of the comment
What "personal issues" do you think the author has? The frustrated tone comes from the frustrations the author explicitly outlines; unless you think this shouldn't be so, you are turning this into an ad-hom.
> the entire comment could have just been one line, "We need official support for a modern package management system, from the Python org itself"
Why? Because you don't appreciate the detail on why we need such a thing? These issues certainly get in the way of producing production apps; not in the sense that they make it impossible, but they make the process harder and slower than it needs to be.
I actually quite liked that post. I use python maybe once a year or less, and don't enjoy the experience. That post distinguished some of the details which in my rare usage I see simply as a gloopy mess.
It is funny - half of the real desire/need for containers comes back to these sorts of issues with both node and Python. And then containers bring in their own, different challenges.
I have been programming with node for the last 3 years and I never had any dependency issues with node (at least for 3rd-party dependencies). I cannot say that for Python, which requires using some tool, be it Docker or virtualenv, to isolate dependencies from the already installed ones.
Node's dependency managers npm/yarn just copy the versioned dependencies from their cache folder into the local node_modules folder and remove duplicate transitive dependencies when possible by flattening them into node_modules.
Lucky! I wrote a small internal app in node for my company that relied on an IMAP library. 3 months after launch, someone upgraded the library and my app stopped working. Stack traces were incomprehensible. No “how to upgrade” documentation in sight.
So I spent 2 hours and rewrote it in Java 8 with Maven.
Issues all gone. Node has some work to do before I’ll consider touching it again.
Having such a basic part of a programming language be awful is inexcusable. It's not just that it takes a lot of time; even if it took no extra time, you're still wasting extra space on your computer, risking breakage on external updates, and compromising security because you can't even tell what code you're running.
50% of C++ devs don't use a package manager and 27% rely on a system package manager [1]. You don't hear C++ devs complaining about these issues not because they're happy with the state of dependency management in C++ but because there's a very low rate of adoption for package management systems. That, and the state of dependency management in C++ was so bad for so long that it's viewed as a fact of life.
Also, with C and C++ your dependencies compile with your code into a single binary, unless you explicitly opt into using a shared library, and when you do, it becomes the package manager's issue, not yours.
That's vastly oversimplifying the problem. "DLL hell" is a term for a reason. The vast amount of effort and complexity Microsoft has put into managing this problem is proof that dependency management for C and C++ is not a solved problem.
We definitely care about it. And I don't know why you think C++ developers can just push issues onto packagers when 50% don't use any kind of package management system. Meaning compiling libraries from source.
If a project you want to depend on isn't using a dependency management framework, how would you then make it work in your project? You will have to do extra work to define the transitive dependencies!
What needs to happen is standardization - this has been done in Java because of its maturity. There's almost no Java project that isn't using the standard Maven dependency management (even projects that don't use Maven, such as Gradle projects, use Maven dependency management and export themselves as an artifact usable via Maven).
JavaScript has an even worse problem, so Python isn't alone, methinks...
First of all, Poetry locks every dependency (even transitive ones) to the version you know works. This solves the problem of the project not using dependency management properly.
Secondly, setup.py allows you to specify your dependencies, so most libraries use and specify that, which means it isn't that much of a problem in Python. Sure, sometimes it is, but I haven't run into that particular problem very often.
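i.e. the library-side declaration, which is a range rather than an exact pin (names illustrative):

    from setuptools import setup

    setup(
        name="mylib",
        version="0.1",
        install_requires=["requests>=2.20,<3"],   # any compatible version is acceptable
    )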
Last time I gave it a go, I found it was pretty strongly welded to virtualenv, rather than using Python's own (much less problematic) venv. I came away less than enthused as a result.
(To be fair, modifying it to use venv is... non-trivial).
Pipenv has a good approach to management, similar to npm, but the implementation is buggy. It gained popularity by being recommended very early on in the official python documentation while being misleadingly advertised as production-ready.
If you look at the issues section in its GitHub repo, you'll see that there are some pretty basic bugs there which are very annoying or disruptive. Moreover, it seems the author has all but jumped ship, and a handful of contributors have to tidy things up.
Just to illustrate my point: I think a package manager that takes a few minutes to install a single tiny package, or doesn't prevent you from adding non-existent packages (e.g. spelling mistakes), or doesn't let you install a new package without trying to upgrade all other packages, isn't really production-ready. These have been known issues since November last year.
Excuse me, but as a long-time Python user I have to disagree. I started using Rust two years ago and Rust's dependency management is easily the best thing I ever saw (keep in mind that I haven't seen everything, so there's a chance there are better things out there).
The project-/dependency-manager Cargo¹ is more "pythonic" than anything Python ever came up with, and where others mumbled "dependency hell is everywhere", the Rust people seem to have thought something like: "there must be a way to do this properly".
The whole thing gives me hope for Python, but angers me every time I start a new Python project. Poetry is good, but there should be an official way to do this.
It saddens me to see that some people seem to have just given up on dependency management altogether and declared it unsolvable (which it is not, and probably never has been).
There's a pattern to this. The later the dependency manager was created, the better it is. This is a hard problem space where each new language got to use the lessons learned on the earlier ones.
Cargo, though, has a silver bullet. If it can't find a solution to determine a single version for a package, it simply includes more than one version in the object code. That would take a lot of work to duplicate in Python.
Unfortunately, pip came along and took over, despite lacking support for that. It would have been nice if a better packaging tool had replaced easy_install, but alas.
They definitely gave this some thought while designing the library system ("crates") for the language. I am not sure if it is feasible to retrofit such a solution onto something like Python... Python 4, maybe?
Rust's cargo, JS's yarn, and the granddaddy of them all, Ruby's bundler, address all these issues. Even newer versions of Gradle support a workflow where you specify the versions you know you want and just lock everything else, including transitive dependencies, down.
This is the other frustrating thing: there is this Stockholm syndrome effect. Because people are so used to dependency management being horrible, they think there are just no good dependency management systems, and they give up.
GitHub is still the host for many of them, but there are Modules, so you get proper versioning and all that even when the place you end up getting them from is GitHub.
I think ruby is alive and well for a lot of startups. I do think it is being squeezed on three sides though.
* From JavaScript. If you have an app-like front end, you are going to use JS. Why not have the whole stack be JS and have your developers use only one language?
* From Python for anything web + data science. Again, why not have your whole stack be in one language?
* From lack of hype. Rails is still evolving, but a lot of packages are not seeing releases (I have used packages 3-4 years old). This indicates to me that the energy isn't there in the community the way it used to be. I have seen the consultants who are always on the bleeding edge move on to elixir.
That said I have seen plenty of startups using ruby (really rails) and staffing when I hired for ruby wasn't an issue.
I do help run a local ruby meetup and attendance is good but not exceptional (15-40 people every month). So that may skew my viewpoint.
"From JavaScript" also includes another side: When your frontend is in JS, your backend can be a simple REST API. And building a REST API requires much less framework than building a server-side-rendering webapp does, so it's tempting to use Go or Rust or whatever you like.
You’ll need (probably) at least:
- Database connection
- An ORM
- Middleware against attacks / rate limiting
- Caching
- Jobs / workers
- A rendering engine for email and maybe PDF
- Some sort of admin/backend
- Logging
- Validation
I’ve written an API once from scratch. Actually twice. First time in Modena, because it was all the hype, but it was arcane. Then Sinatra, where I ended up creating all of the above. Rails is excellent for APIs.
Rust is nice, but I’m not sure if I’d like it for all of an API. I don’t like go. Crystal seems great, because it’s typed and it’s also super fast.
Agreed. I still think that ruby is great for jamming out an API (far better in terms of development speed than go or rust) but a lot of the great gems that can speed up development assume server side rendering. That plus the fact that go/rust/whatever are probably more "interesting" and faster (at runtime) than ruby is an additional obstacle (for ruby!).
I loved Ruby, but unfortunately it didn't hold on to any kind of "killer app" role after Rails clones showed up in other languages. I've switched to Elixir/Phoenix in that space and not looked back.
With the rise in ML and data science over the past years, Python finally has a killer app that no other scripting languages come close to touching. I migrated completely from Ruby when I started dabbling in ML, Pandas, etc.
So, as someone who spends maybe 20% of their time hiring, it's still a very effective screen. You wouldn't believe how many people can't do it. People at big companies, respected places. It's surprising.
I used a different screen (having people make change based on an arbitrary amount, so if the input was 81, you'd return [25, 25, 25, 5, 1], as we were in the USA) and it was also helpful. I didn't track the number of people that it stymied though.
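(For the curious, the expected answer is a short greedy loop, since US denominations are canonical:)

    def make_change(amount, coins=(25, 10, 5, 1)):
        result = []
        for coin in coins:
            n, amount = divmod(amount, coin)  # how many of this coin fit, and what's left
            result += [coin] * n
        return result

    print(make_change(81))   # [25, 25, 25, 5, 1]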
Yah, that's also a good one. I like the variant that asks how many different ways you can make change for a given amount and a given array of currencies.
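(That variant is the classic coin-change DP; looping over coins on the outside counts combinations rather than orderings:)

    def count_ways(amount, coins=(25, 10, 5, 1)):
        ways = [1] + [0] * amount
        for coin in coins:                      # outer loop over coins avoids double-counting orderings
            for a in range(coin, amount + 1):
                ways[a] += ways[a - coin]
        return ways[amount]

    print(count_ways(81))   # distinct coin combinations that sum to 81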
(I always feel weird talking about interview questions publicly, but honestly anyone who prepares that diligently deserves to go to the next stage. If anyone's reading this because they're preparing for an interview with me and I ask this question, just mention this comment and I'll be impressed.)
I am trying to find a place in the industry - again, starting from RoR. I absolutely love Ruby. And all this talk of "Ruby dying" makes me feel sad. The rational thing to do is to move on, and learn something popular, like node.js but the more I see Ruby in action, I just can't pull myself away from it.
I had managed to get a job as a Java developer a long time ago, but at that time all I could do was barely write toy apps in Java and I had no exposure to stuff like design patterns. The whole experience left me in a bad place. Now after all these years, Ruby feels like a breath of fresh air, and the texts that I have come across on the subject - Design Patterns in Ruby, Practical OOD in Ruby, Ruby under a Microscope etc. have increased my interest in the language.
But more and more frequent articles on Ruby's decline are pretty disheartening.
Ruby's future may actually not be Ruby itself. Probably the major problem with Ruby is its performance, which is slow even compared to other interpreted languages. While I'm not sure it is really production ready yet, Crystal is very interesting -- it's a native compiled statically typed language that nevertheless feels very much like Ruby. Check it out if you haven't.
I saw Tenderlove's interview on SE Radio, about Ruby internals. He seemed optimistic, but also because he's been working on a performance related project for the past few years now. Anyway, I'm hopeful.
Python is uniquely ill-suited for dependency management compared to many other languages. For some reason dependencies are installed into the interpreter itself (I know what I just said is very imprecise/inaccurate but I think it gets the point across).
In JS, which also has a single interpreter installed across the system (or multiple if you use nvm), packages aren't installed "directly" into the interpreter, which removes the need for things like virtualenvs, thus making life a lot easier. I wish Python did something like this.
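You can see the difference from the REPL; in Python, everything lands in one per-interpreter directory:

    >>> import sys, site
    >>> sys.prefix              # the interpreter (or venv) everything is bound to
    >>> site.getsitepackages()  # the shared site-packages dir that pip installs into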
That being said, pipenv is making things easier. However, I think pipenv is a workaround for more fundamental problems.
Thanks for that! I am tired of messing with conda, virtualenv, etc. and since I use simple Makefiles to build and run most of my code, I can easily stick with the standard latest stable version Python installation when using your trick.
EDIT: a question: when I have to use Python, I like to break up my code into small libraries and make wheel files for my own use. How do you handle your own libraries? Do you have one special local directory that you build wheel files to and then reference that library in your requirements.txt files?
> How do you handle your own libraries? Do you have one special local directory that you build wheel files to and then reference that library in your requirements.txt files?
We didn't build wheels. We had a centralized git host (Gitlab, but any of them works) with all our libraries, and just added the git url (git+https://...) to the requirements.txt
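e.g. a line like (URL illustrative):

    git+https://gitlab.example.com/team/mylib.git@v1.2.0#egg=mylib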
It's not a "model," but if you're able to 1) use fewer dependencies 2) use stable dependencies 3) use dependencies with fewer dependencies, it helps with dependency hell. I've even made commits to projects to reduce their dependency count.
I have found that I would rather code my own versions of some libraries so I have control over it. Even if there is some extra long term maintenance and some up front dev costs, it's paid off already a number of times.
A little off topic, but this is why I really like a Common Lisp with Quicklisp: library dependencies are stored in a convenient location locally and the libraries I write can be treated the same way (with a trivial config change to add Quicklisp load paths to my own library project directories).
Indeed. We have some Python modules written in Rust. It needs Rust nightly, because pyo3 requires Rust nightly. The Rust crate relies on libtensorflow. Unit tests for the Python module use Python and pytest. And we use our own build of libtensorflow (optimizations for AVX and FMA).
The dependencies of such projects are easy to specify in Nix. Moreover, it's easy to reproduce the environment across machines by pinning nixpkgs to a specific version.
I installed the Nix operating system on an old laptop earlier this year, and indeed it does solve a lot of development and devops problems. I retired this spring, so I only played with Nix out of curiosity, but if I still had an active career as a developer I would use it.
I agree that builtin tools suck for dependency management.
However, a lot of the issues that you mentioned (such as lock files and transitive dependencies) can be handled by pipenv, which should be the default package manager.
Python was a scripting language. All those problems are caused by people using it like something it isn't. Python has way outlived its usefulness and it's about time we move on to something better.
The virtualenv thing just galls me. Sure, pipenv aped rbenv - appropriately, I might add - but until they supplant virtualenv as the recommended way to have separate environments, I'll pass.
.NET's solution to this was the project file, a configuration file that lists the compiler version, framework version, and dependencies (now including NuGet packages and their versions).
> The biggest issue, in my opinion, is dependency management. Python has a horrible dependency management system, from top to bottom.
I agree, although a lot of it has to do with there being so much misinformation around the web, with many articles recommending bad solutions. This is because Python went through many packaging solutions. IMO the setuptools one is the most common and available by default. It has a weakness, though: it started with people writing a setup.py file and defining all parameters there. Because setup.py is actually a Python program, it encourages you to write it as a program, and that creates issues. setuptools has, for a while now, had a declarative way to define packages using a setup.cfg file; you should use that, and your setup.py should contain nothing more than a call to setup().
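A minimal declarative setup looks something like this (values illustrative):

    # setup.py
    from setuptools import setup
    setup()

    # setup.cfg
    [metadata]
    name = myapp
    version = 0.1

    [options]
    install_requires =
        requests>=2.20,<3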
> Why do I need to make a "virtual environment" to have separate dependencies, and then source it in my shell?
Because chances are that your application A uses different versions than application B. Yes, this could be solved by allowing Python to keep multiple versions of the same package, but if virtualenv is bothering you, you would want to count on the system package manager to take care of that, and rpm and deb don't offer this functionality by default. So you would once again have to use some kind of virtualenv-like environment that's disconnected from the system packages.
> Why do I need to manually add version numbers to a file?
You don't have to; this is one of the things there's a lot of misinformation about. You should create setup.py/setup.cfg and declare your immediate dependencies there; you can optionally provide version _ranges_ that are acceptable.
I highly recommend installing pip-tools and using pip-compile to generate requirements.txt. That file then works like a lock file, essentially pinning the latest versions that satisfy the restrictions in setup.cfg.
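i.e. something like:

    pip-compile --output-file requirements.txt setup.py   # pins everything the setup.cfg ranges allow
    pip install -r requirements.txt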
> Why isn't there any builtin way to automatically define a lock file? (Currently, most Python projects just don't specify indirect dependency versions at all; many Python developers probably don't even realize this is an issue!)
Because Python is old (it's older than Java), lock files simply weren't a thing when its packaging tools were designed.
> Why can't I parallelize dependency installation?
Not sure I understand this one. yum, apt-get, etc. don't parallelize either, because it's prone to errors? TBH I never thought of this as an issue, because Python packages are relatively small and install quickly. The longest part was always downloading dependencies, but caching solves that.
> Why isn't there a builtin way to create a redistributable executable with all my dependencies?
Some people are claiming that Python has a kitchen sink and that this made it more complex; you're claiming it should have even more things built in. I don't see a problem: there are several solutions to package it as an executable. Also, it is a difficult problem to solve, because Python works on almost all platforms, including Windows and OS X.
> Why do I need to have fresh copies of my dependencies, even if they are the same versions, in each virtual environment?
You don't; you can install your dependencies in the system directory and configure virtualenv to see those packages as well, though I prefer to have things completely isolated from the system.
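That's the --system-site-packages switch:

    python3 -m venv --system-site-packages .venv   # this venv can also see the system's packages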
> There is so much chaos that I've seen very few projects with actually reproducible builds. Most people just cross their fingers, hope dependencies don't change, and "deal with" the horrible kludge that is a virtual environment.
Not sure what to say; it works predictably for me, and I actually really like virtualenv.
> We need official support for a modern package management system, from the Python org itself. Third party solutions don't cut it, because they just end up being incompatible with each other.
setuptools with declarative setup.cfg is IMO very close there.
> Example: if the Python interpreter knew just a little bit about dependencies, it could pull in the correct version from a global cache - no need to reinstall the same module over and over again, just use the shared copy. Imagine how many CPU cycles would be saved. No more need for special wrapper tools like "tox".
There is a global cache already, and pip utilizes it even within a virtualenv. I actually never needed to use tox myself. I think most of your problems come from there being a lot of bad information about how to package a Python app. Sadly, even the page from the PyPA belongs there.
It's not that bad if you use the right tools. The two main options are an all-in-one solution like poetry or pipenv, and an ensemble of tools like pyenv, virtualenvwrapper, versioneer and pip-tools. I prefer the latter because it feels more like the Unix way.
Why should Python have some "official" method to do this? Flexibility is a strength, not a weakness. Nobody ever suggests that C should have some official package manager. Instead the developers build a build system for their project. After a while every project seems to get its own unique requirements so trying to use a cookie-cutter system seems pointless.
They almost rescinded that in PEP 357 and ultimately did so in PEPs 468/469. PEP -23 updated the standard library to match, but not until 3.9.1.a. Until then, beware the various blog posts you'll find on Google talking about this concept on 1.x.
It was always an ideal to aim for rather than a strict rule. I don't see any of those PEPs changing the balance enough to claim the principle was dead (maybe a bit injured...)
In general, leaving such things open leads to a proliferation of different 'solutions', as multiple people try to solve the issue... leading to the additional confusion and cognitive load of trying to find a single solution which suits your use-case and works, when often none of them are perfect.
Sometimes a 'benevolent dictator' single approach has benefits...
The virtual env is really the thing that has stopped me from using python. It's a lovely language but the tooling around it needs a lot of help. I'm sure it will get there though. I mean if the js folks can do it, certainly python can.
If it is running on its own computer, for shell scripting.
If it is trying to process ML data, or running in some cloud provider, or deployed in some IoT device supposed to run for years without maintenance, then maybe yes.
Right, but when you're at that point in performance considerations, you already have a team of specialists working on performance from multiple angles.
And precisely: for ML code, all the Python libraries run extremely optimized, natively compiled code. The language overhead is a minimal consideration. And for business domain code, language performance is rarely the limiting factor.
If your team size is 1, then you're not doing yourself any favors thinking about performance beyond basic usability, when dev productivity is a far higher priority.
Thanks, that definitely looks like useful data as a starting point.
1. What is the impact of a continuous long-running process? That is, if instead of trying to calculate a result and then shut down, I'm running a web server 24/7, what's the impact of an interpreted language over a compiled language? (Assume requests are few and I'm happy with performance with either.) This models not just web servers but also things like data science workloads, where one wants to conduct as much research as possible, so a faster language will just encourage a researcher to submit more jobs.
2. According to https://www.epa.gov/energy/greenhouse-gases-equivalencies-ca... , 1 megawatt-hour of fossil fuels is 1559 pounds of carbon dioxide. The site you link calculates an excess of 2245 joules for running their test programs, which is approximately 0.001 pounds of carbon dioxide, or roughly what a human exhales in half a minute. (Put another way, if using the interpreted language saved even one minute of developer time, it was a net win for the carbon emissions of the program.)
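(Checking that arithmetic:)

    joules = 2245
    mwh = joules / 3.6e9       # 1 MWh = 3.6e9 joules
    lbs_co2 = mwh * 1559       # EPA figure: 1559 lb CO2 per MWh
    print(lbs_co2)             # ~0.00097 lb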
> What is the impact of a continuous long-running process?
OK, so you're asking about the steady-state electricity consumption of a process that's idling? I would bet that it's still lower for a more energy-efficient language, but suppose purely for the sake of argument that they're both at parity, say 0. Now what happens when they both do one unit of work, e.g. one data science job? Suppose you're comparing C and Python. C is indexed at 1 by Table 4, and Python at 75.88. So even ignoring the idle baseline, the Python version is roughly 75 times more power-hungry than the C baseline. And this is for every job.
> a faster language will just encourage a researcher to submit more jobs.
Sure, that's a behavioural issue. It's not a technical issue so I can't give you a technical solution to that one. Wider roads will lead to more traffic over time. What people will need to realize is that if they're doing science, shooting jobs at the server and 'seeing what sticks' is not a great way to do it. Ideally they should put in place processes that require an experimental design–hypothesis, test criteria, acceptance/rejection level, etc.–to be able to run these kinds of jobs.
> if using the interpreted language saved even one minute of developer time, it was a net win for the carbon emissions of the program
I don't understand, what does a developer's time/carbon emission have to do with the runtime energy efficiency of a program? They are two different things.
> What people will need to realize is that if they're doing science, shooting jobs at the server and 'seeing what sticks' is not a great way to do it. Ideally they should put in place processes that require an experimental design–hypothesis, test criteria, acceptance/rejection level, etc.–to be able to run these kinds of jobs.
Sure, but they don't, and perhaps that's a much bigger issue than interpreted vs. compiled languages - either for research workloads or for commercial workloads. People start startups all the time that end up failing, traveling to attract investors, flying people out to interview them, keeping the lights on all night, heading to an air-conditioned home and getting some sleep as the sun is rising, etc. instead of working quietly at a 40-hour-a-week job. What's the emissions cost of that?
> I don't understand, what does a developer's time/carbon emission have to do with the runtime energy efficiency of a program? They are two different things.
This matters most obviously for research workloads. If the goal of your project is "Figure out whether this protein works in this way" or "Find the correlation between these two stocks" or "See which demographic responded to our ads most often," then the cost of that project (in any sense - time, money, energy emissions) is both the cost of developing the program you're going to run and actually running it. This is probably most obvious with time: it is absolutely not worth switching from an O(n^2) algorithm to an O(n) one if that shaves two hours off the execution time and it takes you three hours to write the better algorithm (assuming the code doesn't get reused, of course, but in many real-world scenarios, the better algorithm takes days or weeks and it shaves seconds or minutes off the execution time). Development time and runtime are two different things - for instance, you can't measure development time in big-O notation in a sensible way - but they're definitely both time.
Correct, and computers continue running. I'm referring to the carbon emissions of the development project itself. The faster the development is done, the sooner you can get on with developing other things.
It's a valid objection to the statement they replied to. Saving developer time does not equate to lower emissions, so it is incorrect to call it a "net win".
Sure, and trains burn fuel even when you aren't using them. But if we look at your carbon footprint, it doesn't seem wise to factor every single train on the planet into your specific account just because they don't all sit idle when you aren't using them.
When talking about the footprint of a company or a project, you need to restrict the calculations to the resources they actually use. So if a project uses tools to get a product out quicker, that means they've spent fewer human-hours, which have a CO2 cost associated with them. Then you can weigh the cost of that tool against the human resources, both in a financial sense and with respect to emissions.
Remember NOT to jump into Python for your new product if you don't know Python. If you are developing for a young startup and are under a time crunch, then stick to what you know.
If you do not have a language, or know Python a bit, then pick Python. Here are some of the reasons why I stick to Python (young startup / web APIs):
- OOP is not too strict (might give a headache to some folks)
- Mixins, lambda, decorators, comprehensions - Pythonic ways make me feel productive easily
- Create a data Model, drop into a shell, import and try things
- Can do that on a live server
- Do the same with Controllers, or anything else actually
- really nothing fancy needed
- Command line processing, SSH, OS integration, etc. have so many great libs
- A Python dict somewhat looks like JSON (this is purely accidental but useful)
- Debugger support, even in a free IDE like PyCharm Community Edition, is great
- Integration to a world of services is so easy, even ones you do not commonly see
- Documentation - many libs have consistent structure and that helps a LOT
- Really large community, perhaps only smaller than Java
- The even larger group of people using Python in all sorts of domains from Biotech to OS scripts
What I would like improved in the language would be an even longer list. Every language has a list like that, but when you are focused on being scrappy and building a product, yet making sure that software quality does not take a big hit, Python helps a lot.
> Remember NOT to jump into Python for your new product if you don't know Python. If you are developing for a young startup and are under a time crunch, then stick to what you know.
Are you saying this as a general maxim (don't try to learn a new tech under pressure) or because of characteristics specific to Python, that make it worse in such a situation than any other language/ecosystem?
I know it's only a one off anecdote but two developers in my team who previously didn't know python used it for a small time-critical project and it was a brilliant success.
The main point I am trying to make is that not all of us are language experts. I absolutely recognize the vital role that language experts play and that we need to solve these issues. But companies need to build in the meanwhile. Python is easier to learn, which is why it is such a popular language in universities.
I am not a high-IQ person who can grasp all the nitty-gritty underpinnings of a language. But should that stop me from building a product?
In the end, I need a language that is easy to pick up, is productive, and has its heart in the right place. A young Python programmer can scrape the web easily, or plug into Ansible, or do so many other things. If you know of another language that would make more practical sense and still be easy to pick up, I would switch.
Reading this got me thinking and I wonder if other people feel like me about this, so I'm going to share it. This is not serious, but not entirely unserious...
I try to be a good sport about it, but every time I write python I want to quit software engineering. It makes me angry how little it values my time. It does little for my soured disposition that folks then vehemently lecture me about the hours saved by future barely-trained developers who will ostensibly have to come and work with my code. Every moment working with python (and that infernal pep-8 linter insisting 80 characters is a sensible standard in 2019) increases my burnout by 100x.
I try to remind myself that we're trying to make the industry less exclusive and more welcoming to new developers and "old" isn't necessarily "good" (in fact, probably the opposite), but damn I just don't understand it.
It used to be that I could focus on other languages (Erlang, Nemerle, F#, Haskell, Ocaml, even C++) and sort of balm myself. But now, I can't even overcome the sinking feeling as I read the Julia statistics book that I'm going to be dragged back to Python kicking and screaming in the morning, so why even bother?
And frustratingly: it's one of the few languages with decent linear algebra libraries. And that means it's one of the few languages with good ML and statistics support. So it's very hard not to use it because when you want to cobble together something like a Bayesian model things like PyMC or Edward actually give you performance that's obnoxiously difficult to reproduce.
This is what the industry wants and evidently a lot of people are okay with it, but to me it's misery and I can't work out why people seem to like it so much.
I am about to hit a decade of python experience. I work with it daily, all my major codebases are written in it.
I hate it.
It has an ok object system which is possibly its only redeeming quality. I found Racket about 4 years ago, and any new project that I work on will be in Racket or CL.
I could go on at length about the happy path mentality of python and the apologists who are too ignorant of other people's use cases to acknowledge that maybe there might be some shortcomings.
The syntactic affordances are awful, and you can footgun yourself in stupidly simple ways when refactoring code between scopes. Python isn't really one language; it is more like four, one for each scope and type of assignment or argument passing, and all the syntactic sugar just makes it worse.
Not to mention that packaging is still braindead. I actually learned the Gentoo ebuild system because pip was so unspeakably awful the moment you stepped away from the happy path.
Blub blub blub, someone help I'm drowning in mediocrity.
I am about to hit two decades, with lots of pauses. I now use it professionally, but it was still my favorite language a decade ago (then I fell in love with Common Lisp and, lately, Haskell).
I think you need to look at it historically. Against other languages circa 2000, Python was a huge win. Easy to write (compared to C), consistent (compared to PHP), minimalist (compared to Perl), multi-paradigm and playing well with C (compared to Java), with a great standard library (compared to Lisp).
Today, the landscape is very different. Lots of languages took hints from and were influenced by Python (including Racket, I am sure). Functional languages have taken off. Type inference is a standard feature.
History is always changing. There is some potential for a next Python, but we don't know what it will look like yet. I suspect it will be functional, but I don't think it will be in the Lisp family. It probably won't happen until after Rust and/or Julia see a lot more adoption. Anyway, just like C, Python will be with us for decades to come, for better or worse.
Two decade Perl programmer who's been using a bunch of Python recently. I don't hate it, but I've not been getting any particularly complicated use-cases. I do miss Moose a lot, and I miss Bread::Board and I miss DBIx::Class, which SQLAlchemy does not make up for. The weird scoping didn't take too long to get right, although string interpolation still takes me too much thinking about.
What I am missing from Perl is library documentation and MetaCPAN. It seems that the go-to solution is that you're meant to create some kind of damn mini-site for any even vaguely complicated project you do using Sphinx, which seems bizarro land. Also the _requests_ documentation looks like it was written by Wes Anderson and wtf I hate it. Also I hate that all libraries have cute rather than descriptive names; yes there's some of that in Perl, but it feels like less. Bah humbug. Other than that it's fine.
This resonates with me. I miss working with Perl greatly (my current employer forbids me from writing anything in it), and having to deal with Python instead makes me miss it even more.
That doesn’t even work in all situations. What if the system requires development packages? What if it’s a different OS or architecture? Packaging is a nightmare.
That's why Docker is a good idea for packaging things up. You basically get exactly the combination of binaries, libraries, etc. you intend to run. Probably not a bad idea to use it for development as well. Or at least something like pyenv or whatever it is called these days.
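A minimal sketch of that approach (file names illustrative):

    FROM python:3.8-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt   # deps baked into the image, identical everywhere it runs
    COPY . .
    CMD ["python", "app.py"]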
They aren't too bad IMO. However, you need to set up a build environment/VM with your choice of your earliest-supported Linux+glibc. Build on old, run on new works well.
Linux backwards-compatibility is pretty good, in that a static binary should run just fine on newer systems. I've had far worse experiences with OpenBSD, where a build on an older version of the OS would never seem to run on a newer system.
Thanks for sharing; it certainly makes me feel better about my troubles with accepting Python. And "the happy path" might be a state you spend a /minority/ of your time in, depending on the project. So first you spend 10% of your time knocking out the 80% of easy features, but the tricky stuff can be more awful than it should be.
And packages. I thought I was just stupid, but thankfully there was more to it.
Quite welcome. I think you are on to something. The thought that comes to mind is "Python is a great language if you want to do exactly what everyone else is doing." Most of the love has nothing to do with the language itself, but rather the fact that people can make use of some library that a big engineering effort produced in another language entirely. Thus they conflate Python with access to that functionality. All of my code bases are pure python (in part because of the packaging issues), so maybe that is the difference, people aren't really Python users, they are numpy, pandas, django, sqlalchemy, or requests users. It would be interesting to compare the view of library maintainers vs just 'users.'
Agreed, this is it. When people think Python, they think of all the default use cases covered by the many libraries that let you "get things done" quickly with syntactic sugar. Alas, there's more to our work than the first few miles of getting things done. There's that scaling part too, and Python loses it there: no good threading, no enforced typing, bizarre scoping, half-baked module system, etc.
> Is it the language, or someone forcing you to use that linter and settings?
It's definitely both.
Preface in true internet style: these are just opinions and you may not share them. That's fine.
I really don't like Python as a language. I don't like its total lack of composability. I don't like its over-reliance on a very dated vision of OO. I don't like how its list comprehensions aren't generic. I don't like how it farms all its data structures out to C. I don't like how it uses cruddy one line lambdas and forces me to name things that are already named by the things I'm passing it to.
And also the linter just exacerbates these things, because the linter is just a crystallized slice of a kind of software engineering culture I really don't like.
Not sure why Python’s list comprehensions not being generic might be such a great downside but just wanted to add that I personally think that they are perfectly fine just the way they are. For the context, I’ve been professionally writing Python code for 14 years now. I understand it’s not the perfect language but for quite average programmers like myself it does an excellent job of getting out of the way and of letting you focus just on the data, because that’s what programming is (or should be), i.e. managing and processing data.
Like I said, it’s not the perfect language, it does not have Erlang’s pattern matching nor Lisp’s insistence on thinking about almost everything in a recursive manner, but you can at least mentally try to incorporate those two philosophies into Python once you know about them (and once you know about other similar such great programming concepts).
> I'm not really sure what your point is in criticizing "farming" to C.
If Python is so great how come it can't even express a decent data structure that it needs? I'll happily level the same criticism at Ruby if you like.
> You made a false claim.
Firstly: This is not High School Debate Club. There isn't some kind of point system. Language and expectations like this are not only counterproductive (in that they essentially turn every conversation into a series of verbal ripostes in which the goal is to be most right rather than to learn the most), they're also tedious and unwelcome.
> You referred to Python iteration as not generic,
I did not. I said list comprehensions weren't generic, and then I tried to explain my complaint. It may be that someone has done a legendary feat of going through and creating a bunch of mixins for some of the gaps in the list comprehension coverage such that you can find some way to override how non-determinism is expressed. If so, please point me to it.
> If Python is so great how come it can't even express a decent data structure that it needs?
Why is this a requirement of a "great" language? By not requiring a language to be self-hosting, you are adding more degrees of freedom to your language's design, so I could even see an argument that writing it in C is an improvement. I don't necessarily agree with that, but I don't see why cpython written in C implies it is a bad language. Maybe you could elucidate your thinking?
> you are adding more degrees of freedom to your language's design
Why do you think that? Languages/runtimes with high C integration and fairly exposed C bowels like Python and Ruby have, over time, turned out to be very hard to evolve compatibly.
Because it's a fact? With cpython you can develop things in Python if that suits you, or C if that suits you. You have more freedom to choose what fits your use case. I'm not saying this is necessarily good, but I don't think it's obviously bad. I'd like to hear from people who think it is.
> Languages/runtimes with high C integration and fairly exposed C bowels like Python and Ruby have, over time, turned out to be very hard to evolve compatibly.
That is true, but it's also arguably one of the reasons cpython became so popular in the first place. The ability to write C-extensions when appropriate has been very powerful. It's certainly caused issues, but I think if python didn't have exposed bowels it may never have become nearly as popular. What if numpy wasn't ever written? (This isn't to say that they couldn't have exposed a better C-api with fewer issues, but hindsight is 20/20...)
I guess I don't understand how 'the design gets stuck in amber' (which you seem to agree with) and 'gives you lots of design degrees of freedom' can be true at the same time.
It gives you flexibility in writing libraries while making it harder to design a new compatible runtime. That said, PyPy has achieved pretty good C extension support while making the language faster.
The claim was 'degrees of freedom in your language's design'. It's an odd one because the history of a bunch of similar languages has been exactly the opposite. Compare, say, JS to Ruby and Python. Even with the seemingly crippling burden of browser compatibility, Javascript has evolved out of its various design and implementation ruts a lot more gracefully than either Ruby or Python.
> Language and expectations like this are not only counterproductive ... they're also tedious and unwelcome.
Who gets to be the language police? I'm fine with "High School Debate" but "verbal ripostes in which the goal is to be most right rather than learn the most" is not an accurate description of that.
You can do this if you like. In the past, I'm guilty of it as well.
But I won't engage with someone who does this the same way, because they're not engaging me as a human. Unless, of course, they're already dehumanizing me (as occasionally happens on this website) and then I don't feel quite so bad about it.
> If Python is so great how come it can't even express a decent data structure that it needs?
I'm sure you know this is a deliberate design choice/tradeoff. It's arguably turned out to be a bit of a millstone for languages that eventually want to grow up to be general-purpose-ish, but that wasn't as obvious at the time.
> You made a false claim. You referred to Python iteration as not generic, when what you really meant is that Python lacks first class monad support.
Are you really that lacking in self-awareness that you responded to someone annoyed about programming culture by going full "debate with logic, facts and reason" mode?
Just a point about linters. They are bad in every language (because everyone has different opinions on what they consider beautiful code).
If you're burning out because the PRs you just opened are being rejected by the linter, you can probably just make the linter apply the modifications automatically in the pipeline.
We used to hate the linter checks in my current workplace because it was really boring to fix the issues. Now the CI simply fixes them, and nobody cares anymore.
I'm not the person you're responding to but what they say resonates with me as a Python developer.
I believe Python is quite possibly the best language for a few things
* Exploratory programming - such as what data scientists do
* Writing programs that will never grow beyond 150LOC, or roughly what fits on a screen + one page down
When I have those two constraints met I am almost always choosing Python.
Here are some problems I face on codebases as they scale up:
* Python conventions lead to what I consider bad code. Devs will often choose things like 'patch' over dependency injection, and I have seen on multiple occasions projects derided for providing DI based interfaces - "oh, why would you write code that looks like Java? This is Python".
There's a lot of death by a thousand cuts. Keyword args in functions are abused a lot, because they're "easy", ability to patch at runtime means it's often "easy" to just write what looks like a simple interface, but it's then harder to test. Inheritance is "easy" and often leads to complex code where behaviors are very, very non-local.
Dynamic types mean that people are often a little too clever with their data. I've seen APIs that return effectively `Union[NoneType, Item, List[Item]]` for absolutely no semantic reason, without type annotations, meaning that if you assumed a list or None came back you were missing the single-Item case (see the sketch at the end of this comment). The implementation actually internally always got a list back from the underlying query but decided to special-case the single item... why? I see this sort of thing a fair bit, and other languages punish you for it (by forcing you to express the more complex type).
* I find myself leveraging threads more and more these days. I did this often with C++, and all the time in my Rust code. Python punishes you for threading. The GIL makes things feel atomic when they aren't, reduces you to concurrency without parallelism, all while paying the cost of an OS thread. Threading primitives are also quite weak, imo, and multiprocessing is a dead end.
And, really, Python is anti-optimization, which is a good and bad thing, but it's a bit extreme.
* Implicit exceptions everywhere. I see blanket `except Exception` handlers defensively placed in every Python codebase because you never know when a piece of code will fail. I see a lot of reliance on 'retry decorators' that just catch everything because you never know if an error is transient or persistent.
The common "don't use exceptions for control flow" is broken right off the bat with StopIteration. I just think error handling in Python is god awful, really.
* Mypy is awesome, but feels like a bit of a hack. The type system is great, and in theory it would solve many problems, but coverage is quite awful in open source projects and the type errors I get are useless. I actually only use mypy to make my IDE behave, I don't run it myself, because my types are actually technically unsound and the errors are too painful to work with (and honestly the type system is quite complex and hard to work with, not to mention the bugs).
There are lots of other smaller issues, such as my distaste for pip, py2/3 breakage, lack of RAII, the insane reliance on C all over the place, I think syntax is bad in a lot of places (lambdas, whitespace in general), etc but these are probably my main sticking points with using the language for large projects.
Again, Python is actually my favorite language for specific tasks, but I really think it's best optimized for small codebases.
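(To make the `Union` gripe above concrete, here's a minimal sketch of the anti-pattern and the boring alternative a stricter language would force. All names here are hypothetical, not from any real codebase.)

    from typing import List, Union

    Item = dict  # hypothetical item type, just for the sketch

    def run_query(query: str) -> List[Item]:
        # hypothetical stand-in for the underlying query; always returns a list
        return [{"q": query}]

    # The anti-pattern: None for no rows, a bare Item for one row, a list otherwise.
    # Callers who expect "list or None" silently mishandle the single-item case.
    def fetch_items_v1(query: str) -> Union[None, Item, List[Item]]:
        rows = run_query(query)
        if not rows:
            return None
        if len(rows) == 1:
            return rows[0]  # the surprising special case
        return rows

    # The boring alternative: just return the list the query already gave us.
    def fetch_items_v2(query: str) -> List[Item]:
        return run_query(query)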
The common "don't use exceptions for control flow" is broken right off the bat with StopIteration. I just think error handling in Python is god awful, really.
That's not an idiom Python has ever subscribed to, though; it's always subscribed to "easier to ask forgiveness than permission" (EAFP). I think there are strengths to both viewpoints, but honestly I think "don't use exceptions for control flow" is more of a convention than a "truth".
> honestly I think "don't use exceptions for control flow" is more of a convention than a "truth"
In absolute terms, or when coding on paper, perhaps. But in the real world, if performance even remotely matters, it's as close to a universal rule as you get: languages end up embracing it or turning into creaking hulks of slow code, given how exceptions work in practice at the level of CPU execution units.
C#/IL/.NET embraced exceptions heavily at around the same time (start of the 2000s), but in time developers (in and out of Microsoft) learned the hard way that it doesn't scale. With .NET Core, exceptions for flow control are completely verboten, and APIs have been introduced to provide alternatives where missing. Exceptions should be so rare that if you chart all handled exceptions, you shouldn't see any, and you would thoroughly explore why one or more pop up when a system or dependency hasn't exploded.
If you care that much about performance you probably shouldn't use Python in the first place. Python deliberately prioritizes ease of development over performance.
I'd be interested to learn about how exceptions are typically used in OCaml for non-local control flow.
Exceptions have two core properties: (1) non-local jump (carrying a value) and (2) dynamic binding of place to jump to. Contrast this with e.g. break / continue in loops where (2) does not hold. If most use cases of performant OCaml exceptions were not making use of (2) that would be an interesting insight for programming language design.
No data as such. But here's the first example in the "batteries included" list library:
    let modify_opt a f l =
      let rec aux p = function
        | [] ->
          (match f None with
           | None -> raise Exit
           | Some v -> rev ((a,v)::p))
        | (a',b)::t when a' = a ->
          (match f (Some b) with
           | None -> rev_append p t
           | Some b' -> rev_append ((a,b')::p) t)
        | p'::t ->
          aux (p'::p) t
      in
      try aux [] l with Exit -> l
Here, the Exit exception is being raised as an optimisation: if no modification happens, then we return the original list, saving an unnecessary reverse. The try is the only place that the Exit exception is caught, so the jump location is static, much like a break.
That's, as you say, a statically scoped fast exit. One does not need the full power of exceptions for this (exceptions' dynamic binding comes with a cost). If exceptions are widely used for this purpose, one might consider adding a nicely structured goto to the language. Something like linear delimited continuations?
Sorry, I misunderstood what you were after. OCaml exceptions are used more generally, and often make use of (2).
For instance, the basic stream/iterator in OCaml is Enum, which always uses an exception to signal when the stream has been exhausted, rather than providing a "has_next" predicate.
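(For what it's worth, Python's iterator protocol signals exhaustion the same way. A minimal sketch of roughly what a for loop does under the hood:)

    it = iter([1, 2, 3])
    while True:
        try:
            item = next(it)   # raises StopIteration when the iterator is exhausted
        except StopIteration:
            break             # exhaustion is an exception, not a has_next() check
        print(item)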
That is interesting. One could argue that since fast exceptions contain non-local jumps with a static target, it is enough to supply only the former, so as to simplify the language.
Interestingly, .NET has very performant exception handling as well. Not sure if something changed internally with Core, but using exceptions in place of return error codes was incredibly common within the .NET ecosystem.
.NET has always had slow exception handling because it tied/ties in Windows’ Structured Exception Handling (SEH); that’s rather slow but provides e.g. detailed stack traces even for mixed-mode callstacks.
Having ported some decently large codebases from OCaml to F#, the heavy use of exceptions in OCaml (where exceptions are very lightweight by design) had to be changed to normal control flow with monads to achieve good performance in F# — specifically because of .NET’s slow exception handling.
> The common "don't use exceptions for control flow" is broken right off the bat with StopIteration. I just think error handling in Python is god awful, really.
Python explicitly does not agree with this...
The same for a lot of your other complaints. It sounds like you are trying to write some other language in Python. Similarly if someone in a team using Java tried writing Python in Java they would complain a lot and end up with ugly hard to work with Java.
Python's exception system isn't bad. It's way ahead of Java's or C++ exceptions. The gyrations people go through in Go or Rust to handle errors are far worse. The exception hierarchy lets you capture an exception further up the tree to subsume all the more detailed exceptions. (Although the new exception hierarchy in 3.x is worse than the old one. In 2.x, "EnvironmentError" was the parent of all exceptions which represented problems outside the program, including the HTTP errors.[1] That seems to have been lost in 3.x).
I think StopIteration is being removed. That was a bad idea.
> If a StopIteration is about to bubble out of a generator frame, it is replaced with RuntimeError, which causes the next() call (which invoked the generator) to fail, passing that exception out. From then on it's just like any old exception.
OK, I'm willing to contend that that is the case. It doesn't have much to do with my overall issue with error handling.
> The same for a lot of your other complaints. It sounds like you are trying to write some other language in Python. Similarly if someone in a team using Java tried writing Python in Java they would complain a lot and end up with ugly hard to work with Java.
This is a very loose criticism of my post, I don't know how to respond to this. I've written Python for years, I think I gave it due credit for what it's good at.
I am not trying to write some other language with Python, I just think Python is not a very good language compared to others given a lot of fairly typical constraints.
Why? Of course it's entirely possible to build programs of a larger size. I'm just saying that after a certain point you start hitting walls. There are tons of ways to hedge against this - leveraging mypy early on, spending more money (on eng, infra, ops, etc), architecting around the issues, diligence, etc.
It would be very silly for me to say that you can't build large systems in Python, I've worked on plenty that are much larger myself.
Saying a language is possibly the best for two important use cases (exploration, small programs) is quite a statement, in my opinion. I don't think I believe that there's a language that excels so well in, say, building web services.
What's the convincing argument for defending this? I've always heard it's just the way Python is. In my experience, using exceptions for anything other than exceptional situations and errors seems messy.
I suppose that comes back to use cases. I am one of the 'getting things done crowd', rather than a computer scientist. A lot of my work had been in things like EDI, or similar integration code.
Imagine you are working through a series of EDI files and trying to post them to a badly documented Rest(ish) API of some enterprise system. If the file is bad (for whatever reason) you need to log the exception and put the file in a bad directory.
Python's use of exceptions for control flow is perfect for this. If a file doesn't load for whatever undocumented reason, fall back to logging and handle the fallout.
"Oh I see a pattern, this API doesn't like address lines over 40 characters, I will add a custom exception to make the logging clearer, and go and try and see if I can fix this upstream. If not I will have to write some validation"
It is this dirty world of taking some other system's dirty data and posting it to some other system's dirty API where I find Python rules.
I have never worked on a large application where I owned the data end-to-end. Maybe there are better choices than Python for that?
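(A minimal sketch of that kind of workflow. All names here, like post_edi_file and AddressTooLongError, are hypothetical, with the actual parsing/posting stubbed out.)

    import logging
    import shutil
    from pathlib import Path

    class AddressTooLongError(Exception):
        """Hypothetical custom exception, added once the pattern was spotted."""

    def post_edi_file(path: Path) -> None:
        ...  # parse the file and post it to the flaky API (stub)

    def process(incoming: Path, bad_dir: Path) -> None:
        for path in incoming.glob("*.edi"):
            try:
                post_edi_file(path)
            except AddressTooLongError as exc:
                # known upstream issue: clearer log line, same fallback
                logging.warning("address too long in %s: %s", path, exc)
                shutil.move(str(path), str(bad_dir / path.name))
            except Exception:
                # undocumented failure: log it and move the file to the bad directory
                logging.exception("could not post %s", path)
                shutil.move(str(path), str(bad_dir / path.name))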
My feelings exactly. I write a lot of kinda scientific code that is not really computational but takes one or more horribly messy datasets and combines them to do some analysis. I now routinely use exceptions to filter out bad/complex data-points to deal with later.
Data scientist here. It’s great because non programmers find it easy and we have so so many tools available.
That said, the amount of cpu cycles wasted even with best of breed pandas is insane, and when you want to do something “pythonic” on big data it all falls down. When you want to deploy that model you are also going to have problems.
That said, it’s still the best tool for the job, but it’s certainly not because of the creators of Python.
My opinion on static type systems is that, unless you're actually doing a more type-driven development style and leveraging the type system, the single greatest benefit is that IDEs will autocomplete for you, give you 'jump to', etc.
Python without type annotations can be painful in an IDE - it chokes a lot on those features.
Since I don't take a very type driven approach to Python (it would be too slow since I'd have to figure out 3rd party library types and shit like that) I just write annotations in places where I personally know the type. Mypy complains about this for various reasons - probably because my annotations are not technically correct, because I'm not a pro at generics in mypy and working out inheritance, as well as general mypy issues like poor support for class methods and that sort of thing.
But I ignore all of those errors because the IDE can still work things out.
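(Concretely, the approach looks something like this hypothetical sketch: annotate the boundaries you're sure of, so the IDE can autocomplete downstream, and shrug off the mypy noise elsewhere.)

    from typing import Dict, Optional

    def get_user_email(users: Dict[str, dict], user_id: str) -> Optional[str]:
        # The annotations are mostly for the IDE's benefit: everything derived
        # from the return value now autocompletes as a str (after a None check).
        user = users.get(user_id)
        return user.get("email") if user else None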
What you’re describing is what happens when you have a developer who’s bad at their job.
I see the point that python invites these anti patterns.
But on the other hand, a software developer who returns `Union[List[Item], Item]` in Python is probably also going to mess up a Java program sooner or later.
I don't think you'd ever write that in Java because Java would punish you for it. You can always blame the programmer, but I'm always going to blame the language for making 'bad' easy.
The overhead obviously is still there, but the interface is a drop-in replacement for ThreadPoolExecutor, which looks basically just like a multithreaded/async/future-based collect.
The interface may have improved, I haven't paid attention to mp in a long while. If I'm reaching for mp I generally just accept that I'm using the wrong language.
I have a hard time reasoning about multiprocessing code, for various reasons. It's bitten me in a lot of weird, distinct ways, not really worth listing here.
To communicate across mp you use pickle, which is a pretty significant performance impact relative to something like threading. There's also the issue of copy on write memory + reference counting interacting poorly.
I suspect this is the root cause of the difference in our experiences. My uses of mp have usually been somewhat more "embarrassingly parallel", for instance having a list of data elements which need to be processed with the same algorithm. For this use case, the usage of mp is pretty simple, often only a `pool.map(f, xs)`.
I can imagine that pickle might have tricky edge cases and/or be slow.
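(For what it's worth, the embarrassingly parallel case really is about that short. A minimal sketch:)

    from multiprocessing import Pool

    def f(x: int) -> int:
        # must be defined at module top level so it can be pickled for the workers
        return x * x

    if __name__ == "__main__":  # required where the start method is spawn, e.g. Windows
        with Pool() as pool:
            results = pool.map(f, range(100))  # args and results cross via pickle
        print(results[:5])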
1) No pattern matching. In 2019, this is just unacceptable. Pattern matching is so fundamental to good FP style that without it, it becomes almost impossible to write good and clean code.
2) Statement vs expression distinction. There’s no need for this and it just crippled the language. Why can’t I use a conditional inside an assignment? Why can’t I nest conditions inside other functions? It makes no sense and is stupid and inelegant.
3) Related to 2), why do I need to use a return statement? Just return the last expression like many other (better) languages
4) Bad and limited data structures. In Python all you get are arrays, hashmaps, and sets. And only sets have an immutable version. This is unacceptable. Python claims to be "batteries included" but if you look at the Racket standard library it has like 20x more stuff and it's all 100x better designed. In Scala's standard library you get support for Lists, Stacks, Queues, TreeMaps, TreeSets, LinearMaps, LinearSets, Vectors, ListBuffers, etc.
5) Embarrassing performance. Python is so slow it’s shameful. I wrote a compiler and some interpreters in college and I honestly think I could create a similar language 10x faster than Python. Sometimes you need to trade off performance and power, but that’s not even the case with Python: it’s an order of magnitude slower than other interpreted languages (like Racket).
6) Missing or inadequate FP constructs. Why are lambdas different from a normal function? Why are they so crippled? Why do they have a different conditional syntax? The only sort of FP stuff Python has is reduce/filter/map. What about flatMap, scanr, scanl, foldl, foldr? Or why doesn't Python have flatten? All of these are very useful and common operations and Python just makes everyone write the same code over and over again. (A sketch of the workarounds follows after this list.)
7) No monads. Monads can be used for exceptions, futures, lists, and more. Having to manually write some try catch thing is unseemly and worse than the monadic Haskell or Scala approach.
8) No threads and no real single process concurrency. Despite Python being used a lot, no one really seems to care about it. How can such a problem not be solved after over 20 years? It’s shameful and makes me wonder about the skill of Guido. There’s no reason why Python couldn’t have turned into something beautiful like Racket, but instead it has been held back by this grumpy old guy who is stuck in the past.
9) Others might not have a problem with this, but I detest Python's anything-can-happen dynamic typing. It makes reasoning about code difficult and it makes editor introspection almost impossible. If I can't know the exact type of every variable and the methods attached to it, it hampers my thinking a lot. I use Python for data science and if I could just have a language that was compiled and had static typing I would be 3x as productive.
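(On point 6: the operations are expressible, you just reassemble them from itertools/functools every time. A minimal sketch:)

    from functools import reduce
    from itertools import chain

    nested = [[1, 2], [3], [4, 5]]

    # "flatten" (one level): not a builtin, but a stock itertools recipe
    flat = list(chain.from_iterable(nested))            # [1, 2, 3, 4, 5]

    # "flatMap": map, then flatten the results
    flat_mapped = list(chain.from_iterable([x, x * 10] for x in [1, 2, 3]))
    # [1, 10, 2, 20, 3, 30]

    # foldl is spelled reduce, exiled to functools since Python 3
    total = reduce(lambda acc, x: acc + x, flat, 0)     # 15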
Let me conclude by saying there currently is one good reason to use Python: if the domain is ML/DL/quant/data science, Python is still the undisputed king. The libraries for Python are world class: scipy, sklearn, pandas, cvxpy, pytorch, Keras, etc.
But Julia is catching up very fast and the people I have talked to are getting ready to ditch Python in 2-3 years for Julia. I don’t think I’ve encountered anyone who didn’t prefer Julia to Python.
It seems you just want a functional language. If so, why are you using Python, as there are much better alternatives like you mentioned, if not in the data science space? I use Python only for data science and generally Elixir, Racket, Haskell etc for other use cases.
> and that infernal pep-8 linter insisting 80 characters is a sensible standard in 2019
I don't know why 80 characters is a problem. I don't use the linter but I enforce this rule religiously with a couple of exceptions (long strings comes to mind). It forces me to think heavily about the structure of the code I'm writing. If I'm nesting so deeply, something has gone wrong. If I've got a ton of chained methods or really lengthy variables, it forces me to rethink them.
This also has the advantage of being able to put 4 files next to each other with full visibility. Vertical space is infinite, horizontal space isn't. It's probably a good idea to use it.
It's also awesome if you have to do code reviews on a laptop or don't have a massive screen available.
That said, we usually just go with autoformatting via black, which is 120 by default. No more hassle manually formatting code to be pep8-compliant. Just have black run as a commit hook, which is super easy via pre-commit [0]. And you can run the pre-commit checks during CI to catch situations where somebody forgot to install the hooks or discovered `--no-verify`.
Can't really imagine developing Python without Black any more.
I haven't got round to trying Black, but according to the project's README[0], the default is 88. Personally I think 79 is fine, but I can cope with up to about 100. Above that and you risk some really crappy code in my opinion.
EDIT: Sounds like the Black author agrees. "You can also increase it, but remember that people with sight disabilities find it harder to work with line lengths exceeding 100 characters. It also adversely affects side-by-side diff review on typical screen resolutions. Long lines also make it harder to present code neatly in documentation or talk slides."
While initially resistant I've come around on Black for our team and a failed Black check will now make a CI build fail for all our projects.
We're still using the community edition of SonarQube [0] for inspection but Black finally did away with the constant bikeshedding over formatting minutia, seems like it's saving us tons of time.
All I know is that with 80-char width I can have 2 files side-by-side on a 15" MBP along with the dir-tree on the left in an editor like PyCharm or VSCode and fully see both files without wrapping. It helps my productivity immensely.
Same deal when it comes to reviewing PRs in GitHub. Wrapping just interrupts flow for me.
I feel the complete opposite. I really enjoy working with python over any other language. R does linear models and time series better and matlab has its charm, but overall I prefer python. Python is so easy to read and quick to program in. I am so glad I am not in the Java/C++ world anymore, but I know people in different roles have to deal with different issues.
> I really enjoy working with python over any other language.
I assume you mean, "over any other language I have tried" ?
As someone with a mathematical background myself, I am always surprised at how many data scientists and quants are ignoring more mathematically principled languages like F#, OCaml and Haskell.
> What does it mean? Have you done it in any of those language?
I did. I've been doing image processing recently and use OCaml for prototyping. I tried Python (I'd used it a lot for that, long ago) and failed; it felt too awkward. I've described my experience here [1]
If you have no experience whatsoever with ML family [2], and doing all the stuff in python, you'll most likely be much more productive with python of course.
But I find ML-like languages way more pleasant, and I'm far more productive with ML and libraries like owl [3], which are more fundamental and don't have fancy stuff, than with Python and a fancy lib like numpy/scipy.
Also Julia could be a good choice hitting a sweet spot between fancy libraries and fancy language.
Right now I’m experimenting with a pretty complicated model (60+ layers of multiple types), and I plan to train it on several hundred GB of data, using 8-16 node cluster (4 GPUs per node). Does Owl have a well tested and well documented autograd library with distributed GPU support (e.g. Horovod)? With a good choice of optimizers, regularizers, normalizers, etc, so I can focus on my model and not on debugging the plumbing or implementing basic ops from scratch. And last, but not least, it must be as fast as TF/Pytorch.
If the answer is “no”, then it does not matter whether I’m an OCaml expert, because I’m still going be more productive with Python.
p.s. Julia is nice though, hopefully it will keep growing.
I feel what you're saying is that, regardless of how subpar a language is compared to the alternatives, as long as it has community-built libraries that solve your specific problems, you're more productive using it than anything else.
Which is of course a fair point. A language by itself is probably not even in the top 3 considerations when choosing new tech. Stuff like runtime, ecosystem and the amount of available developers would probably be more important in most cases.
> A language by itself is probably not even in the top 3 considerations when choosing new tech. Stuff like runtime, ecosystem and the amount of available developers would probably be more important in most cases.
Totally depends on the domain. In serious mission-critical software you won't use libraries, but will use the language.
Yeah, I don't disagree. But even there you would have similar other considerations besides the language. Most still end up with C/C++ there even though there are alternatives like Crystal and Nim; you just don't find developers who know them easily, nor do you have much ecosystem support.
> Most still end up with C/C++ there even though there are alternatives like Crystal and Nim
Because C++ and C are significantly better than Nim and Crystal.
There are also Ada and SPARK in aerospace and other very critical stuff.
> just don't find developers who know them easily
We don't look for OCaml/Ada developers, we hire programmers, and they program OCaml and Ada. It's not a big deal for a good programmer to learn a language, especially while programming side by side with seasoned programmers.
In my 6 years with Python, the only dissatisfaction with the language I felt was from parallel programming. I switched to Python from C, and at the time, I missed C transparency and control over the machine, but that was compensated by the Python conciseness and convenience. Then I had to dig into C++ and I didn't like it at all. Then I played with CUDA and OpenMP, and Cilk+, but I wished all that would be natively available in a single, universal language. Then I started using Theano, then Tensorflow, and now I'm using Pytorch, and am more or less happy with it. It suits my needs well. If something else emerges with a clear advantage, I'll switch to it, but I'm not seeing it yet.
As a bonus, it IS Python (numpy) in the background mixed with Scala. So you can use each language where they make the most sense - Python for the maths number crunching and Scala for the business logic and the architecture.
I think Spark also has .net bindings (so you can also tick F# on that list...).
As much as I love the languages you mentioned: I think it's a major weakness of them that they don't have the linear algebra libraries integrated such that you can do this the same way Python does.
For those unaware: Haskell has a REPL (ghci), and you can make files more script-like with the (currently most popular) build tool stack[0] if you include:
It's language vs libraries. If you have a library that has a function
    get_the_shit_done_quick()
then you don't care much about the language.
When you don't have such a function, you need an expressive language to write it (and the bulk of Python libs are not written in Python, though mostly for performance reasons).
So it's all about finding a sweet spot between fancy libraries, which do the shit for you, and a fancy language, which lets you express things absent from the libraries.
This sweet spot differs from domain to domain, from user to user. Even in numerical stuff someone could have a requirement for a better language, although this domain is indeed well defined enough to have plenty of fancy libraries.
Language vs libraries isn't just about an expressive language to build in when you don't have a library. The likelihood of a library's availability also depends on the barrier to entry. An amazing language that isn't usable by biologists won't have many libraries that solve biologist's problems.
To your original point of being "surprised at how many data scientists and quants are ignoring more mathematically principled languages like F#, OCaml and Haskell," I'd much rather use one of those languages, but I'd have to build the foundations myself. Today, they aren't the right tool for the job. They don't have the libraries I need, which means I don't build further libraries for them, making other people less likely to build on them, so they aren't the right tool for the job tomorrow either. I'd say it's a network effects thing primarily.
Well, yeah compared to R and matlab, I am willing to believe Python excels, but the person you are replying to is probably not doing data science, so he has options besides the 3 just mentioned.
Re. linting, I'd highly recommend Black with whatever line length you want - it'll reliably reformat your code, and once you lose the urge to reformat while typing it's fantastic. It's like deleting a bunch of useless mental code. And the code reviews are even better: include a linting step (ideally with isort) in CI and you can avoid 95% of formatting comments.
100% this. I just switched to using Black recently and not having to ever fix a lint issue again has been life-changing. Use Black with pre-commit (https://pre-commit.com) and never look back.
Over-sensitive individuals are hard to please and often unhappy. That's more of a personality flaw than a flaw with the current state of software engineering.
If there's one thing wrong with our profession, it's the lack of ethics and accreditation - we're essentially letting random people build critical infrastructure and utilities.
We don't have a tooling problem, in fact we have too many tools.
I see so many people (especially on HN) fixating on tools, dissecting programming languages, editors and libraries into their most minute parts and then inevitably finding faults in them, disavowing them and swearing that other tools X, Y and Z are purer and worthier.
If you want to stop hating software and improve your burn out, stop caring about irrelevant stuff.
Is that supposed to help someone? I fail to see how telling someone "just ignore the stuff that irks you that you have to spend 40+ hours a week dealing with. You are overly sensitive and your concerns are irrelevant" helps anyone. Even if it was true, it was delivered in such a ham-fisted manner that I can't imagine anyone taking it to heart.
I sometimes have a tendency to focus only on the negative aspects of some things, while ignoring that all in all, those things are fine. I don't think I'm alone in that, certainly not in our line of work.
A call to "snap out of it" seems that it can help in such situations. Python is not a programming language that should make people burn out or angry. Very few languages should be capable of that, so I think this issue goes deeper than just flawed tools.
I find that the only way not to go nuts in this profession is to ignore most of it and most of it is really not relevant to building good software. There are just too many tools and always searching for the perfect tool is a recipe for being unhappy.
>If there's one thing wrong with our profession, it's the lack of ethics and accreditation - we're essentially letting random people build critical infrastructure and utilities.
As someone with a P.E. license who spent hundreds of hours studying and had to sit for that excruciating 8-hour exam twice, I don't think even 5% of the software developers in the US could pass an equivalent exam. Granted, I think the sector I took it in has a harder test than some, but it is a weed-out test for someone who already took 4 years of engineering school.
Some takeaways from this:
1.) I learned a little bit from studying, but overall even though it was hard, I would've learned a lot more by getting a Master's degree.
2.) The test isn't good at determining if you're good at your job or honestly even minimally competent in your area. For example, even in a specialized field (power systems engineering), there are thousands of diverse jobs (power generation, distribution, electrical contracting, power transmission operations, power transmission planning....etc etc) so the test only had a few questions on my particular area.
3.) There are a lot of amazingly smart people working in software, but the skill range seems to be bigger than in traditional engineering fields, where most engineers are fairly similar in skillset (there are some outliers) as they all generally have to pass the same courses if their university is accredited (talking about the USA here). In the software world, you have software engineers and computer science doctorates mixed with someone with very little training who is trying to wing it. That means the dataset has a far greater range of skillsets. One employee might have had a class on compilers while another just learned what a for-loop is. In engineering, we generally all show up to the first day of work with the same building blocks (thermo, statics, dynamics, circuits, differential equations, stats, calculus, basic programming...etc). The only difference between me as a senior engineer and a new hire is 9 years of experience, knowledge of the business and tools, and the ability to get things done without oversight. It makes a big difference, but I wouldn't expect them to be lacking any of the tools or training that I have picked up (ok...maybe databases).
I'm struggling a bit to convey the overall message that software engineering seems a bit different and licensing would therefore need to be different if done. Perhaps you could have licensing for individual subjects? For example, you could pass a basic test on relational databases where you prove you can do basic operations such as select, where clauses, joins, updates, exports...etc. Then you'd have another to prove you were minimally competent in Java? Would that be of any value to an employer? I don't know. I'm guessing someone already does this for Oracle and Java too.
So I am studying for the FE (I need a lot of math before taking it is realistic), mostly 'cause it gives me this broad feel for things engineers all know. (I will take the 'other disciplines' exam, mostly because I want this to be as broad as possible; being broad but shallow makes it a lot easier for me, and the breadth is an advantage in other ways too.)
I personally find tests to be way easier than school, and the schools with reputations that are worth something are... pretty difficult for people like me (who weren't on the college track in high school) to get into. (and there is something of an inverse correlation between the prestige of a school and how flexible they are about scheduling around work; especially for undergrad)
From what I've seen of the test, it does provide some foundational ideas of what engineering is about. Like, it goes a lot into figuring out when things will break - something I haven't really seen a lot of in software engineering.
What I'm saying here is that I dunno that an optimal SWE PE would test you very much on the specifics of Java or SQL or what have you. I mean, from my experience with the FE, at least, they give you a book that has most of the formulae you need... and you are allowed to use a copy of that book during the test, you just need to be able to look it up and apply it. Seems like they would do the same with Java or SQL.
(I mean, to be clear, to apply the formulae, you still need to have more math than I do. I imagine the same would be true of sql or java, only I'm pretty good with SQL, having 20 years of propping up garbage written by engineers who weren't.)
From what I've seen of the software engineers, most of the time the software guys throw something together, toss it over the fence, and move on. Clearly, they didn't do any analysis to figure out how much load the weak point in the system can handle, or even what the weak point was. It's incumbent upon me (the SysAdmin; usually the least educated person in the room) to feed load to it at a reasonable speed and to back off and figure out what happened when the thing falls over.
I mean, I think the real question people are bringing up here is "what if we treated software engineering, or at least some subset of software engineering more like real engineering?" - like clearly sometimes you can slap together something over the weekend and it's fine, but... if you are building a car or an airplane or a tall building or something, you probably want to put in the time to make sure it's done right, and for that, you need rules that everyone knows; the PE system, I think, establishes rules everyone knows, while I think software engineering mostly works on the concept of "it worked for me"
Wait...are you studying for the FE without getting an engineering degree? Props to you. One thing to keep in mind though is that there is a morning and afternoon session that are both 4 hours iirc. The first session is always a general session which covers the absolute basics of engineering math, circuits, thermodynamics, statics, dynamics, chemistry, and physics. It really is very easy if you remember the classes. Some of the circuits problems can be done in your head, and the statics problems might have the resultant force being part of a 3-4-5 right triangle (again, it shouldn't take much thought). The purpose of this is to ensure you learned the absolute bare minimum in these classes. One reason the general questions have to be easy is that depending on your course schedule, it might have been two years since you took a course (Ex: you might have taken only one thermo class as an electrical engineer during your sophomore year). The afternoon test is either also general or specialized to a discipline (Ex: chemical engineer) and is much more difficult in comparison. I barely studied for the FE and felt I knocked it out of the park (especially the morning session). I spent months of all my free time studying for the PE and failed the first time...it is difficult. Keep in mind that both of the tests have a lot of breadth, but little depth. Going into an actual engineering curriculum will teach you a whole lot more. MIT used to (might still) have a circuits edX class online for free which covers the first of 3 circuit classes an EE will take...that should help a little with the scale.
Software is weird as the hardware, languages, and frameworks are always changing and the optimal work done on any project seems to be just enough to keep the customers from going to a new product and not necessarily the best or even a good product in many cases. There are cost constraints in Engineering as well (good, fast, & cheap...pick 2), but it still feels pretty different to me than software engineering where something breaks all the time in any non mainstream software I've ever used.
Yeah, they'll let anyone with a couple hundred bucks sit for the FE. The chances of getting a PE or even an EIT, though, without a degree are... slim to none. But that's not really my goal? (I mean, it would be if it were just tests) Mostly I just want to know those 'absolute basics' of which you speak, and I like to have a test to study towards.
I'll check out that edx class, thanks, that sounds like my thing.
I dunno, man. I own a bunch of stock in BA right now. I bet that the idea of spending twice as much on software engineers and hiring licensed folk to write their software is looking pretty good to Boeing execs right about now, even from a plain profit and loss perspective.
(of course, a lot of professional engineers were involved in building that plane... but it's pretty unlikely that there were any PE software engineers involved, just 'cause there aren't many. Would that have helped? maybe, maybe not. to the detail that I've studied (not.. really all that much) it sounds like they made some really basic mistakes, like relying on a sensor that wasn't redundant, and those mistakes should have been caught by the other engineers. I don't know that it was entirely a software engineering problem. )
As software mistakes get more and more costly, it starts making more and more sense for execs to hire people who are properly licensed to supervise the project. (I mean, assuming such people exist, and for software engineering, right now, you could say such people don't exist.)
Oversensitive? Is the contractor who chooses a Dewalt or Ridgid over a Ryobi for daily work "caring about irrelevant stuff"? A drill is a drill right? Why is it different for us in software?
Maybe. (And I thought Festool was the new hotness? I thought that at least Ridgid and Ryobi had been devaluing their brands by selling cheap junk? But I'm solidly in the 'prosumer' and not 'real contractor' end of the market, so not an expert or even a close observer, really.)
But I think the point OP was making is that contractors have a licensing and training program, and if you hire someone to put a roof on your house, they either have to go through that process or work under someone who went through that process. I mean, choosing the right tool is a small part of that, but someone in the chain of command is gonna have words with you if you bring your Xcelite screwdriver and try to use it on the roofing job.
That's not true almost anywhere in software, and that probably makes a big difference.
(I mean, not being educated myself, I personally have mixed feelings about licensure. But it's a good discussion to have, and I think that there should be something like the PE for software engineers (and there was, in Texas, but it will be discontinued this year).)
I don't know anything about those brands you mentioned, but a programming language is more complex than most (all?) mechanical tools, it's designed over years and continues to change over decades.
It's impossible to do a perfect job under such conditions, and it's anyway impossible to please everyone.
> Over-sensitive individuals are hard to please and often unhappy. That's more of a personality flaw than a flaw with the current state of software engineering.
Ah yes. Just remove the part of myself that got me into software as a child, and proceed robotically.
> If there's one thing wrong with our profession, it's the lack of ethics and accreditation - we're essentially letting random people build critical infrastructure and utilities.
Spot on. Leftpad (and perhaps the whole js ecosystem) are good examples.
Those differences don't matter much when it comes to building software that fulfills specific requirements.
I am a big fan of compile-time checking, but there's a lot of good software built without it and sadly there's also a lot of successful slow software. These are disadvantages, not an impassable barrier.
I know I’m responding to opinion but: developer productivity isn’t one of the pitfalls of Python.
Yes, I agree that if you're using Python for a large-scale project involving lots of developers it's not the best; but that's because it doesn't have a good type system.
You can't work out why people like it so much because of this misconception. The languages that you gave as examples most definitely do _not_ value your productivity; they value correctness as enforced by a type system, and the refactoring support needed for large projects.
I am more productive in a language with an expressive type system (e.g. Haskell) than one without. Thinking about types not only guides me towards a working solution quickly, but the checker catches all my minor mistakes.
In Haskell, you can actually defer all type errors to runtime if you want to. But I have never felt this makes me more productive.
There are plenty of real world situations where neither the requirements nor the prerequisites value correctness, but getting an 80% or even 99% correct set of results while flagging the unhandled cases is very valuable.
I'd say the most immediate value/payoff of correctness is ensuring your own code is consistent with what you think it does, rather than correct with respect to some sort of external specification.
The bigger the team, the more distributed the engineers and the bigger the codebase, the more productivity is lost by using scripting languages. I find it infuriating because it is so frustrating and it definitely does not feel productive.
For a lone hacker, it's the other way around. Compare e.g. to Golang's "programming at scale" rationale.
The way I think about it is that Python is a strong local optimum in a space that has a massively better optimal solution really close by. But it's nearly impossible to get most people's algorithms to find the real optimum because Python's suboptimal solution is "good enough". And the whole software industry (and in some ways, by extension, all of humanity ... to be over melodramatic) is suffering for it.
> And the whole software industry (and in some ways, by extension, all of humanity ... to be over melodramatic) is suffering for it.
I don't think the biggest services built with Python (think Instagram, Dropbox, etc.) have more consumer-facing issues than services written in other languages.
If you're talking only about developers, fine, however I also think most Python developers like the language. For me it seems that Python has strong vocal critics, that show well in places like HN, however it is not representative of the majority.
So I really don't think Python is making the humanity suffer, for any good measure of suffering.
There is a great saying that Python is the second-best language for everything, and it might be true. Where Python excels is that you have good-enough libraries and support for almost everything. All other languages have pain points: they shine very brightly in some areas but have very bad or non-existent libraries and support in other areas.
```--max-line-length=120``` passed to flake8 will switch it to 120. Use another number for a different length.
Things like max line length should be something your org or your fellow contributors decide on, not something dictated to you no matter what the language.
A corporate point of view is that any programmer should be interchangeable with the minimum amount of fuss. I understand why someone building an enduring organization has a responsibility to think about the future.
But I confess that sometimes I feel very demoralized when an organization implicitly tells me that my years of study and my ongoing self-education and practice are all meaningless, because in principle someone fresh out of college should be able to take over my project with a two-week handoff and a few docs.
> Every moment working with python (and that infernal pep-8 linter insisting 80 characters is a sensible standard in 2019) increases my burnout by 100x.
You hyper-fixated on that tiny thing. It increases your burnout 100x, remember?
It's the only problem you mentioned, and I inferred from your post that it was really bothering you. That was my thinking anyway.
I see now that you mentioned other issues in other comments.
Anyway I appreciate you sharing your feelings to an evidently unreceptive audience. It's nice to know there are better ecosystems out there waiting when I get a chance to look for my own next step.
i fully agree with you. it has a stubbornness about it and not in a good way like the stubbornness you might find in lisp, scheme, and sml derivatives. i have had to write python on only a few side projects, but it was miserable, as you say. not only is the language all over the place and embarrassingly slow, the ecosystem is as well. i tried getting an application someone left when they left running on windows 10, and it was basically a null effort without a complete rewrite, upgrade to 3.x, and upgrading/replacing GUI libraries.
if i had to write python as a day job, i would quit. i have said it before, but python is the new c++, helping people everywhere write brittle, unmaintainable code.
I'm sympathetic that Python is relatively not a great language, but IMHO an 80-character line limit is quite reasonable. It's easier to read on smaller screens, easier to view side-by-side diffs, and tends to force you to break up your code more.
That said, this shouldn't be a lint, it should just be enforced by a formatting tool as a format-on-save setting. It just destroys all the wasted arguments about formatting and the wasted time trying to manually line up code.
I'd also add that while perhaps a bit on the pessimistic side, I tend to view the 80 char rule / limit not as an ancient hardware limitation of monitors, but as a limitation of our eyes and visual processing circuitry. There is a reason why newspapers and well laid-out websites don't have 300 char width lines. Those are physically harder to read, whether we want to admit it or not, as our eyes lose track of the flow to the next line.
I'm all for decreasing unnecessary cognitive load, there should be quite enough of that without us adding more by accident.
I don't understand–I quite enjoy reading a well-formatted plaintext email that sticks inside an 80-character line width. Any mail client worth its salt should be able to display that.
> as a limitation of our eyes and visual processing circuitry.
If so, then why not put the limit on line length excluding leading whitespace? Because it makes no sense that with indentation I should lose available characters.
> There is a reason why newspapers and well laid-out websites don't have 300 char width lines.
Yes, and the reason is, print and transportation are expensive, so newspapers found a way to cram as much text as possible in as few pages as possible. You don't see them replicating this style on-line, and neither you see it in magazines that are priced well above their production & distribution costs.
The reason "well laid-out websites" don't have 300 char width lines is because web development is one large cargo culting fest of design. 300 may be a bit much due to actual physical length, but your comment has 200+ on my machine and reads just fine.
I don't buy these "80 chars / short lines and ridiculous font sizes are optimal for the eyes" arguments. They fly in the face of my daily experience.
It's hypothetical so I can't say for sure, but I did work a lot with the JVM and I used Clojure (and I'd probably use Kotlin now as well) and I didn't feel as burnt out as I do now.
I have an acquaintance who swears by Clojure (for data modelling of sorts) and who specifically insists that I should also use it (due to my category-theoretic background).
Yeah, I'm a bit sick of Python. We have some internal utilities written in it and while developers could manage all the dependencies and everything, we kept having problems with getting everything working on the machines of our electronics techs.
Gradually pretty much everything has been rewritten in C++ (with Qt GUI framework). Way easier to be able to have a setup program (using free InnoSetup) that just extracts the eight or so DLLs required in the same folder as the EXE and that's it.
We just use Python for a bit of prototyping and data crunching here and there now.
> Way easier to be able to have a setup program (using free InnoSetup) that just extracts the eight or so DLLs required in the same folder as the EXE and that's it.
I have 3-4 files side by side; everything else is max 80 chars. I really don't understand why people need more, because long lines are REALLY HARD to read.
You mean like 64 chars, after you account for typical indentation? I hit that limit way too often when using descriptive names.
I usually view two files side by side per screen (so 4 in total). I sometimes up this to 3 per screen, but the trick here is to use a smaller font. Right now, if I split my editor into 3 panes, each has 98 columns available.
I guess it can happen, but in my experience I had more problems with overly long names hurting readability: something like ExampleDataUpdaterFactoryConfigReader. (No, I don't think an IDE makes them acceptable, because you still have to read them in the code, IDE or not.)
Of course, an 80-character limit doesn't guarantee good naming, but it acts as friction against adopting ultra-long names, occasionally forcing devs to find a better alternative. YMMV.
I work at the same place you do. I code often enough from my macbook, using likely the same tools that you use, and they're truly painful with >80 character lines. I don't understand how Go and Java devs survive.
I also don't think an 80 character line limit is a bad thing: small well defined functions that do only one thing are good. Long lines often encourage people to want to nest code deeply (which is terrible!) or to write complex one-liners, and that + list comprehensions is a dangerous pairing.
Java dev here. I would try and defend a 120 character limit but I have a 34" 5k widescreen monitor so I probably don't have a leg to stand on.
I'm writing a mix of Java and python at the moment. Python for lambdas behind an API gateway and Java on some containers for stuff that's more complex but evolves more slowly.
It's neither python or Java where I'm really spending my time though. It's CloudFormation, Ansible, Jenkins and stitching the whole system together for CD that's killing me. I feel like programming languages are the easy bit these days.
> I feel like programming languages are the easy bit these days.
Agreed. The mainstream garbage-collected languages are all basically the same in the grand scheme of things. The work that takes most of my time (and growing) lately is packaging, testing, deployment, etc.
The only tool that gets remotely irritable about it is the one we use for diff reviewing, afaict?
> I also don't think an 80 character line limit is a bad thing: small well defined functions that do only one thing are good. Long lines often encourage people to want to nest code deeply (which is terrible!) or to write complex one-liners, and that + list comprehensions is a dangerous pairing.
Python already starts you out at a nest of 2-4 characters, so we don't even get the full 80.
But honestly I don't think a 100 character line is going to doom us all to hyper-nested expressions.
Well that sounds like a personal problem and it has nothing to do with the language. As a counter-anecdote, I've been working on Python projects for over a decade and have yet to experience the draconian linter settings you described.
Thanks for posting. I've wondered if it's just me or does anyone else want to leave software engineering because of Python's dominance. There's no arguing against it either because as this thread shows, "I like it for my use case, look at these libraries that let you get X and Y done SOOOO easy, so it must be great for everything."
I hated Javascript back when all it had were callbacks, but once native promises stabilized, I loved the simplicity + ubiquity of the promise abstraction, and then later async/await. Now it's actually my favorite dynamically-typed language.
I've noticed a lot of Javascript hate also comes from people just disliking client development (which isn't easy on any platform).
I thoroughly enjoy UI/client development, but the web app tooling (layer upon layer that is supposed to "fix" HTML/CSS/JS every few years) is frustrating to me. Thus, I avoid it like the plague. :)
To be fair, the JS ecosystem I think is a bit more tolerable than Python. JS isn't being promoted far and wide as the good-enough tool for everything like Python is. A lot of JS is focused on the "view", and I can see a dynamic language fitting there (though React/Redux is making me rethink that).
It is a language better than MATLAB/R/SPSS, which came before it.
I honestly don't think it is that bad. And there are many people who don't care from a programming-language perspective; they just need to finish the functionality.
> It is a language better than MATLAB/R/SPSS, which came before it.
I don't think so. I think R is a lot more expressive and not really any harder to read. It might have a steeper learning curve, but it's not so bad that I think that actually matters.
I'm currently learning R. It's a terrible, dreadful programming language, but an excellent DSL for statistical analysis. The main mechanisms behind R's expressiveness are its Environment structure and a terrifyingly casual use of call-stack inspection just to build symbolic APIs. The latter isn't even made part of the language proper without Tidyverse machinery like rlang. In fact, without the Tidyverse efforts the language is bad even as a DSL, unless you only deal with a CSV of maybe 100 data points.
I think R is a much worse programming language: 1-based indexing, very unintuitive string manipulation, the dataframe being a magic facade that hides complexity, etc.
I would choose Python any day if the other option is R.
1-indexing makes sense within the realm in which R shines. It’s only CS people who seem to completely lose their shit when they encounter it. I’m glad Julia also has 1-indexing.
If you are a computer scientist doing a little data science, then R is awful for the reasons you detailed. If your work consists entirely of data science, then R is a fantastic language for the same reasons you hate it.
R is a fantastic statistical toolkit and a terrible programming language. Building software in it? Massive pain in the ass. Processing tabular datasets, on the other hand, is an absolute breeze that leaves pandas in the dust. The Tidyverse makes it even better, though.
Yeah, this is generally my take. On the one hand, I cringe when I see all the discussion about how much better Python is than R for data science. As a data scientist who spends 90% of my time cleaning, manipulating, and modeling data, I would take R any day over Python. But at the same time I realize that for many, data science is more like 90% software engineering with 10% actual data manipulation and modeling, and for those people R is a complete non-starter.
I look at developer surveys sometimes when I'm trying to decide what to learn next. According to the 2018 stack overflow survey, 68% of python users would like to use it again in the future [1].
The surveys never tell me why, though. What do people like or dislike about Python? I know it has a lot of libraries people love (scikit-learn and tensorflow come to mind).
Those surveys often target mostly newer devs who know no better, because they haven't had the time to validate their opinions against the real world.
It's not their fault though; you can only do so much in limited time. That's why expertise requires years.
I would rather Golang ate the world than Python, just because the practicality of Python becomes questionable when performance is key.
In my previous startup in India, I trained unskilled personnel to become decent python developers to work on our product; everything was fine till the product grew exponentially and we had to optimise every nook and corner of it to better serve the customers and save on server bills.
So we had to optimise our application with Cython to meet the demands. When training someone to code using Python, we should follow up with the disclaimer: "You will learn to code faster than in most other languages, but you cannot use this knowledge to build something at scale (at least when budget is a concern and you are not Instagram)."
In comparison, Golang excels in both form and function. It is just as easy to teach/learn as Python and doesn't need the aforementioned disclaimer. Web services are a breeze to write, and the language is built for scale.
I understand that there are criticisms of Go's language design; some are valid, and most exist just because it was made by Google, but none question the scalability of Go applications.
Many of my complaints about Python are valid for Go, except that Go makes even more perilous decisions than Python for error handling (and the community kinda gleefully embraces it).
But at least Go is a lot faster and has real concurrency AND parallelism, so it's definitely better than Python.
Go makes worse decisions even, I'd say. Python has gevent, one of the best and most friendly greenlet libs around too, so the concurrency/parallel issues seem fairly moot. It's true that the batteries-included versions of things, while mostly easy to use, falter at scale.
> so the concurrency/parallel issues seem fairly moot.
Python has the GIL, so true parallelism can only be achieved by essentially spawning one process per core and using inter-process communication.
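For CPU-bound work, the usual escape hatch is the stdlib multiprocessing module. A minimal sketch (the worker function is made up):

    from multiprocessing import Pool

    def square(n):
        # CPU-bound work; each worker process has its own interpreter and GIL
        return n * n

    if __name__ == "__main__":
        with Pool() as pool:  # defaults to one worker per core
            print(pool.map(square, range(10)))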
I'm really glad to hear someone else voice these frustrations. I've really tried to embrace Python, and everyone tells me how great it is. I feel like a failure as a developer, but I dislike it so much that I quit a job that switched to Python as its primary language and took a year off from development. I'm pretty much a polyglot as far as languages go, but Python riles me.
Python is the simplest mainstream language yet still reasonably powerful. Some folks don’t like simple and I’ve found it’s better in the long term to find those that do.
>that infernal pep-8 linter insisting 80 characters is a sensible standard in 2019
So change the limit or disable that check. If someone is keeping you from doing that, they're the one who's insisting on 80 characters, not the language. Who uses any linter without making some changes to its default settings?
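For instance, a minimal sketch of bumping the limit (assuming flake8; pycodestyle accepts the same option):

    # in setup.cfg, tox.ini, or .flake8
    [flake8]
    max-line-length = 100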
I was smart and lucky enough to switch from Python to Clojure and from Javascript to Clojurescript. I'm not even sure anymore what I liked about Python back then. I know for sure that those who really like Python probably haven't given a heartfelt try to Clojure, Haskell or Racket.
(and that infernal pep-8 linter insisting 80 characters is a sensible standard in 2019) increases my burnout by 100x
I've yet to encounter a Python linter where you can't pick and choose which rules to ignore. This is the first one to go. Annoying PEP for sure, but https://pypi.org/project/black/ almost completely eliminates your issue.
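A sketch of letting the formatter own the question entirely (black's own default is 88; the length here is just an example):

    pip install black
    black --line-length 100 my_module.py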
There is a lot of hype and "they are using it too" mentality as well I think. I am working on control software which, if it crashes or even takes a tiny bit too long to issue a command, could cause the company big financial loss. I was forced to do it in python "because everyone else is using python".
This is about as sensible to me as saying it is a good idea because most trains are longer than they are wide. Who cares about those things? We use computers that nearly all have 4:3 or 16:~10 displays (or wider), and many of us use more than one of them.
I obviously don't agree, but it is worth noting that comparing typographic conventions for English with typographic conventions for code is not a very good idea, to me. Especially when we're discussing a language with semantically active whitespace.
A 100 character line might only see 15-70 characters of active use.
While this is a fair point, pylint bumps the limit to something like 100 characters, and you can split deeply nested logic into separate functions with ease.
Are you implying short and wide trains would be sensible? I like using a wide screen for programming, but prefer code formatted to 80 chars. That allows me to have two vertical windows of code open side by side. It also makes it easier to use things like a graphical diff to merge code.
It's funny, because a few people ended up using Python as a prototyping tool to find better approaches and then rewrite in C++ or something else. In that regard it saved them tons of time.
You can configure pep8/flake8 to ignore a subset of rules; you might wanna look into that. Whether you run it automatically is up to your preference.
What I find worse about Python than other languages is the lack of tooling and a relatively small collection of libs, many of which are only halfway done. In a recent assignment, we decided Python is at most a hobby language. Python is great for machine learning because most research was conducted using the language, and as such, tooling is available there. I would use it at most as an API exposing ML, but that's pretty much it.
Well, that's what I said: there are plenty of packages for data science. For building APIs and web stuff there usually aren't. Python is nowhere near NodeJS, for instance. But whatever suits you; if you only know one language, it will seem like the best language out there.
This comment is so strangely off the mark I almost wonder if you're basing this off of some secondhand hearsay. Until recently, Python's largest arena of usage was web development and API scaffolding by far. A simple search would have revealed literally thousands of web development focused libraries, most centered around the Django ecosystem [1].
As another commenter mentioned: Django, DRF, Flask
But unmentioned... The old titans of Pyramid, Pylons, CherryPy, Bottle, Tornado, wheezy, web2py, and more.
A _gigantic_ portion of the community is centered around web development, and the fact that almost all web packages have centralized around Django, DRF, and Flask is a function of dozens upon dozens of popular frameworks merging in a "survival of the fittest" fashion into the best possible union of ideas. You seem to perceive "many packages" that all perform a similar function to be a good thing, but speaking as someone who writes 70% Javascript and 30% Python, I'll tell you that almost every other sane group of developers considers the highly duplicative JS ecosystem to be a massive weakness.
The NPM ecosystem has an _ocean_ of shallow, highly similar, and one-off frameworks because Javascript developers tend to "divide and conquer" rather than bring good ideas together. Python developers deprecate their packages in favor of better ideas, or just open a pull request to alter the project rather than creating a 99th competing framework in the first place.
You have Django and DRF and Flask. DRF is, by far, the best-designed framework for building APIs in any programming language I have ever seen, and I've been at this game for a long time.
The JavaScript ecosystem has 10 times more packages than PyPI, and for the most part they are absolute garbage with no intent of being maintained.
Python has substantially fewer packages where the community rallies around them and keeps them as best-in-class. SQLAlchemy, DRF, factoryboy, requests, etc., are all incredible one-stop-shops for the vast majority of use cases.
You don't need 15 libraries for doing HTTP requests. You just need one good one that does the job so you never have to think about the problem ever again. Python excels at this class of libraries and by comparison npm has appalling choices.
I'll take a framework that has worked effectively for a decade without significant changes where I can use all my accumulated experience to get things done over the random musings and experimentation of the half-baked JS libraries where you're stuck for hours solving relatively trivial issues that I have never had to think about in DRF because DRF just works and is backed by some of the most experienced people in API architecture in the industry.
When you think about and iterate judiciously in a problem domain over a decade instead of trying something different every day to see what sticks, you'll see you can get remarkably far.
And what exactly are Flask and Django missing to merit more packages doing the same thing?
Routing requests and managing HTTP fundamentals is a solved problem. There is literally zero value in adding another framework when the real complexity is in business logic.
That's the issue. These do request routing and not much more. I recommend looking around at Java, Node, PHP, and C# web frameworks for more details on what a web framework should do. SQLAlchemy and other hobby libs can extend Flask and Django, but those are similarly limited.
> In a recent assignment, we decided Python is at most a hobby language.
I don't know if you are just trolling, but this is the silliest, most detached from reality thing I've read online today. Python was key in building numerous massive public companies like Instagram and Dropbox. It's one of the most popular and widely used programming languages on the planet for everything from APIs to desktop clients to data pipelines. It had a lot of early popularity in the Silicon Valley start-up scene in the early 2000s, even pre-dating the Ruby on Rails web dev trend.
The guy who founded the company that runs this very website wrote about the draw of Python 15 years ago [1] at which point it was already widely used in certain niches. This is before any deep learning libraries existed. I remember first playing with Python around 1999 or so.
> I would use it at most as an API exposing ML but that's pretty much it.
I don't know how old you are or how long you've been in the industry, but the ML thing is a "second act" for Python. Deep learning grew up in a time and place where Python was a good fit which put Python in the right place to benefit. But Python had already lived a long and full life before any of that happened.
It's fine if you don't like Python or don't think it is a good fit for a project, but claiming it is a "hobby language" with a "lack of tooling and a relatively small collection of libs" is a good way to get laughed out of the room. It has one of the largest and most diverse library ecosystems. And as far as tooling goes, Python is one of the most popular languages for implementing tooling. Check out Ansible.
> There are plenty of packages for data science. For building APIs and web stuff usually there aren't. Python is nowhere near NodeJS for instance.
This has got to be a troll, right? Just some of the most popular web frameworks: Django, TurboGears, web2py, Pylons, Zope, Bottle, CherryPy, Flask, Hug, Pyramid, Sanic... Lots of huge websites were originally built with Python like Instagram, Reddit and YouTube. Of course they mature into complex distributed architectures using all kinds of languages, but it's all about the right tool for the job.
It appears, though, that the right tool for the task at hand, as a codebase grows, is NOT Python. It's a good starting point for someone fresh off a university campus and easily impressionable. An "object oriented" language without even the basics of scope visibility is not object oriented. Half-done libs are not libs one might consider production grade. Debugging tools where you need to dive into execution branches and heck knows what other archaic techniques are not debugging tools. Missing documentation or poorly written README files are not documentation. This whole Python movement gained traction because the language became popular in universities. The realities of real-life programming will sway many religious programmers towards more suitable languages.
So yeah, fantasising about how Python "is eating the world" is a nice dream, but the only thing Python is eating is the dust left behind by far more developed programming languages surrounded by far more modern ecosystems.
If only its package management were as easy as its syntax...
I wish pip worked the same way as npm: -g flag installs it globally, otherwise it creates a local "python_modules" folder I can delete at any time. And optionally I can save the dependency versioning info to some package.json...
Instead, pip is a nightmarish experience where it fails half the time and I have no idea where anything is being installed to and I'm not sure if I'm supposed to use sudo or not and I'm not sure if I'm supposed to use pip or pip3, etc.
1. Never run sudo pip install or pip install --user.
2. For each project you want to work on, create a venv. Yes, there are tools for this, but the base venv tool is totally fine. (venv should be included in your Python, but a few distributors like Debian put it in a separate package - install it from them if needed.) Use python3 -m venv ~/some/directory to create a venv. From here on out:
3. As a first step, upgrade pip: ~/some/directory/bin/pip install -U pip.
4. Install things with ~/some/directory/bin/pip install.
5. Run Python with ~/some/directory/bin/python.
Slightly advanced move: make a requirements.txt file (you can use .../bin/pip freeze as a starting point) and use .../bin/pip install -r requirements.txt. That way, if you get any sort of package resolution error, you can just delete your venv and make a new one. (Downloads are cached, so this isn't super annoying to do.)
A "project" can either be actual Python development, or just a place to install some Python programs and run them out of the resulting bin/.
(Edit: Yes, I know about activate, see the replies below for why I don't recommend it. With these rules, you get to say "Never ever type pip, only .../bin/pip", which is a good safety measure.)
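Putting those rules together, a minimal sketch (paths and package names are just examples):

    python3 -m venv ~/venvs/myproj
    ~/venvs/myproj/bin/pip install -U pip
    ~/venvs/myproj/bin/pip install requests
    ~/venvs/myproj/bin/pip freeze > requirements.txt
    ~/venvs/myproj/bin/python myscript.py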
Herein lies my problem. If I want to start a Node project I run `npm init` and then `npm install --save` to my heart's content. If I somehow manage to mess it up I just delete node_modules/ and install again.
If I want to start a Python project I have to set up venv and remember to put relative paths in front of every command or else it'll run the system version. Sounds simple, but it's still something to always remember.
1. pip install --user and sudo pip install are fine, actually; they will not interfere with a venv, and they can co-exist just fine.
2. yes
3. probably do "source bin/activate" first, then run 'pip install -U pip'
4. just run pip install whatever, no need for the full path
5. just run python directly, no need for the full path
6. run 'deactivate' when you're done for now, 'source bin/activate' when you want to continue/resume sometime later
In fact I like this better than node_modules; the venv layout of bin/include/lib is more natural compared to the odd name of "node_modules", in my opinion, and I don't need npx etc. to run commands under node_modules either; it's all taken care of by venv with its 'activate' script.
pip install --user and sudo pip install won't break your venv. But they will break your system Python and any OS commands that depend upon system Python, perhaps including pip and virtualenv themselves, which is incredibly confusing. I've helped both friends and coworkers un-break it, and the symptoms aren't generally obvious. I wrote the patch to pip in Debian to prevent sudo pip install from removing files from Debian packages via upgrading packages. It's a huge pain, it's only worth running if you know exactly what you're doing, and as someone who does know exactly what they're doing I can attest that it's never necessary. After all, you can always just make a virtualenv.
One thing I did at my last job was to make a Nagios alert for machines with files in /usr/local/lib/pythonX.Y/site-packages, indicating that someone had run a sudo pip install, which was super helpful for "why is this machine behaving slightly differently from this other machine which should be identical". We had a supported workflow involving virtualenvs and also we had multiple members of the Debian Python team on staff if you needed systemwide packages, so not only was there always a better solution, there were people to help you with that better solution. :)
Re activate/deactivate, that's a matter of taste but I find it easier to avoid it completely too - see my reply in https://news.ycombinator.com/item?id=20672299 for why. Basically, you get the simple rule of "Never run bare pip" instead of "Remember which pip is your current pip and whether it's the one you meant."
> But they will break your system Python and any OS commands that depend upon system Python
Sudo pip install might on some distros (and I consider this to be a bug on the distro level, not a Python issue) but I've never heard of --user breaking anything
Maybe I'm misremembering, but, isn't the point of pip install --user to get things onto your import path when running the base Python interpreter, just like sudo pip install would (except scoped to your user)? If so, wouldn't installing an incompatible newer version of some library (or worse, a broken version) break system commands that import that library, when that same user is running those commands?
I've grown to dislike activate, because it breaks the simple rule of "never run pip, python, etc., only run your-venv/bin/pip, python, etc.". Now the rule is "Don't run pip, python, etc., unless you've previously run activate and not deactivate" - and it has the complicated special case of "make sure the command exists in your virtualenv." (For instance, it's definitely possible to have a Python 2 virtualenv where pip exists but not pip3, and now if you run pip3 install from "inside" your virtualenv it's the global pip3! Or you might have a Python 3.6 virtualenv and type python3.7 and wonder where your packages went, or several other scenarios.)
If you have the shell prompt integration to remind you whether you're in a virtualenv or not, it's fine, but I don't always have it, and I find it helpful to manually type out the full directory name (generally with the help of up-arrow or tab...) so I know exactly what I'm running.
For bash/fish it automatically prefixes your prompt with (your-venv-name), so it's obvious you're in some venv. Not sure about csh, but I would assume it does something similar. It looks like venv supports bash/csh/fish only by default, however.
I agree. I'm optimistic about tools like Poetry https://poetry.eustace.io/docs/basic-usage/ for solving this. Unfortunately Python predates the realization that good packaging tools were a thing that needs to be solved in the core language and not externally (Go, Rust, Node, etc. postdate this realization; C, Perl, Java, etc. also predate it).
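For the curious, a fragment of what a Poetry pyproject.toml looks like (the name and versions are just illustrative):

    [tool.poetry]
    name = "myproj"
    version = "0.1.0"

    [tool.poetry.dependencies]
    python = "^3.7"
    requests = "^2.22"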
The flip side is that decoupling the interpreter/compiler from the build system makes it more possible to write tools like Poetry (and, indeed, virtualenv) that explore new approaches to the problem. At my day job where we do hermetic in-house builds with no internet access and (ideally) everything built from checked-in sources, building C and Java and Python is straightforward, because we can just tell them to use our source tree for dependency discovery and nothing else, and we can set up CFLAGS / CLASSPATH / PYTHONPATH / etc. as needed. Building Go and Rust and Node is much more challenging, because idiomatic use of those languages requires using their build tools, which often want to do things like copy all their dependencies to a subdirectory or grab resources from the internet.
Of course, given that it's Python, there should be one - and preferably only one - obvious way to do it....
>Unfortunately Python predates the realization that good packaging tools were a thing that needs to be solved in the core language and not externally (Go, Rust, Node, etc. postdate this realization; C, Perl, Java, etc. also predate it).
Sure, but it's also a cultural thing. Ruby is nearly as old, and also predates this, but has nowhere near the insanity of Python. The community jumped on bundler and rvm/rbenv/etc super quickly, and rapidly improved them, while the Python community is barely even aware of pip-tools / pipenv AFAICT. Even virtualenv is really only a "real pythoners know" thing, it's rarely mentioned in guides, so newbies screw up their global environment frequently.
Ruby was exotic until someone translated the docs to English, but the whole ecosystem is indeed one reason I love Ruby. I really don't understand why python 2.7 is still a thing 11 years later. Sure, legacy systems, but if I install recently active open source on my machine I wouldn't expect it to use an outdated version of a programming language. Upgrading can't be that hard.
python 2.7 is still a thing because it was announced as the LTS* release over a decade ago.
* nobody called it that, but that's effectively what it meant to say "there will be no Python 2.8. Python 2.7 will be supported until T. Python 3.x will supersede it in a year."
Python 3 comes with it as "python -m venv". Once in the virtualenv, you don't have to worry about the various pip forms and effects; you can just pip install.
You can get fancier than that, of course, but that's what works with stock Python, on all OSes.
I haven't seriously JavaScript'd in a couple of years but my problem with it then was different versions of node or npm. Nice thing about python virtual environments is that problem never exists (can make environments with whatever version you want).
pipenv really solves the Python version problem, IIRC. I don't actually use pipenv myself, since I haven't had time to thoroughly figure out the new magic and I prefer to know exactly what's going on with my Pythons.
npm doesn't solve the duplicating of deps any better than python/pip, as far as I know. The react-native app I'm currently working on has a node_modules folder at 960Mb. That's probably bigger than nearly every virtualenv I've ever seen. A react-native node_modules on a bare project with `--template typescript` is at least 350Mb (created one a few minutes ago). I'm using nvm for node version management. No problems so far.
Exactly. NPM gets a lot of hate but lockfiles and local install by default is great. The default mode should not be global installation. Also imo virtual environments aren't amazing. Having some mode you have to remember to flip on that changes essentially the semantics of your pip commands seems a little brittle. Tools that work on top of pip and venv like Pipenv or Poetry seem a lot better.
This isn't even the start of the problems with pip and pypi. If I install pylibtiff, it embeds a copy of an old libtiff which is binary incompatible with the libraries on my system. I have to build it from source, then it works just fine. But I can't inflict this level of brokenness on my end users.
This applies to many packages embedding C libraries, including numpy, hdf5 and all the rest. There has been minimal thought put into binary compatibility here, meaning it's a lottery whether it works today or will break in the future.
I couldn't agree more with this. I was forced into doing some UI coding, and although I could never fully embrace JS, the package management aspects (esp. having the sane default of not installing packages globally) were definitely superior to Python's.
I feel uncomfortable with the fact that people feel a third-party solution is the best way to solve this mess. It can also get messy when packages installed with pip, pip3, conda and apt are all entangled with one another in various ways.
It’s unfortunate that it’s third party, but conda has the unquestionable advantage of being the only Python-centric packaging system that has a reasonable shared binary library story
I'm curious, do you not find wheels + manylinux reasonable? I agree that until recently, Conda definitely had that advantage, but now that you can `pip install scipy` and have that get you a working library and not try to compile things on your machine what does Conda offer beyond that?
I guess one thing Conda has that the pip ecosystem doesn't is that it supports installing non-Python shared libraries like libcurl on their own. Is that an advantage? (We absolutely could replicate that in the pip ecosystem if that was worth doing, and it's even not totally unprecedented to have non-Python binaries on PyPI.)
I think it would definitely be great if pip could install non-python dependencies. One problem right now is that many projects will tell you to just pip install xyz. You execute that, things start building, and the process fails partway with some cryptic message because you're missing an external dependency. You figure out which one, you install it, start again, and another dependency is missing. Rinse and repeat. It's definitely not a turnkey solution, and this issue trips up newcomers all the time.
With respect to versioning, I think pip should be way more strict. It should force you to freeze dependency versions before uploading to PyPI: not accept "libxyz > 3.5", but require a fixed range or a single version. That would make packages much less likely to break later because newer versions of their dependencies don't work the same way anymore.
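To illustrate the difference (package names and version numbers are made up):

    # requirements.txt, fully pinned -- reproducible:
    libxyz==3.5.2
    libabc==1.0.4

    # versus an open-ended spec that can break when a new release lands:
    libxyz>3.5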
Does pip allow version number dependencies? Conda is able to upgrade/downgrade packages to resolve conflicts, whereas pip just seems to check if a package exists and shrugs when there's a version conflict.
pip does handle versioned dependencies and ranges, and know enough to upgrade existing packages when needed to resolve an upgrade. Its resolver isn't currently as complete as Conda's - see https://github.com/pypa/pip/issues/988 . (On the other hand, the fact that Conda uses a real constraint solver has apparently been causing trouble at my day job when it gets stuck exploring some area of the solution space and doesn't install your packages.... so for both pip and conda you're probably better off not relying too hard on dependency resolution and specifying the versions of as many things as you can.)
... the same thing happens if you mix stuff into your /usr/bin directory that isn't managed by your system package manager.
The solution is: don't mix your package environments. Use a conda environment. Just like in Linux, you'd use a container. If you wait for the Python steering committee to fix pip you'll be waiting a long time.
Yes, exactly my point: then you have to potentially deal with conflicting dependencies between pip and conda packages. This happens and it's a pain to deal with.
Holy Crap! What a lot of irrational, hyperbolic hate for Python.
I think everybody should spend their first couple of years working in Fortran IV on IBM TSO/ISPF. No dependency management because you had to write everything yourself. Or maybe [edit: early 90's] C or C++ development where dependency management meant getting packages off a Usenet archive, uudecoding and compiling them yourself after tweaking the configure script.
I'm not saying Python is perfect, but if it's causing your burnout/destroying your love of programming/ruining software development you seriously need some perspective.
I just returned to Python for the first time in a little while to collaborate on a side project and ran into a few tricky-to-debug errors that caused a fair bit of lost time. Know what the errors were?
In one case, I was iterating over the contents of what was supposed to be a list, but in some rare circumstances could instead be a string. Instead of throwing a type error, Python happily went along with it, and iterated over each character in the string individually. This threw a wrench into the works because the list being iterated over was patterns, and when you apply a single character as a pattern you of course match much more than you're expecting.
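A minimal reproduction of that first bug:

    patterns = "abc"     # was supposed to be ["abc"]
    for p in patterns:   # no TypeError: a str is itself iterable
        print(p)         # "a", then "b", then "c" -- far looser matching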
And another one, I typoed a variable name as groups_keys instead of group_keys (or vice-versa, I don't remember). Instead of just throwing an error, Python happily went along with it, used an uninitialized value, and then all the logic broke.
There's entire classes of errors you can cause yourself in Python that aren't possible in stronger, statically-typed languages. For a large project, I'd pick the old and boring Java over Python every time.
Python is a dynamic language; that's what dynamic languages do. You don't have a type checker, but you have greater flexibility. You don't have to settle for that, though: you can use mypy and annotate types and get the best of both worlds.
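For example, a minimal sketch of the mypy route (the function is made up):

    from typing import List

    def match_patterns(patterns: List[str]) -> None:
        for p in patterns:
            print(p)

    match_patterns("abc")  # plain CPython runs this happily;
                           # mypy rejects it: "str" is not a List[str]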
> And another one, I typoed a variable name as groups_keys instead of group_keys (or vice-versa, I don't remember). Instead of just throwing an error, Python happily went along with it, used an uninitialized value, and then all the logic broke.
This isn't what Python would do; if the variable was undefined, Python would throw an error, so you must have defined it with this name or you're misremembering what happened.
It has nothing to do with static vs. dynamic. There's no reason an early-binding language requires that a string be iterable itself, and the proposal to change this was rejected only because it broke too many things[1] and couldn't be automatically fixed.
Point in the GP's favor: fixing it would definitely not be a problem with an early-binding language! In fact, the nigh-impossibility of automated refactoring puts the lie to the notion that late-binding languages are more "agile."
It's a design flaw, in the same way Python 2's allowing comparisons between different types was a flaw, e.g. "a" < 3 succeeds. Python 3 now, correctly, throws a TypeError because there's no sensible ordering between the two things.
(While I'm griping: another design flaw is conflating iterables and iterators, which makes generators almost useless. Say a generator is passed to a function expecting an iterable. If the function uses it twice, the second time it silently returns nothing!)
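Concretely (the helper is made up):

    def numbers():
        yield 1
        yield 2

    def summarize(it):
        return sum(it), len(list(it))  # the second pass over an iterator sees nothing

    print(summarize(numbers()))  # (3, 0) -- silently wrong
    print(summarize([1, 2]))     # (3, 2) -- a list can be iterated twice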
> This isn't what Python would do; if the variable was undefined, Python would throw an error
I think GP must have assigned to the name, in which case Python will create a lexically bound name.
Python's rules for naming can make perfect sense or be quite surprising:
    try:
        x = four()
    except Thing:
        x = 5
    print(x)  # 4 or 5

    for a in [1, 2, 3]:
        pass
    print(a)  # 3 ?!
mypy is a great effort, but very experimental. Try using it on any large enough real-world project and it loses most of its value, as there are still a lot of unimplemented things, or because you'll depend on a third-party module that doesn't support it yet.
Case in point: Pandas, the foundation of data programming in Python, does not provide the Series or DataFrame (that's a table) types in a way that MyPy can use.
Your 2nd error isn't possible in Python, so I'm not sure what you did there. Regarding the first, sure, it is a bug that was annoying to catch. But, having an `Iterable` interface in Python is also really neat and useful if used responsibly. If you're programming regularly in Python, you are accustomed to the tradeoffs that come with a dynamic programming language and no static types, and you can still avoid issues like the one above.
Right off the top of my head, using type hints with a decent IDE or an assert statement would likely have caught the issue.
I'm not saying that Python doesn't have issues (all languages do), but I don't see the error noted above as any sort of deal breaker. On the other hand, if you're only ever going to use Python like a strongly typed language without taking any advantage of its dynamic characteristics, then I can see why it would seem as a total downgrade compared to languages like Java.
I didn't explain the second one well. Here's some exact code.
    group_keys = ...
    if not isinstance(group_keys, list):
        groups_keys = [ group_keys ]
So rather than listifying the non-list variable, it was creating a new variable. The cause of this bug is that Python doesn't distinguish between declaring new variables and overwriting existing ones.
Well, this should have been caught as an unused assignment in static analysis. A whole ton of languages allow this situation, so I'm not gonna ding Python too hard for that one.
However, here's a related but different python gotcha:
    if foo(a):
        v = list(bar(a))
    for i in v:
        print(i)
In this example, v is only defined inside the if. Due to Python's limited scopes, v is also visible outside the if, but it only has an assignment when foo(a) is True. When foo(a) is False, the for loop throws a NameError. And yes, a coworker wrote code that accidentally implemented this antipattern, albeit much more spread out in the code base.
This is clearly a bug in the code, yet no static analysis tools I've tried have successfully discovered it. There's a bug in pylint that's been marked WONTFIX because fixing it requires a wildly different implementation. At a language level, it feels weird that if blocks aren't a scope level for new variables. If you want to reference v outside the if block, declare/assign it outside the block first.
Assigning the variable unconditionally up front statically avoids this problem. In general, prefer immutable variables where possible. Single-assignment form is nice for a lot of reasons, not the least of which is that it avoids this particular gotcha.
And I should add that the "right" way to do this would be to factor this out to a function:
    group_keys = coerce_to_list(...)
is much clearer than either block, and avoids the possibility of the issue.
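A minimal sketch of that helper (coerce_to_list is just the hypothetical name used above):

    def coerce_to_list(value):
        # Pass lists through unchanged; wrap anything else in a one-element list.
        return value if isinstance(value, list) else [value]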
All of these things are true, but they require a non-trivial level of experience and discipline to avoid most potential gotchas. Your average Python project on the Web isn't written to this level of quality, and when people are learning programming using Python in school they certainly aren't there yet, and are gonna hit all kinds of problems related to this stuff.
But is there a way to force immutable variables in Python? You can easily still end up in the same situation when you typo something (easy to do when plurals are involved), and then end up reassigning something when you meant to create a new variable.
I don't think that's fair, to be honest. If you had simply used PyCharm with default settings, you would have easily caught the first bug via the linting. It's a fair complaint, but this specific bug is easy to catch using any modern Python IDE.
I've never found the "Use this specific IDE" defense particularly valid, considering that many IDEs don't have these features and that in other languages the compiler itself protects you.
Needless to say, I was not using Pycharm for this development, nor am I likely to install an entire IDE just for a small change I'm making on a random project. It's a non-trivial burden to configure and learn an entire IDE, vs just using what I already know (which is often just emacs).
It's even harder to take "The IDE should make up for deficiencies in the language" seriously. In languages that handle this stuff well, you can edit in Notepad and still not make these mistakes. Why push it up several levels to a few specific IDEs that most people don't even use?
> But is there a way to force immutable variables in Python? You can easily still end up in the same situation when you typo something (easy to do when plurals are involved), and then end up reassigning something when you meant to create a new variable.
Not always. Mypy has experimental support for `Final[T]` [0], and attrs/dataclasses support final/frozen instances, but that's opt-in on a per-argument basis.
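A sketch of what the Final route looks like (typing.Final landed in the stdlib in 3.8; before that it lives in typing_extensions):

    from typing import Final

    group_keys: Final = ["a", "b"]
    group_keys = []  # mypy rejects this reassignment of a Final name
                     # (plain CPython still executes it without complaint)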
I see this often, and it is a bad pattern.
Typically, type-checked languages wouldn't even allow you to do this. If you used mypy for type checking, it wouldn't like it, because you're redefining the type of a variable. Best practice suggests you use a different variable for the conversion if you must, but ideally you should just make the function accept a list as an argument. If you're really worried about being passed something other than a list, you should use type annotations to tell the type checker what it is. If you want to add an extra runtime check, then do:
    assert isinstance(group_keys, list)
You can complain that Python allowed you to do something dangerous, but you have tools to help you avoid it, and this flexibility is what makes tools like SQLAlchemy so powerful.
I still don't think you quite understand what's going on here. Python wouldn't create a new variable in this case. It would re-assign the value represented by the variable you already assigned once. I agree that it would have been better if Python had explicit variable declarations (this is one of the few things I think Perl got right).
On the other hand, Ruby made this same mistake. If you wrote this code in Javascript you wouldn't get an error, but you would in fact have two different variables.
For instance, this code runs for me using node 8:
    var fun = (bool) => {
        var x = 1;
        if (bool) {
            var x = 2;
            x += 1;
        } else {
            x += 1;
        }
        console.log("x=" + x);
    }
    fun(1);
    fun(0);
> I still don't think you quite understand what's going on here. Python wouldn't create a new variable in this case. It would re-assign the value represented by the variable you already assigned once.
Uh, no. He typoed the reassignment, so it wouldn't re-assign the value.
> So, of three of the most popular dynamic languages, Python, Ruby, and Javascript, none of them would have helped you catch this kind of error at script-parsing time. So again, it seems like you have an irrational dislike for Python, all things considered.
Sure, but he's made it clear he likes Java. Fundamentally he's against dynamic typing, so of course he doesn't like any of the dynamic languages.
I don't understand why you're accusing me of being irrational. These seem like very rational problems to have with Python. They literally caused me bugs that cost me time to deal with that I wouldn't have faced in other languages.
You're also assuming that I don't have the same problems with Ruby or JavaScript. I do. The exact same critique could be made of them as well, but they're not the subject of this thread; Python is.
You can't argue with someone who has chosen to overlook your viewpoint.
I've run into the same issues while writing Python code. People who are newly picking up Python are especially prone to these kinds of bugs. Also, with Python I have to spend a lot of time figuring out what went wrong in my code compared to other languages.
People who have been using Python for long have wired their brains to avoid such pitfalls, and now they happily defend it.
I don't think what you're saying is true. I already said I think it would have been better if Python and Ruby had explicit variable declarations. But if this is your biggest issue with a language and its ecosystem, then IMO that language is doing pretty well. I would rather, for instance, have to deal with implicit variable declarations in Python than the gigantic mess of Java frameworks that have been invented to "reduce boilerplate", such as Spring/Guice, AspectJ, Hibernate, etc.
My bad. I didn't realize you were against dynamic languages in general. FWIW I prefer Java and static types as well, but as far as scripting languages go, I think Python is pretty great.
I disbelieve. And I disbelieve despite being a fan of dynamic languages.
The tradeoff is that dynamic languages are faster to develop in and more concise, but more expensive in maintenance, exactly because of issues like this. The data I base this opinion on is an unpublished internal report from nearly a decade ago at Google quantifying the costs of projects of different sizes in its different languages, which were Java, C++, and Python. Python had the lowest initial development costs, and the highest maintenance costs. That is why Google then moved to Go as a replacement for Python. It was good for the same things that they used Python for, but being statically typed, its maintenance costs were lower.
I can believe that. But for a lot of people, the lower initial development time/cost aspect matters a lot. If I had Google resources, sure, I'd Go with other languages perhaps, but you can still write high-quality and capable software in Python. And while the batteries included aspect of Python is not everyone's cup of tea, I personally find it quite handy to have that so I don't have to waste a ton of time evaluating different libs to do fairly standard things.
To be clear - I'm not trying to say that Python is better in any objective way. Ultimately, I think people should use the tools they have available and prefer, to build what they want.
> But for a lot of people, the lower initial development time/cost aspect matters a lot.
As I said, I'm a fan of dynamic languages. :-)
One of the top ways that startups fail is failing to build what they need to build quickly enough. Maintenance costs only matter if you succeed in the first place. Using dynamic languages is therefore a good fit.
But, even if you're not Google, if you're writing software and have the luxury of paying attention to the lifetime costs of the project up front, you should choose a statically typed language.
That would not catch the bug if the input is not under his control.
You could as well say "just check if the object is a string" in the method, which would work, but the point was rather that it is difficult to notice if you did not think about it, compared to other languages that would crash or fail to compile instead.
Yeah, the input isn't really under control because it's coming from deserializing a YAML file. It worked for the exact type of input I was expecting, namely when you configure a specific value as a list, but it wasn't working for anything else. And YAML has plenty of types it can spit out, so my naive fix still only handled lists and strings properly!
Yeah, YAML deserialization is the worst-case scenario for dynamic typing. In most situations, types are pretty consistent, and assuming you run your code at least once, you'll find most errors. But with YAML deserialization all bets are off. YAML is even worse than JSON for this, because seemingly minor changes in the YAML can change the shape of the data.
I've had success validating such data against a schema, so I know it had consistent type structure before working with it.
> having an `Iterable` interface in Python is also really neat and useful if used responsibly.
Honestly this was a major attraction to Python for me a decade-plus ago as a student when I started learning, even when I used it irresponsibly. There are so many small tasks where you just kinda have to iterate over 100-1000 items, where you're not worried about big-O or anything like that; you just want to iterate and work on a collection quickly for some task in the office.
>In one case, I was iterating over the contents of what was supposed to be a list, but in some rare circumstances could instead be a string. Instead of throwing a type error, Python happily went along with it, and iterated over each character in the string individually.
I've been using python for about 13 years professionally and I wrote up a list of "things I wish python would fix but I think probably never will" and treating strings as iterable lists of characters was on there.
I've seen this bug multiple times, and the fix is relatively easy: just make strings non-iterable by default and use "string.chars" (or something) if you really want to iterate through the chars.
Nonetheless, I still love the language and wouldn't use anything else.
>Instead of just throwing an error, Python happily went along with it, used an uninitialized value, and then all the logic broke.
This one gets caught by linters. Unfortunately, 90% of what most Python linters spit out is unimportant rule violations, which drowns out stuff like this in the noise.
* Implicitly casting strings, integers, dates, etc. to boolean (e.g. "if x" being true if x is a non-empty string). The cause of more unexpected bugs than I can count, but fixing it would cause massive headaches, and memories of the 2-to-3 transition would scare anybody away from doing this, I think.
* Treating booleans as integers (True + True = 2). Fixing it probably wouldn't cause that many headaches, but everybody still seems to think it's a neat idea for some reason.
* Treating non-package dependencies of pip packages (e.g. C compilers, header files) as something that is either the package's problem or the OS's problem. Nobody looks at this problem and thinks "I should solve this".
Iterating over characters in a string is something that's done very often in introductory CS classes, but very little in the real world. Python has support for string finding and regexes; why in the world would I be individually iterating over characters? Generally, when you see that, it's a code smell.
So yeah, I totally agree with you, it'd be better if trying to iterate over a string were a flat-out error, and if you really want it, you should mean it. Though Python being dynamic still means that you'll only spot this error at runtime.
As for linters, how do they know if your intent was to reassign the value of an existing variable, or to define a new one? The language has no way to indicate which of these is intended.
For your first error, you can do some foot-shooting with a statically typed language too.
I remember a bug I made in C#, where I wanted to delete a collection of directories recursively. I got mixed up in the various inner loops and ended up iterating over the characters, like you. But C# allows implicit conversion of char to string, so the compiler was OK with it, and since those were network-drive directories (starting with "\\server\"), the first iteration started recursively deleting the directory "\", which on Windows means the root directory of the active drive (C:\)... And SSDs are fast at deleting stuff.
> And another one, I typoed a variable name as groups_keys instead of group_keys (or vice-versa, I don't remember). Instead of just throwing an error, Python happily went along with it, used an uninitialized value, and then all the logic broke.
Python doesn't have uninitialized values, it throws NameError when you try to access a variable that hasn't been set. So I don't see how this could have happened.
Well, this is anything but a new complaint. I would assume a user who has worked in Python for some modest amount of time has made peace with this. One works in Python knowing that this can and will happen (well, one does have linters on steroids like mypy now to counter these).
Python code needs more testing and more run-time type checking of function arguments than a statically typed language. If that's a deal-breaker, then one shouldn't be using Python in the first place. What you gain, though, is some instant gratification and the ability to get something off the ground quickly without spending time placating the type checker. It's great where your workflow involves a lot of prototyping, exploration of the solution space and interactive use (ML comes to mind, but even there int32 vs int64 can byte, correction, bite). I see it as a trade-off: deferring one kind of work (ensuring type safety) in favor of another. Hopefully that deferral is not forever. I like my type safety, but sometimes I want it later.
What I typically do is once I am happy with a module and I do not need the extreme form of dynamism that Python offers (something that's frequently true) I take away that dynamism by compiling those parts with Cython.
> In one case, I was iterating over the contents of what was supposed to be a list, but in some rare circumstances could instead be a string.
The creator of a well-known alternative to Python has a single-letter email address, and regularly receives email generated by Python scripts with this exact bug (which means instead of sending an email to "user", sends an email to "u", "s", "e", and "r"). So I’ve heard.
In my CS program, we learned Python as a convenient way to sketch a program. We also learned C++ for speed and OCaml for those functional feels. A programming language is a tool; Python has some great use cases, mostly focused around ease of programming.
The bugs you describe should both be easy to catch with unit tests. It sounds like the problem is not that you're using Python, it's that your project lacks tests. Sure, you can typo this sort of thing; but it should be apparent within seconds when your tests go red.
(And nowadays, you can also use type hints to give you a warning for this kind of thing, e.g. your IDE/mypy will complain about passing a string where the function signature specified a List.)
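For example, a behaviour test along these lines (pytest assumed; the function is a made-up stand-in for the pattern-matching code in question) would have gone red immediately:

    def match_any(patterns, text):
        return any(p in text for p in patterns)

    def test_single_pattern_is_not_split_into_chars():
        assert match_any(["spam"], "lovely spam")
        # a buggy call site passing "spam" instead of ["spam"] would match
        # on the lone character "s" and make this assertion fail:
        assert not match_any(["spam"], "eggs")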
Serious question: if you are writing unit tests to check types, why not just use a language with a compiler that does that for you? And if you are writing Python with type hints, why not just use a language that uses the types you spent time adding to make your program faster?
Python is great for sharing ideas/concepts, but under some circumstances it seems irresponsible to choose it over other viable options like Go (if you use Python because it's easy) or C# (if you use Python because it's a "safe" enterprise choice) - ecosystem-specific things aside, at least.
As the sibling comment said, I'm not proposing checking types in unit tests, I'm proposing checking that the behaviour is correct.
If there's a code path that passes in a bare string instead of a list, and your logic breaks, then that code path should have a failing test case. However, type hints can provide another opportunity to catch this kind of mismatch before they even get committed.
> under some circumstances it seems irresponsible to choose it over other viable options like Go (if you use Python because it's easy)
This is probably true, but I think people tend to overuse this argument (i.e. use an overly broad set of "some circumstances"). I build fintech apps with Python, for example, and don't find any of these issues to be a problem. In my experience, if you implement sound engineering practices (thorough testing at unit, integration, and system levels; code review; clear domain models; good encapsulation of concerns; etc.), then the sort of errors that bite you are not ones that a type checker would help with. I agree that the worst Python code is probably far more unsound than the worst Go code, but I don't think that's the correct comparison; you should be comparing the Python and Go code that _you_ (or your team) would write.
I think it's easy to be dogmatic about this kind of thing; in practice, most people are substituting personal preference for technical suitability. Sure, there are cases where the performance or correctness characteristics of a particular language make it more suitable than another. But for most software, whatever your team is expert in is the best choice.
The problem was caused because I didn't know that there was a code path that passed in a bare string instead of a list, though. It's hard to write tests for situations you aren't aware of.
Because the unit tests are not to "check types", they are to check that incorrect values (e.g. a string instead of a list of strings) do not occur. They are no different from other kinds of incorrect values, like attempting to cluster an odd number of items into pairs.
> Python happily went along with it, used an uninitialized value
There is no such thing in Python. You should get NameError if a name doesn't refer to any object.
>>> def f():
...     name
...
>>> f()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in f
NameError: name 'name' is not defined
It's not my project; I'm just a collaborator. My experience has been that a very tiny minority of Python code out there is written in this style, so unless you're only starting projects from scratch, you can't benefit from it.
And that'd be fine if everyone were on board with it and that were the general direction of the project, but I don't think that's true.
I've never seen a strict, type-annotated Python project out there in the wild, and I've seen a decent amount of them. A random non-primary-contributor isn't going to have much luck stepping into an established project and getting everyone to go along with adding annotations to the whole thing.
And if I were starting a project from scratch, rather than coercing the language to do something it wasn't really designed for, I'd just use a language that has first-class support for types directly in the compiler, like Java or Go.
Agreed. I really don't understand all these buckets of filth being poured on Python in this thread.
It's the first language I've worked with in my life that just clicked with my brain and doesn't drain me.
I would take a Python job over a Java/C/C++/Go/Rust any day.
There are some languages that could pull me away from Python (Nim, Crystal) but they're nowhere near popular enough to move to wholesale.
> I would take a Python job over a Java/C/C++/Go/Rust any day
it's funny, I feel the exact opposite. I work on a team that maintains a digital catalog, and a lot of what we write is about taking in asset- and metadata files, asynchronously processing them, and then publishing that to a denormalized read-optimized data store. We often joke that we mostly take data from 'over here' and put it 'over there'.
All our stuff is in Java, and honestly, if you use Lombok to squeeze out the boilerplate, and a decent dependency injection framework like Guice or Dagger, modern Java really isn't so bad. Streams are clunky but they get the job done. We use Jackson a lot to serialize/deserialize Java POJOs to JSON and XML, which has been pretty seamless for us so far. The Optional class is again clunky, but it works well enough.
The thing for us, though, is that the problems we spend most time solving are just not really related to the language we write in. The hard problems are much more around things like operations (CI/CD, metrics, alarms, canaries), performance (latency, load, etc.) and just the nuts and bolts of the business logic (what type should this new field be? what values can it take? how do we plumb this through to downstream system X owned by team Y? etc.)
I honestly wouldn't want to have to write this stuff in Python for a simple reason: I don't think I could live without static typing, which is a fantastic tool when you need to manage a large code base written by multiple people over multiple years. I can make a change in some package, do a dry-run compile of every system that uses it, and then see what needs updating. It gives me certain guarantees about data integrity right at compile time, which is super helpful when you're doing data conversion.
But hey, different jobs, different tools. Glad you found something you're happy with.
> I honestly wouldn't want to have to write this stuff in Python for a simple reason: I don't think I could live without static typing, which is a fantastic tool when you need to manage a large code base written by multiple people over multiple years. I can make a change in some package, do a dry-run compile of every system that uses it, and then see what needs updating. It gives me certain guarantees about data integrity right at compile time, which is super helpful when you're doing data conversion.
Programming in the large without type safety is a fool’s errand.
> But hey, different jobs, different tools.
Exactly. There’s a reason your kitchen drawer isn’t full of just sporks.
> Programming in the large without type safety is a fool’s errand.
Lol. Right. No big system has ever been built in an untyped or weakly typed language. Well, except just about every bit of software we all use everyday. But it does seem like some small startups can't get by without it.
>No big system has ever been built in an untyped or weakly typed language. Well, except just about every bit of software we all use everyday. But it does seem like some small startups can't get by without it.
Many have built models of the Eiffel tower with toothpicks too, so?
You can still build things with inadequate tools: inadequate != prohibitive. You just have more problems going forward.
Which is exactly the lesson people who write large scale software have found.
What is this "just about every bit of software we all use everyday" that you wrote about as been written in weak types?
Most major software is still written in C/C++ (anything from operating systems, Photoshop, DAWs, NLEs, UNIX userland, MS and Open Office, databases, webservers, AAA games, what have you). One could use just that C/C++ software, and they'd have almost all bases covered.
The rest is e.g. Electron-based software and online services. For the latter, most of the major ones (e.g. Gmail, Apple's iCloud services, Microsoft's, online banks, online reservations, etc, etc) are not written in "weakly typed languages"; only the client is.
And those that were initially written in a weakly typed language, e.g. Twitter with Ruby on Rails, others with Python, etc, have rewritten critical services (or the entire thing) in statically typed languages (e.g. Twitter went for Java/Scala, others for Go, etc).
And even for the client, most shops are now turning to TypeScript (and FB to Flow) because they've found weak typing is not good enough at large scale. So?
Python is not weakly typed. It is strongly typed in that it forbids operations that are not well-defined (for example, adding a number to a string) rather than silently attempting to make sense of them. I agree wholeheartedly about weakly typed languages, though.
I believe that marketing Python as "strongly typed" has the potential to confuse rather than educate. Python still crashes at runtime with these errors. It has nice error messages, but it still crashes, potentially in production. If you want to create your own "types", you'll have to add your own runtime checks. It's much more sane than JavaScript, but it's not strongly typed like Haskell. Python just doesn't automatically coerce certain built-in runtime values; that's it.
Not automatically coercing values is all that strong typing means. Getting a type error before you run the program is static typing. They're separate axes, and both useful to talk about in a language.
Could you elaborate or point to a resource? AFAIK, the term "strongly typed" is usually used to mean that a value's type cannot change, but I'm failing to find a well-defined definition or a clear comparison against "statically typed".
Static typing means that types are figured out statically by looking at the source code, and type errors are detected then when it notices a mismatch. Dynamic typing means that types are worked out at runtime by looking at live objects when code operating on them executes.
Strong typing means that types cannot be substituted for other types. In C, you can write `int x = "one"` and the char * (the address of "one") is automatically converted to an int; or in JavaScript you can write 1 + "2" and a string "1" is automatically created; depending on who you're talking to, either or both of these qualify as weak typing.
They're both spectrums, and commonly confused with each other.
You're explaining static vs dynamic typing. I'm still failing to see how strong differs from static. If the only difference is that "static" means "types are figured out statically by looking at the source code", do you mean it's possible to change the type, unlike with strong typing? If not, can we say static encapsulates strong?
Static typing is not a superset of strong typing, they're on different axes. Strong vs weak typing (which I explained in the second paragraph) is about how strictly types need to match expected types before you get a type error. Static vs dynamic typing is about when you get a type error (during a static typechecking phase, or at runtime when you try to use a value as that type).
When you say the type cannot change, that's ambiguous: do you mean the type of the value a variable holds, or the type of the value itself? In C (a statically typed language), "int x" means that x will always hold an int, but you can still assign a pointer to it, it just turns into an int (weak typing). In Python (a dynamically typed language), the variable "x" wouldn't have a type (so it could hold an int at one point and a string later), but the value it holds does, and because it's strongly typed, it would throw a type error if you attempted to use it in a place where it wanted a different type (eg, `1 + "2"` does not turn 1 into a string or "2" into an int).
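In Python terms, the two axes look like this (a small runnable sketch):

    x = 1          # the name x has no type of its own...
    x = "one"      # ...so rebinding it to a str is fine (dynamic typing)

    try:
        1 + "2"    # values do have types, and there is no silent coercion
    except TypeError as e:
        print(e)   # unsupported operand type(s) for +: 'int' and 'str' (strong typing)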
If I got this right, you're saying strong can be compared to weak, and static can be compared to dynamic. So there is no such thing as a strong-vs-static comparison.
"Dynamic typing" is really just case analysis at runtime. Every static language is capable of dynamic typing, it's not some feature that statically typed languages lack. A dynamic language is really just a static language with one type.
Because most statically typed languages allow us to define our own types, add type signatures to constrain things, etc. Dependently typed languages also allow types to depend on values. Inference is useful, but it is only one aspect of static typing.
My point is that your marketing is misleading. Use "strong dynamic types" if you must, but for Python, it would be more accurate to say "strongly tagged".
C's typing is so weak it might as well be an untyped language - not even a dynamically typed language. And that's what most of the software you run every day is built on.
Static typing was all the rage 20 years ago. C++ and Java were going to save us from the chaos of C. What people found was that the vast bulk of software defects are not problems that can be detected by static typing.
Static typing just created a constraining, inflexible code base that was no more reliable than C or Smalltalk or Lisp. Once your beautifully conceived collection of types was demolished by the cold hard reality of changing business requirements, the type system actively worked against you.
Python and Ruby and JavaScript started gaining traction, and at first it seemed crazy to use a language that didn't have a static type checker. But after people started using them they realized they just didn't have the kinds of bugs that a static type checker would catch anyway - because those kinds of bugs are caught by the dynamic type checker (something C doesn't have, and C++ only sort-of has) at run time when you write tests. And writing tests also caught all kinds of other logic bugs that didn't have anything to do with types. They were writing software faster and more reliably in dynamically typed languages than they ever could in the old statically typed languages.
Of course no language is a silver bullet, and writing software is still hard. Combine that with the fact that our industry has no sense of history, and a fair number of programmers today have only used dynamically typed languages, and you can see why the static typing fad is coming back around.
It seems intuitive that catching these type errors at compile time rather than run time will make for a more reliable system. But history tells us otherwise. Unless you just don't run your code before pushing it to production, the dynamic type checker will catch them just as well when you run tests. And your types will drift away from the reality of the business requirements, grinding development to a halt.
The static typing fad has a 5 year shelf life. Just enough time for managers to force a new generation of programmers to re-write all their code in typescript or whatever and learn it is just as unreliable, and much harder to work with.
(Sound) Type systems guarantee correctness for the invariants encoded as types. If it compiles, you know it doesn't have any type related errors at all. With more evolved type systems even your program's logic (or large parts of it) is guaranteed.
Tests just allow you to test random invariants about your program. If it compiles and your add() method works when passed 2, 2 and gives 4, it still might not work for 5, 5... (contrived example: imagine it with much more complex functions, though even a simple e.g. "one line" time conversion can have similar issues).
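To make the contrived example concrete (a deliberately buggy illustration, not code from anyone's real project):

    def add(a, b):
        # intentionally wrong above a threshold, to show what an
        # example-based test can miss
        return a + b if a < 5 else a + b + 1

    assert add(2, 2) == 4  # the test suite is green...
    print(add(5, 5))       # ...yet this prints 11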
You need to test anyway. So, is it the case that type systems provide much value beyond what a proper set of tests, which are necessary, are going to provide anyway?
If you skimp on testing your system will be crap, but at least the type system can fool you into thinking otherwise because it still compiles.
Actually, if your type system is powerful enough, you don't need to test. That's the source of the "if it compiles, 99% of the time it works right" people mention about Haskell (and even more so languages like Idris etc).
Type systems are tests -- just formal and compiler-enforced, not ad-hoc "whatever I felt like testing" tests, like unit tests are.
From there on it's up to the power of the type system. But even a simple type system like Java's makes whole classes of tests irrelevant and automatically checked.
A programmer can also leverage a simpler type system to enforce invariants in hand crafted types -- e.g. your "executeSQL" function could be made to only accept a "SafeString" type, not a "string" type, and the SafeString type could be made to only be constructed by a method that properly escapes SQL strings and params. Or the same way an Optional type ensures no null dereferences.
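A hedged Python sketch of that idea (SafeString, escape, and execute_sql are illustrative names; Python can't truly forbid direct construction, but a static checker enforces the boundary):

    class SafeString:
        """A wrapper whose instances are meant to come only from escape()."""
        def __init__(self, value: str) -> None:
            self._value = value

        def __str__(self) -> str:
            return self._value

    def escape(raw: str) -> SafeString:
        return SafeString(raw.replace("'", "''"))  # stand-in for real escaping

    def execute_sql(query: SafeString) -> None:
        print("executing:", query)

    execute_sql(escape("O'Brien"))  # fine
    execute_sql("O'Brien")          # mypy flags this: "str" is not "SafeString"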
> Actually, if your type system is powerful enough, you don't need to test. That's the source of the "if it compiles, 99% of the time it works right" people mention about Haskell (and even more so languages like Idris etc).
Types only eliminate certain tests. You will always have system tests, acceptance tests and unit tests.
One should use types to augment their system reliability.
Haskell's type system most definitely does catch some of your logical errors. That's exactly why it is so revered.
An effective use of a type system such as Haskell's Hindley-Milner can result in a vastly smaller surface area for possible problems and thus can cut a big number of otherwise mandatory unit tests off your todo list.
>Types only eliminate certain tests. You will always have system tests, acceptance tests and unit tests.
Yes, so let's eliminate them with types, instead of doing them. "Acceptance tests" are not concerned with programming.
>Types will not catch logical errors in your code.
Actually, depending on the type system, it will.
That's how program logic is verified as a "proof", and how implementations of algorithms are determined to be logically correct in more exotic languages (but even in C, with some restrictions and the right static checking tooling, NASA/JPL-style projects do that).
The question is not whether a type system will catch bugs. The question is whether a type system finds enough bugs that tests (sufficient to cover the things that the type system does not catch) would not also catch.
If you have to point to something like Idris I don't think you're making a real world argument yet.
Both static type systems and unit testing are just tools which are supposed to help programmers to deliver higher quality software.
Both static type systems and unit testing have their disadvantages. For static type systems, you sometimes need to bend backward to make it accept your code and it's not very useful before the code grows large enough. For unit tests, even if you have 100% test coverage, it doesn't mean that you're safe - underlying libraries may behave in unexpected ways and the test data input won't ever cover the whole range of values that the code expects to work. Integration tests have the same problem, the prepared input represents just a few cases, plus they are generally harder to run so they are run less frequently.
So, both tools are useful but they aren't solutions for all the problems in programming. Static type systems have the advantage of being checked without running any code, which should be much quicker than running the tests. Static type systems become more useful as you increase the precision of types and the amount of annotated code in the project. When used correctly, they provide certain guarantees about the code which you can rely on and they are used to restrict the domain (set of possible inputs) of type-checked procedures and classes. This means that you can write fewer unit tests because you don't have to worry about certain conditions which the type system guards against (static guarantee of something never being `null` is quite nice).
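For instance, that null guarantee looks like this under mypy (a hedged sketch; find_user is a made-up function):

    from typing import Optional

    def find_user(uid: int) -> Optional[str]:
        return "alice" if uid == 1 else None

    name = find_user(2)
    # print(name.upper())  # mypy: Item "None" of "Optional[str]" has no attribute "upper"
    if name is not None:
        print(name.upper())  # fine: the None branch is statically ruled out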
Anyway, I think that both static type systems and tests are great tools and they can and should be used together if you value the quality of the code you write. This is getting easier thanks to gradual type systems (optional type annotations like in Python or JS) which allow you to get some of the static guarantees without insisting on everything around being typed. With tests and mypy (in Python) you're much better off in terms of code quality than if you used just one of them. I see no reason not to use them both.
> For static type systems, you sometimes need to bend backward to make it accept your code and it's not very useful before the code grows large enough.
How large does a program need to become before the advantage of being allowed to write fishy code is counter-balanced by the (implicit) types becoming intractable and the code impossible to refactor in any meaningful way?
This is a serious question. Some years ago, apparently Guido van Rossum thought 200 lines would already be quite an achievement [0].
Based on my own experience, I feel that 99 out of 100 errors thrown at me at compile time are valid and would have caused a crash at runtime (i.e. when I do not expect it and have lost all the context of the code change). And I get about 50 such compilation errors in a day of work, so I guess I could write without the compiler safety net for about 10 minutes. That's my limit.
One could object that a 10-minute program written in Python can accomplish much more than a 10-minute program written in Java. That's certainly true! But then we are no longer comparing the merits of compile-time vs runtime type checking, but two completely different languages. Of course it is easier to write a powerful/abstract language with runtime type checks, while writing a compiler for a powerful language is much harder. Still, since (and even before) Python/Perl/PHP were invented, many powerful compiled languages have appeared thanks to PL research that are almost as expressive as scripting languages. So it would be unfair to equate runtime type checking with a lack of expressive power.
Now of course tests are important too. Compile-time type checking does not contradict testing, as you somewhat made it sound in your message. Actually, if anything, it helps with testing (because of test case generators that use type knowledge to exercise corner cases).
I'm sorry if all this sounds condescending. I am yet to decide whether I should allow myself to sound condescending as the only benefit of age :)
But I'd not want to sound like I'm upset against anyone. Actually, I'm happy people have been using script languages since the 90s, for the same reason I have been happy that many smart people used Windows: my taste for independence gave me by chance a head start that I'm afraid would have been much tougher to get based on my intelligence alone.
And now that static type checking is fashionable again I'm both relieved and worried.
> Some years ago, apparently Guido van Rossum thought 200 lines
I think it's better to measure the number of separate code entities (classes, functions, and modules in Python) and how many different use-cases (ways of calling functions and object constructors) each entity is expected to cover... Converting to LOC, I'd say ~500 would be the limit. After that, it's a constant fight with TypeErrors, NameErrors, and AttributeErrors - it's just that everyone is already used to this, while not many know of any alternatives. Also, there are substantial differences between languages - in some, 10 lines are enough to start complaining, while in others I've seen and worked with ~2k LOC and it was manageable.
> many powerful compiled languages have appeared thanks to PL research, that are almost as expressive as script languages.
Yes, but on the other hand, some powerful static type systems for dynamic languages have also appeared, and some of them are close to Haskell in terms of expressivity. The particular example here would be Typed Racket, which has a state-of-the-art type system built on top of untyped Racket. It supports incrementally moving your untyped code to typed code (whether a module is statically or dynamically typed is decided when the module is created; as you can define many (sub)modules in a single file, you can just create a typed submodule, re-export everything that's inside, and move your code there one procedure at a time). Also, it automatically adds contracts based on static types, so that they still provide some guarantees when a typed function is imported and used in untyped code. There are many interesting papers on this, and Typed Racket is really worth looking into if you have nothing against Lisps.
> Compile time type checking does not contradict testing, like you made it sound somewhat in your message.
Damn! I actually wanted to argue exactly this: that both tools are useful and both can be used together to cover their respective weaknesses. :) Looks like I need to work harder on my writing skills...
> I'm sorry if all this sounds condescending. I am yet to decide whether I should allow myself to sound condescending as the only benefit of age :)
Well, it didn't sound condescending to me, so no prob :) But, if you'd like some advice on this: please don't try to be condescending on the basis of age alone! It's totally OK to sound condescending if you have the knowledge, experience, and skill to back it up... Well, at least in my book :)