I have become increasingly convinced Python as a language is a "trap" for any use of notable scale, whether that scale is number of developers, codebase size, or performance requirements. It's a great 0->1 language and great at simple glue, but eventually you hit a wall and have to keep investing larger and larger amounts of people or computing resources to get continued returns... all due to fundamental design decisions in the language from two decades ago that are quite difficult to fix.
Unless you go all-in on typing -- which is difficult today with any meaningfully sized existing codebase -- maintenance largely becomes "hope manual testing and unit tests catch anything resembling type errors," which is a major challenge for, say, structural change to a codebase like refactoring. Plus typing is still young, the tooling somewhat immature, and it can lead to a false sense of security if you aren't very careful and opt into the strictest modes. This makes large numbers of developers and a large codebase a major stumbling block.
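To make the false-sense-of-security point concrete, here's a tiny invented sketch (function names are made up): any call into un-annotated code comes back as Any, and the default settings of a checker like mypy won't complain, so the checker quietly stops protecting you at exactly the boundaries a big refactor touches. Only the strict flags (disallow_untyped_defs, disallow_untyped_calls, etc.) surface it.

```python
# Hypothetical example: the typed "frontier" silently erodes at untyped code.
def legacy_parse(raw):              # no annotations: mypy sees the result as Any
    return raw.split(",")

def count_fields(line: str) -> int:
    fields = legacy_parse(line)     # Any -- a typo like fields.cuont() would
    return len(fields)              # sail through default mypy; --strict at
                                    # least flags the call into untyped code
```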
The interpreter performance and GIL are fundamental issues as well. Multiprocessing and hacks around the GIL are quite painful if you even glance at any native threading code (say in a C++ library) and even when you stick to pure Python, you have a debugging mess when anything goes wrong.
But if someone can improve performance, that'd be great, and it has massive impact potential. It is incremental though and doesn't solve the fundamental issues with the language. I'm also skeptical of the "5x" plan referenced in the blog, and very skeptical we will ever see meaningful removal of the GIL due to library dependencies and baked-in design decisions in existing code. This means performance will fall further and further behind compiled languages.
(I write all of this having been part of supporting Python at massive scale for over two decades, including at two FAANG companies that invest heavily in it. I've seen the curve and the pain it's caused, and would never use it for any code that needs to be performant or actively developed on the multi-month or year timescale.)
A trap for your dream world where you suddenly get Google-sized?
Because I have a 1-million-unique-user video streaming service still running Python 2.7 on a few servers. The thing has a mobile version serving different media on the fly, encodes user-uploaded videos, features comments, tagging, and even has machine learning content detection now.
It is still maintained by one single person, and he is not a professional dev.
And I'd say it's already very rare to reach that size in any project, in any company. Hell, my last 3 paid projects as a freelancer have fewer than 20 concurrent users. And that's professional. That's people's real life.
2 out of 3 IT projects fail; no code is even shipped. I would worry about the scaling later. Once you do have the problem, pay the price for more hardware or a rewrite -- you've made it!
And if you do know for sure you will have the problem (you already work for a GAFAM), then not choosing python is ok, it's not a trap, it's a trade off.
But for the vast majority of us out there, having an easy, solid, battle-tested and clean language with a huge ecosystem is worth 10,000 times more than some scaling potential 5 years down the road.
I have seen so many big C++ or Java projects fail that I don't associate any language with big and failure anyway. I've seen devs arguing about the purity of this in Haskell, or that in OCaml, and nothing gets to the end user because it's never perfect, or it's a toy, and doesn't handle real life.
But what powers most companies? Rails apps that you won't ever upgrade, but are still running. Spaghetti PHP code and WordPress that just won't die. Horrible SAP scripts, VBA macros and Excel sheets that actually deal with your real data.
I haven't worked at Google's scale, but I've heard a lot of criticism about tools and programming languages from people who create architectures that are inherently slow. When a user login requires 8 HTTP calls to distinct microservices or something else silly like that, the problem isn't a lack of static types. So far in my career a programming language has never been the reason why something can't scale; most often the problem is the initial developers lacking an understanding of relational databases and cache locality.
It's a variation on cargo cult; you WISH you had the scaling problems that could be solved by using something a bit more performant.
But solve the problem first, and solve it fast. It's why so many startups from the 2010s on used Ruby; they were productive in it, solving a real problem, rolling out features fast.
I've worked on a few projects where they dove into the cargo cult, wishing they had the kind of requests and load that e.g. a microservices architecture might help with. Massively overpriced projects too, because they hired lots of consultants and self-employed people.
The one time there was a guy who rocked up in a Maserati (used, lol); he took two weeks to build a page that was just a centered bit of text and a button, and it didn't even work.
Also, a lot of developers forget this, but even if you start with a fast language, down the line you will end up rewriting it. The problem is that requirements change a lot, and because of that your application will often end up being a mess. For similar reasons an RDBMS is also better than NoSQL when starting a project.
When rewriting you already know how the project is now, so you can design your app around that, which makes things simpler and faster.
> Because I have a 1-million-unique-user video streaming service still running Python 2.7 on a few servers.
Congratulations.
•How many servers: 2, 5, 10?
•How many pure python libraries did you have to replace with a Cython extension after you realized that it cannot support your scale?
•How many external dependencies? How many of them did you replace after they were abandoned? Surely not all actively developed libraries still support Python 2.7.
> It is still maintained by one single person, and he is not a professional dev.
So he cannot upgrade the project to Python 3.x along with all matching dependencies either, so the project stays with all its bugs and vulnerabilities.
Not everyone would be happy with such a setup and the aforementioned drawbacks; that's why I think the parent comment says Python is a trap at scale, and I concur. I've faced issues at scale (a few hundred thousand users) and there's no way a Python project can meet that scale without relying upon Cython dependencies.
> A trap for your dream world where you suddenly get Google-sized?
So only Google size is a benchmark for scale? If I were to start a web project which has just 10 concurrent users I would choose Go because I know I can develop the entire project with little to no dependencies, it can scale to N users without touching the code and with the confidence that the number of servers required will be fewer than with Python.
I still use Python every day, where it's good, i.e. small scripts for personal use, prototyping algorithms. For professional products I use Python only as a machine learning endpoint (with all the bandages) because I have no other choice.
But the question is rather: what's the cost/benefit ratio? The 7 servers cost 7 times less money than they bring in. The solo dev makes $150k/year with it, doesn't need to work full time, has flexible hours and his own business.
> How many pure python libraries did you have to replace with a Cython extension after you realized that it cannot support your scale?
Many. Being extensible with compiled extensions is a feature of Python. NumPy and co are part of the ecosystem. That's an awesome thing about it: I can code in Python and get C speed anyway.
> •How many external dependencies? How many of them did you replace after they were abandoned? Surely not all actively developed libraries still support Python 2.7.
Many. And the project still runs, with so few resources, brings the money in and serves its purpose, after 10 years of service.
> So he cannot upgrade the project to Python 3.x along with all matching dependencies either, so the project stays with all its bugs and vulnerabilities.
Sure he can, it's just an expense he won't take on. Bugs are worked around, vulnerabilities are shielded by other parts of the system. Essential things like injections are dealt with. Some trade-offs are made, like always.
> So only Google size is a benchmark for scale? If I were to start a web project which has just 10 concurrent users I would choose Go because I know I can develop the entire project with little to no dependencies,
No you can't. Not if you want the first version to come out in a few months. Because you need a scraper, and a database, and registration, and auth, and permissions, and screenshotting, and upload, and video encoding, with background tasks, and load balancing for all the videos and pics, tagging, i18n, XSS protection, and search, etc. So either you pay to use cloud services to do all those things for you, or you leverage a rich ecosystem. Or you take a year to do it. Even if Go were the best language in the world, it can't beat 10 years of Django plugins, 25 years of Python modules and the iterative speed of a dynamic syntax. Well, it can, with AWS :)
In fact, Go is a very niche language. If you take it out of the I/O and concurrency specialty, it's quite average. Rust is better at raw performance. At the same performance, Java has a better ecosystem. For simplicity and easy compilation, you have Nim and Zig, which are more expressive. Resilience? Erlang. And for speed of dev, Python is better.
So for our use case it is not where it would shine, because:
> it can scale to N users without touching the code and with the confidence that the number of servers required will be fewer than with Python.
Only 1 server runs the Python code; the others are for the video hosting and the DB. Python is not the bottleneck in this site. It almost never is in web dev. nginx serves the content, Postgres and Redis deal with the data.
There is still no "rails" or "django" for JS. There are frameworks emerging with the same features, but they don't have anything close to the ecosystem. Meaning every time you want auth, social integration, etc. you have to reinvent the wheel.
There is also the stdlib. JS doesn't have simple things built in like left-trimming a string, CSV loading, UUID generation, etc.
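For reference, the kind of small stuff I mean, all stdlib (a throwaway snippet, nothing project-specific):

```python
import csv
import io
import uuid

"  padded".lstrip()                        # left-trim a string, built in

data = io.StringIO("name,age\nAda,36\n")   # stand-in for a real CSV file
rows = list(csv.DictReader(data))          # CSV parsing with headers, stdlib

token = uuid.uuid4()                       # UUID generation, stdlib
```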
Then there is the API. Sorting in JS requires this weird -1/1 comparator function. Getting the last element, or checking truthiness, means going through length.
Suddenly not only do you have to install tons of things, but libs and frameworks can't rely on a common ground, and nothing seems to integrate well. You glue a lot of things together manually.
The popularity of JS comes from its monopoly on the browser: a lot of people must use it, so a lot of users know it by default and look for it elsewhere.
But when you have Ruby, Python, or even modern PHP frameworks in front of it, it's not a great deal.
Of course, not all programming tasks are about the Web. And this is where JS collapses completely. Sysadmin, data analysis, pen testing or automation using JS is a pain compared to the fluent versatility of alternatives. You know, things like connect to this ftp, download the excel files every hour, put the average of every column in the mysql db then export the thing as a PDF and send that to the boss.
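To give an idea, that whole boss-report job is about a page of Python if you lean on the ecosystem. A rough sketch, with made-up hostnames/credentials and the PDF/email steps left as stubs, assuming pandas and SQLAlchemy are available:

```python
import ftplib
import io

import pandas as pd          # assumed available
import sqlalchemy            # assumed available

def run_once():
    # 1. Grab the Excel files from the FTP server (server details are invented)
    frames = []
    with ftplib.FTP("ftp.example.com", "user", "password") as ftp:
        for name in ftp.nlst("/reports"):
            if name.endswith(".xlsx"):
                buf = io.BytesIO()
                ftp.retrbinary(f"RETR {name}", buf.write)
                buf.seek(0)
                frames.append(pd.read_excel(buf))

    # 2. Average every numeric column across all files
    averages = pd.concat(frames).mean(numeric_only=True)

    # 3. Store the averages in MySQL (connection string is a placeholder)
    engine = sqlalchemy.create_engine("mysql+pymysql://user:pw@db/reports")
    averages.to_frame("average").to_sql("column_averages", engine,
                                        if_exists="append")

    # 4. PDF export and the email to the boss would go here
    #    (e.g. reportlab/weasyprint + smtplib); a cron entry runs it hourly.
```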
Depends. Mainly on what libraries you can make use of. For anything numerical, or ETL processes that deal with various document formats, Python is way more mature.
Since you don't know if your product will work out and you want to go fast, you use a third-party service first. Of course, for some types of stuff you might have to go with Python, but if both can equally do the job well in the backend and you need a web[mobile] frontend the...
Many (most?) web projects allow hiding performance issues by scaling horizontally, caching and offloading work to native implementations where necessary (like video encoding). Python can work great in such circumstances.
The best testament to this is the reality that popular web services were able to limp along on a slow language for a long time before they had to rewrite their code.
But if one is doing something that requires raw power or precise control over resources, or precise timing then not choosing Python is not merely a tradeoff to be considered, but an obvious architecture decision.
>It's a great 0->1 language and great at simple glue, but eventually you hit a wall and have to keep investing larger and larger amounts of people or computing resources to get continued returns...
If you're a millionaire by that time, or have a strong established business, then it doesn't matter, it's a nice problem to have. Have your engineers have a go at it.
See: Dropbox, Disqus, AirBnB, and others...
And if you're not, then going from 0->100 in 2 years instead of 0->1 in a few months with Python wasn't worth it anyway (and you also don't have enough customers/users to make use of those 100x improvement).
A third possibility is you’re a team in a larger company, and then it could matter a lot.
I wrote a few things in what I’m pretty sure was correct, idiomatic, best-practices Python 3 and almost immediately I could see some familiar problems from my Perl days looming on the horizon, especially around testing, packaging and maintainability.
I enjoyed writing Python code and for the right kind of startup or hobby project I might use it in the future, but there is so much black-boxed weirdness in the ecosystem, so much inconsistency in paradigms, and so much temptation to “cheat” that I’d definitely think twice before committing a team to it. And I didn’t even get as far as the performance issues.
On the other hand, of course, you can opportunistically “move fast and break things” within a larger org too. But I worry that if you get too committed to Python, you will be unreasonably dependent on your lead engineers sticking around.
Also, pretty sure Python was called out as one of the main reasons why some hundreds of Google Video engineers couldn't keep up with YouTube prior to Google's purchase.
Python might not be the most performant, but once you need the scale, it's a good problem to have because at that point, your product has already made it.
Sadly the page isn't available on Google Books anymore[0], you might still be able to get the book itself. There also might be a Hacker News thread on it too or it'll be on Reddit. It's one of those places I'll have seen it on.
I work as a consultant for companies with billions in revenue and from what I see it definitely matters. Re-writes simply don't happen that often. Maybe for the pure venture capital SV tech companies you mentioned, but many companies are not in that bucket.
Making good engineering choices from the beginning matters a lot.
Rewrites may not happen that often, but plenty of large-scale corporate tech projects flail for years without ever reaching a functional state.
If the choice is between a sub-optimally performant but working system, or a boondoggle that never gets off the ground, the “better engineering choice” is probably the former.
> I have become increasingly convinced Python as a language is a "trap" for any use of notable scale
At what point do you embrace the trap and happily live with it?
For example Dropbox runs many millions of lines of Python, has massive traffic and their server costs aren't completely out of line for the service they offer. Their code base is also over 10 years old.
Sure, Rust too for some of the desktop app components. But the takeaway is you only need to do that when you're operating at mega scale and actually hit these bottlenecks.
In a ton of cases you'll be completely fine serving a few million monthly page views of your SAAS app on a single $20-40 a month server using Flask, Django or whatever Python web framework you prefer. Performance will be really good too. Just talking about most apps where it's mainly reading and writing data to a DB.
I write all my backend code in Go (I used Python before Go existed) because I can deploy a Go executable on the cheapest render.com server and it runs using a fraction of the available resources (~50 MB out of 512 MB available and literally 0.01% CPU use).
If I used Python (or Ruby, node or even Java) I would likely have to go higher (and pay more) because those languages are not only slow, they are also memory hogs.
I'm as productive in Go as I used to be in Python, I see no reason not to use Go.
Microscopic cost optimization is frequently not a problem, but it's great if you can get it for low effort with Go. It's highly contextual whether it'll matter that much in both the short and long term. Your situation as a solo dev is special and can't be easily extrapolated to efforts involving many people. The things that make solo dev great are hindrances to collaborative work efforts (by design, in many cases).
As a frame of reference, ASP.NET services in C# can exceed the throughput and performance of an equivalent Django app, but Django (and Python) may be the best choice for some people. What's not considered here is the experience of the developer and whether they can avail of the platform capabilities fast. Do they need to rewrite libraries? How much work is needed to be as proficient as you are with Go? The language and platform choice is rarely a differentiator here.
Over the long-term you can see some patterns. Python companies are able to scale up their teams more quickly and cheaply in some markets than Go ones. You choose to optimize hosting cost while they optimize other costs. They may need to avail of libs like Pandas and these are readily available. I don't know what the situation is with Go today, but similar libs weren't available originally and the Go community didn't have a big foothold in data science.
That you can incrementally build performance-critical parts of your system in other languages is a huge success story for Python. I don't understand in what world you can view that as a negative.
If you were NOT able to do that, that would have actually been a reason to not build your app in Python.
Sure, but that is the feature that enables people to build apps in slow, interpreted languages like Python in the first place. The parent comment had somehow made it sound like that was a negative.
I did not mean "huge win" as in it is something unique to Python. I meant it in the sense that it greatly adds to the value proposition of the language.
Ok, let's admit it, writing bad Python code is very easy, as it is in every dynamically typed language, like JavaScript...
... but I think you're forgetting the real motivations that drive people to use Python: it's easy to learn, it's very readable, there's a large (and not so much toxic) community behind, it's versatile.
No one is saying that it will replace every other language, but it still can find its space even among large projects.
Python already does so much for you, and provides so much tooling to do even more, that it's hard to write bad code if your daily job is to be a coder.
The iterator protocol means you rarely get off-by-one errors, context managers and GC deal with resource handling, and the GIL, for all its faults, prevents a lot of concurrency errors...
Now, add a good IDE, and syntax errors, name errors and attribute errors go away. Play a little in the REPL, and use cases become more obvious, behaviors more clear.
Want more security? You can add type hints and unit tests (which are incredibly easy to write, thanks to pytest).
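For example, something this small is already typed and tested (a toy sketch; parse_price is invented):

```python
import pytest

def parse_price(raw: str) -> float:
    """Turn a string like '$12.50' into a float."""
    return float(raw.strip().lstrip("$"))

def test_parse_price():
    assert parse_price("$12.50") == 12.5

def test_parse_price_rejects_garbage():
    with pytest.raises(ValueError):
        parse_price("not a price")
```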
Even without all that, the size of the ecosystem means you won't write most of the code anyway, and less code means less potential for mistakes. Add to that the language's bare-bones syntax, and you really have something that actively helps with writing good code.
Not to say you can't write bad code in Python, but unless you're not a professional dev (in which case you can write bad code in any language anyway), you have a lot to help you there.
I think the "dynamic" jab is a bit of a trap, itself. It is easy to write bad code in most any language. Anyone that tells you otherwise is trying to sell you something.
Readable varies by person. Myself, I find the decision to use significant whitespace makes reading Python like reading English with no punctuation or capitalisation.
Do you have the opposite experience? A language used by FAANG that doesn’t have performance issues at their scale. They seem to invest a lot in JavaScript which is not what I’d call a performant language and as you’ve said Python.
I've used Python for more scripting-type tasks and creating basic APIs to get specific jobs done. I'm very wary of going "all in" on Python for an app, based on all the feedback about performance.
All that being said I’ve never heard someone say we used X language and it scaled remarkably without issues. I’ve noticed lots of companies eventually go to Java once they get big but nobody really likes to brag about Java apps.
> They seem to invest a lot in JavaScript which is not what I’d call a performant language
JavaScript (V8 at least) is extremely performant, near native code performance. Considering how dynamic it is, it's not an easy feat but Google, Apple and Mozilla work a lot on it.
PyPy in general works better, sometimes equal, sometimes slower, and sometimes doesn't work at all (library support is not 100%).
The problem with JavaScript is mainly that there's no type checking at compile time, so you make a lot more mistakes than you would with a compiled language. That's why a lot of big projects nowadays use TypeScript and enforce types.
Those are some pretty nice synthetic benchmarks! However, if you'd like a look at a more real world scenario or two, have a look at the TechEmpower benchmarks as well.
Do note that for most of the "realistic" stacks out there, you'd probably want to filter out all of the micro or no framework approaches, to compare something like Express and Koa against Django and Flask, like in the following link: https://www.techempower.com/benchmarks/#section=data-r20&hw=...
In this context I believe that if a benchmark does exactly what your software would do in a production environment (shuffling JSON around and accessing a DB), then such a benchmark is probably a rough indicator of how well any stack would work, as long as the source code is also written in an idiomatic fashion.
Of course, if you can, take benchmarks with a grain of salt and ideally do some prototyping and load testing.
The controller code, the model code and even the repository code are all very reflective of the primitives you'd find in production code.
The only differences would be when you add additional complexity, such as validations and other code into the mix, such as additional service calls etc., but if even the simpler stuff in Python (as an example) would be noticeably slower, then it stands to reason that it wouldn't get magically faster on the basis of less overhead alone.
The exceptions to this which I could imagine would be calls to native code, since the aforementioned Python has amazing number crunching and machine learning support, but that is hardly relevant to web development in most cases.
In such a case, we'd adamantly claim that these benchmarks are simply not good enough even for educated guesses and we'd have to build out our full system to benchmark it, which no one actually has the resources for - in practice you'd develop a system and then would have to do a big rewrite, as was the case with PHP and Facebook, as well as many other companies.
On the other hand, premature optimization and dogmatic beliefs are the opposite end of the spectrum: you don't need to write everything in Rust if using Ruby would get your small application out the door in a more reasonable amount of time.
There are no easy answers, everything depends on the context (e.g. small web dev shop vs an enterprise platform to be used by millions) and the concerns (e.g. resource usage and server budget vs being the first to market, as well as developer knowledge of tech stacks, for example market conditions of PHP vs Rust).
Regardless, in my eyes, there's definitely a lot of value in these attempts to compare idiomatic interpretations of common logic for web development primitives (serving network requests in a known set of formats).
(Incidentally, thank you for the pleasant conversation.)
> … which no one actually has the resources for…
That webpage references "JSMeter: Characterizing Real-World Behavior of JavaScript Programs".
IIRC they instrumented a web browser to collect "real-world" data and demonstrated that it wasn't like the behavior of benchmark code.
That webpage references "A comparison of three programming languages for a full-fledged next-generation sequencing tool" — "reimplemented … in all three languages and benchmarked their runtime performance and memory use."
That's in-practice.
> … you don't need to write everything in Rust if using Ruby…
Performance doesn't matter until it matters.
As-in the home page quote and reference — "It's important to be realistic: most people don't care about program performance most of the time."
I suspect that site is some kind of in-joke. E.g. read the regex-redux example in say c#, now look at the go version, now the python version…
It’s not really what you’d expect for a site called computer language benchmarks game. Each example just calls out to a very fast c library (pcre2) to perform the heavy lifting regardless of which language is being “benchmarked”.
"This module exports a set of functions implemented in C corresponding\n\
to the intrinsic operators of Python. For example, operator.add(x, y)\n\
is equivalent to the expression x+y."
Why wouldn't you expect that CPython regex would "ultimately" be written in C?
But everyone knows it is significantly faster than Python. Probably at least a few times for pure JS vs. pure Python programs. Not that it matters, Python is basically glue to run mostly C and C++ programs.
Because it's obviously not the idiomatic way to use Python. No one is going to bother writing a program that way in Python, they'd just write it in C or C++ then link it.
Everything has performance issues but Python is at least one, if not two, orders of magnitude off of compiled languages like C++, Rust, and Go (despite Go having a garbage collector). They have various usability tradeoffs but once you get things working, at least you're not spending 10x as many cycles to do the same work as another language would need.
(Java probably is also fine; I've less personal experience there so I can't say).
> They seem to invest a lot in JavaScript which is not what I’d call a performant language
A lot of them have huge amounts of code that runs in browsers, so they essentially have to invest in JS whether they like it or not.
> All that being said I’ve never heard someone say we used X language and it scaled remarkably without issues.
Sometimes you have to go by what people don't say. Humans tend to talk most about problems or surprises. The former are actionable and the latter are more interesting to talk about because of their rarity. That means when things work well as expected, you get silence.
I remember when people constantly talked about how Java was too slow. Then there was a period of time where people talked a lot about how Java was fast enough because that was an interesting change. Now people don't talk about Java performance much at all, which is a good sign (but easy to overlook), that Java performance is now consistently reliably good for most users.
As someone with nearly two decades of experience with Python and who has deployed it in production in high impact scenarios, I’d like to nuance the above for people who think Python is only good for prototyping and that you have to rewrite in a “proper” programming language. This is the kind of blanket thinking to avoid.
I think the rewriting part is only necessary if there’s some characteristic in your use case that you need that Python doesn’t provide — like raw computational speed, low latencies etc. If you’re a senior engineer familiar with Python, these trade offs are always front of mind.
For the majority of software I’ve developed, these characteristics were not essential. Python was a good choice for 80-90% of all of the important software products I’ve ever written.
For many data science projects, maintaining large code bases in Python is often the optimal decision (rather than reinventing the wheel and writing your own data frame and machine learning libraries, or using immature poorly maintained ones in other languages — for data manipulation and scientific algorithms, these libraries are highly optimized in Python anyway since the underlying code is in C or Fortran). Python is also a good choice for data engineering pipelines.
Where I might hesitate to recommend Python is when you have to write web services or desktop apps or any kind of application which requires speed or scale that cannot be handed down to a lower level library in Python.
Also for very large code bases, static typing truly helps to keep things sane, especially when you have different teams working on different parts of the codebase — with statically typed languages, no type checks in unit tests are needed, and refactoring is much more solid and error free. Python’s type annotations are an attempt to move in this direction, but static languages truly excel at type integrity (which are sometimes the cause of subtle errors in Python).
Otherwise my experience has often been that Python is a good first or second choice, depending on what you’re doing. Making that choice correctly is what sets senior/principal engineers apart from junior folks.
> Python’s type annotations are an attempt to move in this direction, but static languages truly excel at type integrity (which are sometimes the cause of subtle errors in Python).
I wouldn't call it an attempt. I mean, sure it has no chance against, for example, Rust, but Python's type system is actually quite decent and I think it is more powerful than Go's. Also you have the freedom of using both nominal and structural typing if you choose.
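A quick sketch of what I mean by having both (names are invented; Protocol gives structural typing, an ABC gives nominal):

```python
from abc import ABC, abstractmethod
from typing import Protocol

class SupportsClose(Protocol):       # structural: anything with .close() matches
    def close(self) -> None: ...

class Resource(ABC):                 # nominal: you must inherit to match
    @abstractmethod
    def close(self) -> None: ...

class TempScratchFile:               # never mentions SupportsClose...
    def close(self) -> None:
        print("closed")

def shutdown(thing: SupportsClose) -> None:
    thing.close()

shutdown(TempScratchFile())          # ...yet this type-checks structurally
```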
The only problem I have with it is when I have to use a package that doesn't have types, but fortunately that is happening less and less.
Some package authors refuse to add types, but others provide stubs to solve that problem. For example boto3-stubs provides types for boto3.
I really like that thanks to types I can also easily refactor code without worrying about breaking something in the process. I suspect if types had existed and been popular with Python 2, then the whole Python 2 -> Python 3 migration would have been a non-issue.
> For many data science projects, maintaining large code bases in Python is often the optimal decision (rather than reinventing the wheel and writing your own data frame and machine learning libraries, or using immature poorly maintained ones in other languages — for data manipulation and scientific algorithms, these libraries are highly optimized in Python anyway since the underlying code is in C or Fortran)
Or you could use R, instead of the half-baked immature clones in Python :) I'm only slightly joking here, even if Python is much much better for string processing, it's much less useful for DS tasks (unless it's NLP or DL, to be fair).
That being said, Python is the second best DS language, as well as second best for everything else, so it can be a good choice for these projects.
We started a project using typing annotations for everything, and it's really a pleasure to use. It's like the type safety of C++ without the insane language complexity; the "advanced" parts of the language usually make intuitive sense.
For our use case of deep learning, REST APIs, and image processing (often a mix of these), Python seems like the best choice we have. The GIL isn't a problem at all because all our computations are done with NumPy or PyTorch, which release the GIL during most of their operations.
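To illustrate (sizes and thread counts are arbitrary, and the actual speedup depends on the BLAS build): because big array operations drop the GIL while they run in C, plain threads can already give parallelism for this kind of workload, no multiprocessing needed.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def work(_):
    a = np.random.rand(1500, 1500)
    b = np.random.rand(1500, 1500)
    return (a @ b).sum()            # matmul runs in C with the GIL released

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(8)))
print(len(results))
```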
Adding types after-the-fact can certainly be painful, but at least it's not an all-or-nothing choice, we can opt in to types for selected parts of a codebase.
Personally I've got a lot of mileage out of Hypothesis for property-based testing. It's good at exercising edge-cases, and works particularly well when we sprinkle assertions through a codebase (where the "property" we're testing is simply "calling Foo doesn't throw an exception").
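A minimal version of that pattern, with an invented slugify standing in for real code:

```python
from hypothesis import given, strategies as st

def slugify(title: str) -> str:
    assert isinstance(title, str)            # the kind of sprinkled assertion I mean
    return "-".join(title.lower().split())

@given(st.text())                            # Hypothesis feeds in arbitrary strings
def test_slugify_never_raises(title):
    slugify(title)                           # the only "property": it doesn't throw
```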
> Adding types after-the-fact can certainly be painful, but at least it's not an all-or-nothing choice, we can opt in to types for selected parts of a codebase.
One of the benefits of typing (that is rarely discussed) is that it deters people from writing code whose type would otherwise be crazy (e.g., "if someone passes the string 'foo' in for a parameter, then the return type is a string, otherwise it's a bytestring"). In other words, it deters a lot of crappy code. When you annotate after the fact, the annotation becomes really difficult and people get crabby that they're having to make this really complicated annotation and they blame typing (rather than their own poor coding). To your point, typing after-the-fact is still better than nothing, but you're missing out on a lot by waiting. Moreover, Python's typing story is still really immature with respect to syntax and tooling.
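To spell out the kind of signature I mean (names invented): once you actually have to write it down with overloads, it's obvious the design itself is the problem.

```python
from typing import Literal, Union, overload

@overload
def fetch(mode: Literal["text"]) -> str: ...
@overload
def fetch(mode: Literal["bytes"]) -> bytes: ...

def fetch(mode: str) -> Union[str, bytes]:
    # the "clever" runtime behavior the annotation has to contort around
    return "hello" if mode == "text" else b"hello"
```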
If a person like that is able to harm the team, I'd argue that it's a management problem. Lazy coding is only an issue when you allow people to be lazy...
I think you could make the same argument about code styling or really just about anything. If you had sufficiently quality coworkers, these things probably wouldn't be an issue, but it's a lot easier to just implement the technical solution (a code formatter or type checker).
You seem to think the debate is about whether typing alone can deter lazy employees from wreaking havoc, or otherwise solving broad organizational problems. No one claimed anything like that.
The claim is very narrow: typing can inhibit certain kinds of bad code and guide people toward better solutions. Whether someone is writing bad code because they're lazy or inexperienced, typing keeps some of the bad code from entering the code base.
Of course it's not a replacement for good management, hiring practices, etc.
> typing can inhibit certain kinds of bad code and guide people toward better solutions
I argue that somebody who would try something like your example above is either lazy or trying to wreak havoc. In my experience, people like that won't be deterred by strict typing, they'll keep typing nonsense and copying dangerous lines from StackOverflow until the thing outputs what they want. Having them write code that's complicated or important enough to require typing checks will result in disaster anyway, the only solution is to keep them out of the codebase, or review (and often rewrite) all their work.
I agree that typing is a very good guardrail for well-intentioned people with a minimum of competence, but in that case there's nothing to lose by making typing optional: they'll follow it anyway, and the added flexibility is often useful when well-used.
An inexperienced dev who wants to learn also won't need to be forced to follow typing guidelines, we just have to explain it to them...
> An inexperienced dev who wants to learn also won't need to be forced to follow typing guidelines, we just have to explain it to them..
Any engineer who would be qualified to write these kinds of guidelines would tell you not to waste time drafting and enforcing guidelines because type checkers exist. And if they're a really good engineer, they'll politely rebuke you for attributing poor code quality to the author's moral character, as this is a self-limiting outlook and generally toxic behavior.
Pretty much everything you're saying matches my experience, so, playing the devil's advocate:
- The typing argument is impossible to argue against, but isn't it the same problem you'd have with any other dynamic language? How is, say, Javascript any better? I don't see this as something that's specifically Python's fault, rather, yet another argument as to why starting a large project in 2021 without seriously considering static types is bordering professional malpractice.
- As for the GIL thing, I'm with you that it's never going to be lifted in any meaningful way, but wouldn't the subinterpreters idea along the lines of Ruby's ractors fix most of the pain in most cases?
Also, why are you skeptical about "the 5x plan"? I don't really have an opinion about it, but I'm interested in hearing any input.
For your first point, I don't think a language such as JavaScript -is- much better, which is why TypeScript was invented as an alternative for larger codebases/organizations.
TypeScript:JS is a lot like mypy:Python; the only real difference is that Python adapted so that mypy code (originally envisioned as its own language) is also valid Python and vice versa, obviating the need for a compilation step to run mypy-annotated code in the Python runtime (whereas TS has to compile to JS to run). Also, mypy has mypyc to compile (currently, only a subset of) mypy code to something more efficient than normal Python.
> I have become increasingly convinced Python as a language is a "trap" for any use of notable scale
I assume you're saying this out of experience, so can you give a practical example of what you call 'notable scale'? And are you talking about desktop or web or mobile, or just all of them? Just so that we know what you are talking about in a concrete way. Not the 'large software with large number of developers is always a problem' way.
edit: I also see others here using 'scale' in a 'performance / number of requests per second' sense - I thought we were talking about codebase size though. I mean, if you know beforehand that this type of performance is a requirement, it seems weird to turn to Python, of all things?
I am seeing what the GP comment says in a codebase I manage, around 47k LOC. Not massive at all, but enough that the problems mentioned above start to pop up. This is an application that runs on servers to analyze traffic data, so it has both analysis code and also a lot of code for the analysis framework. It gets hard to manage. I have unit tests and also integration tests that cover some, but not all, of the code paths (it's very hard in this case to have everything covered). Most of the time, when they fail it's due to something that would have been caught by static typing.
The codebase is being slowly migrated to static typing. On one hand, as the parent says, the typing module is still immature and there are still some Python constructs (not too weird ones, see [1] for an example) that you can't type-check correctly. On the other hand, I like the fact that you can include typing slowly and not all at once, it makes the effort much easier to tackle. And, if typing works, it works well.
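For illustration, with mypy this kind of gradual rollout is mostly a config exercise -- something roughly like the following (module names are made up), strict only where annotations already exist and permissive everywhere else:

```ini
[mypy]
python_version = 3.9
ignore_missing_imports = True

[mypy-analysis.core.*]
disallow_untyped_defs = True
check_untyped_defs = True

[mypy-legacy.*]
ignore_errors = True
```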
Regarding performance, well. Parallelism is pretty hard to do well, and the language itself is not the fastest thing. Some parts are migrated to a C extension but that's costly in terms of development and debugging time.
Despite all of that, I do think that Python was the best choice for our situation, and still is. Maybe from this point onwards another language would make things easier, but without Python's library support, ease and speed of development and expressiveness the cost of just getting to market would have been far higher, and probably we wouldn't have reached this point with other languages. And migrating the codebase to another language is just not worth it at all, as there are still a lot of areas we can improve quite a lot with far less effort than a full rewrite.
Would be cool if some experienced dev could share some estimates when the Python scaling issues start.
When you build a backend using Python + Django/FastAPI, I assume in most cases the DB and not Python is the limiting factor. Moreover, you could always spin up more workers to mitigate scaling issues.
When you train ML models, your Python code just calls C++ functions. Python is not a limiting factor here either.
I worked for a startup that built their services using Python and Twisted. The codebase was a monolith broken up into several Twisted services, by the time I left it was probably 200k LOC of Python.
For us, the scaling problems started when we had two independent teams working in the same codebase. The extreme dynamism of the language meant that classes and data structures were being mutated willy-nilly in ridiculous ways across the execution flow. The lack of static typing made onboarding new developers difficult as they had to parse generations of excessively clever code and magic left behind by departed developers. This problem has only gotten worse in Python 3, which keeps piling on more ways to accomplish the same task.
The deployment story was also awful, but I don't think that's a surprise to anyone who has deployed Python at scale.
In terms of raw performance, at one point we estimated that our Python stack was adding 300-400 ms of request latency compared to a comparable system written in Go. With the Python 3 deadline coming, we convinced management to invest in rewriting performance-critical parts of the service in Go instead of migrating to Python 3. I left before the project was completed, but we were already seeing massive improvements.
I recently learned just enough Rust to use its ndarray crate and rewrite bottleneck bits of code. I used to think the whole "write inner loops in $fastlang, use $scriptlang as glue" was something of a lame "cope" (and I say this as someone who doesn't know $fastlang), but I'm seeing upwards of 100X performance improvements for code that can't be hammered into numpy idioms. I can provide code examples.
Thanks for providing code. I've written code like that before, and the first thing that jumps out at me is that you have for loops in Python code, which isn't slow per se but will never be as fast as loops in a statically compiled language. Sometimes for loops are the most readable way of expressing a calculation, but as a numerical computation person my instinct is to look for opportunities to vectorize by rewriting loops into matrix notation on paper and then expressing them as array calculations.
Otherwise you’re right — whenever you have “for” loops you’re always going to end up ahead if you rewrite those in C, Rust, Julia or Fortran. And that’s exactly what authors of high performance numerical codes do.
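To make the loop-vs-array point concrete, here's a made-up pairwise-distance kernel (not your code, just an illustration): loop version versus vectorized version, same results.

```python
import numpy as np

def dists_loop(x):                    # fine in C/Rust/Fortran, slow as a Python loop
    n = len(x)
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = np.sqrt(((x[i] - x[j]) ** 2).sum())
    return out

def dists_vectorized(x):              # same math, no Python-level loop
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

x = np.random.rand(200, 3)
assert np.allclose(dists_loop(x), dists_vectorized(x))
```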
Some of that Python is strange for numerical Python practice, yes, but I was aiming for close floating-point output equivalence. Rust ndarray provides the standard set of matrix ops, but I keep getting different results (on the order of 1e-5, consistently, which isn't quite a proof of incorrectness, but....) when I use too much numpy stuff like linalg.norm, etc. (ndarray slices in Rust are annoying to use because of lifetimes and such.)
I did my dissertation on symplectic exponential Runge-Kutta schemes with stable numerics for the big matrix exponentials needed (which have this specific structure that allow for theorems to be proven). But I didn't have time to write any code at all. I wonder if open source ODE solvers are getting good high-order symplectic methods by now...
I don't know about other languages, but in Julia I've heard people often say that loops end up faster than the equivalent vectorized code. So while this is true for Python/Matlab I don't think it is good universal advice.
That said, matrix notation can sometimes be the more readable way of expressing a calculation.
I might have used an early beta of Julia circa 2018 or something, but the chorus that it performs like $static_fastlang doesn't match the experience I had.
I don't know what you mean by "performs like", but it's definitely significantly closer in runtime speed (after the first run) to compiled languages than to interpreted ones (in particular Python/Matlab).
Anyways, my point was more in response to this from the person I responded to: "my instinct is to look for opportunities to vectorize by rewriting loops into matrix notation on paper and then expressing them as array calculations". In some languages (Julia in particular) that is slower than the equivalent loop based code.
If anybody is looking for an alternative to Python that is also a great 0->1 language but doesn't have the same wall as Python described here, check out Nim[1].
This lets you gradually transition hot path Python modules to Nim, get compiled performance generally on par with C and Rust, whilst enjoying strong, static typing with great type inference.
In my experience (6-7 years 4 of which are full time) Nim strikes the perfect balance of the productivity you get with Python with high performance at the same time.
Also the metaprogramming features are incredible and, importantly, don't use a language subset but use the base Nim language itself.
Fair, maybe it's always a matter of picking the trap you want :) If you are optimizing for 0->1, Python is great. But if you want code to live and evolve over time with multiple authors, or ever will care about runtime performance, it's a dead end almost from the first line.
But sometimes the productivity benefit is all that matters and you accept you may have to throw it all out later (or invest insanely in making it work)... it's all about making an informed decision.
As with any language, it's about how you end up architecting your codebase. I do believe Python lacks an authoritative resource for what "good architecture" is, which leads to a lot of the code-scaling problems.
Would love to get your take on whether Python is popular in large organizations because of the existing libraries or because of the language aesthetic itself?
To an outsider who knows a bit of Python because of Airflow and Spark, it seems Python has become a popular metaprogramming language that various disparate ecosystems have all adopted.
PySpark is Python, but kind of its own language, and much of the processing happens outside the Python runtime. I think you could say the same for people doing NumPy or Pandas.
TensorFlow probably even more so, where I believe Python is the most popular interface, but you're really just writing a program that runs somewhere else. Again this is the same with Airflow conceptually, though I believe the runtime is Python.
If my hypothesis is overall correct, and that a majority (or large share) of python programmers aren't sharing the same ecosystem, libraries and packages, then shifting to a different runtime or language that just encapsulates a subset of the language is much less challenging.
This is the opposite problem of the JVM, where no one really likes Java the language, so everyone tries to create enjoyable (and productive) languages for the JVM to keep using the libraries and the runtime optimizations.
Likewise. I cut my teeth on Python 20 years ago and I loved it back then - even getting my hands dirty in the guts of the interpreter.
But now in hindsight, I'm shocked that Python 3.0 wasn't seized as an opportunity to lose more of its baggage - the under-optimized interpreter, the poor parallelization story, the excessive surface area of the interpreter exposed to Python itself, making it impractical to re-implement in other interpreters, better typing, etc.
They broke backwards compatibility and all we got for it was Unicode?
Most problems do not need scale. Some problems require it. For those: use one of many proven "fast" languages. For everything else Python (or similar language) is more than adequate.
To add to this: any time somebody at my company who doesn't work on our Python services every day has to dip into them for a one-off task, it takes a day or two of troubleshooting their environment just to get to where they can start working with the actual code. And this isn't necessarily from scratch: any Python environment that's sat still for more than a few weeks becomes inevitably broken. It's a huge time-sink and I have to think it offsets any other productivity gains that Python may be giving us.
Personally I refuse to touch our Python services with a ten-foot pole. My local app hits the testing environment and that's that.
I recommend learning to use tox and virtualenvs to avoid this headache. A properly laid out project (of any language) should have a fast time from checkout to running tests.
For most Python projects, that can be as simple as `pip install tox && tox`.
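A minimal tox.ini along those lines (Python versions and deps are just examples):

```ini
[tox]
envlist = py39, py310

[testenv]
deps =
    pytest
commands =
    pytest {posargs}
```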
> eventually you hit a wall and have to keep investing larger and larger amounts of people or computing resources to get continued returns.
That's certainly my impression. I'm not a professional Python programmer, though I have used it in a few projects that went to prod. I really want to like Python, and it is really nice for some quick one-offs. However, in my project, where the codebase was 30-40% Python, I estimate that 80-90% of bugs and time spent were in Python. The other side of things was more or less write once, never revisit.
> This means performance will fall further and further behind compiled languages.
This is exactly as it should be: the nature of the interpreter is that it has to do a lot of the work a compiler does when it compiles whenever a new command is run (and in Python's case, that is before you cover the whole "everything is an object" part). Yes, some of it can be mitigated, but the expectation that an interpreter keep pace with compiled code is somewhat of a pipe dream. What I'd love is a language that can be compiled (and be highly optimized) and interpreted without changing its behavior. The developer experience with an interpreter is fantastic, the performance of compiled code is... better.
> What I'd love is a language that can be compiled (and be highly optimized) and interpreted without changing its behavior. The developer experience with an interpreter is fantastic, the performance of compiled code is... better.
You mean something like a bytecode-based language with a JIT system? Like Java and C#? C# in particular now supports AOT compilation, but can still be run as "interpreted" CIL.
Having a VM / JIT in the middle is a way to get there, but ultimately, different. At least in Java's case, my code targets an abstraction instead of hardware, so I don't get the full benefit of compilation. Performance is part of the story... access to the OS and hardware is another.
Even native compilers like clang will use LLVM, and LLVM IR can look a lot like JVM bytecode.
The VM abstraction is not a barrier to fully optimizing for your architecture, but rather how much time you can spend converting IR/bytecode into assembly, whether you do it at runtime, and whether runtime information lets you optimize even further.
I feel pretty similarly. Our product used to be a Python/Django monolith and over the years we've ended up pulling so much of the functionality out into services written in golang. It was fantastic for getting the product up and running relatively quickly, and we're still very happy to let Django take care of the frontend and database migrations. But when your codebase has gotten big, refactoring in Python feels like Russian roulette, and when you're doing a lot of background processing, celery feels like the wrong tool for the job.
All the communication is initiated on the Django side, and it's either HTTP or tasks queued in Redis, depending on whether it needs to be synchronous. It adds a little extra complexity, but it's also nice being able to stop and start services without bringing the whole system down.
This mirrors my experience as well. At one (at the time major) cloud provider, we had to internally re-implement, as 100% API- and behavior-compatible Java applications, the publicly available Python applications in our OpenStack deployment, due to performance and scaling requirements. The OpenStack Foundation required all projects to be written only in Python, which I think was a major factor in why OpenStack adoption was never significant. Scaling and deploying it was an absolute nightmare using the public code.
Anything done at significant enough scale /must/ be multi-threaded to take full advantage of modern hardware, which makes languages that don't provide a reasonable threading model effectively non-starters, or at best toy languages for internal prototyping. I shifted from Python to Ruby, simply because Ruby provided a realistic way to interface with real threads, and then to Go, and haven't looked back. Simple applications are massively more performant in Go than in Python, and it's not just due to typing and compilation.
I love how simple Python is to learn and how it brings more people into the fold in approaching solving complex problems, but at the end of the day you should treat it like runnable pseudocode to get shared understanding so you can implement in a real language.
It was never intended as a stand-alone language for large projects, rather as a glue and prototyping language. All the tacked-on "static typing" won't change this. I think it would be best if Python was used according to its strengths and not as a 'unicorn' language.
Python is great for developing tools for infrastructure management, for scientific computation, and for small-medium web applications. Given its best in breed interoperability with C/C++, it is also good as a shell language for a top level project that incorporates other sub projects.
I'm curious, as you seem very versed in the Python ecosystem, where you stand on projects like Cython and on integrating C libraries with Python through them? Or would you consider that more of a stop-gap measure and not "real" Python (which would be valid)?
Things like Cython are great until they aren't, so yes, I think of them as a stop gap (albeit a practical and useful one). Any time you change interpreters or the execution model, you risk compatibility challenges with third party libraries and native modules. You also risk getting painted into a corner by their limitations. They definitely can help with hot code paths when their trade-offs don't prevent it but it still ends up being a kluge working around fundamental language issues.
But when Cython fits, it's great, and a good tool to have in your toolbox if you're working with Python.
I'm not terribly familiar with the options for typing, mostly because I'm salty that Python rolled out annotations without a clear plan for how they'd be useful*. Aside from Cython, does typing provide any mechanism to improve performance, or is it only there for static analysis? Worse yet, is it being used to do dynamic analysis that slows the whole thing even further?
* specifically, this happened long after the tragically many-ways-to-do-it of packaging systems and virtual environments, which survive to this day.
Is/Are there alternative programming language(s) you would suggest instead of Python? It would be great if you could also list the use cases where one language does it better than Python.
Micropython seems to make embedded code more readable and bug free than Arduino C-based programs, judging from a few projects I looked into.
Yes, it would probably be a mistake to write a large distributed, high performance database (say, for a login system handling billions of users) in python, but many, if not most, projects never hit the wall you are referring to.
In that sense, I feel that writing something in the most scalable language (if such a thing existed) would just be premature optimization at the highest level.
Thanks for this comment, sums up exactly my experience with python. Do you have a pointer to a larger write up of these issues? I really enjoy python, but agree with you that it isn't well suited for performance or scale necessarily. I find that people really get attached to it though and I'd like to have a good reference for them to help explain why issues like interpreter performance, GIL and typing make a difference on large projects even if not their pet ones.
I'm a C++ dev and was taught Python as a tool for prototyping and validating ideas, not as a language for final implementations. Later, over the years, I've read endless stories about the "traps" you mention, so I understood the notion I was taught wasn't without its merits.
So I still look suspiciously at all mid-to-large projects that are written in Python. It takes some serious effort and good engineering to maintain good quality, and still the tooling for error detection is not there, like it would be in other, better-typed languages. I wouldn't feel confident relying on such a project for a critical infrastructure piece. The idea should have been validated quickly with Python, then implemented in a more appropriate language.
EDIT: Obviously that's just my possibly wrong opinion. There are huge projects out there proving otherwise, but the issues discussed here are _very_ real.
This is my experience as well (but not at a FAANG). You can do things quickly in Python and it is a joy to write, but down the road (as you grow), you will regret it and find yourself re-writing it.
Also, typing in Python has a "kluge" feel about it. It's like most things in Python: it works nicely on toy examples, but is rather frail at the edges (if you're not convinced, try to define the JSON type). I also think they chose the wrong strategy (type inference would have been way better).
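For anyone who hasn't tried the JSON exercise, here is roughly what the recursive alias ends up looking like (just a sketch; checker support for recursive aliases has historically been spotty, which is exactly the frail-edges problem):

    from typing import Dict, List, Union

    # "Anything json.loads can return"; many codebases give up and put Any
    # somewhere inside this because their checker chokes on the recursion.
    JSON = Union[None, bool, int, float, str, List["JSON"], Dict[str, "JSON"]]

    def get_name(doc: JSON) -> str:
        # Every access needs isinstance narrowing before the checker is happy,
        # which is a big part of the "kluge" feel.
        if isinstance(doc, dict):
            name = doc.get("name")
            if isinstance(name, str):
                return name
        return "unknown"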
To use a metaphor: Python is the duplo of programming languages.
> I've seen the curve and the pain it's caused, and would never use it for any code that needs to be performant or actively developed on the multi-month or year timescale
Typed Python is safer and more maintainable than Go. Slow as a dog, but it really is a hard choice: performance with a bad type system and a half-baked language, versus a good language with a good type system that is a bit of a drag on performance.
And you really don't have to go all in on typing. I have worked on gradually typing code bases and it pays off almost from day one.
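To illustrate what I mean by gradual (the file and function names here are made up, not from any real codebase): annotate only the code you touch and run mypy without strict flags; the untyped 90% doesn't block you, and you ratchet the settings up module by module.

    # app.py (hypothetical) -- only the code being worked on gets annotations
    def parse_port(raw: str) -> int:
        return int(raw.strip())

    def legacy_handler(config):            # untyped legacy code, left alone
        return parse_port(config["port"])

    def new_handler(port: int) -> int:     # annotated, so mypy checks this call
        return parse_port(port)            # flagged: expected "str", got "int"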
Python will always be important to me. It's the first programming language I learned, and it allows me to support my family. It's awesome that for almost any problem I have, there is a ready-made library or wrapper for me to use. I can google just about any error and get a solution right away.
But I've found that I really dislike maintaining larger Python codebases. Whenever I've tried to develop packages or libraries, or create executables it always felt really clunky. And when it comes to personal growth as a programmer, I find it's just too easy to stack ready made libraries on top of libraries, without ever coding things from scratch.
But I'm glad to hear that there has been more focus on improving CPython's performance. I don't disagree with the things that the Python Devs have chosen to focus on, and I don't think anyone can really say it hasn't been successful for them. I use Python for work, so I won't forget it anytime soon, but for programming at home, I'll use other languages that align more with my opinions of programming that I've developed after having used Python for an extended period of time.
> And when it comes to personal growth as a programmer, I find it's just too easy to stack ready made libraries on top of libraries, without ever coding things from scratch.
This is working as intended. This is how everything should be.
I think it is as well. Python is a batteries-included language. But just like I have to hide my phone sometimes to get myself to use it less, I like to drop down into a language that is more niche so I'm less likely to find YouTube videos and Stack Overflow answers to my problems. I've found being forced to read the language reference, and just allowing myself to be stuck on a problem until I've thought my way out of it, helpful for my own personal growth. I don't have the discipline to do this in Python.
Ugh. Python could be fast, but the python core team / GvR put so many ridiculous constraints on what can change that maybe at best it gets like 10% faster, some day. Anything meaningful means breaking compatibility with extensions, and they absolutely won't do that. I feel like GvR got burned in the python 2->3 transition, but he learned the wrong lesson. The lesson he seemed to learn was never break compatibility but what he should have learned is don't break compatibility for stupid things nobody cares about. Making the interpreter much faster is something people would care about, but they'll never modify it in any meaningful way nor will they ever endorse something like PyPy.
There is no way I'm doing anything serious in a language that decides to make us rewrite part of our work regularly. We have actual work to do... I think you underestimate how important stability is.
I'd be happy to make the decision myself. I wouldn't be happy if I learned on HN that I have N years to re-write X% of our production code before our language stops being supported.
Not really suggesting any amount of rewriting of python code, just that extensions might need to be recompiled and/or not depend on reaching into internals they shouldn't touch. To users it should mostly be invisible.
> extensions might need to [...] not depend on reaching into internals they shouldn't touch
Is that the problem? I thought I read somewhere that the C/C++ extension API would have to be broken to allow for removing the GIL, for example. The API, not just internals that shouldn't be touched by user code.
If it were only a rebuild, I don't think anyone would object; pip packages are built once per Python version anyway. Breaking the API would be a different story for libraries that make heavy use of C++ extensions.
The interpreter that breaks the backward compatibility makes that code run infinitely slower (i.e. it does not run at all) instead of running faster.
And it doesn't matter if the compatibility was broken for some thing you actually care about: if your code can't run at all until the broken third-party dependency is fixed by its maintainer, you simply won't upgrade, will you?
I work in the Node ecosystem, and while they break compatibility every once in a while, I can't say I've ever really noticed. There's a way to do it without it being a catastrophe.
As far as I can tell, it's because it's not based on CPython. For whatever reason, to GvR Python == CPython, other interpreters don't matter. It's a shame, because python the language is fine, but CPython the interpreter is janky and slow.
PyPy is a trade off and not universally better. It has a larger memory footprint (already a bit of an issue for Python), higher startup times and incomplete compatibility, in particular with C compiled libs used precisely where performance matters.
I'm excited to listen to the full podcast. The highlights look really solid.
I think I would like to get into some of the optimization work on Python. I've contributed to some of the DS/ML/NLP libraries in the past, but the core language would be super interesting.
Anyone have good resources on getting started on that sort of thing?
I'm militantly disinterested in the performance of Python until the deployment style of most Python projects is no longer "hunter-gatherer-style installs".
I highly recommend Poetry as a way to manage Python projects. It's my favorite bit of software I've had the pleasure of using, period; if any Poetry devs see this, kudos.
However, with pip's native freeze and install -r abilities, I'm also curious what you mean by hunter gatherer style installs. I've not yet had a problem with installing Python modules.
I take that as "keep running pip install on yet another discovered dep until the script works."
And heaven help you if you accidentally get conflicting versions installed in the same venv. (By conflicting, I mean two libraries that both depend on different versions of the same underlying library. Odds of this increase dramatically the more you use.)
Neither of these problems has happened to me since pip introduced its new dependency resolver (I think sometime during 2020). All dependencies get installed, and packages in the default repo usually have consistent dependency lists.
It's another story for conda, which flat-out refused to install my list of around 10 packages (that seems to stem from the fact that most packages are in conda-forge, and requirements are often inconsistent with the default repo).
Now of course, you gotta remember that installing something is usually more than copying the code, and there is nothing to help you with that, in any language.
Welcome to the world of deployment, which is why things like ansible and docker exist.
Yes. If you are going to get predictable, consistent practices for this, you need a more or less official way of doing it, or projects will just end up choosing one of several ways. Or worse: they will invent their own.
And this is demonstrated in the thread: people start listing ways you can streamline installs and try to avoid version conflicts. And there are, as you can see, several. Some don't even anticipate what kinds of problems the user will encounter with versioning conflicts, old Python versions coming with the OS etc.
What is a bit disheartening is that people actually thought they were providing helpful suggestions for solutions when what they did was only to prove my point. But they won't necessarily see it that way.
I'm not sure one can really end up with one true way of packaging per language - sooner or later someone will think "this is too complicated, I can do it better and simpler!" and you end up with another way of software distribution for that language (which is just as complicated as the previous one, once the author actually notices all the use cases).
This is why I prefer using the target OS's package management - there is usually just a single such official system, which is also language-agnostic and much more robust than the various per-language kludges.
It is often more subtle. For instance, for languages that compile to binaries, you have already solved about 95% of the problem. The remaining is mostly things like static vs dynamic linking and how you actually get the binary from a distribution point into some directory on a hard-drive (which is, in part, a separate problem domain - except when it isn't :-)).
Then you have languages that are "half way there", like Java. I suppose there are still people who make what I refer to as "splat style projects", where you unpack some horror-zip containing thousands of files and litter your surroundings with JAR files, config files and other detritus. But you can build everything into a single JAR file that contains all the dependencies and can be run without any other requirement than a sufficiently new Java runtime. And when shown how to do this, most people tend to adopt it as their default build product.
Languages like Python don't have a well-established build product that is self-contained. This places an undue burden on those who package software for distribution. It forces them to involve themselves in concerns that should be contained within the project - not leak out onto everyone's floor and potentially cause accidents.
Packagers have to make sure that "all arms and legs are inside the vehicle" for all possible permutations of system configurations. Which is easy when you have a statically linked binary. Less so when the software in question is more like a cranky, writhing toddler.
What really made the problem visible for me was when I worked in an organization where suddenly the place was filled with "data scientists". Mostly math or statistics people. Or more precisely: when more non-engineers started writing code and proved to be not only unable, but unwilling, to learn enough software engineering to ensure people other than themselves could actually run the code they wrote.
Which was kind of unfortunate, because researchers being unable to reproduce their own computations was a _regular_ occurrence. They simply couldn't figure out how to get their old code to run on their new computer, for instance.
And it isn't because they are jerks. It is because they use tools to get work done. They are not as interested in the tools as the people who make those tools are. To them the rest is noise that wastes their time.
This highlighted the fact that you shouldn't have to be a software engineer in order to produce programs that are easy to distribute - it has to be part of the path of least resistance.
The fact that Python says "not my problem" isn't helpful.
(I think there are interesting lessons to learn from Go. There are a lot of things I don't like about Go, but almost every instance where the language developers put their foot down and said "this is how we do it" actually made things better for everyone. It might not be your favorite way of doing things, but someone has made a choice - which is better than "I don't know...do whatever you like")
Good point about scientists being quite terrible at software maintainability - I've encountered it quite a few times myself, and from what I've heard it's not much better elsewhere.
I blame the publish-or-perish mechanics and the grant approval process, which don't actually motivate the participants in any way to write maintainable or reusable code. I'm not saying they should make it good enough for industry to consume, but at least good enough for the follow-up scientist to build on and run!
But thinking about it, even in the computer science course I studied there was hardly any emphasis on software maintenance, code versioning, or even how to collaborate with others - all the non-technical courses were basically about being an analyst and planning how to write a project, with zero interaction with others.
As for self-contained deployment mechanisms - I do agree that packing everything into a single statically compiled binary, as AFAIK Go and Rust effectively do, has a lot of benefits. But coming from a distro background (Fedora), it also terrifies me quite a bit!
With dynamic linking you can patch CVEs by rebuilding the system library everyone uses (and not just encryption libraries can get CVEs!), you can recompile shared libraries with hardening flags and you actually know if something no longer builds from source next time rebuilding one of the parts fails.
Compared to that, a developer-provided massive static binary is quite a significant black box that can have multiple unpatched CVEs, compiled-in fixed versions of patched libraries, or even non-publicly-available code (meaning the binary can't be independently rebuilt from source).
This scenario kind of describes a fully open source project - I guess for proprietary stuff a lot of these downsides don't really apply anyway, out of necessity.
I see your point about using shared libraries to fix problems in multiple applications at once, but there is a flip side to that: what you're running isn't what you built, so you may not be able to predict what your software will do when installed on different users' computers.
That's true - but I'm not sure one can always 100% ensure the environment is the same - or even should!
Ideally one should make the software as robust as possible so that it runs with high probability even in unforeseen environments - and if it fails, it should do that noticeably so that the developers will learn about it and can fix it.
It pretty much works like this in Fedora - a lot of the various API breakages and regressions show up when the bits and pieces first land in the rolling distro called Rawhide, but that's fine as hardly anyone runs Rawhide as a daily driver. By the time the next Fedora release is branched from Rawhide and goes via Beta and RC, all these issues are ironed out and the end result is pretty stable yet very up to date.
It's usually the proprietary or heavily bundling software that has the biggest problems with dependency updates, as it doesn't go with the flow, or avoids updating a dependency for so long that it ends up facing an insane jump.
While I agree that there is little to be gained from requiring perfection, I think there is considerable value in trying to do everything that is within reason. And there are a lot of reasonable things one can do to be nice to users. Allowing the user to install a piece of software without having to alter their system at all is a good thing.
And this brings us back to where this discussion started: it takes real effort to write non-trivial Python code that will run just by installing the program itself. The path of least resistance is to dump all the problems in the lap of the user, which many Python programmers tend to do in practice. That's not very nice to the consumer.
I think Python could benefit from developing empathy with the consumer.
I'd be happy with just an optimized strict subset of python that uses type annotations to compile performance critical functions like cython. Mypyc sounds like an attempt at that but doesn't seem to be constraining the language enough to allow it to shed the dynamic nature of the language. Another similar effort was EPython, but it doesn't look like anyone is working on it.
I guess Numba would be the closest thing to this at the moment.
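For the curious, the closest thing today looks roughly like this with Numba (a sketch assuming numba and numpy are installed; mypyc instead compiles from the annotations themselves, with no decorator):

    import numpy as np
    from numba import njit

    @njit                                  # JIT-compiles to machine code on first call
    def dot(a, b):
        total = 0.0
        for i in range(a.shape[0]):        # an explicit loop that would crawl in CPython
            total += a[i] * b[i]
        return total

    x = np.arange(1_000_000, dtype=np.float64)
    print(dot(x, x))                       # first call pays compile time, later calls are fast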
Given Python has the multiprocessing module, I always get confused when people talk about Python lack of support for multicore. What are the shortcomings of the multiprocessing module, that cause people to disregard it?
Python's multiprocessing module forces you to serialize all your data, or use ctypes. This means it's either going to be slow, or it's like writing code in a hybrid of Pythonic and C-style code. Which of course goes against the idea of using the simplest tool for the job.
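A minimal sketch of that serialization boundary, using only the standard library: every argument and return value below crosses the process boundary via pickle, which is where both the cost and the awkwardness come from.

    from multiprocessing import Pool

    def work(chunk):
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        # Each chunk is pickled, shipped to a worker, and the result pickled back.
        chunks = [list(range(i, i + 10_000)) for i in range(0, 100_000, 10_000)]
        with Pool(processes=4) as pool:
            print(sum(pool.map(work, chunks)))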
You still do have to jump through some hoops then. But as the examples in the docs show, there are shareable primitives, you can share numpy arrays, etc.
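The shareable primitives in their simplest form, for reference (a sketch; shared numpy arrays are typically built on the same ctypes-backed shared memory):

    from multiprocessing import Process, Value

    def bump(counter):
        with counter.get_lock():           # the GIL doesn't help across processes
            counter.value += 1

    if __name__ == "__main__":
        counter = Value("i", 0)            # a ctypes int living in shared memory
        procs = [Process(target=bump, args=(counter,)) for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(counter.value)               # 4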
Multiprocessing is pretty heavyweight: everything needs to be serialized, you can't share Python objects directly or with good performance across processes, and there is (fairly new) shared memory support, but it's just a bag of bytes. It's not particularly good on the cross-platform angle either: while it does work on Windows, the startup time for each process is like starting a fresh interpreter, because that's exactly what it is doing.
Multiprocessing can work if you have all your logic in Python already, and there is no notable shared state between the processes, and you don't need to coordinate much at all. So pretty much just the very simplest problems in concurrency. Everything else might technically be possible, but will likely be very slow and annoying.
I am probably the most commercially successful user of the multiprocessing module :)
It's basically fine anywhere you need a function call that you can dispatch out to like 50 or 500 workers on a queue and then do something after that returns, but any shared memory or IPC between the workers is up to you.
Python is also fine for webserving because most web servers pre-fork workers or whatever, so this doesn't come into play there either.
It's harder if you want to do something different where you want threaded workflows with synchronized/protected like constructs that folks might be familiar with from say Java.
Firing up multiprocessing (forking) has some cost from bringing up the interpreters, so it's not something you want to start up and tear down a lot; it's better if you can start things and leave them running. Once it's up, it is pretty fast.
I guess mainly it changes the style of your program too much - it's basically just glue around forks.
> It's harder if you want to do something different where you want threaded workflows with synchronized/protected like constructs that folks might be familiar with from say Java.
The performance overhead of multiprocessing is pretty high, to the point that you really have to take care to ensure that whatever problem you're solving with multiprocessing is large and parallel enough to justify it, otherwise your multiprocessing solution may very well be slower than your single-process one.
That's not to say it's useless, it's a nice tool to have in the standard library, but if you're coming from one of the many mainstream programming languages with lightweight threading libraries which make it much easier to take advantage of parallelism it's easy to get frustrated at both python's standard threading and multiprocessing libraries for their respective shortcomings, at least in their CPython implementations.
On Windows multiprocessing is very poor due to lack of forking. This means for example no memory at all is shared; modules, data, external shared libraries etc are all individually loaded by each subprocess. There are also some Windows-specific oddities that make using it a further pain (can’t remember the details).
There is a project to effectively internalise multiprocessing by running separate interpreters inside one process. It's meant to be cheaper than having separate processes too.
Sharing data between processes is _very_ expensive, so whole classes of programs are not suited for this approach.
For example, I had a program that preloads a large immutable dataset, and a bunch of threads use it. I couldn't do it efficiently in Python (at least not CPython).
Outside of Windows, I believe, you can do it so that the dataset is read-only shared (or copy-on-write) via fork, and then it’s as quick as local access.
Loading data before you fork works pretty well, but the overall effectiveness heavily depends on the type of data involved. If it's something like numpy arrays or similar large, indivisible objects, you're golden. If you want to preload and share something like a huge nested Python dictionary or other large collections of small objects, you immediately collide with the reference counters.
Basically, since the reference count is kept right before the object data, as soon as the child process touches it - even just to look at it! - you immediately trigger a copy on write on the nearest 4k of memory, which tends to add up fast if you're not careful. Even if you never touch 99% of them, the garbage collector is happy to do it for you.
At my previous job it was bad enough that I ended up writing a small patch to be able to set some refcounts to 0xFF...FF and treat them specially, never changing their value. (Yes, this also meant that they never got destroyed properly, and the extra checks made our codebase around 4% slower, but it was an acceptable tradeoff. No, the patch was no longer small by the time it hit production.)
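For reference, the preload-then-fork pattern being described looks roughly like this (POSIX only; gc.freeze(), added in Python 3.7, helps somewhat by hiding preloaded objects from the collector, though refcount updates on access still dirty pages):

    import gc
    import os

    # Loaded once in the parent, before forking.
    DATASET = {i: str(i) for i in range(1_000_000)}

    gc.freeze()          # keep the collector's bookkeeping off the preloaded objects

    pid = os.fork()      # no fork on Windows, hence the platform caveats elsewhere in the thread
    if pid == 0:
        # Child: even a read bumps refcounts, dirtying the 4k pages it touches.
        print(DATASET[123456])
        os._exit(0)
    else:
        os.wait()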
As a sibling mentions, the multiprocessing support is basically support for spawning new processes. It's better than nothing, but it's not very good for the performance.
The real problem with Python multicore is that the main problem it solves is the one ctur is talking about, namely, "Oh crap, I used Python and it doesn't run fast enough for my needs... maybe I can run more processes?" Using a language that is already in the slowest class of languages and realistically 40-50x slower than other languages means that just to recover the performance you'd get from switching to a compiled language, you need ~50 perfectly parallel processes solving an embarrassingly parallel problem. If you can't do that... and that's a fairly common case, since things that are a full, true 50x embarrassingly parallel aren't actually that common on real hardware because you'll get some sort of contention... then you can't even work your way up to the performance that you could have gotten by starting with Go or C# or some other reasonable language.
And that's ignoring the serialization overhead between the processes. If that starts costing you noticeable amounts the number of processes you need to recover goes up really fast, due to the way the math works.
Note the only thing about Python causing this effect is its performance; it is equally true of anything else that is in the noticeably slower performance league. So, also, note that if you are using Python in one of the ways where it doesn't have this performance problem, such as NumPy with almost all your compute in native code, this analysis doesn't apply.
In the 1990s when Python was born, programming in Python was massively easier than programming in the static languages of the day. The landscape has shifted... Python is now only incrementally better than modern static languages in some dimensions, and I tend to agree with ctur, it just plain isn't better once you pass a certain size and the stereotypical problems with dynamically-typed code start hitting you harder and harder. There are now a lot of good statically-typed languages where you can start a new project, writing something just incrementally harder to write than Python, with the type inference, built-in associative types, tons more libraries, etc. all the advances of the past 25 years, and get static-typed performance. The gulf between "easy Python" and "pull your hair out static language" is not quite closed. Not quite. But it's a lot closer than it used to be; less "Grand Canyon" as it was in the 1990s and more "can I jump that creek? I mean, it's pretty close... I think I can jump it...".
The upshot is, if you're reaching for multiprocessing for performance reasons, not convenience reasons, you've very nearly already lost. There's a narrow window where it might still make a bit of sense, but you're getting perilously close to "You need to switch languages" just by reaching for it at all.
I agree with this, but when it comes to scaling, making things faster at the language level is only one option. And while, as you suggest, the Python/static_lang gulf has narrowed, I would guess the gulf between "what needs to be done in the language itself" vs. "what can be done by some external system/database/process" has also narrowed.
What I'm getting at is, let's say you have some system where you need highly performant real time processing of some data. Perhaps 10 years ago, you might have needed a complex multi-threaded java/c++ app to handle it. But perhaps now, you use, say, DynamoDB, kinesis, kafka, lambdas, or some other collection of services that you're basically gluing together with... python.
I'm not saying you're wrong. I'm just wondering, if you could truly factor in the costs in both developer time -- developing multi-threaded apps is usually pretty tricky after all -- and infrastructure costs and whatnot, where the boundary between "python is adequate" vs. "we really need go/java/c++" lies, and in which direction it's moving.
In retrospect, I guess it's sort of a silly question, since at the core, the services we'd be gluing together are no doubt written in faster languages. Perhaps a better way of framing it is: what % of problems can be adequately solved (cheaply) by just using language X, and how has that changed over time?
"I'm just wondering, if you could truly factor in the costs in both developer time -- developing multi-threaded apps is usually pretty tricky after all -- and infrastructure costs and whatnot, where the boundary between "python is adequate" vs. "we really need go/java/c++" lies, and in which direction it's moving."
"It depends.", of course. Despite the significant slow down of single-core performance improvements, 1 CPU is a lot of power in a lot of use cases, even if you divide it by 50. I do frequently find myself reminding some people who get a little too deeply into the cloud mindset and the believe that any non-trivial problem needs clusters of systems that you can still do an awful lot with one CPU. And I often wonder how many "clusters" are out there drinking down the watts doing work that if somebody would just spend a week or two optimizing their code to get the O(n^2.5) algorithm out of their system could be comfortably done on a mid-grade laptop... if somebody just realized you shouldn't need a "cluster" to do this task.
However, flipping your perspective around is probably more interesting... multi-threading is still not "easy", per se, but it is also way easier than it used to be. Threading hell is real and exists, but a non-trivial amount of it was down to the architecture being attempted. It's a bad idea to try to coordinate everything with piles of mutexes everywhere. If you program with more agent-like approaches to resource management, even if you don't do it 100% the way Erlang forces you to, multithreading gets much easier. And what does it open up for you once it is much easier than it used to be?
I still use Python for certain tasks where it is suited. But I'd have a hard time going back to it for most of my programming. I've internalized the ability to say "and I need a server here with its own thread of control managing this resource, and I need to set up a worker pool there for this data processing task, and I can set up a recurring, independent process to check this other thing periodically without it interfering with anything else" whenever it is necessary, and I've internalized it too deeply to ever go back to architecting systems without that capability. I'm not going back to cooperative scheduling unless basically forced. There are just so many places where you ought to have this capability available to you, and it's actually easier to work multithreaded rather than do the work to try to thread a single execution context through all the things that shouldn't be that tightly coupled together.
It is generally underappreciated that having to have two bits of code share an execution context is a deep level of coupling, and languages like Python force all your code into one execution context.
Having multithreading easily available has its downsides, yes, but it also has its legitimate upsides for architecture.
Really I think this is only true for the data science ecosystem. In most other cases, modern static language ecosystems are at least as advanced as Python.
One thing that is often overlooked regarding Python multiprocessing is that depending on the version of Python and the OS, the processes are not launched the same way.
For example MacOS cannot use fork while it was (is?) the default on Linux.
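The difference is easy to see if you set the start method explicitly (a small sketch; "fork" stopped being the default on macOS in 3.8, and "spawn" re-imports your module in every child, which is why the __main__ guard matters):

    import multiprocessing as mp

    def hello():
        print("child sees start method:", mp.get_start_method())

    if __name__ == "__main__":
        mp.set_start_method("spawn")       # explicit choice, same behaviour on Linux/macOS/Windows
        p = mp.Process(target=hello)
        p.start()
        p.join()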
I wish I knew what went on in modern CPUs when it comes to branch prediction and inline caching because that is absolute magic.
Seems spot on, and I really wonder as well. Always had the feeling that some decades ago it was still possible to outsmart the CPU (well, and the compiler/optimizer) and get performance improvements by thinking like a CPU, but these days this seems to have become impossible, for this reason I guess?
It seems possible that intel/AMD have used the Python interpreter in some of their benchmarks, with the result that they either add or tune hardware optimizations in ways that help it.
This could mean that people working on the Python interpreter encounter more problems with CPU magic than most people do, since changes they make move it away from what the manufacturers optimized for.
I wonder, now that Python has made it to the big league, so to speak, if there shouldn't be more high-level review of what exactly it should aim to be as a language and ecosystem going forward. Is the speed optimisation problem even well defined otherwise?
E.g. tackling performant numerical computations via numpy and other such libraries seems to be a workable pattern. Even with compiled languages like C/C++ and fortran these types of computations are best handled with tuned libraries.
For other uses, e.g. APIs, fastapi/starlette claim to be as fast as Go / JavaScript. Not to say that there isn't a performance issue, but who exactly is experiencing it as a generic issue? E.g., I know for a fact that some Python-based desktop GUI applications have performance issues (...) but it could be the implementation, and in any case GUI use is a bit of a niche (and Python on mobile, which is the dominant user device these days, is very very niche).
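The numpy pattern mentioned above, for concreteness: the win comes from pushing the loop out of the interpreter and into the tuned compiled kernel, which is the same move the C/C++/Fortran folks make with their own libraries.

    import numpy as np

    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)

    # Interpreter-level loop: every iteration is Python bytecode.
    slow = sum(x * y for x, y in zip(a, b))

    # Same computation as one vectorized call into the compiled kernel.
    fast = a @ b

    print(np.isclose(slow, fast))          # same result, very different speed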
I agree, it needed everyone to step back and make choices as to direction.
Instead, we got the "stone soup" where everyone wanted their favorite feature from some other language added in. You'll see this in most languages, where people want something from another language they are more comfortable with. Privately, I refer to this as the California problem: people move away from California (or anywhere else really, I am picking on California) but then bring all of the baggage and voting habits that made California unpalatable to them eventually.
Python is especially vulnerable to this because it conflicts with "There should be one -- and preferably only one -- obvious way to do it." The more features added to the language, the more ways there are to do something, and we then must invent new idioms and lean on "convention."
There seems to be big appreciation for the Go language.
How difficult is it to develop a language with (mostly) Python syntax while keeping the performance of Go?
I guess most people who use Python use it because of its aesthetics and might never even have heard of the GIL issue, so I guess among all languages the Python syntax is the most liked one.
- speed at that level matters only for a limited number of use cases. Most programming use cases are fine with Python's speed. They were fine with it 20 years ago, in fact, before we got today's hardware speed.
- people are starting to complain about speed in Python only because of all the data science taking off. Suddenly, there, speed matters. Before, you had to get to Google's level of infra to find use cases where Python was too slow.
- devs are expensive. Hardware is cheap. If you have a slow website, paying $200 more for your server is not a big deal. But taking 3 more months to dev that feature with your $300k pro is another thing entirely.
I've never had an issue with Python performance, not in twenty years. Possibly because of using it for what it's good at? Every year it gets faster as well. I did have an issue with the performance of Java twenty years ago, but that was a long time ago of course. Computers are an order of magnitude faster now and Java startup has improved.
Python is a very easy language to learn and allows the user to build complex systems without having to worry about the syntax. However, this ease of use has led developers into using Python for various purposes that are not right for the language.
Python has a fundamental design decision that is difficult, if not impossible, to fix - its dynamic typing. This means it allows users to run code without declaring what type of data their objects receive, giving up type-safety guarantees on that data.
This can lead users into building applications that are prone to errors in the future due to unforeseen changes in the application's structure.
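A tiny illustration of that failure mode (the names are made up): nothing complains until the mismatched line actually runs, possibly long after the structural change that introduced it.

    def total_ms(timings):
        return sum(timings) * 1000

    # Somewhere else, the config format quietly changed from floats to strings...
    config = {"timings": ["0.3", "0.7"]}

    # ...and the module still imports and "works" until this line executes:
    total_ms(config["timings"])            # TypeError at runtime, not before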
If you get rid of the GIL you have to replace one big lock with a bunch of very small locks, which actually hurts performance for programs which only need 1 core, which is most programs.
> If you get rid of the GIL you have to replace one big lock with a bunch of very small locks, which actually hurts performance for programs which only need 1 core, which is most programs.
Python could do what most other languages do and allow programmers to explicitly add locks to the code that needs to be locked, rather than locking everything, all the time.
Python has a robust and very capable set of locks and multiprocessing primitives. The issue is that the GIL has been giving you atomic/serialized access to a lot of primitives for free since the inception of the language, so no one has really felt the need to do things like lock access to their basic dicts or lists. You can't just take that code and remove the GIL locking without causing chaos. And as we saw with the decade plus transition from Python 2->3 you can't just tell people to go fix all their old code.
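To make the contrast concrete, here's roughly what the "explicit locks" world looks like for a shared dict (just a sketch): each individual dict operation is atomic under the GIL, though a read-modify-write like this already needs the lock today; without a GIL, unlocked access to the dict itself becomes unsafe too.

    import threading

    counts = {}
    counts_lock = threading.Lock()

    def record(key):
        with counts_lock:                  # explicit, fine-grained lock
            counts[key] = counts.get(key, 0) + 1

    threads = [threading.Thread(target=record, args=("hit",)) for _ in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counts["hit"])                   # 8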
Most code doesn't actually run in a shared memory parallel context precisely because of the GIL, so I think the "chaos" would be minimal. It will still be painful, and would require the ecosystem to adapt (and therefore will never happen because the Python ecosystem lacks the necessary leadership), but it's technically feasible and it would be politically feasible for other language communities.
Ok, can we do this?
The Python interpreter can detect whether it is running on a 1-core processor (assume 1 core = 1 HW thread) or a multi-core processor, and then use a GIL that suits. I.e., if the code uses multiple SW threads, use a GIL that suits those needs... is something like that possible? Any reason why experts are not doing this?
-p
I agree. At least an annotation-based method of disabling the GIL. If I'm running multi-processing code and it's a thread per process, then I really don't need the GIL - I should be able to disable it.
A lot of Python code has subtle race conditions where it assumes some operations like dictionary access, manipulation, etc. are atomic and the GIL makes it all fine. Remove the GIL and there can be a huge explosion of subtle and difficult-to-find bugs in even simple code. People have been working on trying to remove the GIL for a decade or more now and I don't expect any magic fix soon.
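The classic example of code leaning on that implicit guarantee (a sketch): list.append from several threads with no lock is fine today because the GIL serializes it; naively remove the GIL and the append itself becomes a race on the list's internals.

    import threading

    results = []

    def worker(n):
        results.append(n * n)              # no lock; "safe" only because of the GIL

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(sorted(results))                 # [0, 1, 4, 9, 16, 25, 36, 49]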