
Removing the Global Interpreter Lock was always denied on the grounds that it would decrease single-threaded Python speed, which most people didn't care about.

So I remember a guy recently came up with a patch that removed the GIL, and, to make it easier for the core team to accept it, he also added optimizations to compensate.

I hope this release is not a case of taking the optimizations but ignoring the GIL part.

If anyone more knowledgeable can review this and give some feedback, I think they will be here on HN.




I use a beautiful hack in the Cosmopolitan Libc codebase (x86 only) where we rewrite NOPs into function calls at runtime for all locking operations as soon as clone() is called. https://github.com/jart/cosmopolitan/blob/5df3e4e7a898d223ce... The big ugly macro that makes it work is here: https://github.com/jart/cosmopolitan/blob/master/libc/intrin... An example of how it's used is here: https://github.com/jart/cosmopolitan/blob/5df3e4e7a898d223ce... What it means is that things like stdio go 3x faster if you're not actually using threads. The tradeoff is that it's architecture-specific and requires self-modifying code. Maybe something like this could help Python?
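
(Not the self-modifying trick itself, but the "locking costs nothing until the first thread exists" idea can be sketched in plain Python; Buffer and enable_locking are made-up names, just for illustration:)

    import threading

    class Buffer:
        def __init__(self):
            self.data = []
            self._lock = None  # no lock at all while single-threaded

        def write(self, item):
            if self._lock is None:
                self.data.append(item)  # fast path, zero locking overhead
            else:
                with self._lock:
                    self.data.append(item)

        def enable_locking(self):  # call before spawning the first thread
            self._lock = threading.Lock()

    buf = Buffer()
    buf.write("fast and unlocked")
    buf.enable_locking()
    threading.Thread(target=buf.write, args=("locked",)).start()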


I would love to see this implemented purely for curiosity's sake, even if it's architecture-specific.

Personally, cosmo is one of those projects that inspires me to crack out C again, even though I never understood the CPU's inner workings very well, and your work in general speaks to the pure joy that programming can be as an act of creation.

Thanks for all your contributions to the community, and thanks for being you!


That's a pretty clever hack, nicely done!


The GIL removal by that guy reverted some of the improvements made by other optimisations, so the overall improvement was much smaller.

And most people do care about single-threaded speed, because the vast majority of Python software is written as single-threaded.


> the vast majority of Python software is written as single-threaded.

This is a self-fulfilling prophecy, as the GIL makes Python's (and Ruby's) concurrency story pretty rough compared to nearly all other widely used languages: C, C++, Java, Go, Rust, and even Javascript (as of late).


Getting rid of the GIL will also immediately expose all the not-thread-safe stuff that currently exists, so there's a couple of waves you would need before it would be broadly usable.


Cool, they should start now.

As a Python dev, Python's multiprocessing/multithreading story is one of the largest pain points in the language.

Single-threaded performance is not that useful when processors have been growing sideways for 10 years.

I often look at elixir with jealousy.


Or maybe keep things the way they are. If you really need performance, Python is not the language you should be looking at.

Instead of breaking decades of code, maybe use a language like Go or Rust for performance instead.


Python is also the dominant language for machine learning, which does care about performance. The person who did the recent nogil work is one of the core maintainers of a key ML library. The standard workaround in ML libraries is that the performance-sensitive stuff is written in C/C++ (either manually or with Cython) and then exposed through Python bindings. But it would be much friendlier if we could just use Python directly.

It's also a commonly used language for numerical work in general. Most of the time numpy is enough; occasionally you'll need something not already implemented and have to write your own bindings.


> Python is also the dominant language for machine learning, which does care about performance. The person who did the recent nogil work is one of the core maintainers of a key ML library. The standard workaround in ML libraries is that the performance-sensitive stuff is written in C/C++ (either manually or with Cython) and then exposed through Python bindings. But it would be much friendlier if we could just use Python directly.

Multithreading is not really the reason why things get written in Cython etc.; you can easily see 100x improvements in single-threaded performance (compared to maybe a factor of 2-8x from multithreading). If you care about performance you'd definitely write the performance-critical stuff in Cython/Pythran/C.
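
A toy illustration of that kind of single-threaded win (assuming numpy is installed; exact numbers vary by machine):

    import time
    import numpy as np

    xs = list(range(1_000_000))
    arr = np.arange(1_000_000, dtype=np.int64)

    t0 = time.perf_counter()
    total = sum(x * x for x in xs)  # interpreted: dispatched per element
    t1 = time.perf_counter()
    total_np = int((arr * arr).sum())  # one vectorized C call, same result
    t2 = time.perf_counter()

    print(f"pure Python: {t1 - t0:.3f}s, numpy: {t2 - t1:.3f}s")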


Nope, C++ and Fortran are.

The bindings available in Python can also be used from other languages.


I would have thought convincing people they’ll just have to use Go or Rust or Elixir would have been an easy sell around here.

Turns out they just want a better Python.


>Turns out they just want a better Python.

That's Go. It gives actual types[1] and structs (so you don't have to wonder about dict, class, class with slots, dataclasses, pydantic, attrs, cattrs, marshmallow, etc). It removes exceptions and monkeypatching. It's async-first (a bit like gevent). It's inherently multicore. And you can package and distribute it without marking a pentagram on the ground and sacrificing an intern to Apep.

You just need to stop being friends with C FFI. Which is fine for web and sysadmin tools. For data science and ML/AI/DL, it's less ok.

[1] And the types are actually checked! I think a lot of people aren't really using python type checking given how slow it is and no one seems to complain. Or maybe everyone is using pyright and pyre and are satisfied with how slow these are.
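
To illustrate what "actually checked" buys you — this runs without complaint under plain CPython, and only an external checker like mypy or pyright would flag it:

    def add(a: int, b: int) -> int:
        return a + b

    print(add("1", "2"))  # prints "12" at runtime; annotations are not enforced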


Going to the very unexpressive Go from the expressivity of python is a goddamn huge jump though.

Going to JS or even TS for performance would be saner, and it has a same-ish object model even.


The expressivity in Python is a problem that needs to be solved though. Moving to JS goes the wrong way.


It already exists, but they don't want to learn other languages.


> Instead of breaking decades of code

Pin your version.


Concurrency is not needed solely for performance. I'm designing a small tool (a file ingester/processor) for myself, which is going to need several truly concurrent threads. I love Python, but I can't use it for that, so I'm learning Go.


Why put Go and Rust in the same category? I never really understood that.

Either include like almost every language from JS, Java, C# to Haskell, or just list C++ and Rust. But Go is in the former category.


Python has basically already done exactly that with 2.7 to 3 and we came out of that relatively fine.

I say bring it.


So you are essentially saying Python is obsolete. It's used for decade old code and for new code you should use go or rust.


> As a Python dev, Python's multiprocessing/multithreading story is one of the largest pain points in the language.

Hmm, how is that so?

As a python dev as well, I don't have much complaint with multiprocessing.

The API is simple, it works OK, the overall paradigm is simple to grok, you can share transparently with pickle, etc.
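
The whole pattern is a few lines (a minimal sketch; work() is a made-up stand-in):

    from multiprocessing import Pool

    def work(n):
        return n * n  # arguments and results cross process boundaries via pickle

    if __name__ == "__main__":  # guard needed where the start method is "spawn"
        with Pool(processes=4) as pool:
            print(pool.map(work, range(10)))  # [0, 1, 4, 9, ...]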


Multiprocessing is fine, but the cost of starting another interpreter is pretty visible, so you need to create a pool, and it may not be an overall speedup if the run time is short.

It takes more careful planning than async in JS, say, or goroutines.


Yeah, but for JS-style async you'd probably use an event loop in Python, not multiprocessing.


Yes. But, frankly, async is also simpler in JS than in Python: e.g. no need to start a reactor loop.


Starting the event loop is no worse than any setup in a main function; it's a one-liner: asyncio.get_event_loop().run_until_complete(my_async_main())


Errr no, that was replaced with asyncio.run quite some time ago.
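
For reference (a minimal sketch):

    import asyncio

    async def main():
        await asyncio.sleep(0.1)
        return "done"

    print(asyncio.run(main()))  # creates, runs, and closes the event loop for you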


Pickle isn't transparent though: custom objects that wrap files or database sessions need to override serialization.

The ProcessPoolExecutor is nice but shouldn't be necessary.
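
The usual workaround, sketched with a hypothetical Logger that wraps an unpicklable file handle:

    import pickle

    class Logger:
        def __init__(self, path):
            self.path = path
            self.fh = open(path, "a")  # file objects are not picklable

        def __getstate__(self):
            state = self.__dict__.copy()
            del state["fh"]  # drop the unpicklable handle
            return state

        def __setstate__(self, state):
            self.__dict__.update(state)
            self.fh = open(self.path, "a")  # re-open on the receiving side

    log = Logger("example.log")
    clone = pickle.loads(pickle.dumps(log))  # works only because of the overrides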


On the flip side, if your workload can be parallelized across thousands of cores, python has about the best CUDA support anywhere.


That would be C++ and Fortran actually.


But the python bindings are great and useful to many, so Python gets to be added to the list.


Just like any language with FFI capabilities to call the same libraries.


As another Python dev, it has basically never been a pain point for me.

Your anecdote adds little.


Pretty sure it is actually a statement that comes from the "python is a scripting language" school, and not because the huge horde of programmers that craves concurrency when they write a script to reformat a log file to csv keeps being put off by the python multiprocessing story.


Not sure I understand your point, can you clarify? Python is used across many different domains, being able to really take advantage of multiple cores would be a big deal.

I'd really appreciate if python included concurrency or parallelism capabilities that didn't disappoint and frustrate me.

If you've tried using the thread module, multiprocessing module, or async function coloring feature, you probably can relate. They can sort of work but are about as appealing as being probed internally in a medical setting.


I'm not the person you responded to, but I think the gist of it is: what it is is not defined by how it is used. Python, at its core, is a scripting language, like awk and bash. The other uses don't change that.

Occasionally, a technology breaks out of its intended domain. Python is one of these - it plays host to lots of webservers, and even a filesystem (dropbox). Similarly, HTML is a text markup language, but that doesn't stop people from making video games using it.

The developers of the technology now have to make a philosophical decision about what that technology is. They can decide to re-optimize for the current use cases and risk losing the heart of their technology as tradeoffs get made, or they can decide to keep the old vision. All of the design choices flow down from the vision.

They have decided that Python is a scripting language. Personally, I agree (https://specbranch.com/posts/python-and-asm/). In turn, the people using the language for something other than its intended purpose have choices to make - including whether to abandon the technology.

If instead Python moves toward competing with languages like go, it is going to need to make a lot of different tradeoffs. Ditching the GIL comes with tradeoffs of making Python slower for those who just want to write scripts. Adding more rigorous scoping would make it easier to have confidence in a production server, but harder to tinker. Everything comes with tradeoffs, and decisions on those tradeoffs come from the values of the development team.

Right now, they value scripting.


Python has already become a lot more than a scripting language. To say today that scripting is its core identity seems naive at best. Yes, it has those roots, but it has object-oriented and functional facets which do not exist in awk or bash.

Pandas, numpy, scipy, tensorflow. All of these go way beyond what is possible with a scripting language.

Since when is the runtime performance of a script a serious concern? Why is it a problem if this aspect gets slightly slower if it brings real concurrency support?


Awk is a functional language with a surprising number of features. Bash, not so much. Developers really do care about runtime performance of scripting languages: They often wait for scripts to finish running before doing other work, and if the sum of the time it takes to write and execute a script is too long, they will look for an alternative.

All of the libraries you have cited are from the ML and stats communities, and they are not core language features. From what I understand, ML folks like Python because it is fast to play with and get results. In other words, they like it because it is a scripting language.

Personally, I like that Python has kept the GIL so far because I would never run a 24/7 server in Python and I am happy to use it very frequently for single-threaded scripting tasks.

Edit: I didn't decide that Python was a scripting language. The Python maintainers did. The point is that the identity of a project doesn't flow down from its use cases.

Edit 2: I should have said "the identity of a project doesn't flow down from its users."


"Personally, I like that Python has kept the GIL so far because I would never run a 24/7 server in Python and I am happy to use it very frequently for single-threaded scripting tasks."

Just as a side-note - my prior gig used Python on both the server and data collection industrial systems. It was very much a 24x7x365 must-never-go-down type of industrial application and, particularly when we had a lot of data sources, was very much multi-process. It was not unusual to see 32 processes working together (we used sqlite and kafka as our handoff and output between processes) running on our data collection appliances.

Our core data modelling engine would routinely spin up 500 worker pods to complete the work needed to be done in 1/500th of the time, but we would still see some of the long term historian runs take upwards of a week to complete (many hundreds of machines for multiple years with thousands of tags coming in every 5 seconds is just a lot of data to model).

I say this mostly to demonstrate that people and companies do use python in both large-processing intensive environments as well as industrial-must-never-stop-24x7 mission critical appliances.

I don't ever recall any of our engineers looking at Python as "a scripting language" - it was no different to them than Java, C#, Rust, Go, C++ or any other language.


I know that people do use python for 24/7 "must not fail" applications. I'm just not smart enough to write python that I would trust like that. Python comes with a tremendous number of foot guns and you have to find them because there is no compiler to help, and it can be a real pain to try to understand what is happening in large (>10,000 line) python programs.


I think the key here is to see using the Python language as an engineering discipline like any other: take the classes, read the literature, and learn from more senior engineers and from real projects before attempting to develop these types of systems on your own.

I don't think anybody expects a recent graduate from computing science, with maybe 4 or 5 classes that used python under their belt (and maybe a co-op or two) to be writing robust code (perhaps in any language).

But, after working with Sr. Engineers who do so, and understanding how to catch your exceptions and respond appropriately, how to fail (and restart) various modules in the face of unexpected issues (memory, disk failures, etc...) - then a python system is just as robust as any other language. I speak from the experience of running them in factories all over the planet and never once (outside of power outages - and even there it was just downtime, not data loss) in 2+ years seeing a single system go down or in any way lose or distort data. And if you want more performance? Make good use of numpy/pandas and throw more cores/processes at the problem.

Just being aware of every exception you can throw (and catching it) and making robust use of type hinting takes you a long way.

Also - and this may be more appropriate to Python than to other languages that are a bit more stable - an insane amount of unit and regression testing helps defend quite a bit against underlying libraries like pandas changing the rules from under you. The test code on these projects always seemed to outweigh the actual code by 3-5x. "Every line of code is a liability, every test is a golden asset." was kind of the mantra.

I think that what makes Python different from other languages is that it doesn't enforce guardrails/type checking/etc... As a result, it makes it trivial for anyone who isn't an engineer to start blasting out code that does useful stuff. But, because those guardrails aren't enforced in the language, it's the responsibility of the engineer to add them in to ensure robustness of the developed system.

That's the tradeoff.


It's not that hard; we have compute-intensive servers running 24/7 in production, written entirely in Python on our side, using C++ libraries like PyTorch.

You just have to isolate the complicated parts, define sensible interfaces for them, and make sure they are followed with type hints and a good type checker.
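
A minimal sketch of that style using typing.Protocol (Embedder and DummyEmbedder are made-up names; mypy or pyright enforces the interface):

    from typing import Protocol

    class Embedder(Protocol):  # the interface the complicated part must satisfy
        def embed(self, text: str) -> list[float]: ...

    class DummyEmbedder:
        def embed(self, text: str) -> list[float]:
            return [float(len(text))]  # stand-in for the real PyTorch-backed code

    def pipeline(model: Embedder, texts: list[str]) -> list[list[float]]:
        return [model.embed(t) for t in texts]

    print(pipeline(DummyEmbedder(), ["hello", "world"]))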


>> To say today that scripting is its core identity seems naive at best. Yes, it has those roots, but it has object-oriented and functional facets which do not exist in awk or bash. Pandas, numpy, scipy, tensorflow. All of these go way beyond what is possible with a scripting language.

No, it is not. It's what you can do with a scripting language, which is a fairly straightforward functional categorization. If you are very familiar with it and really want to just ignore the performance issues and don't mind strange workarounds (e.g. elaborate libraries like Twisted), you probably use Python.

> The point is that the identity of a project doesn't flow down from its use cases.

Use cases are what determines the identity of a language. It takes a massive and ongoing marketing campaign to convince people otherwise, with limited success without use-cases. Python is popular because it's a scripting language with a relatively low learning curve (one true way to do things philosophy). That's it. It will improve over time, but it's been slow going...and that's in comparison to the new Java features!

Haskell is a shining example of how the language identity is foisted upon the public until you are convinced enough to try it. It doesn't take long to learn it's a nightmare of incomplete features and overloaded idioms for common tasks, so it isn't used. There aren't good use-cases for a language with haskell's problems, so the developer community and industry-at-large avoids it.


But all the “magic” that makes the scientific stack so great is largely due to numpy (and much other Fortran- or C-driven code) being fantastic. Python is the glue scripting language for organizing and calling procedures. That's, IMO, its original and core purpose.

Now the fact that you can do some fun metaprogramming shenanigans in Python just speaks to how nice it is to write.


Except most of them are bindings written in native languages, being scripted from Python.


If we’re going to leave Python as a scripting language (fine by me), can we get the machine learning community to swap to something better suited?

It strikes me as a bit of a waste of resources to keep stapling engineering effort into the Python ML/data ecosystem when it’s basically a crippled language capable of either: mindlessly driving C binaries, or scripting simple tasks.

What other performance, feature and technique advancements are we leaving on the table because the only “viable” ecosystem is built on a fundamentally crippled language?


From what I can tell, the ML community is moving toward Julia. I don't think anyone predicted that they would end up locked into Python so heavily.


That's not been my experience much at all, work- or research-wise. Tensorflow, pytorch, and jax are still very dominant. I've worked at several companies and interviewed at several dozen for ML roles. They have 100% been Python/C++ for ML. I'd be impressed if even 2% of ML engineers used Julia.


I feel like Julia will take more of the R people than the Python people, to be honest.


I wanted to use Julia for some experiments but it's so confusing. I would call a function with VSCode and get "potential function call error" or something, with no details. Is it valid or not?

Also, I hate the idea of re-building all the code when the program starts. Python's JIT can at least ignore the performance-critical code that's written in C++.


IMHO the majority of Python software doesn't use threads because it is easier to write single-threaded code (for many reasons), not because of the GIL.


In Golang you can spawn a green thread on any function call with a single keyword: `go'.

The ergonomics are such that it's not difficult to use.

Why can't or shouldn't we have a mechanism comparably fantastic and easy to use in Python?


Because making it easy to write C/C++ extensions that work the way you expect (including for things like passing a Python callback to a C/C++ library) has always been a priority for Python in a way that it isn't for Golang?


Any C/C++ extension that wants to enable more efficient Python has to learn the GIL and how to manipulate it as well. Including, but not limited to: how to give up the GIL (so that other Python code can progress); how to prepare your newly initiated threads to be GIL-friendly; etc.

Personally, the GIL is the more surprising part to me when interoperating with Python.


> Any C/C++ extension that wants to enable more efficient Python has to learn the GIL and how to manipulate it as well. Including, but not limited to: how to give up the GIL (so that other Python code can progress); how to prepare your newly initiated threads to be GIL-friendly; etc.

Sure, but the easy cases are easy (in particular, you can usually do nothing and it will be correct but slow, which is much better than fast but incorrect) and the hard cases are possible.

> Personally, the GIL is the more surprising part to me when interoperating with Python.

Any GCed language, including Go, will oblige you to integrate with its APIs if you want to handle complex cases correctly.


https://docs.python.org/3/library/concurrent.futures.html sort of gives you that, syntax works with threads or processes.
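
A minimal sketch:

    from concurrent.futures import ThreadPoolExecutor

    def work(n):
        return n * n

    # swap in ProcessPoolExecutor (plus an if __name__ == "__main__" guard)
    # for CPU-bound work; the submit/map API stays the same
    with ThreadPoolExecutor(max_workers=4) as ex:
        futures = [ex.submit(work, i) for i in range(8)]
        print([f.result() for f in futures])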


@Twirrim How would you rate the user experience of the concurrent.futures package compared to the golang `go' keyword?

It is architecturally comparable to async JavaScript programming, which imho is a shoehorned solution.


I agree with your point, but the vast majority of C, C++, Java, and Javacript code is also written as single-threaded. It’s fair to acknowledge that the primary use case of most languages is single threaded programming, and also that improving the ergonomics of concurrency in Python would be a huge boon.


A lot of Python runs in celery, and possibly even more Python runs as parallelized CUDA code. I'm not at all sure the majority of Python code is single-threaded, especially in more serious projects.


I think you mean the work by Sam Gross:

https://github.com/colesbury/nogil/

Interesting article about it here:

https://lukasz.langa.pl/5d044f91-49c1-4170-aed1-62b6763e6ad0...


Removing GIL also breaks existing native packages, and would require wholesale migration across the entire ecosystem, on a scale not dissimilar to what we've seen with Python 3.


They believe that the current nogil approach can allow the vast majority of c-extension packages to adapt with a re-compile, or at worst relatively minor code changes (they thought it would be ~15 lines for numpy for example).

Since c-extension wheels are basically built for single python versions anyways, this is potentially manageable.


The various trade-offs and potential pitfalls involved in removing the GIL are by now very well known, not just here on HN but among the Python core developers who will ultimately do the heavy lifting when it comes to doing this work.


> on a scale not dissimilar to what we've seen with Python 3.

If only people had asked for it before the Python 3 migration so it could have been done with all the other breaking and performance-harming changes. But no, people only started to ask for it literally yesterday so it just could not ever be done, my bad. /sarcasm

If anything, the whole Python 3 migration makes any argument against the removal of the GIL appear dishonest to me. It should have been gone decades ago and still wasn't included in the biggest pile of breaking changes the language went through.


I think probably the reason is that in 2008, consumer chips were still mostly single or dual core. Today we have consumer grade chips that have dozens of cores, so the calculus has changed.


We didn't get any performance improvements, but at least now we can call print from lambda functions; that was certainly worth the massive breakage.


I agree, please don't just accept the optimization and sweep the GIL removal under the rug again.


> I hope this release is not a case of taking the optimizations but ignoring the GIL part.

I would not be surprised. It is highly likely that the optimizations will be taken, credit will go to the usual people and the GIL part will be extinguished.



