> Maybe now I'll be able to actually figure out what data to send libraries without actually reading their source code.
One could hope, but any library abusing kwargs in all its methods is showing it's willing to do the absolute minimum to make its code usable, let alone readable and self-documenting.
It feels like we're going in cycles. C was somewhat lax with type checking, so C++ and Java were both made more strict. Looking to escape the tyranny of static typing, we got the rise of Python, Ruby, and JavaScript, which in turn left us with a desire that Rust, Go, and TypeScript now fulfill. I wonder what the next step is? LLMs are extremely broad in what they accept, but don't exactly fill the same niches.
I always saw C++ and Java addressing other issues, like low level memory management, and higher level abstractions like objects. Not so much type policy.
Anyway, I see a strong tendency among lots of programmers to dismiss anything that is not statically, strictly type-safe, while others advocate for a more relaxed system.
I see C++ as a scientific DSL definition language. It lets you create an abstraction and define arithmetic operators, iterators and even lets you control dereference and call semantics. The standard helps this scheme by defining return value optimization.
Golang isn't in the same league as the other languages mentioned. And Java and C# have come a long way since, becoming expressive as well as less syntax-heavy (type inference, records, pattern matching, string templates, etc.).
I'm curious - what do you particularly like about Fortran that isn't otherwise broadly available? Is it a matter of cultural idioms, a unique composition of features, or something else entirely?
Not OP, but I know a good use case: where I work we do lots of math and signal processing. It is done in Matlab, which is great, but then we need to run it on some embedded processor. Using the C++ generated by Matlab is beyond any hope. Had the code been written in Fortran (which is very possible, and would make the code clearer), it would run very fast.
So instead we had a team of people translating Matlab to C++.
Check out D language, it should be suitable for math, signal processing, data science, embedded, etc, and it's intended to be better than Fortran, C and C++ [1].
[1] Is Fortran easier to optimize than C for heavy calculations?
I know that Fortran is highly used in the numeric world, especially due to widespread libraries such as LAPACK and BLAS, amongst others; in your opinion, what are the characteristics that make such code much more clear when written in Fortran as opposed to C or C++?
Also, do you prefer a specific version of Fortran, or is the latest one fine?
A couple of big things: Fortran natively performs operations on whole arrays, like Matlab or NumPy in Python (Matlab was originally a REPL-style front-end to Fortran), and Fortran compilers tend to yield quite fast code (though in specific cases another language will outperform Fortran).
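For anyone who hasn't seen that style, here's a minimal NumPy sketch of what "operations on arrays directly" means; Fortran's native array syntax is very close to this:

import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([4.0, 3.0, 2.0, 1.0])
c = 2.0 * a + b   # elementwise over the whole array, no explicit loop
print(c[1:3])     # array slicing, analogous to Fortran's c(2:3)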
That website/community was created in part by the original author of the Python SymPy library, Ondřej Čertík. He is also working on his own Fortran compiler, which you can try via WebAssembly if you want to play around with Fortran: https://lfortran.org
I've only dabbled a little, but I like the general idea, and I appreciate a F/OSS Fortran compiler being developed like this alongside actively seeking to grow the Fortran community & push the language & its libraries forward.
I expect more widespread adoption of Fortran to be quite a ways out, but what Ondřej is doing for Fortran is necessary (not sufficient) for such adoption to be the case.
It is not about the language. It's about the people using it. They are typically not CS people, so you cannot expect them to program in every language. They typically know Python, Matlab, and Fortran. Of the three, the one that performs best is Fortran.
Can you give an example? I'm always curious what could be more readable than calling a few functions and composing them using properly named variables.
Modern Fortran (2008+) has built-in support for matrices, complex numbers, co-arrays (parallel programming), array slicing, etc. This makes it easy to write performant compiled code if you mostly deal with numerics.
Fortran has also added support for e.g. object-oriented programming, pure functions (no side effects for better optimization), and pointers. So idiomatic modern Fortran code looks very different from the “Fortran 77” code that many people might think of when they hear the name :).
Sure, not in the traditional sense. Since it does not make small standalone libraries, even though possible, I would not make a native extension with it. But juliacall works fine for Python and R. Quite seamless.
Well, who hasn’t used that classic data structure `dictstack` (and famously the dict.pop operation)?
At first I was kinda giggling. But actually there are such things, if the Mapping is also ordered.
LRU cache, trees, tries… and—oh wait—all CPython dicts are ordered these days!
(Honestly I have only used the modern ordered-nature of `dict` for serialization to versioned or human-editable files. But why not an algorithm with a “`stackdict`” I guess?)
The Azure SDK generally has a lot of silliness in its Python implementation. Functions that take strings as inputs, except of course they don't: they take two or three very specific string values and use them to control functionality. What those values are... well, you should take a look at the enum in the C# implementation to figure that part out.
A big use case for kwargs is not breaking compatibility and not having to copy/paste a ton of parameters when just forwarding them. But that's exactly the case which is difficult to type correctly.
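A minimal sketch of that forwarding pattern (the function names here are made up for illustration):

# Hypothetical library function with many keyword options.
def fetch(url, *, timeout=10.0, retries=3, verify_tls=True):
    return f"GET {url} (timeout={timeout}, retries={retries}, verify_tls={verify_tls})"

# Thin wrapper that forwards options without repeating them.
# If fetch() grows a new keyword later, this keeps working, but the
# signature no longer tells callers (or type checkers) what
# **kwargs actually accepts.
def fetch_json(url, **kwargs):
    return {"body": fetch(url, **kwargs)}

print(fetch_json("https://example.com", timeout=2.0))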
Either copy/paste or rolling them into config objects and passing those down is generally preferable. Copy paste doesn’t always feel great for pass through arguments but it’s perfectly interpretable.
Naked kwargs is so difficult to work with that I hesitate to think of a use case where it wouldn’t be an anti-pattern.
> Either copy/paste or rolling them into config objects and passing those down is generally preferable
Preferable for whom? I do not prefer it. I much prefer to avoid the extra work it creates for me vs. the simplicity of kwargs. I use explicit args for the function I made, then add **kwargs on the end, and then I don't have to write bespoke config objects or copy and paste a bunch of stuff that might be made obsolete by a future update to some library, polluting my function's signature. I would very much welcome a way to tell callers where kwargs is going without having to do extra work.
Preferable for those who would use your code. If it’s just you then it’s your exclusive preference. If you have users, args/kwargs is going to be more opaque than a more explicit option.
For code with many users, creating a few extra minutes of work for one dev is preferable when the alternative is that every dev who uses that code has to spend that same extra work, and then some, to grok what exactly is going on with the method signatures. Being explicit also creates traceable code, in that you can search a keyword to find everywhere it’s used or passed, rather than tracing methods where it might be used.
I can promise very few users would be thankful for the elegance and minimalism of args/kwargs when they’re source-diving trying to figure out how to get some basic functionality to work.
I think people kind of miss the idea of kwargs in Python: the idea is that they are a dict. You can unpack a dict into kwargs, in contrast to positional args.
I see the TypedDict being super useful. I really don’t see your argument, and would say that this typed dict is more developer-friendly than positional arguments.
https://docs.python.org/3/tutorial/controlflow.html#more-on-...
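For anyone unfamiliar, a quick sketch of that dict/kwargs duality (illustrative names only):

def connect(host, port=5432, user="admin"):
    return f"{user}@{host}:{port}"

config = {"host": "db.local", "port": 6543}
print(connect(**config))           # a dict unpacks into keyword arguments
print(connect(*["db.local", 80]))  # a sequence unpacks into positional args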
I haven't often been the designer of code others have used, but I have used someone else's wrappers for libraries that we use in many parts of our code. My experience of trying to use their wrappers has been a guessing game of how they decided to arbitrarily rename an argument, and finding places where they don't support arguments that I need. I get that kwargs is a bag of mystery and that there is benefit to being explicit, but it comes with tradeoffs. I don't really like kwargs, but sometimes the use of kwargs serves the purpose better than the alternatives. I've looked for solutions, but haven't found anything that avoids the above issues. If anyone has tools or techniques that eliminate these pain points, please share. The TypedDict is promising, but I'm not sure how composable they are. Time will tell.
I think naked kwargs can be abused, but there are many legitimate use cases for them. For example, we interface with a message bus that uses JSON for transport. There are several different ways to enqueue a message onto the bus, and it would add a ton of complexity and no real value to define the parameters for each of those Send APIs.
Looking at it another way, the hunk of code in charge of serializing your message does not care one whit about the innards of each message, and making it become aware would add tremendous complexity with no real value.
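Something like this minimal sketch (the bus API here is invented for illustration):

import json

# The serializer doesn't care about the fields of each message type,
# so **kwargs fits naturally here.
def enqueue(queue, message_type, **fields):
    payload = json.dumps({"type": message_type, **fields})
    print(f"-> {queue}: {payload}")  # stand-in for the real bus client

enqueue("orders", "order_created", order_id=42, total=99.5)
enqueue("orders", "order_shipped", order_id=42, carrier="DHL")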
There's at least a handful of places that you can't escape them, one of the most evident in my mind is when constructing higher order functions or other decorators. Maybe a simple example would be a retry higher order function (or decorator), where you can't know the arguments and their form ahead of time, and want to invoke the wrapped function as is (and only do something like repeating if an exception is triggered). Keyword arguments are helpful for writing certain kinds of generic code, but definitely can be easily abused (much like most of the meta-programming facilities in Python).
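A minimal sketch of such a retry decorator, just to make the shape concrete:

import functools
import time

def retry(times=3, delay=0.1):
    """Retry the wrapped function on any exception (minimal sketch)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # We can't know func's parameters ahead of time,
            # so we forward them verbatim with *args/**kwargs.
            for attempt in range(times):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == times - 1:
                        raise
                    time.sleep(delay)
        return wrapper
    return decorator

@retry(times=5)
def flaky_lookup(key, *, default=None):
    ...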
If you didn't use **kwargs you'd have to copy and paste every kwarg and its default value from the superclass into the subclass, which is ridiculous IMO.
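For example, a subclass that only adds one parameter (illustrative sketch):

class BaseWidget:
    def __init__(self, width=100, height=50, color="grey", border=1):
        self.width, self.height = width, height
        self.color, self.border = color, border

# Without **kwargs, every keyword and default above would have to be
# repeated here and kept in sync with the superclass.
class LabelWidget(BaseWidget):
    def __init__(self, text, **kwargs):
        super().__init__(**kwargs)
        self.text = text

w = LabelWidget("hello", color="red")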
> be able to actually figure out what data to send libraries without actually reading their source code
Just reading this sent a chill down my spine. I have horrible memories of having to read the code to figure out what something was doing (in JavaScript, Python, Ruby, etc.), thanks to that disaster of an anti-feature called dynamic typing.
Untyped languages tend to be at the forefront of paradigms, and typed languages come in toward the end when reliability and need for tooling are more important than innovation/discovery.
In the 90s a bunch of kids were building websites with LAMP stacks while serious engineers were building aging/about-to-be-irrelevant desktop software in serious, typed languages.
I love Python, but I also love the ability to tell what is what.
I kickstarted a project in Python a decade ago that went wild on dynamic typing. As it passed a critical threshold of ~10k LoC, it became nearly unmaintainable.
What's that `response` passed here? A verbatim response object from `requests`? A proxy mapping? Bytes? A JSON string? If so, a list or a dict? And what fields are inside?
Multiply it by tens or hundreds of methods and classes, and it's easy to see why projects based on purely dynamic patterns fail to scale.
By now I have almost completely rewritten that project to use type hints everywhere I could. And guess what? In 95% of the cases I knew exactly what was being passed - or the choice was about 2-3 types at most, so a Union sufficed.
Yes, there's a 5% of cases where Python's dynamic typing features are a blessing. The power of meta programming in Python is often underestimated. Reflection is amazingly intuitive compared to the patchwork mess of Java, Kotlin and friends. Everything can be mocked without hassle and boilerplate. And duck typing can really be useful sometimes.
But, again, in an average project I wouldn't expect code that benefits from these features to make up more than 5-10% of the codebase.
For everything else, just do yourself, your future self and anyone who will work on your code a favour, and use type hints.
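For instance, this sort of annotation (an illustrative sketch, not from that project) answers all the questions above at a glance:

from dataclasses import dataclass
from typing import Union

@dataclass
class ApiResponse:
    status: int
    body: Union[dict, list]  # the 2-3 possible shapes, made explicit

def summarize(response: ApiResponse) -> str:
    # No guessing: the reader (and mypy) knows exactly what arrives here.
    return f"{response.status}: {len(response.body)} item(s)"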
I agree. And what makes python a nice choice for this is that you can go wild in the beginning and then gradually add type hints as your project takes shape.
Statically typed languages in the 90s had a reputation for being verbose (like Java’s Cat cat = new Cat()) or difficult to work with (like C’s manual memory management). Haskell and other ML-family languages hadn’t quite gotten that popular. Type inference wasn’t a feature in many popular statically typed languages until recently, and type inference makes static typing significantly easier.
The P in the LAMP stack is still a bit of a mystery to me. It could (one could wish) have been Haskell instead, but oh well.
It's not an anti-feature. Those languages gained popularity in part because they were dynamically typed. But this debate is decades old and people always dogmatically choose one side, so not really any point in trying to convince you. Alan Kay made the argument back in the 1970s that type systems were always too limiting, therefore classes and late binding were the answer for him.
I’m speaking from my real-life lived experience with dynamic typing. I feel that dynamic typing is truly harmful for any project that involves complex logic, needs collaboration (i.e., more than one person working on it), or is simply large (even if only one person is working on it).
It was a massive waste of time to have to read piles of code, use a debugger, and inspect object structure just to understand how some parts of certain large codebases worked.
In my experience, dynamic typing has simply been horrible for team collaboration, code readability (and ease of understanding), and it results in a large number of bugs that could easily have been eliminated with static type checking.
> But this debate is decades old and people always dogmatically choose one side, so not really any point in trying to convince you.
You’re right about that. I am indeed very dogmatic and take a hard-line on this. I take a stronger position on this than most things (for example: something very subjective like curly braces versus white space indentation), to the point that I’ll say this: dynamic typing is simply a wrong and bad engineering decision.
Another example of a bad language design decision is allowing null to be a part of every type, instead of requiring an explicit ? in the type or the use of an Optional wrapper. This has been recognized as an ill, and is something many modern languages (to name a few: Rust, Kotlin, etc.) fix.
But that isn’t meant as a personal attack against the language designers of the past (or to say that they were stupid). Allowing every type to be Union[T, null] was simply a language design mistake (one that potentially wasted billions of dollars), but it has been recognized as a mistake today, one we need to rectify and move forward from (as reflected by the decisions made in more recent languages).
However, IMO, dynamic typing is in comparison a hundred times worse than not having null safety (or not having memory safety, like C or C++). I can work with C or C++ without much trouble, but not with dynamic typing. Dynamic typing makes it difficult and unpleasant to work with a large codebase, to an inexcusable degree.
Ha, you sound exactly like my own consciousness. There are few things in life I feel as resolute about as this one. Dynamic types were a mistake; let’s learn and move on. I often joke that if null references were the billion-dollar mistake, dynamic typing was the hundred-billion-dollar mistake. And we’re still living with it, but thank god for TypeScript etc. for finally saving us.
What's especially interesting about this is that it could create a new "meta" for static typing in Python.
One significant issue with static typing in Python is how much boilerplate is required to use types when also doing the sorts of things that dynamic Python is really good at - for instance proxying functions. If you want to do that now and preserve the types, you need to re-declare the types of everything in the wrapper.
Now, if the underlying function already made use of Unpack, you could "reuse" that type in your own wrapper with low boilerplate and less chance of things diverging in hard-to-refactor ways.
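A sketch of what that reuse can look like with PEP 692's Unpack over a TypedDict (names here are illustrative):

from typing import TypedDict, Unpack

class RequestOpts(TypedDict, total=False):
    timeout: float
    retries: int

def request(url: str, **kwargs: Unpack[RequestOpts]) -> None:
    print(url, kwargs)

# The wrapper reuses the same TypedDict instead of re-declaring
# (and risking divergence from) every keyword:
def cached_request(url: str, **kwargs: Unpack[RequestOpts]) -> None:
    request(url, **kwargs)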
Yep, it’s incomplete, and much more importantly not machine-readable. These days I want all my code to pass strict mypy. It’s mostly possible, and bliss when it works, but libraries (ab)using kwargs throw a spanner in that. Libraries where everything is a kwarg and the docs have to be consulted are a killjoy. And they cause tons of bugs from misuse!
If I understand you correctly, I think ParamSpec (since 3.10) is what you are looking for, especially if you want to be generic over the type of the inner function. The example from the docs (https://docs.python.org/3/library/typing.html#typing.ParamSp...):
from collections.abc import Callable
from typing import TypeVar, ParamSpec
import logging

T = TypeVar('T')
P = ParamSpec('P')

def add_logging(f: Callable[P, T]) -> Callable[P, T]:
    '''A type-safe decorator to add logging to a function.'''
    def inner(*args: P.args, **kwargs: P.kwargs) -> T:
        logging.info(f'{f.__name__} was called')
        return f(*args, **kwargs)
    return inner

@add_logging
def add_two(x: float, y: float) -> float:
    '''Add two numbers together.'''
    return x + y
Software is always user-upgradable on Linux. Just install it somewhere in your home directory. GNU Stow [0] can be helpful as a very lightweight way to manage the packages.
(Of course, then you take on the responsibility of keeping up with patch releases yourself, which is why we use distros. But if it's just a small number of packages on top of a distro-managed base system, it's perhaps not so bad.)
I agree that compiling Python from source is surprisingly straightforward, but is that a serious question?
Have you ever worked in a place that uses Python? When someone says to you "hey it's not working" are you really going to say with a straight face "oh yes, you just need to compile Python from source". Come on, this is one of those obviously stupid situations that for some reason people feel the need to defend. It's not defensible.
You don't need to compile Node or Rust or Go or Deno from source to install the latest version.
Just run the miniforge install script if you want a very friction-free install. I'm not a big conda fan, but the "install in my home directory" use case is very well covered by miniforge.
https://github.com/conda-forge/miniforge/
Are you really going to support rhel8 as a platform for your project which uses specific python 3.12 rc3 features? Well there's always podman, I guess.
You have a minor bug -- when len(lst) is a multiple of batch_size, this will have an extra iteration at the end with an empty batch. The fixed version is `range((len(lst) + batch_size - 1) // batch_size)`, which emulates `ceil(len(lst) / batch_size)`. Yet more proof that this should be part of stdlib :)
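Putting the fixed version together (a quick sketch):

def batches(lst, batch_size):
    n_batches = (len(lst) + batch_size - 1) // batch_size  # == ceil(len(lst) / batch_size)
    for i in range(n_batches):
        yield lst[i * batch_size:(i + 1) * batch_size]

# No trailing empty batch when the length divides evenly:
assert list(batches([1, 2, 3, 4], 2)) == [[1, 2], [3, 4]]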
Personally I think I'd actually write it like this:
for i in range(0, len(lst), batch_size):
    batch = lst[i:i+batch_size]
The docs give another pretty nice implementation using iter() and islice() in a loop (but it uses the walrus operator `:=` so it requires Python 3.8+ as written).
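For reference, that recipe is roughly this:

from itertools import islice

def batched(iterable, n):
    # batched('ABCDEFG', 3) -> ABC DEF G
    if n < 1:
        raise ValueError('n must be at least one')
    it = iter(iterable)
    while batch := tuple(islice(it, n)):
        yield batch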
99% of my more_itertools imports are exactly for this.
There are one or two other things from more_itertools that I think should make it into itertools. I'd actually like to see statistics from huge monorepos/open source about usage of the various more_itertools functions.
What do you mean by "empty sequence in"? The function doesn't raise if the input iterable is empty: it only raises if the chunk size n is 0. While that does have a natural interpretation of returning an infinite sequence of empty tuples, such a behavior would be qualitatively different than for other chunk sizes. The caller would never be able to retrieve any elements from the input iterable, and the output would be infinite even if the input is finite. In that light, it makes some sense (IMO) to avoid letting applications hit such an edge case unintentionally.
> PEP 669 defines a new API for profilers, debuggers, and other tools to monitor events in CPython. It covers a wide range of events, including calls, returns, lines, exceptions, jumps, and more. This means that you only pay for what you use, providing support for near-zero overhead debuggers and coverage tools. See sys.monitoring for details.
Low-overhead instrumentation opens up a whole bunch of interesting interactive use cases (i.e. Jupyter etc.), and as the author of one library that relies heavily on instrumentation (https://github.com/ipyflow/ipyflow), I'm very keen to explore the possibilities here.
Summary, sorry for poor formatting, I'm not sure HN can do a list of any kind?
New features
More flexible f-string parsing, allowing many things previously disallowed (PEP 701).
Support for the buffer protocol in Python code (PEP 688).
A new debugging/profiling API (PEP 669).
Support for isolated subinterpreters with separate Global Interpreter Locks (PEP 684).
Even more improved error messages. More exceptions potentially caused by typos now make suggestions to the user.
Support for the Linux perf profiler to report Python function names in traces.
Many large and small performance improvements (like PEP 709 and support for the BOLT binary optimizer), delivering an estimated 5% overall performance improvement.
Type annotations
New type annotation syntax for generic classes (PEP 695); see the short sketch after this list.
New override decorator for methods (PEP 698).
Deprecations
The deprecated wstr and wstr_length members of the C implementation of unicode objects were removed, per PEP 623.
In the unittest module, a number of long deprecated methods and classes were removed. (They had been deprecated since Python 3.1 or 3.2).
The deprecated smtpd and distutils modules have been removed (see PEP 594 and PEP 632). The setuptools package continues to provide the distutils module.
A number of other old, broken and deprecated functions, classes and methods have been removed.
Invalid backslash escape sequences in strings now warn with SyntaxWarning instead of DeprecationWarning, making them more visible. (They will become syntax errors in the future.)
The internal representation of integers has changed in preparation for performance enhancements. (This should not affect most users as it is an internal detail, but it may cause problems for Cython-generated code.)
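As referenced above, a short sketch of the new PEP 695 generics syntax (my own example, not from the release notes):

# No explicit TypeVar declarations needed anymore:
class Stack[T]:
    def __init__(self) -> None:
        self.items: list[T] = []

    def push(self, item: T) -> None:
        self.items.append(item)

def first[T](items: list[T]) -> T:
    return items[0]

type Point = tuple[float, float]  # the new `type` alias statement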
Isolated subinterpreters (PEP 684): Just how isolated are those? Is it simply a more complicated version of multiprocessing, with all the same drawbacks (communication via pipes/socket/some-other-stream)?
It looks more complicated. You get the same cumbersome communication primitive (channels), except now native code can easily get messed up. And it requires more care when developing the interpreter itself.
Using multiprocessing was actually pretty easy (apart from the communication primitives, which obviously suck).
Multiprocessing means you need to deal with signals (which are ancient and powerful footguns), handle processes being killed, platform differences (fork vs spawn), etc. It really is a bad solution. Isolates are much much better.
> You get the same cumbersome communication primitive (channels), except now native code can easily get messed up.
I'm not sure what you mean by that. Native code has to be thread-safe, sure, but now it also can be thread-safe in a meaningful way: you can have native code that is actually properly multithreaded. A big win.
> And it requires more care when developing the interpreter itself.
However, if one is using C++/Rust anyway, then it's a good idea to stay away from Cython.
From afar, Cython seems like a viable solution for Python/C++ interop. But the details get messy: you need to clone the .h headers into .pxd Cython-readable headers, and more advanced template-magic C++ constructs may end up not being directly usable in Cython due to missing features or bugs in its C++ support.
In the end, we ended up with quite a number of layers wrapping each other:
1. actual C++ implementation
2. actual C++ header
3. C++ wrapper implementation, avoiding constructs that Cython doesn't support
4. C++ wrapper header
5. Cython .pxd for step 4
6. Cython .pyx exposing `cdef class`es to Python with a nice Python-style API for the original C++ library.
7. Hand-written .pyi for type checking the remaining Python code, because Cython doesn't have support for auto-generating these yet.
Had we used pybind11 / nanobind instead, we could have stopped at step 3. Cython started easy, but ended up being a major maintenance burden.
What is or isn't "pythonic" is largely determined by the community not the language. Nothing is stopping the Python community from monkey patching everything like the Ruby community does.
The f-string changes arrived because there was a need to formalize the syntax (so that other Python parsers can implement it), for CPython to move off having a special sub-parser just for f-strings, and to be able to decide whether weird edge cases were bugs or features.
Once formalized, it was decided not to put arbitrary limits on it just because people can write badly formatted code; people can already do that, and it's up to the Python community to choose what is or isn't "Pythonic".
FYI, one of the things I'm really looking forward to is being able to write: f"Rows: {'\n'.join(rows)}\n Done!"
Python tends to be permissive and rely on convention over preventing certain practices, sometimes summed up as "we are all consenting adults." E.g., there's multiple inheritance, no private variables, and monkeypatching. I see this change as in the same vein. This change also makes it conceptually simpler [0]. It also appears to reduce technical debt by reducing differences between expressions in f-strings and in the rest of the language.
Error messages keep getting improved, and this is one of my favorite features. But I'd love it if we could get some real rich text. Idk if anyone else uses rich, but it has infected all my programs now. Not just to print with colors, but because it makes debugging so much easier. Not just print(f"{var=}") but the handler[0,1]. Color is so important to these types of things, and so is formatting. Plus, the progress bars are nice and have almost completely replaced tqdm for me[2]. They're just easier and prettier.
[2] Side note: does anyone know how to get these properly working when using DDP with pytorch? I get flickering when using this and I think it is actually down to a pytorch issue and how they're handling their loggers and flushing the screen. I know pytorch doesn't want to depend on rich, but hey, pip uses rich so why shouldn't everyone?
I love and use rich too, but gosh I hope that libraries don't start depending on it just because pip does.
It has a lot of dependencies of its own, and dependency creep is real. I know pytorch isn't exactly lightweight in terms of dependencies, but I prefer using libraries that make an effort to pull in only absolutely necessary dependencies.
Yeah, sorry, I don't think I was clear. I don't exactly want to just drop rich into the Python source (for the reasons you mention). But I do think they could take some of the ideas from it. Formatting is really the most important aspect here, especially around traces, because these are the real work amplifiers. So much time is spent debugging that the better the tools we have to debug, the more work everyone gets done. But debugging is strangely an underappreciated area. I think you could do colors with just simple ANSI escape codes (same as you'd do in POSIX). I'm just using rich as an example of style.
Re: pytorch, it's a love-hate thing for me. I do think they should incorporate things that are extremely common and solve daily issues. As a simple example, new users are often confused by loading and saving models when using distributed data parallel (DDP), because it creates this extra "module" name in the state_dict and so can require different usage for saving/loading models depending on whether or not you're doing distributed training. This can be quite annoying. Similarly, there are no built-in infinite samplers, which are common among generative modelers, people who iterate over steps rather than epochs of data. There are of course many solutions to deal with this, but given how prolific the pattern is (and has been since 2015), it makes sense for there to be a built-in dataloader. I'd argue things like progress bars and loggers would also be highly beneficial, especially because pytorch's forte is generating research code.
Would there be any way to inject such formatting in by messing with the interpreter at runtime? I assume it would be possible to get as far as "works most of the time" and "doesn't create too many additional problems".
Like how the rich logging handler works? Or something different? You can definitely write custom handlers for python loggers. Pytorch is a bit more of a pain though.
I think the support for isolated sub-interpreters with separate Global Interpreter Locks is the most interesting new feature in Python. It is probably not the best way to offer some sort of concurrency, but it's still a step closer to maybe one day getting rid of the GIL.
Since it currently lacks any way to transfer objects between interpreters other than pickling, does it offer any advantage over the multiprocessing module?
Not for pure Python code; but there are massive advantages for mixed C(++) and Python: I can now have multiple sub-interpreters running concurrently and accessing the same shared state in a thread-safe C++ library.
Previously this required rewriting the whole C++ library to support either pickling (multiplying the total memory consumption by the number of cores), or support allocating everything in shared memory (which means normal C++ types like `std::string` are unusable, need to switch e.g. to boost::interprocess).
Now it is sufficient to pickle a pointer to a C++ object as an integer, and it'll still be a valid pointer in the other subinterpreter.
Same with Rust and Python; this is really neat because now each thread can have its own GIL without doing exactly what you said. The PyO3 commit to allow subinterpreters was merged 21 days ago, so this might "just work" today: https://github.com/PyO3/pyo3/pull/3446
Wouldn't you also gain some overhead reduction? Multiprocessing has more overhead for spawning new processes vs. a single process containing the sub-interpreters. Or am I missing something?
Because in our case, all threads would be running a mixture of Python and C++.
Our core data structure is implemented in C++, large (often >10 GB), and can be shared across threads (thread safe, usage is mostly read-only).
But we have lots of analysis algorithms accessing that data structure, most implemented in Python. We tried releasing the GIL for every tiny call to C++, but that approach can barely keep two cores busy due to constant fighting over the GIL. (there's no good "inner loop" that could avoid touching the GIL within the loop body)
Rewriting most/all of the Python analyses in a different (GIL-free) language is a no-go; the analyses have accumulated over the years and now there are more than a thousand of them. It would consume all our development resources for the next ~5 years. In retrospect I can say that choosing Python for these was a major mistake, but it's one that cannot be fixed without a company-killing rewrite :(
We actually invested several months of developer time in allocating our core data structure in shared memory, allowing us to parallelize with multiprocessing. But there's still a whole bunch of ancillary data structures written in C++ that are not so easy to put in shared memory, so all analyses touching those are limited to a single process, which by Amdahl's law immediately started dominating our execution time.
Let's say you wanted to run some Python code and test it against some C/C++/Rust for accuracy of some sort (numerical, lexicographical, etc.). In the old way you would have to fire up multiple OS-level processes to do that; now you can have your multithreaded compiled code running in threads and your multi-GIL'd interpreted code all running in one process, and compare their results in the `main` of your C/C++/Rust. That's a contrived example, but the issue was that a single GIL isn't thread-safe in and of itself. So if you were using these compiled languages as sort of Python runners, you couldn't multithread Python interpreter execution and guarantee the code working. Also, as the above comment stated, you could do hacks, but you'd double your memory allocation by needing a Python and a C/C++/Rust representation of everything that went back and forth.
It may not be a step towards that. Ruby has Guilds, which is a very similar idea, and they are explicitly not working towards removing the GIL altogether at this point. Matz did a full keynote at the latest Euruko defending the decision not to work towards removing it. See https://www.youtube.com/watch?v=5WmhTMcnO7U&t=1244s for the full talk if you are interested.
I find the convergent evolution of features in these two languages pretty funny, as it is very clear that they don't really look at implementation details of the other language even if they quite often land on ideas that are pretty close in practice.
PEP 632: Remove the distutils package. See the migration guide for advice on replacing the APIs it provided. The third-party Setuptools package continues to provide distutils, if you still require it in Python 3.12 and beyond.
gh-95299: Do not pre-install setuptools in virtual environments created with venv. This means that distutils, setuptools, pkg_resources, and easy_install will no longer be available by default; to access these, run pip install setuptools in the activated virtual environment.
The asynchat, asyncore, and imp modules have been removed, along with several unittest.TestCase method aliases.
What are the best ways to utilize PEP 669? I’m always looking for two things: a good “time-travel” debugging experience (PyCrunch-Trace seems to be broken for me) and for a way to generate and view large traces (Firefox Performance Profiler can’t open traces that lasted 2-5 minutes because they're too large)
PEP 669: Low impact monitoring for CPython
PEP 669 defines a new API for profilers, debuggers, and other tools to monitor events in CPython. It covers a wide range of events, including calls, returns, lines, exceptions, jumps, and more. This means that you only pay for what you use, providing support for near-zero overhead debuggers and coverage tools. See sys.monitoring for details.
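I haven't built tooling on it yet, but a minimal sketch following the documented shape of the sys.monitoring API looks like this:

import sys

TOOL = sys.monitoring.PROFILER_ID
sys.monitoring.use_tool_id(TOOL, "demo-tracer")

def on_py_start(code, instruction_offset):
    # Fires whenever a Python function starts executing.
    print(f"entering {code.co_qualname}")

sys.monitoring.register_callback(
    TOOL, sys.monitoring.events.PY_START, on_py_start)
sys.monitoring.set_events(TOOL, sys.monitoring.events.PY_START)

def demo():
    pass

demo()  # -> entering demo
sys.monitoring.set_events(TOOL, sys.monitoring.events.NO_EVENTS)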
That’s what I adore about truly open source projects such as Python, but also Linux or Rust, for example. They’re not polished products of faceless corporations. They have rough edges, as everything does, and that’s okay.
And time and again some human spirit shines through, like it does here. A welcome reminder we’re all in this together, and that next quarter’s shareholder revenue is genuinely meaningless in the bigger picture.
> That’s what I adore about truly open source projects such as Python, but also Linux or Rust, for example. They’re not polished products of faceless corporations
Most corporate products are absolutely horrible compared to Python; the polished ones you know of are the few exceptions. I've seen a few internal programming languages, and anything publicly known is ... at least usable.
It's a poem advocating empathy, people, not an edict on border control policy. Are people really so sensitive that they can't handle being reminded that other people may think differently than them?
I dislike it because it's kind of hard to understand (I can't really say I did), especially without any context, and programming language release notes are something people should be able to read even a hundred years from now.
I totally disagree with your opinion. I liked it because I found it easy to read (both in skim and in depth, as well as backwards), didn't require additional context (title was included), and programming release notes capture the historical context of the release (where refugees are subject to extreme NIMBYism in our day and age).
Isn't it obvious? Someone's "temporary hack" stopped working, and even though it's not been important enough to refactor for the past 100 years it's important enough to drop everything right now to fix when it breaks.
PL research, reverse engineering, or just fixing compatibility. Say you want a program from the 3.6 era, but the 107-year-old interpreter doesn't quite suit you and the newest one broke something, so you bisect interpreter versions and find that 3.11 to 3.12 broke it.
Multiplanetary species that has merged with AI thanks to neuralink-esque tech discovers a bug in the brains of 5% of the population. Turns out the bug is due to Python 3.12 which was used to write the natural language understanding (NLU) engine of the brain chips.
I’m not sure; an unbalanced opinion is obviously unbalanced, and it’s easier to see it as not being the whole picture. But a seemingly reasonable opinion combined with a straw man of an opposing view, is harder to unlearn.
That is true enough. In this case, I think a useful learning outcome is that the backwards reading is, in theory and ignoring the emotional word choices, as politically extreme as the forwards reading. However, I doubt most will see it this way.
Not really, it's more of a 'construct-a-box-to-have-an-argument-in' approach. E.g. one could go off laterally in many directions. We could say, 'easy immigration policy is a neoliberal plot to drive down wages in the USA to ensure that current wealth inequality is maintained' or we could say 'immigration is wonderful because it brings in highly skilled people with unique talents and perspectives that are of great benefit to the US economy', and so on. It's a complex topic with a lot of historical context and there are at least half a dozen ways to analyze it from a cause-and-effect perspective - just the kind of discussion that social media can't handle well.
The reversibility trick is kind of cute - but can anyone write a legitimate Python code statement that also works in reverse? I sort of doubt it, the function declaration has to come first.
I despise random politics injections as well, but this is rather tame and very much in line with the ironic spirit of how Python release notes are written. I don't think it's offensive to anybody who doesn't go out of their way to become offended by things.
Both-sides-ism is not some kind of "free points" move in an argument. It's entirely consistent to believe (a) that a "tame" political statement supporting immigrants is acceptable in release notes, while also believing (b) that a cutesy poem defaming immigrants would be abhorrent and unacceptable. No one has an obligation to think that all political messages are acceptable just because they think that some political messages are sometimes acceptable.
Just to take an extreme example, imagine someone put in a note expressing empathy for the families of the victims of a natural disaster in their release notes. The following conversation takes place on Hacker News:
A: I don't really think these sorts of statements have a place in release notes. They don't ultimately accomplish anything or help anyone.
B: I think there's no problem with expressing compassion like this. It's not particularly obtrusive and isn't harming anyone.
You: How would you feel if the release notes had expressed glee at the disaster instead? You would oppose that, right? That means you must also oppose empathetic messages in release notes, to be consistent.
> It's entirely consistent to believe (a) that a "tame" political statement supporting immigrants is acceptable in release notes, while also believing (b) that a cutesy poem defaming immigrants would be abhorrent and unacceptable.
The opposing cutesy poem wouldn't be defaming immigrants!
It would be acknowledging a nation can't exist without borders, sovereignty matters, and unchecked migration is not universally good.
That so many commenters here are missing a key factor here (that the opposing view is presented as a disingenuous straw-man in the release notes) is quite illustrative of why it's not being seen as an issue.
I think you've missed the entire point of the structure of the poem - the backward reading (ie, immigrants may share my home and food) is as "tame" as the forward reading.
>That means you must also oppose empathetic messages in release notes, to be consistent.
Yes. It will be interesting reading these cultural artifacts after 50 further years of geo-political development.
Could you articulate what that opposing view is in this case?
It's also worth pointing out that Python (and its ecosystem) is developed by an international team of people, most of whom volunteer their time out of good will and a sense of shared purpose across diverse cultures.
The "both sides" logic seems to miss the point that a world without mutual aid and international, cross-cultural cooperation would be one that would not have Python and its ecosystem. It's not political for the Python developers to support such a view; such a view is foundational to the existence of Python.
One can be supportive of mutual aid and international, cross-cultural cooperation while also acknowledging that a nation doesn't exist without borders, sovereignty matters, and that unchecked migration is not universally good.
When you include the contemporary context of illegal US border crossings at record numbers [1], it's clearly making a political statement. Pretending the development of a programming language is inexorably linked to unchecked immigration is disingenuous.
Could you point out where they are supporting "unchecked migration"?
The statement there only reads to me as a call to have compassion for those caught up in what will be a perpetually escalating migration crisis (and will likely soon make migrants of those who are currently protecting their borders).
> Pretending the development of a programming language is inexorably linked to unchecked immigration is disingenuous.
Where am I (or anyone else) making that claim? I'm claiming cross-cultural/international cooperation and mutual aid are unquestionably tied to the development of not just a programming language, but all open source software. Anyone who has worked in this space at all can likely list someone they have worked with on nearly every continent.
The fact that you view calls for compassion and empathy as calls for "unchecked migration" is a bit concerning.
It's against zero immigration and building walls to keep out irregular immigrants (by definition) like refugees, none of that implies unchecked immigration.
I'm guessing from this comment that you're not an American, or at least not from the US.
>Do not be so stupid to think that
>A place should only belong to those who are born there
Does not imply unchecked migration, since the entire social fabric of the United States is based upon people moving here from somewhere else. Currently around 14% of US citizens are foreign-born. Claiming they don't belong here would be considered an extreme view even among fairly right-wing Americans. Implying that only people born here belong here is, in fact, an absolutely zero-immigration view, one that would essentially destroy the United States, since our population growth is almost entirely from immigration.
Likewise:
>It is not okay to say
>Build a wall to keep them out
US immigration has not resorted to wall building as a form of immigration control for all but the most recent years. Again, nobody I know who is opposed to wall building supports "unchecked migration".
That's fair. I interpreted "no walls" as "open borders" but you're right that there are ways of enforcing limitations on immigration that aren't physical barriers at the border.
People born here do not have an automatic right to US citizenship. The 14th Amendment was intended for the descendants of slaves. It was not intended to cover the child of a diplomat born in the USA. Nor was it similarly intended for people who entered the country illegally. In time we will review cases of so-called birthright citizenship for those who are children of people who entered illegally, and correct the status. As noted, foreign diplomats children born here, do not have US citizenship for a reason.
Putting aside the fairly explicit lines in the poem: it's the context. It is not the case that the western world currently has no immigration and the poem makes the case to allow at least some.
More immigrants than ever are pouring over the US / Mexico border. The mayor of NYC (a self-proclaimed "sanctuary city") is now warning of the city's destruction as a result of the overwhelming influx [1]. This mayor is politically aligned with the party of our President, who presumably has no political interest in embarrassments like this, yet it's still happening.
"Unchecked migration" is essentially what is already happening, at least in the US. A cutesy poem in the release notes of software (??) that paints the side opposing it as mouth-breathing bigots and the supportive side as empathetic truth-tellers is unnecessary at best.
One of the two opinions, which are all that exists. All reasonable people have the one opinion, and all other people are all idiots who all have the other opinion.
I don't think this is a political advocacy. It's just a personal expression about a social issue. I found the way it was written quite thoughtful, actually, which I respect (though I don't like in its entirety).
But I upvoted your comment because I don't think there's a reason for people to downvote your comment.
It's absolutely acceptable that someone dislikes the poem and wants to express it here.
People confuse the purpose of "downvote". It's not for disagreement. Downvoting buries a comment, and we shouldn't bury something just because we disagree with it; that's against one of the most basic values of human freedom. If it's just a disagreement, reply to it expressing your view.
I do feel like it can help to break the herd mentality thing of "if it's already downvoted, downvote it more" by making people think about why they've done it
I don't particularly care what the venture capitalists who own this website want me to do. Nor is it surprising that they don't want politics here. Political consciousness can only be bad news for the uber-wealthy.
There needs to be a separate mechanism for disagreement versus quality (affecting ranking). Both of these should be separate from flagging for moderation.
There is. Replying. If there already is a disagreeing reply, you can upvote that. Downvoting a constructive and well-written post just because you disagree is not justified.
Those weathered by enough time and strife in this forum could predict this comment and ensuing thread a mile away after reading the announcement. After enough iterations, it all feels like a dance or ritual. Like watching birds mingle and bicker out your window. Just another day, the universe humming along. Things are as they should be :)
Wait until you have to use something like npm or yarn... and try to read the hidden log messages between the emojis, the "support X or Y" messages, the jobs search, and more.
If you'd like to see another viewpoint (or no viewpoint) espoused, volunteer to help Python with their releases and you'll have a chance to provide input to their release process.
For people who downvote Bostonian for his comment, how would you like it if that announcement, say, contained an abortion-related proclamation that you don't agree with?
"Share our food, Share our homes and Share our countries" is quite a big demand on everyone else. I am certainly not willing to do so without limit and without regard to context (e.g. the Muslim gang war that flared up in Sweden).
It's a statement about immigrants being people, that the hateful stories we tell about them are false, and that we have a responsibility to them as fellow humans. It advocates for no specific political action; you can hold these beliefs and still be against immigration for whatever reason.
It's also a statement against rational thinking and discussion, because it portrays both sides to an almost satirical degree: "no borders" vs. "all migrants are thieves/murderers/bombers".
There are 8 billion people in the world, and the number is still increasing. You will always have to choose whom you feel responsibility towards, because all 8 billion are beyond anyone's capability.
I feel like you're applying programmer brain to statements that aren't actually commitments but north stars pointing us in the direction of how we should try and do better. Or phrased differently, "We might be the greatest country on earth, but an even greater country would ___"
And in this case to me the blank is "take on the world's tired huddled masses and set them up to be just as self-sufficient and successful as their native born." And I think that's pretty actionable, just about anyone could probably think of more than a few things that would push us closer to this.
>"Share our food, Share our homes and Share our countries" is quite a big demand on everyone else. I am certainly not willing to do so without limit and without regard to context
Neither are a lot of the people who initially advocated for the policies, ie NYC, as we're finding out.
Immigration is great. Unfettered illegal immigration is not. It puts pressure on social infrastructure and causes social strife. Look at us here in Canada. Most of our immigration is legal, and yet our wealth-per-person is shrinking because we can't build infrastructure fast enough to keep up with population growth.
There certainly is an equivalent of NIMBY in Western asylum politics.
A lot of Green/Progressive voters in Western Europe live in affluent neighbourhoods where practical effects of current migration waves are very limited, and often positive (e.g. cheap workforce for your household, but your kids' school does not suffer from any gang activity).
Voting patterns across income groups tend to reflect that discrepancy.
That seems to misrepresent reality. Generally speaking, in all elections that I am aware of, rural regions lean right while urban regions lean left. In fact, in general, it seems anti-immigration/foreigner stances are almost inversely proportional to the number of immigrants/foreigners a person might encounter during their day.
Just two counterexamples (anecdotes, but I'm sure a bit of searching will reveal numbers to back this up): the first constituency to directly elect a Green candidate was Friedrichshain-Kreuzberg in 2002, likely one of the places in Germany with the highest proportion of immigrants (and not yet gentrified like it is today).
Another anecdote: this map of the French elections
https://img.lemde.fr/2022/04/11/0/0/1051/1674/800/0/75/0/869...
shows that Le Pen's anti-immigration party mainly wins in rural areas, while the urban centre of Paris in fact votes for the most left-wing candidate.
"In fact in general it seems anti-immigration/foreigner stances are almost anti-proportional to the number of immigrants/foreigners a person might encounter during their day."
That is a chicken-and-egg question. "White flight" is a thing, and people who moved away from ghettoizing cities/neighbourhoods into the surrounding suburbia will likely vote against further immigration.
But that's typically not happening either. In fact, usually the opposite happens: urban areas with lots of immigrants get gentrified, because everyone wants to live there.
On top of that, we are now seeing that outer suburbs which were guaranteed winning electorates for right parties are now becoming more and more left leaning because young urban dwellers are moving there because they can't afford the cities.
Show me the evidence for "white flight"; it certainly doesn't happen in most European metropolitan areas. The map of the French elections certainly didn't show that the areas surrounding the big cities lean right; and regions like Sachsen and Thüringen, which have the highest support for right-wing parties in Germany, see lots of people leaving, not moving there. That's one of the reasons why cities have become increasingly unaffordable.
> In fact in general it seems anti-immigration/foreigner stances are almost inversely proportional to the number of immigrants/foreigners a person might encounter during their day.
Maybe rural people are well aware of what's happening in urban regions and don't want it? Maybe these people like their environment as it is and see no point in change? I don't know, just guessing. :-)
The intention of the text is quite clear. It looks down upon people opposing immigration. The arguments from top-to-bottom and from bottom-to-top are quite different in tone.
IMHO there is nothing "fortunate" about dragging politics into programming, regardless of your or mine views.
I would find it fortunate and valuable if at least certain human activites stayed out of political culture wars entirely. If, instead of "my side, your side", there simply wasn't any need to think of a side for a moment.
I already deleted my FB and TW accounts to get rid of incessant political flamewars, and I feel aghast that they are now following me to Python release notes, of all places.
I’ve always considered the no-politics at work rule to be a neutral zone in an armistice. It’s been in effect for so long people have forgotten why it existed and see nothing wrong with resuming agitations. The ‘politics is personal’ and ‘bring your whole self to work’ shift is an intentional reinsertion of politics back into work and like the Chesterton’s fence parable I think we’ll rediscover why that rule was there in the first place.
The people advocating for bringing politics to work forget that people with opposing views will also do the same, and then you end up with a hostile, polarized and distracting work environment.
So the only choices are "let's abolish borders and live happily ever after" or "all migrants are killers/thieves/bombers, we should build a wall". How diverse and subtle! If you frame problems like that, it's easy to think you're in the "good guys" camp.
In the context of Python 3.12, neither fits my needs. Wouldn't it be nice if culture wars didn't infect every single aspect of our everyday lives? What this guy did was spread useless political flamewar from the hell that calls itself social networks to a random professional webpage. Why exactly? Isn't there enough flamewar out there?
It is positively totalitarian to drag politics into everything. The very meaning of "totalitarian" is that nothing is allowed to remain non-political.
So then we're making this about freedom as the central moral aspect. Then what about the freedom of the Python maintainers to post what they want? Why the selective defense?
False equivalency. The equivalent would be someone commenting saying they shouldn't publish release notes because of Reasons, and I would defend their right to publish release notes.
Your interpretation of "equivalency" is hard to understand in any context except one which seeks to limit the disagreement to an extremely narrow "debate" that just so happens to validate the ideology implied by your selective defense.
My response is to point out that your narrowing of the space of discussion is not only noticed but acknowledged and explicitly challenged. In a conversation about morals, this framing of yours is inappropriate and counterproductive. It is notable that you responded the way you did, implying you're trying to keep this tactic subversive rather than explicitly acknowledging it. A classic move in rhetoric, to be sure.
You're acting like a smarmy high schooler who thinks they're good at reasoning because they're good at "debate".
No, it doesn't "just so happen", any more than saying "everyone should be given a fair trial" in a legal system "just so happens" to apply to the guilty and innocent.
> Clear?
Your response is entirely rhetoric, attempting to criticise me rather than my argument. It's clear how you feel; not what you think.
It's so hecking wholesome I started crying. My wife's son came to ask me, "Frank, why you crying?". I had to explain to him there's evil people in this world who oppose open borders.
Ah yes. People who disagree with you are evil. Maybe ask them why they disagree. Maybe they see societal problems with open borders. Maybe there's a reason governments don't just have open borders, and instead try to manage the immigration process.
oh cool, multi-line f-strings. With inline comments!
f"This is the playlist: {", ".join([
'Take me back to Eden', # My, my, those eyes like fire
'Alkaline', # Not acid nor alkaline
'Ascensionism' # Take to the broken skies at last
])}"
This probably isn't a good motivating example - my first thought was that this is an overcomplicated way to write
"This is the playlist: " + ", ".join([
'Take me back to Eden', # My, my, those eyes like fire
'Alkaline', # Not acid nor alkaline
'Ascensionism' # Take to the broken skies at last
])
Which is easier to read and exactly as long.
(In fact, without 3.12-compatible syntax highlighting, my first intuition upon reading the first line would be to suspect the code of having a typo.)
I don't find that example particularly compelling. To me it's more legible using regular strings:
"This is the playlist: " + ", ".join([
"Take me back to Eden", # My, my, those eyes like fire
"Alkaline", # Not acid nor alkaline
"Ascensionism", # Take to the broken skies at last
])"
> When f-strings were originally introduced in PEP 498, the specification was provided without providing a formal grammar for f-strings. Additionally, the specification contains several restrictions that are imposed so the parsing of f-strings could be implemented into CPython without modifying the existing lexer. [...]
> The other issue that f-strings have is that the current implementation in CPython relies on tokenising f-strings as STRING tokens and a post processing of these tokens. This has the following problems: [...]
> By building on top of the new Python PEG Parser (PEP 617), this PEP proposes to redefine “f-strings”, especially emphasizing the clear separation of the string component and the expression (or replacement, {...}) component.
You “just” have to make your parser understand arbitrary expressions inside the braces. No idea how the Python parser works, but think about how you can nest JSON arbitrarily.
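For concreteness, a minimal sketch of what the new grammar allows in 3.12 (the names here are made up):

    # Python 3.12 (PEP 701): the replacement field is parsed by the regular
    # grammar, so quotes can repeat and f-strings can nest arbitrarily.
    songs = ["Take me back to Eden", "Alkaline", "Ascensionism"]
    print(f"Playlist: {", ".join(songs)}")               # same quote type reused inside the braces
    print(f"Outer {f"middle {f"inner {len(songs)}"}"}")  # nesting, JSON-style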
:) Oh, this is neat! Python 3.12.0 has brought some cool features to the table. I'm particularly keen on the flexible f-string parsing and the performance improvements. The new type annotation syntax for generic classes is a nice touch too. It's always good to see Python getting refined with each release. Looking forward to exploring these new additions...
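For reference, the generic-class syntax that comment mentions is PEP 695; a minimal sketch (Stack and Point are invented names):

    # Python 3.12 (PEP 695): type parameters are declared inline, no explicit TypeVar.
    class Stack[T]:
        def __init__(self) -> None:
            self.items: list[T] = []

        def push(self, item: T) -> None:
            self.items.append(item)

        def pop(self) -> T:
            return self.items.pop()

    # The new `type` statement declares aliases.
    type Point = tuple[float, float]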
What is the rationale for types being part of the syntax but without shipping a built-in type checker? Are we being conservative and waiting to see where the major type checkers (mypy, pyright, pyre, pytype, and others) lead us before rolling one into the standard distribution?
What's with the politicising at the end? I agree that both the EU (where I'm from) and the US have chosen to take a very dark and authoritarian view on immigration, but I think that discussion doesn't belong in a Python release announcement.
I am excited for the TypedDict additions, but it looks like there's still some work around unpacking needed before we can use it to get around using kwargs to forward arguments from one function to another without having to know the other's signature.
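What already landed in 3.12 is PEP 692's Unpack[TypedDict] annotation for kwargs; a minimal sketch (Movie, save, and forward are hypothetical names):

    from typing import TypedDict, Unpack

    class Movie(TypedDict):
        name: str
        year: int

    def save(**kwargs: Unpack[Movie]) -> None:
        ...  # a type checker now knows exactly which kwargs are allowed

    def forward(**kwargs: Unpack[Movie]) -> None:
        save(**kwargs)  # forwarding type-checks without restating the signature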
The f-string nesting feature is nice. After having that sort of arbitrary interpolation in C# it always annoyed me that something as dynamic as Python couldn't figure it out.
The per-interpreter GIL features are, for now, only available via the C-API, so there's no direct interface for Python developers. Such an interface is expected to come with PEP 554, which, if accepted, is supposed to land in Python 3.13; until then we'll have to hack our way to a sub-interpreter implementation.
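(For the impatient: the hack today means reaching for CPython's private, unstable _xxsubinterpreters module. A rough sketch, with the caveat that these names may change and the eventual PEP 554 API will differ:)

    # Private, unstable API; use at your own risk.
    import _xxsubinterpreters as interpreters

    interp_id = interpreters.create()  # a fresh interpreter, with its own GIL in 3.12
    interpreters.run_string(interp_id, "print('hello from a subinterpreter')")
    interpreters.destroy(interp_id)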
If it gets approved, PEP 703 (https://peps.python.org/pep-0703/) would be much better. Yes, it might require code and package changes, but if people see that GIL-free Python is an option as long as some rarely used features are avoided, they'd likely jump on it.
Eh, you still can't pass Python objects between them. This just means that if you run multiple instances of Python, you can do so in the same process, and only if you use the C API.
Ergonomic multi-core multithreading is already solved, for me at least, with ProcessPoolExecutor.map(). I wonder what advantages this direction brings to the table.
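For concreteness, a minimal sketch of that pattern (square is a made-up CPU-bound task):

    from concurrent.futures import ProcessPoolExecutor

    def square(n: int) -> int:
        return n * n  # stand-in for real CPU-bound work

    if __name__ == "__main__":
        # Each worker is a separate process, so arguments and results
        # are pickled across the process boundary.
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(square, range(8))))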
You will now be able, in 3.12, to run multiple CPython VMs inside the same process without having them share state. Previously, CPython VM state was spread over a bunch of global variables, so you had one GIL per process embedding Python.
For your use case, eventually that ProcessPoolExecutor.map will be some hypothetical GilFreeThreadPoolExecutor.map, and all the cross-process serialization shenanigans will go away.
But instead of cross-process serialization we get cross-interpreter serialization, so serialization shenanigans don't go away.
There is support in the newest pickle protocol for using a shared buffer to transfer data more efficiently, but that would work in multiprocessing [0] just as well as in subinterpreters (and currently isn't implemented in either one).
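A minimal sketch of those out-of-band buffers (PEP 574, pickle protocol 5); Blob is a toy class following the pattern from the pickle docs:

    import pickle

    class Blob:
        def __init__(self, data):
            self.data = bytearray(data)

        def __reduce_ex__(self, protocol):
            if protocol >= 5:
                # Hand pickle the raw buffer instead of copying it in-band.
                return type(self)._rebuild, (pickle.PickleBuffer(self.data),)
            return type(self)._rebuild, (bytes(self.data),)

        @classmethod
        def _rebuild(cls, data):
            return cls(bytes(data))

    blob = Blob(b"x" * 1_000_000)
    buffers = []
    payload = pickle.dumps(blob, protocol=5, buffer_callback=buffers.append)
    # `payload` now holds only metadata; the megabyte travels via `buffers`.
    restored = pickle.loads(payload, buffers=buffers)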
I agree that in the short term those problems aren't entirely solved, but it does provide for an eventual serialization-free way to communicate between threads.
Queues, immutable records, atomic refcounts, a global object heap for shared items. There are lots of ways forward here that don't involve a full SERDES round trip.
My understanding is that most of the low-hanging fruit was picked for Python 3.11, and the faster-CPython team has been looking at more mid-to-long-term goals, with the addition of a JIT and other accompanying infrastructure planned around 3.13 and 3.14.
However, I think the recent no-GIL decision has sent a few things back to the drawing board, to see what can and cannot be salvaged from their progress and plans so far.
They use the `pyperformance` package[0] that, according to the readme, uses real-world scenarios. So I expect it to capture every significant improvement.
I don't know much about the internals of Python (or compiler theory stuff), but it would be cool to productionize a script with types and make a static binary out of it. PyPy/mypy and other projects are fine, but they don't support the latest Python versions and libraries. I want the ease of Python, but fast, once you lock down the dynamic programming features.
I'm most curious to know what happened to the faster-CPython project's progress. I see there were some changes contributed by Mark Shannon and Eric Snow, but the promised "up to 5x in 4 releases" doesn't look like it's happening. I guess the biggest question is: is a JIT still being pursued?
Another thing I noticed is that Homebrew Python was noticeably slower on M2 compared to the pyenv one. I imagine Homebrew compiles it with overly generic flags in order to support a wide range of Macs.
A comment in Homebrew's formula says that Homebrew adds the --enable-optimizations flag during compilation because Homebrew builds separate binaries, whereas the official Python release doesn't add this flag because "they want one build that will work across many macOS releases".
Oh hey, the useless (except for leetcode) language got an update. Yay, maybe more people try to write boring business logic without type safety. Can't wait.
I recently took Part A of Dan Grossman's Programming Languages course, which focuses on Standard ML, a strongly typed language. It was enlightening, and after completing Part A, I too felt that any language lacking type safety can't be a serious language.
I recently started using Python again for a side project, and I had forgotten how good of a dev experience Python offers. I wish I could explain better than that though.
Part B of that course is about dynamically typed languages like Ruby. Your comment is motivating me to go and finish Part B, because I wish I could articulate better what exactly makes dynamically typed languages useful in their own right.
I appreciate the thoughtful response, despite my comment being less than that. With that said, I am now using Python in production for a line-of-business app at a $2 billion company. I can't imagine why anyone would use it for a multi-team project outside of areas where it's the language with the most libraries (IoT, ML).
Not only is there the type safety issue; the standard testing and ORM libraries are way behind what .NET and Java have. Even Node seems better.
1. WordPress was, too, for a time, despite being slow and chock-full of vulnerabilities.
2. Right, at run time. My great-grandma always did say "the best time to catch bugs is in production, use Python sonny".
3. It's not that I need it, it's just that I can't think of any other use for it. Hopefully big data and IoT catches on to the first two points soon enough.
1. https://docs.python.org/3.12/whatsnew/3.12.html#pep-692-usin...