I find defaultdict, OrderedDict, and namedtuple, among other data structures/classes in the collections module, to be incredibly useful.
Another module that's packaged with the stdlib that's immensely useful is itertools. I especially find takewhile, cycle, and chain to be incredibly useful building blocks for list-related functions. I highly recommend a quick read.
EDIT: functools is also great! Fantastic module for higher-order functions on callable objects.
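For a quick taste of those three (a throwaway sketch with made-up values):

    from itertools import chain, cycle, takewhile

    # takewhile: yield items until the predicate first fails
    print(list(takewhile(lambda x: x < 4, [1, 2, 3, 7, 1])))  # [1, 2, 3]

    # chain: lazily concatenate iterables
    print(list(chain([1, 2], (3, 4))))  # [1, 2, 3, 4]

    # cycle: repeat a sequence forever (zip keeps it finite here)
    print(list(zip(range(5), cycle("ab"))))
    # [(0, 'a'), (1, 'b'), (2, 'a'), (3, 'b'), (4, 'a')]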
I mostly migrated to frozen dataclasses from namedtuples when dataclasses became available. I’m curious about your preference for the namedtuple. Is it the lighter weight, the strong immutability, the easy destructuring? Or is it that most tuples might as well be namedtuples? Those are the advantages I can think of anyway :)
You could of course accomplish the same with a dictionary comprehension, but I find this to be less noisy. Also, they have `_asdict()` should you want to have the contents as a dict.
You don't need `_make()` with dataclasses, and you get `asdict()` as a stand-alone function so it doesn't clash with each class's namespace. Here's what your code might look like with them:
    import sqlite3
    from dataclasses import asdict, dataclass

    @dataclass
    class EmployeeRecord:
        name: str
        age: int
        title: str
        department: str
        paygrade: str

    conn = sqlite3.connect("/companydata")
    cursor = conn.cursor()
    cursor.execute("SELECT name, age, title, department, paygrade FROM employees")
    for emp in (EmployeeRecord(*row) for row in cursor.fetchall()):
        print(emp.name, emp.title)
        print(asdict(emp))
Dictionary comprehensions can be very elegant. List and dictionary comprehensions are powerful and expressive abstractions. In fact, while it's not good practice, you can write pretty much all Python code inside comprehensions, including code that mutates state.
This is valid (as in it will run, but highly unidiomatic) code:
    quicksort = lambda arr: [
        pivot := arr[0],
        left := [x for x in arr[1:] if x < pivot],
        right := [x for x in arr[1:] if x >= pivot],
        quicksort(left) + [pivot] + quicksort(right),
    ][-1] if len(arr) > 1 else arr
That's incredibly clever; generators are underrated. I once challenged my friend to do LeetCode problems with only expressions. Here's Levenshtein distance, though it's incredibly clunky.
    levenshtein_distance = lambda s1, s2: [
        # seed the DP matrix: row 0 and column 0 hold the base-case distances
        matrix := [[i + j if min(i, j) == 0 else 0 for j in range(len(s2) + 1)]
                   for i in range(len(s1) + 1)],
        [
            [
                (matrix[i].__setitem__(j, min(matrix[i - 1][j] + 1,
                                              matrix[i][j - 1] + 1,
                                              matrix[i - 1][j - 1]
                                              + (0 if s1[i - 1] == s2[j - 1] else 1))),
                 matrix[i][j])[1]
                for j in range(1, len(s2) + 1)
            ]
            for i in range(1, len(s1) + 1)
        ],
        matrix[-1][-1],
    ][-1]
By default, List[Tuple]. You could do a list comp over the fetchall(), but at that point there’s already some magic happening, so why not make it explicit?
If I find myself writing a[0], a[1], a[2] in more than one place, I upgrade it to a namedtuple. Much better readability, and it can be defined inline like `MyTuple = namedtuple('MyTuple', 'k1 k2 k3')`.
For anyone wanting some more explanation, ChainMap can be used to build nested namespaces from a series of dicts without having to explicitly merge the names in each level. Updates to the whole ChainMap go into the top-level dict.
The docs are here [0].
Some simple motivating applications:
- Look up names in Python locals before globals before built-in functions: `pylookup = ChainMap(locals(), globals(), vars(builtins))`
- Get config variables from various sources in priority order: `var_map = ChainMap(command_line_args, os.environ, defaults)`
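A minimal sketch of that second pattern (the maps below are made up for illustration):

    from collections import ChainMap

    defaults = {"user": "guest", "debug": False}
    env = {"user": "admin"}
    cli = {"debug": True}

    config = ChainMap(cli, env, defaults)
    print(config["user"])   # 'admin' -- found in env before defaults
    print(config["debug"])  # True -- cli wins
    config["retries"] = 3   # updates go into the first map (cli)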
I found it perfect for structured logging, where you might want to modify some details of the logged structures (e.g. a password) without changing the underlying data.
I just wish Python had some better ergonomics/syntactic sugar for working with itertools and friends. Grouping and mapping and filtering quickly become unwieldy without proper lambdas etc., especially as the typing is quite bad, so after a few steps you're not sure what you even have.
Just today I went to Kotlin to process something semi-complex even though we're a Python shop, just because I wanted to bash my head in after a few attempts in Python. A data scientist could probably solve it in minutes with pandas or something, but again, stringly typed and lots of guesswork.
(It was actually a friendly algorithmic competition at work, I won, and even found a bug in the organizer's code that went undetected exactly because of this)
I find converting things from map objects or filter objects back to lists a bit clunky, and chaining operations makes it even clunkier. Some syntactic sugar would go a long way.
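For example (with a hypothetical `words` list):

    words = ["apple", "banana", "avocado"]

    # chained map/filter needs an explicit list() and reads inside-out
    result = list(map(str.upper, filter(lambda s: s.startswith("a"), words)))

    # the comprehension equivalent is flatter
    result = [s.upper() for s in words if s.startswith("a")]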
I use defaultdict a lot - for accumulators when you're not sure what is coming. Here's a simplified example:

    from collections import defaultdict

    # a[star_name][instrument] = set of (seed, planet index) of visited planets
    a = defaultdict(lambda: defaultdict(set))
    for row in rows:
        a[row.star][row.inst].add((row.seed, row.planet))
This is a dict-of-dict-of-set that is accumulating from a stream of rows, and I don't know what stars and instruments will be present.
Two dictionaries with equal keys and values are considered equal in Python, even if the order of the entries differ. By contrast, two OrderedDict objects are only equal if their respective entries are equal and if their order does not differ.
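In code (throwaway values):

    from collections import OrderedDict

    d1 = {"a": 1, "b": 2}
    d2 = {"b": 2, "a": 1}
    assert d1 == d2  # plain dicts ignore order in comparisons

    od1 = OrderedDict(d1)
    od2 = OrderedDict(d2)
    assert od1 != od2  # OrderedDicts also compare order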
It may be more explicit. OrderedDict also has move_to_end(), which can be useful e.g. for implementing lru_cache-like functionality (like deque.rotate but with arbitrary keys).
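A rough sketch of the LRU-style bookkeeping, not a full cache:

    from collections import OrderedDict

    cache = OrderedDict(a=1, b=2, c=3)
    cache.move_to_end("a")     # mark "a" as most recently used
    cache.popitem(last=False)  # evict the least recently used ("b")
    print(list(cache))         # ['c', 'a']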
OTOH that’s a lot less useful now that functools.lru_cache exists: it’s more specialised so it’s lighter, more efficient, and thread-safe. So unless you have extended flexibility requirements around your LRU, OD loses a lot there.
And if you’re using a FIFO cache, threading a regular dict through a separate fifo (whether linked list or deque) is more efficient in my experience of implementing both S3 and Sieve.
Also, dicts can become unordered at any time in the future. Right now the OrderedDict implementation is a thin layer over dict, but there are no guarantees it’ll always be that.
dicts are ordered partly to keep argument order when using named arguments (**kwargs) in function calls. So it would be a non-trivial breaking change to revert this now.
I would argue that OrderedDict has a better chance of being deprecated than dict becoming unordered again, since there is now little value in keeping OrderedDict around (and the methods currently specific to OrderedDict could be added to dict).
I guess for most purposes OrderedDicts are now obsolete. I believe they still have some extra convenience methods, but I've only ever really needed the order preservation.
Makes you think what other parts of Python have become obsolete.
It worked as an accidental implementation detail in CPython from some other optimization, but it wasn't intentional at the time. Because it wasn't intentional and wasn't part of the spec, that code could be incompatible with other interpreters like pypy or jython.
The reason Guido didn't want 3.6 to guarantee dict ordering was to protect 3.5 projects from mysteriously failing when using code that implicitly relied on 3.6 behaviors (for example, cutting and pasting a snippet from StackOverflow).
He thought that one cycle of "no ordering assumptions" would give a smoother transition. All 3.6 implementations would have dict ordering, but it was safer to not have people rely on it right away.
As someone who just had to backport a fairly large script to support 3.6, I found myself surprised at how much had changed. Dataclasses? Nope. `__future__.annotations`? Nope. `namedtuple.defaults`? Nope.
It's also been frustrating with the lack of tooling support. I mean, I get it – it's hideously EOL'd – but I can't use Poetry, uv, pytest... at least it still has type hints.
> OrderedDict - dictionary that maintains order of key-value pairs (e.g. when HTTP header value matters for dealing with certain security mechanisms).
Word to the wise... as of Python 3.7, the regular dictionary data structure guarantees order. Declaring an OrderedDict can still be worthwhile for readability (to let code reviewers/maintainers know that order is important) but I don't know of any other reason to use it anymore.
Being as specific as possible with your types is how you make things more readable in Python: OrderedDict where the order matters, set where duplicate items aren't possible. The newish enums are great for things that have a limited set of values (dev, test, qa, prod) versus using a string. You can say a lot with type choice.
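For instance, a sketch of the enum-over-string idea (the names here are invented):

    from enum import Enum

    class Environment(Enum):
        DEV = "dev"
        TEST = "test"
        QA = "qa"
        PROD = "prod"

    def deploy(env: Environment) -> None:
        # a typo like Environment.PORD fails loudly, unlike the string "pord"
        if env is Environment.PROD:
            print("running extra safety checks")

    deploy(Environment.PROD)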
Another reason is that I think the 3.7 behavior is just a CPython implementation detail; other interpreters may not honor it.
This hit me badly once. I tested the regular dict and it _looked_ like it was ordered. It turned out that about 1 time in 100000 it was not, and I had a lot of trouble identifying the reason 3 weeks later, when the bug was buried deep in complex code and appeared mostly random.
Does dict now guarantee that it maintains order? IIRC, it was originally a mere side effect of the algorithm chosen (which was chosen for performance), but it could change in future releases or alternative implementations.
If you liked this blog post, I can’t recommend PyMOTW[0] highly enough. It’s my goto for a concise introduction whenever I need to pick up a new Python stdlib module.
Throwing frozensets out there, too. If regular sets aren't obscure enough, frozensets might be your thing. It looks like a set, it acts like a set, but it's hashable (so it can be used for indexing) and immutable. Why use this? For algorithms that rely on combinations (not permutations), frozensets can be very useful: (0, 1) and (1, 0) are distinct as tuples, whereas frozenset([0, 1]) and frozenset([1, 0]) are equal and hash identically. You can use this for indexing algorithms and things like that. Sets are also very convenient because they naturally 'normalise' entries, which can simplify a lot of code.
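Concretely, treating an undirected edge as a key:

    # (0, 1) and (1, 0) are distinct tuples, but one and the same frozenset,
    # so an undirected edge works as a dict key regardless of endpoint order
    edges = {frozenset((0, 1)): "edge A"}
    print(edges[frozenset((1, 0))])  # 'edge A'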
Adding `array` [0] to the list. It's generally slower than a list, but massively more memory-efficient. You're limited to a single homogeneous element type, of course, but they can be quite useful for some operations.
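A minimal sketch:

    from array import array

    a = array("d", [1.0, 2.0, 3.0])  # 'd' = C double; one type for the whole array
    a.append(4.0)
    print(a[2], a.itemsize)  # 3.0 8  (8 bytes per element vs. a full PyObject each)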
And you can use struct for heterogeneous data =) It has a neat DSL for packing/unpacking the data, reminiscent of the "little languages" from the classic book The Practice of Programming. Python is actually pretty nice working with binary data.
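A tiny sketch of the format-string DSL:

    import struct

    # '<' little-endian, 'I' uint32, 'H' uint16, 'd' double
    packed = struct.pack("<IHd", 1024, 7, 3.14)
    print(len(packed))                    # 14 bytes, no padding with '<'
    print(struct.unpack("<IHd", packed))  # (1024, 7, 3.14)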
> Python is actually pretty nice working with binary data.
It really is! I’ve been working on a project to generate large amounts of synthetic data, and it calls out to C for various shared libraries to do the heavy lifting *. Instead of encoding and decoding back and forth, I can just ship bytes around, and then directly write them out to a file. Saves a lot of time.
*: yes, I should just rewrite it into a faster language entirely. I intend to, but for the time being it’s been “how fast can I make Python without anything but stdlib,” as long as you accept ctypes as being included in that definition.
The simplistic answer is that lists have been heavily optimized over the years, arrays haven’t been touched much.
I’m not positive on why lists are faster to create than arrays, though. Retrieval makes sense (lists already store the Python object; arrays have to convert it back), but creation I’m unsure about. I’ll check dis.dis.
EDIT: from a sibling comment above [0], maybe because array reallocs are done much more granularly than lists, so as it grows, it’ll have to do so more frequently compared to lists?
Add functools to the list. Especially functools.wraps() and functools.partial().
The stdlib is full of goodies.
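A small sketch of both (names invented):

    from functools import partial, wraps

    def log_calls(func):
        @wraps(func)  # keeps func.__name__/__doc__ on the wrapper
        def wrapper(*args, **kwargs):
            print(f"calling {func.__name__}")
            return func(*args, **kwargs)
        return wrapper

    @log_calls
    def greet(name):
        return f"hello {name}"

    int_from_hex = partial(int, base=16)  # freeze an argument
    print(greet("world"), int_from_hex("ff"))  # hello world 255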
Now, I've always appreciated the batteries-included philosophy in Python. But I noticed this week that LLMs diminish that need. It's so easy to prompt for small utilities, and it saves you from pulling in entire libraries for a few tools.
And the AI can create docs and tests for them just as quickly.
So while I was really enthusiastic when things like pairwise() were added to itertools, it's not as revolutionary as it once would have been.
Such utilities were super useful, but for years not included in the stdlib, despite being only a few lines long.
We also had more-itertools, boltons, and others to bridge that gap.
Now, there was always a tension between adding more stuff to the stdlib, or letting 3rd party libs handle it. Remember the saying: the stdlib is where projects go to die.
And of course tensions about installing full on 3rd party libs just for a few functions.
The result is that many people copy/pasted a lot of small utilities, plus endless debates on python-ideas about including more.
I think this is going to slow down. Now if you want "def first_true(iterable, default=False, predicate=None)", you ask chatgpt, and you don't care.
The cost of adding those into the project is negligible.
It's nowhere near generating thousands of class stubs. It's actually the opposite: very targeted, specific code needs get filled, instead of haunting python-ideas debates or your venv.
But to stimulate your anxiety a bit, I do think code gen is also going to make a big comeback with LLMs :)
Agreed, rewriting standard functions is much worse than using standard tools that already exist.
In addition to the extra boilerplate and reduced readability, that also sounds like an easy way to introduce subtle bugs. Standard library functions have been exhaustively field tested, a similar looking LLM generated function could easily include a footgun.
I wish there were some syntactic sugar for partial but knowing how patterns like that get abused in other languages maybe it is for the better that there isn't.
MappingProxyType is another handy one. It wraps a regular dict/Mapping to create a read-only live view of the underlying dict. You can use it to expose a dict that can't be modified, but doesn't need copying.
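A quick sketch:

    from types import MappingProxyType

    data = {"a": 1}
    view = MappingProxyType(data)
    print(view["a"])  # 1
    data["b"] = 2     # it's a live view, so this shows up...
    print(view["b"])  # 2
    # ...but writing through the proxy raises TypeError:
    # view["c"] = 3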
This is one I use all the time, super handy. Another CLI module I regularly make use of is `python -m json.tool`, for formatting and validating json.
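For example:

    $ echo '{"b": 2, "a": 1}' | python -m json.tool
    {
        "b": 2,
        "a": 1
    }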
Last year I ran http.server with -h to remind myself of something, and the --cgi flag caught my eye...funnily enough there's built in support in the web server for running CGI scripts. Alas, it's deprecated and will be removed in 3.13 later this year, but I when I discovered it I couldn't resist the opportunity to write a CGI script for the first time 20-something years: https://github.com/drien/python-httpserver-upload
Once I found myself needing to sort something topologically... and the interface to this sorter is so bad that you cannot really retrofit any kind of graph data to make it work with the sorter. So, it's kinda worthless, unless you specifically design your graph to be sorted with this sorter.
Also, topological sort is like five lines of code... so, it doesn't matter if the function is there.
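For reference, this presumably refers to graphlib.TopologicalSorter; a minimal sketch of its interface:

    from graphlib import TopologicalSorter

    # the dict maps each node to the set of its predecessors
    ts = TopologicalSorter({"b": {"a"}, "c": {"a", "b"}})
    print(list(ts.static_order()))  # ['a', 'b', 'c']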
This is the reason I love python for small personal projects. I can get up and going in a heartbeat and the stdlib has so much that I'd need. If there was a Flask-style HTTP server and a more requests-like HTTP client in the stdlib, I'd be a content man. Maybe I need to suck it up, but I just find venvs and Python packaging in general annoying to deal with, especially for something small.
That said, Go has those things so it's crept in a little bit into my quick programming, but I'll always love python.
> For people eager to join the AI/ML revolution it provides Naive Bayes classifier - an algorithm that can be considered a minimum viable example of machine learning.
I don't think this is true. It allows you to specify and calculate parameters for normal distributions, which lets you jury-rig a naive Bayes classifier, and that is shown as a doc example. This is not the same as providing a built-in classifier.
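The doc example is roughly this shape (the numbers below are invented):

    from statistics import NormalDist

    # fit one normal distribution per class from (made-up) training samples
    height_a = NormalDist.from_samples([180.0, 175.0, 182.0, 178.0])
    height_b = NormalDist.from_samples([165.0, 160.0, 168.0, 163.0])

    # "classify" by comparing likelihoods -- the Bayes part is all by hand
    x = 170.0
    label = "a" if height_a.pdf(x) > height_b.pdf(x) else "b"
    print(label)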
I see a lot of mentions of itertools in this thread, which is indeed a great library, but I want to mention that itertools.groupby is one of the easiest to misuse functions I've seen. It's not necessarily intuitive that it groups contiguous records. Passing it an unsorted list won't break, but also might not return the results you're expecting.
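A quick demonstration of the gotcha:

    from itertools import groupby

    data = ["a", "a", "b", "a"]

    # unsorted input: "a" shows up in two separate groups
    print([(k, list(g)) for k, g in groupby(data)])
    # [('a', ['a', 'a']), ('b', ['b']), ('a', ['a'])]

    # sorting first gives the groups you probably expected
    print([(k, list(g)) for k, g in groupby(sorted(data))])
    # [('a', ['a', 'a', 'a']), ('b', ['b'])]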
Yes, it's similar to Unix's `uniq` (which is mentioned in the doc). Some SQL stuff also requires prior sorting to work right (although usually you can't accidentally miss it).
Nowadays GNU `sort` can dedupe in one pass (`sort -u`) because combining the steps has performance benefits. Assume the same is true in Python, so if worried about performance, `groupby(sorted(...))` might not be the best.
The other thing that is a bit odd is it returns iterators. It's up to you to build concrete groups if that's what you need.
I was not aware of zipapp ... but it's interesting to see it exists as a method for enabling python to run 'zipped packages' ... since python can already do that by default with normal zipfiles, as long as the zipfile appears / is added to the python path (which is roughly analogous to how one can add .jar files to the classpath in java). E.g.:
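(A sketch of what that looks like; the zip and module names are made up.)

    import sys
    sys.path.insert(0, "mypkg.zip")  # hypothetical zip containing mymodule.py
    import mymodule                  # imported straight out of the zip via zipimport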
I suppose, if the intent is to package something in a manner that attempts to make it newbie-proof, then requiring a PYTHONPATH incantation before the python part might be one step too far ... but then again, one could argue the same about people not quite knowing what to do with a .pyz file and getting stuck.
Probably silly, but I went five years of programming in python before I learned about the help function. Only learned about it when I had to take an intro to python class for school.
I've significantly reduced my use of `namedtuple` since DataClasses were introduced, but I confess I never did much performance comparisons between the two.
I assume the `namedtuple` syntax is more pleasing for functionally-inclined programmers, but this makes me wonder whether the stdlib should pick one of them.
I suspect nobody cares because that bit of code you're moaning about is only called once per namedtuple definition. It's unlikely to be a problem.
Guess who cares? The person you replied to... and it would've been really easy to figure that out, given that the parent went all the way to look at the implementation, wouldn't it?
Anyways, the reason I cared is that I was working on a Protobuf parser, where namedtuple was supposed to play a key role: the message class. Imagine my disappointment when I started to run benchmarks.
But namedtuple makes classes, it does not instantiate them. If you're in a situation where your bottleneck is defining classes rather than instantiating them, then I would hope you can appreciate that this is an edge case.
In my opinion, namedtuple was created to allow usage of a tuple (they are required in many places) while giving names to the members, rather than plain indexes.
I am surprised that they didn't mention pack/unpack. And a namedtuple can take the results of unpack, which means you can easily parse binary files. Like:
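(A sketch; the record layout and file name below are made up.)

    import struct
    from collections import namedtuple

    Record = namedtuple("Record", "id flags value")
    fmt = "<IHd"  # hypothetical layout: uint32 id, uint16 flags, double value

    with open("data.bin", "rb") as f:
        rec = Record._make(struct.unpack(fmt, f.read(struct.calcsize(fmt))))
        print(rec.id, rec.value)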
ChainMap is one of my favourites. I like when I find a use for it. The obvious one is cascading options type thing (like cmdline options -> env -> defaults). I also found a use for it recently when changing the underlying storage layer of a class without breaking the API.
My other favourite parts of the stdlib are functools and itertools. They are both full of stuff that gives you superpowers. I always find it a shame when I see developers do an ad hoc reimplementation of something in functools/itertools.
I mean, technically braces ARE part of Python syntax. They're used by sets and dictionaries.
But I know what you meant, and yeah...they'll never use braces as block delimiters. IMO, that's a good thing. Whitespace-as-syntax means you're FORCED to have a minimal level of decent code formatting or it doesn't work.
Oh wow, yeah. The Zen of Python. Good reminder, thanks. Although some people think Python has strayed somewhat from that in recent years. Including me.