Hacker News

Is that faster now? Back in Python 2.4 or so (the last time I looked under CPython's hood, many moons ago), all member access required a hashtable operation. For __slots__-enabled objects (the namedtuple of their day), this meant hitting the class hashtable to get the __slots__ index before using that index to fetch the actual member from the object. So __slots__ was mostly about space efficiency, not performance.

Has that changed?
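For reference, in current CPython the mechanism is still descriptor-based: a __slots__ class stores a member_descriptor per slot in the class dict, and attribute access resolves that descriptor (a hash lookup on the type) before reading a fixed offset in the instance. A minimal sketch to see this for yourself:

```python
# Sketch: __slots__ stores member descriptors on the class,
# and instances carry no per-instance __dict__.
class Point:
    __slots__ = ("x", "y")  # fixed instance layout

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1.0, 2.0)
print(type(Point.x).__name__)   # member_descriptor
print(hasattr(p, "__dict__"))   # False: no per-instance dict
```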




Attribute access of namedtuples isn't any quicker, as you've surmised: it requires a hash lookup followed by an index lookup. The index lookup is as quick as a normal tuple index lookup. The main reasons I reach for namedtuples are memory efficiency and much quicker creation times vs. ordinary objects.
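The two-step lookup is easy to observe (assuming CPython): the named accessor resolves a descriptor stored on the class, which then performs a plain tuple index, so the attribute and the index give the same element:

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
p = Point(1.0, 2.0)

# p.x resolves a descriptor in the class dict (hash lookup),
# which then does an ordinary tuple index.
print(p.x)   # 1.0
print(p[0])  # 1.0 -- same element via direct index, skipping the name lookup
```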


Memory efficiency? The last time I checked, namedtuple dynamically generates code for a class with the specified fields and evals it, so it would be exactly as efficient as a normal class. For memory efficiency you want to store multiple points in a single array, to avoid per-object overhead, but you'd need NumPy or Cython for that.
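For what it's worth, the generated class is a tuple subclass (historically produced by exec-ing a class template; newer CPython versions build most of it directly), which is why its per-element storage matches a plain tuple rather than a normal class:

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])

print(issubclass(Point, tuple))  # True: instances are tuples with named accessors
print(Point._fields)             # ('x', 'y')
```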


Not quite. A namedtuple will not have a per-instance attribute dictionary, so there is a significant memory saving when creating loads of them.
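This is straightforward to verify on CPython (exact byte counts vary by version and platform, so the sketch below only checks the relative sizes): a namedtuple instance has no __dict__, while a plain-class instance carries one per instance:

```python
import sys
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])

class PlainPoint:
    def __init__(self, x, y):
        self.x = x
        self.y = y

nt = Point(1.0, 2.0)
pp = PlainPoint(1.0, 2.0)

print(hasattr(nt, "__dict__"))  # False: tuple layout, no per-instance dict
print(hasattr(pp, "__dict__"))  # True: each instance carries a dict
# getsizeof(pp) alone misses the dict, so add it in for a fair comparison.
print(sys.getsizeof(nt) < sys.getsizeof(pp) + sys.getsizeof(pp.__dict__))
```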


Right. But you can fit two single-precision floats in 8 bytes, so if it's memory saving and efficiency you're after, you would use an array to avoid object-creation overhead. Namedtuples are nice for their self-documenting properties, but it's misleading to advertise them as an optimization.
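For the raw-memory case, even the stdlib array module (no NumPy needed) packs values contiguously at machine width, versus a list where every float is a separate heap object plus a pointer; the comparison below is illustrative, not a guarantee of exact sizes:

```python
import sys
from array import array

floats = array("f", [1.0, 2.0, 3.0, 4.0])  # typecode 'f': C float, 4 bytes each
print(floats.itemsize)                     # 4: two values fit in 8 bytes

# Contrast with a list of Python float objects: each element is a full
# heap object plus an 8-byte pointer slot in the list itself.
boxed = [1.0, 2.0, 3.0, 4.0]
packed_size = sys.getsizeof(floats)
boxed_size = sys.getsizeof(boxed) + sum(sys.getsizeof(x) for x in boxed)
print(packed_size < boxed_size)
```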


Attribute access of namedtuples isn't any quicker, as you've surmised: it requires a hash lookup followed by an index lookup.

Yes, you're right, I was mistaken about how namedtuples work under the hood.




