
Why Python Runs Slow, Part 1: Data Structures - Sauliusl
http://lukauskas.co.uk/articles/2014/02/13/why-your-python-runs-slow-part-1-data-structures/
======
jnbiche
If you're using Python for performance-critical applications, simple use of
the integrated ctypes module (uses libffi in background) can make a world of
difference. There's a modest performance overhead, but basically you're
getting near C performance with proper use of ctypes.

Coincidentally, one of the usual examples given in the ctypes howto is a Point
structure, just like in this post. It's simple:

    
    
      from ctypes import *
    
      class Point(Structure):
          _fields_ = [("x", c_int), ("y", c_int)]
    

Then you can use the Point class the same way you'd use a regular Python
class:

    
    
      p = Point(3, 4)
      p.x == 3
      p.y == 4
    

Really, taking a half-day to learn how to use ctypes effectively can make a
world of difference to your performance-critical Python code, when you need to
stop and think about data structures. Actually, if you already know C, it's
less than a half-day to learn... just an hour or so to read the basic
documentation:

[http://docs.python.org/2/library/ctypes.html](http://docs.python.org/2/library/ctypes.html)

If you plan to write an entire application using ctypes, it'd be worth looking
at Cython, which is another incredible project. But for just one or two data
structures in a program, ctypes is perfect.

And best of all, ctypes is included in any regular Python distribution -- no
need to install Cython or any additional software. Just run your Python
script like you normally do.

EDIT: I was just demonstrating how to get an easy-speed up using ctypes, which
is included in the Python standard library. To be clear, you would usually use
this type of data structure in ctypes along with a function from a C shared
library.
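For instance, here's a small sketch of my own (not from the howto) showing that the ctypes struct really does have a fixed C layout, which is what makes it cheap to hand to C code:

```python
from ctypes import Structure, c_int, sizeof

class Point(Structure):
    _fields_ = [("x", c_int), ("y", c_int)]

# The struct is laid out like C: two packed ints, no per-instance
# dict and no PyObject header per field.
assert sizeof(Point) == 2 * sizeof(c_int)

# Multiplying the type gives a contiguous C array, ready to pass to a
# C function expecting `struct Point *`.
points = (Point * 1000)()
points[0] = Point(3, 4)
print(points[0].x, points[0].y)
```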

Furthermore, if you're serious about optimizations, and you can permit
additional dependencies, you should absolutely look at cython and/or numpy,
both of which are much faster than ctypes, although they do bring along
additional complexity. Other commenters are also pointing out cffi, which I've
never used but also bears consideration.

~~~
Sauliusl
This would be one of the approaches to use if you wanted to squeeze the last
pieces of performance out of this class. I would recommend using Cython for
this though.

The problem with both of these approaches is that, well, use them too often
and you suddenly realise you are not really coding in Python anymore.

Also, correct me if I'm wrong, but overuse of tricks like these is one of the
key reasons why we cannot have nice things like PyPy for all modules.

~~~
jnbiche
I actually agree with everything you write here. If I could re-write the post
to include these points, I would.

I do think PyPy is very near to having full ctypes support, if they don't
already.

------
erbdex
"My experience in working on Graphite has reaffirmed a belief of mine that
scalability has very little to do with low-level performance but instead is a
product of overall design. I have run into many bottlenecks along the way but
each time I look for improvements in design rather than speed-ups in
performance. I have been asked many times why I wrote Graphite in Python
rather than Java or C++, and my response is always that I have yet to come
across a true need for the performance that another language could offer. In
[Knu74], Donald Knuth famously said that premature optimization is the root of
all evil. As long as we assume that our code will continue to evolve in non-
trivial ways then all optimization is in some sense premature.

For example, when I first wrote whisper I was convinced that it would have to
be rewritten in C for speed and that my Python implementation would only serve
as a prototype. If I weren't under a time-crunch I very well may have skipped
the Python implementation entirely. It turns out however that I/O is a
bottleneck so much earlier than CPU that the lesser efficiency of Python
hardly matters at all in practice."

\-- Chris Davis, ex Google, Graphite creator

~~~
jnbiche
This is the key. Very often, the app is IO bound, and moving to C will make
little difference.

This is one reason why I'm very excited about Python's new asyncio module
(Python 3 only). It's asyncore done right, and is a great way to write network
applications. I look forward to seeing what is built on top.

~~~
a8da6b0c91d
> Very often, the app is IO bound

I hear this all the time but it really doesn't square up with my personal
experiences. I've had to re-implement dynamic language stuff in C++ many times
to get acceptable performance. No real design changes, just straight ports to
C++. The entire value proposition of Golang is that the scripting language
dynamic type systems are too slow, and it seems like loads of folks agree.

~~~
pjmlp
That is why they never used Dylan, Lisp, SELF or any other dynamic language
with native compilers.

~~~
a8da6b0c91d
Those dynamic languages are only performant when you add a bunch of type
information. You wind up writing a simulacrum of C and it still under-
performs.

~~~
lispm
The idea is that there is some useful speed for much of the application by
using native compilers. Higher speed might be needed only in parts of the
program. Then a type inferencer, etc., comes into use. If that's still not enough
one calls C or assembler routines - but, say, 95% of the application can be
written in Lisp.

------
munificent
What I find interesting here is that Python (and most other dynamically-typed
languages) treat instances of classes as arbitrary bags of properties, with
the negative performance implications of that, _even though that feature is
rarely used_.

I'd love to have the time to examine codebases and get real data, but my
strong hunch is that in Python and Ruby, most of the time every instance of a
class has the exact same set of fields and methods. These languages pay a
large performance penalty for _all_ field accesses, to enable a rare use case.
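To make the "bag of properties" point concrete, here's a tiny sketch of the feature these languages pay for on every access:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

a = Point(1, 2)
b = Point(3, 4)

# Any single instance can grow new fields at runtime -- this is the
# flexibility every attribute access pays for, dict lookup included.
a.label = "needs review"
assert hasattr(a, "label")
assert not hasattr(b, "label")   # b's "shape" silently diverged from a's
```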

JavaScript VMs (and maybe Ruby >=1.9?) don't pay the perf cost because they do
very advanced optimizations ("hidden classes" in V8 et al.) to handle this.
But then they pay the cost of the complexity of implementing that
optimization.

I'm working on a little dynamically-typed scripting language[1] and one design
decision I made was to make the set of fields in an object statically-
determinable. Like Ruby, it uses a distinct syntax for fields, so we can tell
just by parsing the full set a class uses.

My implementation is less than 5k lines of not-very-advanced C code, and it
runs the DeltaBlue[2] benchmark about _three times faster_ than CPython.

People think you need static _types_ for efficiency, but I've found you can
get quite far just having static _shape_. You can still be fully dynamically-
dispatched and dynamically type your variables, but locking down the fields
and methods of an object simplifies a lot of things. You lose some
metaprogramming flexibility, but I'm interested in seeing how much of a
trade-off that really is in practice.

[1] [https://github.com/munificent/wren](https://github.com/munificent/wren)
[2]
[https://github.com/munificent/wren/blob/master/benchmark/del...](https://github.com/munificent/wren/blob/master/benchmark/delta_blue.wren)

~~~
gopalv
Python actually comes with static shapes for objects via __slots__:

    
    
        class vertex(object):
            __slots__ = ["x", "y", "z"]
    

I use __slots__ in most code after the first refactor for a very important
reason - I'm a _hyper-caffeinated_ code monkey.

I wasted an entire day because I set self.peices in one place and self.pieces
in another & couldn't figure out the bug, until I used the "grep method".
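With __slots__ that typo would have blown up on the spot; a quick sketch of what I mean:

```python
class Vertex(object):
    __slots__ = ("x", "y", "z")

v = Vertex()
v.x, v.y, v.z = 1.0, 2.0, 3.0

try:
    v.peices = []          # typo for v.pieces
    caught = False
except AttributeError:     # __slots__ rejects undeclared attributes
    caught = True

assert caught
```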

~~~
munificent
Right, but the problem here is you have to opt _in_ to that. The most terse,
simple way to express something is also the slowest.

Most classes have static shapes, so for the few types that are more dynamic, I
think it makes sense for users to have to opt _out_.

------
alephnil
It is of course possible to write a faster Python, as PyPy proves, but some of
the design choices in Python do make it hard to optimize. This is not
inherently a problem with dynamic languages, as the development of Julia
proves. In Julia it is easy to write programs that are in the same ballpark as
C efficiency-wise, while still feeling very dynamic and having a REPL. Python's
problem is that both the interpreter and the language are quite old, and
existed before modern JIT technology was developed, so that was not considered
when designing the language and the interpreter.

That said, Python's emphasis on fast development as opposed to fast running
code is very often the right tradeoff, and is why it is so popular.

~~~
jl6
What language features would you have or not have if you designed a language
with knowledge of "modern JIT technology"?

~~~
andreasvc
If I would guess, I would say that you want to design the language in such a
way that things such as types are typically easy to guess for the JIT
compiler.

------
jzwinck
There's no silver bullet, but NumPy comes pretty close sometimes. Here's an
example where I achieved a 300x speedup with regular old CPython:
[http://stackoverflow.com/questions/17529342/need-help-
vector...](http://stackoverflow.com/questions/17529342/need-help-vectorizing-
code-or-optimizing/17626907#17626907)

The trick is often to somehow get Python to hand the "real work" off to
something implemented in C, Fortran, C++, etc. That's pretty easy for a lot of
numerical and scientific applications thanks to NumPy, SciPy, Pandas, PIL, and
more. Lots of libraries are exposed to Python, and you can expose more either
by compiling bindings (see Boost.Python) or using ctypes. If you do this well,
nobody will notice the speed difference, yet your code will still look like
Python.
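The same "hand the real work to C" idea shows up even inside the stdlib; a toy sketch (numbers are illustrative, not a benchmark):

```python
data = list(range(1000000))

total = 0
for v in data:        # each iteration executes several bytecodes
    total += v        # plus a boxed-int addition

# sum() runs the equivalent loop inside the interpreter's C code,
# which is why it is typically several times faster than the
# explicit Python loop above.
assert total == sum(data)
```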

And sometimes your performance problem is not Python's fault at all, such as
this lovely performance bug:
[https://github.com/paramiko/paramiko/issues/175](https://github.com/paramiko/paramiko/issues/175)
\- a 5-10x speedup can be had simply by tweaking one parameter that is no
fault of CPython or the GIL or anything else (in fact, Go had basically the
same bug).

What I'm getting at here is that performance is not just one thing, and the
GIL is a real and worthy spectre but hardly matters for most applications
where Python is relevant. Your Python is slow for the same reason that your
Bash and your C and your Javascript and your VBA are slow: you haven't
profiled enough, you haven't thought long and hard with empirical results in
front of you about where your program is spending its time, your program is
improperly factored and doing work in the wrong order, or perhaps you should
just throw some hardware at it!

------
crntaylor
The part that really surprised me was this:

    
    
      We only had 23 years of Python interpreter development,
      how would things look like when Python is 42, like C?
    

C, which I always think of as an ancient venerable systems language, is less
than twice as old as Python, which I think of as a hot new kid on the block.

In thirty years, when my career will probably be drawing to a close, Python
will be 53 years old and C will be 72 years old. Barely any difference at all.

~~~
coldtea
> _We only had 23 years of Python interpreter development, how would things
> look like when Python is 42, like C?_

Well, mostly like it is today. Maybe a little better, maybe a little slower.
Python hasn't progressed much. Compare it to the last 10 years and it's mostly
a flatline, if not a slight decline with some 3.x versions.

Whereas C was from the onset one of the fastest or THE fastest language on any
machine.

Getting fast is not usually due to some incremental effort. You either go for
it or not. If you start from a slow language, incremental changes without a
big redesign of the compiler/interpreter never get you that far.

Javascript got fast in 3 years -- not by incrementally changing the Javascript
interpreters they had before, but by each of the major players (Apple, Google,
Mozilla) writing a new JIT interpreter from scratch. In 2007 it was dog slow,
and then in 2008 all three major JS interpreters got fast.

Sure, they have been enhanced since, but the core part was changing to JIT,
adding heuristics all around, etc. Not incrementally improving some function
here and some operation there.

~~~
gizmo686
>Whereas C was from the onset one of the fastest or THE fastest language on
any machine.

You can't write performance critical code in C. C is great for high level
code, but if performance is important there is no alternative to assembly.

~~~
coldtea
> _You can 't write performance critical code in C. C is great for high level
> code, but if performance is important there is no alternative to assembly._

That statement makes no sense.

Of course you can write performance critical code in C. People write all kinds
of performance critical code in C. Heck, kernels are written in C. What's more
"performance critical" than a kernel? Ray tracers are written in C. Real time
systems are written in C. Servers are written in C. Databases are written in
C. Socket libraries. Multimedia mangling apps like Premiere are written in C
(or C++). Scientific number crunching libraries for huge datasets are written
in C (and/or Fortran).

The kind of "performance critical code" you have to write in assembly is a
negligible part of overall performance critical code. And the speed benefit is
not even that impressive.

~~~
spc476
That's true _now_ (and perhaps has been true for easily the past fifteen
years). But up until the early-to-mid-90s, it was relatively easy for a decent
assembly language programmer to beat a C compiler. In fact, it wasn't until
the early mid-90s that general purpose CPUs broke 100MHz.

Just be very thankful you don't have to deal with C compilers under MS-DOS
what with the tiny, small, medium, compact, large and huge memory models (I
think that's all of them).

~~~
ygra
Even now performance-critical code is assembly. Just take a look at your
libc's memcpy implementation, for example. Most likely there's a default C
implementation that is reasonably fast and a bunch of assembly versions for
individual architectures.

~~~
coldtea
> _Even now performance-critical code is assembly._

The problem I have with this statement is that it implies that other code, like
the kernel, a ray-tracer, a video editor, number crunching etc, stuff usually
done in C/C++, is not "performance critical".

Let's call what you describe "extremely performance critical" if you wish.

With that said, MOST performance critical code is written in C/C++.

Code that's not that much performance critical is written in whatever.

------
joshmaker
Who would use a dictionary for a point? If it's a 2D grid, you are only going
to have two axes and by convention x-axis is always first, and y-axis is
always second. Perfect use case for a tuple.

    
    
       point = (0, 0)

~~~
w0utert
It's just an example. There are many cases where you need more than what you
can safely stuff into a tuple, which means the choice becomes 'naked
dictionaries vs. classes', it's a tradeoff I've had to make many times writing
python code.

Of course there are many cases where using tuples is the best way to represent
some data item, (x, y) point data is an obvious example. But when your data
structures become more complex, need to be mutable, need to have operations
defined on them, have relations to other objects, etc, tuples are a bad
choice. You don't want to end up with code that uses tuples of varying
lengths, composed of elements of varying types, consumed and produced by
functions that are hard-coded to rely on the exact length, ordering and
element type of their tuple inputs. Named tuples are one step in the
direction of dictionaries, but they are still nowhere as flexible as mutable
dictionaries or proper classes.

~~~
RyanZAG
If you can't use a tuple, it's not like you could have really used a C-struct
either. So the point is valid - the code should be using tuples and not
dictionaries or classes.

~~~
w0utert
Again, I assume the article only served as an example of how in Python you
usually end up using dictionaries or classes, which have fundamentally
different performance characteristics compared to C-structs. Yes, you could
use tuples instead, in some cases, and get performance characteristics
comparable to C structs. Often this is not an option though, and you would
still have to use classes or dictionaries. There are many scenarios you could
conveniently implement in C using structs, that would end up as an ungodly
mess in Python if you would implement them using nothing but tuples.

Just as a thought exercise, try to come up with a way to represent and
manipulate a hierarchical, mutable C structure with 20 fields, using nothing
but tuples in Python.

~~~
falcolas
Like... named tuples? That's the nice thing about python, you're not
restricted to using one data type. If you have a data structure that doesn't
change, and speed matters, you can use something better suited to the task at
hand.

Of course, you're not restricted to using one datatype in C either, but the
constant comparison this article makes between structs and dictionaries is
misleading. Comparing hash map implementations with dictionaries would be much
more apt.

~~~
sitkack
named tuples are classes, but they use __slots__ so they are sealed to
extension thus using less memory. Take a look at the `namedtuple` source, it
is a fun little bit of metaprogramming.

[http://dev.svetlyak.ru/using-slots-for-optimisation-in-
pytho...](http://dev.svetlyak.ru/using-slots-for-optimisation-in-python-en/)

[http://hg.python.org/releasing/2.7.6/file/ba31940588b6/Lib/c...](http://hg.python.org/releasing/2.7.6/file/ba31940588b6/Lib/collections.py#l228)

I am _wrong_. namedtuple does NOT use slots. Weird, wonder why?

Because it directly subclasses `tuple`

point = namedtuple("Point","x y z",verbose=True)

[http://blip.tv/pycon-us-
videos-2009-2010-2011/pycon-2011-fun...](http://blip.tv/pycon-us-
videos-2009-2010-2011/pycon-2011-fun-with-python-s-newer-tools-4901215)

------
kashif
The pythonic way of creating a Point-like data structure is not
    
    
      point = {'x': 0, 'y': 0}
    

but instead, this

    
    
      Point = namedtuple('Point', ['x', 'y'], verbose=True)
    

The resulting class, I believe, is more performant than using a dictionary -
but I haven't actually measured this.

~~~
mjschultz
I'm probably doing it wrong but I measured exactly this yesterday and the dict
version was faster:

    
    
        $ python -m timeit --setup "from collections import namedtuple; Point = namedtuple('Point', ['x', 'y']); p = Point(x=0, y=0)" "p.x + p.y"
        1000000 loops, best of 3: 0.284 usec per loop
    

vs.

    
    
        $ python -m timeit --setup "p = {'x': 0, 'y': 0}" "p['x'] + p['y']"
        10000000 loops, best of 3: 0.0737 usec per loop
    

Maybe the use isn't right because I agree with your belief that namedtuple is
supposed to be more performant.

~~~
pdonis
You should be creating a dict vs. a namedtuple instance once, then accessing
it many, many times. What you're doing is creating a lot of dict vs.
namedtuple instances, then accessing each one only once. That's mostly testing
the instance creation time, not the field access time.

~~~
mjschultz
That code should only create the dict/namedtuple instance once, then access it
many, many times.

The creation occurs in the --setup portion. The field accesses occur in the
actual looping portion of the timeit code.

~~~
pdonis
Ah, ok, I see, for some reason my browser wasn't showing the namedtuple code
snippet right. Now I need to go look at the namedtuple code to see why
accessing the fields by name is slower than accessing them by index.
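For anyone else curious, a short sketch of the relevant detail: a namedtuple instance is still a plain tuple underneath, and the named access goes through a generated per-field descriptor, while the index access does not.

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
p = Point(x=1, y=2)

# p.x goes through a generated descriptor on the class; p[0] is a
# direct tuple index -- hence named access costs more in CPython.
assert isinstance(p, tuple)
assert p.x == p[0] and p.y == p[1]
```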

------
unwind
The article was kind of interesting, it's always good to talk about why things
are slow, since that might help highlight what you can do about it (short of
switching away from Python, that is).

I was a bit annoyed about the benchmarks part; there was no C-based benchmark
for comparison, and it was not clear how many points were being processed,
which made the exact times kind of pointless. Also, C and C++ _are not the
same_, which the author seems to think.

~~~
craigching
> C and C++ are not the same which the author seems to think.

The part where he lost me was when he said this referring to C:

> Here we tell the interpreter that each point would have exactly two fields,
> x and y.

Maybe I misread that, but I don't think I did, he was referring to the C
struct when he said that. It's one of those glaring mistakes that is hard for
me to look past when someone says something like this.

~~~
Sauliusl
Not sure why I put "interpreter" there when C is compiled, of course. I
fixed this now.

Again, this does not change the fact that structs and dictionaries are very
different data structures.

~~~
GregBuchholz
[http://stackoverflow.com/questions/584714/is-there-an-
interp...](http://stackoverflow.com/questions/584714/is-there-an-interpreter-
for-c)

------
jaimebuelta
The sentence "Python runs slow" is a little misleading. While it is true that
it does more stuff than other languages (C, for example) for similar
operations, it is also true that in the majority of cases, the relevant
time-consuming parts of execution are in other areas, like waiting for I/O,
etc.

In most cases, a program done in Python (or Ruby, or JavaScript) is not
slower than a program done in C or even assembly. Sure, for some cases it may
be necessary to take care of rough spots, but I think that just saying "Python
is slow" is not very accurate...

~~~
stefantalpalaru
> the relevant time consuming parts of execution are in other areas, like
> waiting for I/O, etc.

It's safe to assume that people complaining about a language's performance
deal with CPU bound cases.

> In most of the cases, a program done in Python (or Ruby, or JavaScript) is
> not slower than a program done in C or even assembly.

This is false:
[http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...](http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?test=all&lang=python3&lang2=gcc&data=u64q)

~~~
acdha
> It's safe to assume that people complaining about a language's performance
> deal with CPU bound cases.

That's a very unlikely assumption – many people will complain about the
language before even profiling their code. I've seen people do things like
complain about Python's performance before, say, realizing that they were
making thousands of database queries or allocating a complex object inside an
inner loop. Because they've heard “The GIL makes Python slow” it's incredibly
common for even fairly experienced programmers to simply assume that they
can't make their program faster without rewriting it in C and they never
actually confirm that assumption.
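Confirming it takes a few lines with the stdlib profiler; a minimal sketch (the hot_loop function is just a stand-in for whatever you suspect is slow):

```python
import cProfile
import io
import pstats

def hot_loop():
    return sum(i * i for i in range(100000))

# Measure where the time actually goes before blaming the language.
profiler = cProfile.Profile()
profiler.enable()
hot_loop()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```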

>> In most of the cases, a program done in Python (or Ruby, or JavaScript) is
>> not slower than a program done in C or even assembly.

> This is false:
> [http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...](http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t..).

That's a micro-benchmark collection. It's interesting for low-level
performance but it doesn't tell you much about the kind of programs most
programmers write. For example, I serve a lot of images derivatives over HTTP
– it's certain that are many operations which could be significantly faster if
they were written in C (i.e. reading HTTP headers in place rather than
decoding them into string instances) but in practice none of that matters
because almost all of the total runtime is already spent in a C library.

~~~
igouy
> That's a micro-benchmark collection. <

No, it's a meagre dozen "toy programs". (See Hennessy and Patterson "Computer
Architecture").

> ... many operations which could be significantly faster if they were written
> in C ... none of that matters because almost all of the total runtime is
> already spent in a C library. <

See Sawzall quote -- [http://benchmarksgame.alioth.debian.org/dont-jump-to-
conclus...](http://benchmarksgame.alioth.debian.org/dont-jump-to-
conclusions.php#jump)

~~~
acdha
> See Sawzall quote -- [http://benchmarksgame.alioth.debian.org/dont-jump-to-
> conclus...](http://benchmarksgame.alioth.debian.org/dont-jump-to-conclus..).

Agreed – my point was simply that over a couple of decades I've only seen a
handful of cases where a performance issue was due to the language rather
than the algorithm or simple data volume. Such problems exist but not for the
majority of working programmers.

~~~
igouy
Incidentally -- [http://books.google.com/books?id=gQ-
fSqbLfFoC&lpg=PP1&dq=Com...](http://books.google.com/books?id=gQ-
fSqbLfFoC&lpg=PP1&dq=Computer%20Architecture&pg=PA37#v=onepage&q=toy%20benchmark&f=false)

------
arocks
Rather than using a better data structure from the start (class vis-a-vis
dictionaries), why not move to a better Python implementation (PyPy vis-a-vis
CPython)? When I need a data structure, I look for one that is a natural fit
to the problem. Perhaps everything is implemented as a hash table in a certain
language (not taking names here :)) but would I replace every data structure
with a hash table for efficiency? That would be Premature Optimisation.

I would recommend coding in idiomatic Python as much as possible (in this
case, use a tuple for Point), primarily because it is easy for other fellow
programmers and your future self to understand. Performance optimisations are
best done by the compiler for a reason: it is usually a one-way path that
affects readability.

~~~
GFK_of_xmaspast
I can't use pypy yet, I need scipy/numpy.

------
analog31
For my casual use, Python has been fast enough that I can be pretty carefree
about how I program. But I had one shocking experience when I ported a program
with a fair amount of math over to a Raspberry Pi, and it practically ground
to a halt. That was a disappointment. It won't drive me away from Python, but
it means I've got some learning to do on how to better optimize Python code.

~~~
fidotron
Python and performance can be summed up as "just think about what it has to do
to add two ints together". The I/O requirements of that vastly exceed the CPU
requirements.

Basically any tight loop, heavy use of hash tables (so that's Python, Ruby
etc.) or floating point (since this is a sore spot for ARM) is going to create
an explosion in time used. C/C++/C#/Java can, by being careful about the types
being used, be much faster.
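To put a number on "what it has to do to add two ints", here's a one-liner sketch (the exact size varies by interpreter version, but the point stands):

```python
import sys

# A C int is 4 bytes; a CPython int is a full heap object carrying a
# reference count and a type pointer before you reach the digits.
print(sys.getsizeof(1))   # typically 28 bytes on 64-bit CPython 3
```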

~~~
a8da6b0c91d
I don't understand what I/O has to do with that. The issue is all the extra
branching and copying with every op code. Python is going to be 20X to 100X
slower than C++ at pretty much anything. It's an inevitability of the type
system.

------
tgb
This should be bringing up the classic array-of-structures versus structure-
of-arrays debate instead of what was mentioned. If only NumPy were standard
Python and so we could have an easy, standard answer that is usually fast
enough.
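Even without NumPy, the stdlib `array` module lets you sketch the structure-of-arrays side of that debate:

```python
from array import array

# Structure-of-arrays: one contiguous typed buffer per field, instead
# of a list of per-point objects (array-of-structures).
xs = array("d", [0.0, 1.0, 2.0])
ys = array("d", [0.0, 1.0, 4.0])

# A whole-column operation touches one unboxed, cache-friendly buffer.
assert sum(xs) == 3.0
```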

~~~
pekk
what is stopping you from using NumPy?

~~~
tgb
I do use it - it's just that it can't quite be given as the 'standard' answer
for cases like this. I just wish it could become the definitive rather than
the de facto standard.

------
rtpg
I'm surprised that Python's interpreter tech isn't advanced enough to try and
JIT some of this away. Python could probably learn a lot from various JS
interpreter advances.

~~~
jaimebuelta
The PyPy project is doing that [http://pypy.org/](http://pypy.org/) (it is
mentioned in the article)

~~~
dagw
There is also Numba, which is a JIT python to LLVM compiler that supports
optional type annotations. Unlike pypy, it is designed to seamlessly integrate
with CPython.

------
zurn
On a related note, the field of compilers seems to have ignored data
optimization almost completely in favor of code optimization.

Applying a few basic tricks lets a human typically save 50-75% off an object's
storage requirements and a compiler could combine this with profile feedback
data about access patterns & making it more cache friendly.

This would be a good fit for JIT where the compiler could do deopt when the
assumptions are violated, and this would let you make more aggressive
optimizations.

------
tasty_freeze
This article is making a misleading comparison. Yes, dynamically typed
languages have overheads that statically typed languages don't. But the better
question to ask is why Python implementations are factors slower than leading
javascript implementations, which have the same theoretical overheads that
Python does.

V8 is now about a factor of 4 or 5 off the speed of C for general
purpose code, while CPython is 30x to 100x.

~~~
pekk
if only Google would invest the same engineering effort in Python

------
hessenwolf
Runs 'slowly'. Led by the population of the North American continent, the
adverb is dying gradual.

------
rplnt
Yet another case of someone (the blog engine?) being too smart and replacing "
with “ and ” in code.

------
mumrah
If you don't want to mess with ctypes, this sounds like a straightforward use
case for namedtuple.

------
minimax

        std::hash_set<std::string, int> point;
        point[“x”] = x
        point[“y”] = y
    

This is a very interesting implementation of a set...

------
yeukhon
if you ever watch Robert's talk
[http://www.youtube.com/watch?v=ULdDuwf48kM](http://www.youtube.com/watch?v=ULdDuwf48kM)
you will find that different versions of Python can have different run times
for the same code.

------
martinp
Wouldn't a regular tuple be the most pythonic (and possibly most efficient)
way to represent a point?

I think that's what most image libraries do (at least Pillow).

~~~
chmike
It was just an example to illustrate the problem. Pythons don't have legs. So
they can't run, not even slowly.

It would have been more instructive if the author showed the alternate
solutions classified by measured speed.

------
wcummings
Wouldn't a tuple be more appropriate than a class?

~~~
anaphor
Then you have stuff like point[0] and point[1] littered throughout your code,
or you write getter functions and then you have function calls as an overhead
which is roughly as expensive as the class. An instance of namedtuple is
actually the Correct (tm) solution.

~~~
Redoubts
>An instance of namedtuple is actually the Correct (tm) solution.

Do you have an implementation to support that? Because a comment from TFA
shows that's actually the worst idea in cpython.
[http://lukauskas.co.uk/articles/2014/02/13/why-your-
python-r...](http://lukauskas.co.uk/articles/2014/02/13/why-your-python-runs-
slow-part-1-data-structures/#comment-1242618646)

~~~
anaphor
Sorry, which comment are you referring to? All I see is a footnote saying "One
would be right to argue that collections.namedtuple() is even closer to C
struct than this. Let’s not overcomplicate things, however. The point I’m
trying to make is valid regardless" which doesn't really say it's a bad idea,
just that he didn't decide to talk about it in that article.

