Modifying the Python object model (lwn.net)
63 points by Delgan 8 months ago | 44 comments



It's maybe worth noting that richards is an "easy" benchmark. PyPy has sped it up quite significantly, by about 40x, for a long time now.

Those changes are necessary for improved performance, but not nearly sufficient. PyPy does all of that plus tons and tons of other stuff, and it still gets dismissed here as having "provided only a modest speedup on the company workload". That means there is far more involved than just attribute lookup and function calls, which PyPy has massively sped up for quite a few years now.


PyPy doesn't work well with CPython extension modules. That makes it a non-starter, sorry.


We have been addressing that for the last 3 years. Which CPython modules does it not work with for you?

That comment was about something slightly different, though: even though richards is 40x faster, PyPy still is not faster on some workloads (like Instagram's, in their measurements), which makes me think there are other forces at play.


Well, right now, I simultaneously want to use the python-gobject stuff, Pandas, numpy, bcolz, and other assorted parts of the scipy universe.


Strange that Guido or the other CPython devs would object to adding caching (though, rereading, maybe they only objected to the tone it was presented in - which still seems a bit sensitive). I get favoring simple code over optimizations for more extreme cases like switching from dictionaries to arrays, but what essentially sounds like a tiny LRU cache for method dispatch seems like a clear win for everyone.


A version of attribute-lookup caching was already added in 3.7. Thus Guido objected to the tone of "I'm the first person to notice this!"

> Mark Shannon said that Python 3.7 has added a feature that should provide a similar boost as the method-lookup caching used in the experiment.

As I understand it, class-attribute dicts now carry a version number which increments on every mutation. The first method lookup gets cached and subsequent lookups check the version number to decide whether to invalidate the cache.
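To make that concrete, here is a minimal sketch of the idea in plain Python, assuming a simplified model of the per-opcode cache; VersionedDict and lookup_cached are illustrative names, not CPython internals (which do this in C):

    class VersionedDict(dict):
        # A dict that bumps a version counter on every mutation
        # (other mutation paths like update() are omitted for brevity).
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.version = 0

        def __setitem__(self, key, value):
            super().__setitem__(key, value)
            self.version += 1

        def __delitem__(self, key):
            super().__delitem__(key)
            self.version += 1

    _site_cache = {}  # call site -> (seen_version, cached_value)

    def lookup_cached(site_id, namespace, name):
        entry = _site_cache.get(site_id)
        if entry is not None and entry[0] == namespace.version:
            return entry[1]          # hit: dict unchanged since last lookup
        value = namespace[name]      # miss: do the real lookup
        _site_cache[site_id] = (namespace.version, value)
        return value

    ns = VersionedDict(render=lambda: "v1")
    print(lookup_cached("site#1", ns, "render")())  # real lookup, result cached
    ns["render"] = lambda: "v2"                     # mutation bumps the version
    print(lookup_cached("site#1", ns, "render")())  # version mismatch -> re-lookup, "v2"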


CPython has always strived to be a simple, easy-to-read reference implementation of Python. They have rejected many patches over the decades that would have sped up various things at the expense of readability.

People should not use CPython for speed; they should use PyPy for speed.


The article did mention that Instagram's code wasn't much faster on PyPy.

I agree with you though; one of the interesting and good things about Python is that it's a standard, not an implementation. Although CPython is the most popular by usage, PyPy and Cython are mainstream alternatives (or a superset, in the case of Cython). There's also Jython, IronPython, Unladen Swallow, Grumpy, and others that I can't think of now. Some of those are defunct and others only support Python 2. But the point is, competition is good.


Yes, or Cython or Nuitka.

And keep an eye on the (poorly named) Grumpy project (Python compiled to Go.)

http://cython.org/

https://nuitka.net/

https://github.com/google/grumpy


And it is simple. I haven't written C since college, and it's mostly very readable; really great for exploration. In many ways, I agree that should be kept.

But memoizing lookups can be a single branch at the top of a few functions. We should strive to have our cake and eat it too, not simply declare we shouldn't bother.


It ain't that easy (cache invalidation is hard). But they did it anyway, so, yeah, you can have your cake and eat it.


Great article worth reading for any hardcore Python fans.

I’d also add that the comments on the LWN article are good too.


LWN.net articles are generally quite good. I signed up after reading an excellent summary of the Spectre/Meltdown work from Greg Kroah-Hartman.

@all: if you like this article, please pay for a subscription. LWN.net deserves our support.


> LWN.net deserves our support.

Agreed. I subscribed a few months ago, in part because LWN.net seems to be the only publication providing significant coverage of the Python Language Summits.


I'm surprised the article left out mention of PyPy, which is pretty comparable to something like V8.

http://speed.pypy.org/


I believe the article is a more or less verbatim transcript of what happened. I wasn't there and I don't really want to speculate, but I would expect LWN to mention everything important that was mentioned and not add their own interpretation either.


> Thomas Wouters asked if he had looked at PyPy. Shapiro said the company had, but there was only a modest bump in performance for its workload.

It was mentioned, but in a rather dismissive way.


I would love to see python get much faster. Seeing all the work on node and V8 has made me jealous to the point of wondering if someone would ever take the Python syntax and just put it on top of V8 or (preferably) LLVM.

Yeah, I know that’s more work than I can imagine, but I can dream, right?


Language implementations don't work that way. However, the PyPy JIT has been around for over a decade and works wonderfully. It is a continuing disappointment that only about 1-2% of the Python community knows about and/or uses PyPy.

(PyPy on LLVM has been a thing before. It requires constant upkeep and isn't very fast. I'm sure that the PyPy team would love to hear from prospective maintainers!)


I think PyPy is relatively unused because of the difficulty in using C extensions written for CPython. With PyPy, everything will work until suddenly things go horribly wrong or the library you need is just not available (numpy). The work they've done is fantastic; it's just a very difficult situation.


FYI, numpy both works these days and is officially supported.


Hm, it wasn't listed as functional in whatever list I looked at recently. I'll have to check it out again!


Which is why I'm very excited about Graal Python (https://github.com/graalvm/graalpython/blob/master/README.md)

It has the potential to bring an underlying framework built on an industrial quality VM+JIT, and its primary goal is being compatible with the SciPy ecosystem at least... which is reason enough for unlimited optimism.


> industrial quality VM+JIT

I'll mostly skip over this, just wondering what exactly you mean by it, and whether you consider LLVM not to be "industrial quality", seeing as the failed Unladen Swallow project based itself upon LLVM and that didn't seem to get them anywhere.

> its primary goal is being compatible with the SciPy ecosystem at least

Well... it's not like PyPy isn't "compatible" with the SciPy ecosystem. It just has to use a lower-performing object access mode for cpyext-based extensions, which I suspect is a compromise any JIT-based implementation will need to make in order to support these more old-school extensions.

Ironically, on the subject of "industrial quality" JITs, GraalVM is based upon the same meta-tracing interpreter ideas that were largely pioneered by PyPy.


LLVM can't be used. I wish it could: http://doc.pypy.org/en/release-2.4.x/faq.html#could-we-use-l...

I'm not sure about your point. There is a new Python implementation built on Graal that promises to largely maintain C compatibility (like Graal Ruby) and still deliver performance.

I wish PyPy were getting funded by someone, and I have a lot of respect for what those guys achieved... but the fact remains that it is not being used. Maybe Graal Python can change that.

I used the words "industrial quality" instead of "pioneering" or "innovative". I think it's accepted that the millions of man-hours spent on the JVM have made it one of the most incredible VMs anywhere; it is the de facto foundation on top of which you build big data (Spark/Hadoop), language theory (Scala, Clojure, Kotlin), and a billion mobile phones.


The TurboFan backend of V8 is so powerful now that my guess is it would be prepared to receive a Python or Lua frontend.

Current JavaScript is very complex, so maybe the current V8 TurboFan backend would already be fit to process a Python bytecode carefully crafted for this particular JIT backend?


From what I've seen, V8 is extremely wedded to the JavaScript language model. I don't think it even has a concept of numbers that aren't JavaScript "numbers". Unless of course you go towards wasm territory, which I don't think is particularly fun territory for dynamic languages.


WASM targets the TurboFan backend, the same one the JS bytecode now does. So the backend can perfectly well deal with a typed language like TS or Dart.

I think even in the frontend, the compiler tries to guess the type beforehand and annotates the real type so it can optimize for it.

The problem is that I didn't dive deep enough to know whether a language like Python would be a good fit, nor am I a "compiler guy" myself, so..

But I would love to do this as a side project. The problem is my time is currently all taken by a big project.

But I would love to try it. That's why I'm winking here on HN; maybe others will also find it an interesting thing to try themselves.


I never said that it is inappropriate for a "typed language", in fact, far from it.


The development of V8 was paid for by Google, and at that point they wanted to achieve market dominance against other big players.

It seems nobody is willing to put big enough money behind making Python much faster. My view is that the limitations are almost purely financial (as in, paying heavily for somebody as skillful as e.g. Mike Pall or Lars Bak(1) and his team), not technical.

Even if Guido would not accept the "faster" Python, the fork would still become more popular if it were compatible enough. And there are the technical aspects: it's not enough to make the Python interpreter alone faster; whoever took that challenge would have to adapt various important external libraries for it to be really accepted. Which is AFAIK also doable.

1) https://en.wikipedia.org/wiki/Lars_Bak_(computer_programmer)


It's exactly that.

The Python ecosystem in general is severely underfunded despite all the big players using it extensively, which makes it really unfair when you compare it to the money poured into JS because of its monopoly on the web.

Remember Unladen Swallow, Google's attempt to JIT Python? It was just one guy during his internship (http://qinsb.blogspot.fr/2011/03/unladen-swallow-retrospecti...).

And look at the budget the PSF had in 2011 to help the community: http://pyfound.blogspot.fr/2012/01/psf-grants-over-37000-to-.... I mean, even today they have to go through so many shenanigans for barely 20k (https://www.python.org/psf/donations/2018-q2-drive/).

But at the same time you hear people complaining they can't migrate to Python 3 yet because they have millions of lines of Python. You hear from them when they want the support extended for free, but never when it's time to support the community.

It's ridiculous.

Python needs a sugar daddy. It's used in Mac and Linux. It's used at Microsoft, Google, Facebook, NASA and so many more.


The barriers are nearly purely social: the unwillingness to drop the C API (or to have a phase-out plan) and to declare certain kinds of behavior "implementation-dependent" makes it very hard for any meaningful competition to emerge.

It is harder to make a fast Python than to make a fast JS, but it's not that much harder.


> unwillingness to drop the C API

Is a feature, not a bug. It makes things like NumPy, SciPy, and Pandas possible.


Aren't things like NumPy, etc. also possible through an FFI?

That is, I can understand that there's such a big installed base that people are loath to get rid of the Python/C extension API, but I think that's different from saying those projects are impossible without that extension API.
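For illustration, here's what calling C through an FFI can look like, using the cffi library (sqrt from libm is an arbitrary example; library-name resolution varies by platform):

    from cffi import FFI

    ffi = FFI()
    ffi.cdef("double sqrt(double x);")  # declare the C signature we need
    libm = ffi.dlopen("m")              # load the system math library

    print(libm.sqrt(2.0))               # 1.4142135623730951

Notably, PyPy supports cffi natively, which is part of why FFI-based bindings tend to fare better there than C-API extensions.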


> the unwillingness to drop C API

What are you talking about? The article is about the Python core dev group rejecting speedups that preserve C module compatibility for the sake of, uh, readability or something?


I’m probably going to get crucified for this, but what the heck - open discussion on this topic is needed...

I like what you’re saying, but wonder: if we made small incompatible changes over time, would that solve the problem? For example (and please forgive me on this), there are so many similarities between Python and other languages. Objects are obviously everywhere (C++, Java, .NET, etc.), and the syntaxes are similar at a cursory glance to things like Fortran. All of the above are much faster.

We took a decade to go from Python 2 to 3, but that had some pretty big changes. Going from 3 to 4 and getting a 50% speedup while making some (hopefully small) incompatible changes would probably be a good motivator for people to migrate faster.

There are obviously pros and cons to this discussion, but I really believe that stagnation is the worst choice. (OK, Perl made a worse choice, but I’m presuming we learned that the level of change from 2 to 3 is as far as we can go in a generational update (x.0).)


> Going from 3 to 4 and getting a 50% speedup

That's way too low a goal to even matter.

See this:

https://news.ycombinator.com/item?id=17007867

A simple nested loop is 50 times faster in JavaScript in Firefox than the same nested loop in Python. When you say "50% speedup" I understand you expect a speedup of 2x in "10 years". Before V8, the JavaScript in Firefox was even slower than Python is now. And now it's 50 times faster than Python for "simple" things (which are actually the most important ones to be made faster). PyPy also proves that some code can be JITed. One of PyPy's problems is the "compatibility with libraries" part.
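For a sense of what's being measured, here is a hedged sketch of that kind of nested-loop microbenchmark (not the exact code from the linked thread):

    import time

    def nested_loop(n):
        total = 0
        for i in range(n):
            for j in range(n):
                total += i ^ j
        return total

    start = time.perf_counter()
    nested_loop(2000)  # 4 million iterations of pure interpreter work
    print(f"{time.perf_counter() - start:.2f}s elapsed")

On CPython, most of that time is interpreter dispatch overhead, which is exactly what a JIT removes.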

And yes, the kind of speedup seen in JavaScript is achievable in Python too. It's just a question of the right people being paid to do it. I've personally made some JIT compilers, and I'm sure that Python can also have a really useful faster interpreter and JIT. V8's approach to speeding up objects as they are typically used, and calls, can surely be applied to Python:

From the article: "The instrumented interpreter found that 70% of objects have all of their attributes set in the object's __init__() method."

As far as I know, the V8 approach shines for exactly such objects.
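For context, the V8 technique in question is usually called "hidden classes" (also shapes or maps). Here is a toy sketch of the idea in Python; Shape and ShapedObject are invented names for illustration:

    class Shape:
        def __init__(self):
            self.slots = {}        # attribute name -> index into storage
            self.transitions = {}  # attribute name -> successor Shape

        def transition(self, name):
            # Adding the same attribute from the same shape always
            # yields the same successor shape, so shapes are shared.
            if name not in self.transitions:
                new = Shape()
                new.slots = dict(self.slots)
                new.slots[name] = len(new.slots)
                self.transitions[name] = new
            return self.transitions[name]

    EMPTY_SHAPE = Shape()

    class ShapedObject:
        def __init__(self):
            self.shape = EMPTY_SHAPE
            self.storage = []  # compact array instead of a per-object dict

        def set_attr(self, name, value):
            if name in self.shape.slots:
                self.storage[self.shape.slots[name]] = value
            else:
                self.shape = self.shape.transition(name)
                self.storage.append(value)

        def get_attr(self, name):
            return self.storage[self.shape.slots[name]]

    p = ShapedObject(); p.set_attr("x", 1); p.set_attr("y", 2)
    q = ShapedObject(); q.set_attr("x", 3); q.set_attr("y", 4)
    assert p.shape is q.shape  # same attribute order -> one shared shape

Objects that set all their attributes in __init__(), in the same order, end up sharing one shape, so a JIT can compile attribute access down to a fixed array index guarded by a cheap shape check.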

Also note that modern JavaScript engines don't do only JIT or only interpretation; they adapt at runtime.


> My view is that the limitations are almost purely financial

Your view is wrong. Implementing highly optimized versions of different languages (which may, from the layman's point of view, look quite similar) is not comparable work. Python is a much more dynamic language than JavaScript: huge parts of the language are overridable object by object, even down to attribute access. Hell, even down to isinstance() behaviour. These are all things that need e.g. deoptimization barriers added to the fast path to check whether some mechanism happens to have been overridden. JavaScript doesn't even have operator overloading.
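Two small examples of that dynamism, both standard Python (class names invented for illustration):

    class Proxy:
        def __getattribute__(self, name):
            # Every attribute access on an instance routes through here.
            return f"intercepted {name}"

    class AnyInt(type):
        def __instancecheck__(cls, obj):
            # isinstance() against this class is now user-defined behaviour.
            return isinstance(obj, int)

    class Integerish(metaclass=AnyInt):
        pass

    print(Proxy().whatever)           # -> intercepted whatever
    print(isinstance(5, Integerish))  # -> True

A JIT's fast path for attribute access or type checks has to guard against exactly this kind of per-object and per-class overriding.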

The PyPy team has put a lot of work into building a performant Python implementation (there's your "fork" for you...), having to work through a lot of these issues with a good deal of ingenuity along the way, from what I can tell.

Throwing money and/or "rockstar programmers" at projects isn't as wise as it always seems. Particularly when they don't seem to have significantly investigated the work of a team that's been working on the "fast python" problem for the last 15 years.


> Particularly when they don't seem to have significantly investigated the work of a team that's been working on the "fast python" problem for the last 15 years.

If you mean PyPy: PyPy was from the start intentionally "meta". It was never meant to be simply a Python with a faster interpreter and a fast JIT working in sync, the way modern JavaScript engines work. It was intentionally a reimplementation of the Python interpreter, with as much code as possible written in Python (that's where the name PyPy comes from); what gets "optimized" is then everything together, the new interpreter and the new Python implementation written in Python, all while generating real C source, which is then compiled as normal C.

That is obviously too "meta" a goal compared to the approaches used by fast JavaScript engines, even if those also implement some library functions in JavaScript. When your starting goal is to "do everything in Python", you have already blocked yourself from taking the really best possible approach on every level of the engine. And even being that meta, they still write:

"in code like this with a string-valued foo() function:"

   for x in mylist:
       s += foo(x)
"the JIT cannot optimize out intermediate copies. This code is actually quadratic in the total size of the mylist strings due to repeated string copies of ever-larger prefix segments."

So yes, there is definitely room to make a faster usable implementation than the current PyPy. But it is hard work, and it needs very focused and knowledgeable leaders, willing to take the "harder" approaches on every level when needed. "Harder" than "we'll do everything in Python and only then optimize the whole thing together."


I’m afraid that you might be right. I’d love to see Instagram step up to the challenge here, as they obviously have deeper pockets than most.

Who knows, maybe releasing their experimental code would prompt some really good discussion and get things moving in a direction of speedy run times.


I know some of the words in that article.


Things that immediately come to mind:

* GVR: founder, involved and opinionated.. argues against the TONE of the communication! during a fairly ordinary technical discussion of language implementation. Certainly a trained compiler implementor can carefully measure and then show benchmarks on function dispatch.. but the concern raised has to do with maintainability of the code more than raw performance. Don't you see? GVR is a humanist and social leader here. Ecosystem participation matters, as well as raw tech specs. Hardcore math or performance languages are zillions of times faster, and how many users do they have.. how many libraries..

* The fashionable inner circle of the current economic winners, making the academic who "works for so-and-so" an immediate authority. Think for yourself! Wealth-makes-leadership leads to some sick outcomes, frankly. Sure, some academic compiler writer knows his function-call stats, but that doesn't suddenly make the years and years of participatory work by many hands less relevant. This is not populist, but rather pragmatic.

* Comparison to yet another Python 3.x development. Great! Python evolves.. but let's not throw out a stable binary system with well-understood characteristics.. and that is.. Python 2.7.

A very interesting peek into the phenomenon of this language.


> "argues against the TONE of the communication! during a fairly ordinary technical discussion of language implementation.

The LWN piece says "Guido van Rossum, who loudly objected to Shapiro's tone, which was condescending, he said".

That sounds appropriate.

In your experience, do most people presenting an 'ordinary technical discussion' use a condescending tone? If so, I'm glad I don't work in your organization.

Otherwise, when should people complain, if not when a speaker is disparaging most of the people in the audience, even if accidentally?


"Guido van Rossum, who loudly objected to Shapiro's tone, which was condescending, he said" yes, appropriate -and- appreciated



