The blog posts by Kevin Modzelewski went into the internals: https://blog.pyston.org/
Also by the same person, a good article on Python's performance: http://blog.kevmod.com/2016/07/why-is-python-slow/
As a side comment: on the subject of type inference, I've really come to like Python's type hints. I don't use mypy itself yet, but I already had a habit of adding types in docstrings, and I like it when I can add a clue that a signature/return is dealing with something that's not a basic type.
There are some nice little things out there for advanced typing, being added now and then:
- TypedDict and Literal: https://github.com/python/typing/blob/master/typing_extensio... (TypedDict was accepted in PEP 589)
- NamedTuple had variable annotation added in Python 3.6: https://www.python.org/dev/peps/pep-0526/
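A quick sketch of what those features look like together (available from `typing` on Python 3.8+, from `typing_extensions` before that; `Movie`, `Point`, and `describe` are made-up names just for illustration):

```python
from typing import Literal, NamedTuple, TypedDict

class Movie(TypedDict):        # PEP 589: a dict with a fixed set of typed keys
    title: str
    year: int

class Point(NamedTuple):       # PEP 526 variable-annotation syntax (3.6+)
    x: float
    y: float

Mode = Literal["r", "w"]       # only these exact string values type-check

def describe(m: Movie, mode: Mode = "r") -> Point:
    # hypothetical function, just to show the annotations in a signature
    return Point(float(m["year"]), 0.0)

p = describe({"title": "Blade Runner", "year": 1982})
print(p)  # Point(x=1982.0, y=0.0)
```

None of this is enforced at runtime; mypy (or another checker) is what actually verifies the annotations.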
I wonder, if typings are added to a codebase, whether we could give LLVM one more shot. It'd be pretty crazy to be able to build a large GraphQL server into a statically linked binary, like Go does.
And this recent issue mentions targeting LLVM or other targets.
Many of the "why Python is slow" articles hand-wave away the fact that other languages that are just as dynamic, like Common Lisp and Smalltalk, have quite capable JIT compilers.
So it is a matter of having enough resources to throw at it.
Maybe PyPy and Numba are as good as it gets, unless some big corporation is willing to spend big bucks into improving Python's JITs.
> other languages, just as dynamic, like Common Lisp and Smalltalk, have quite capable JIT compilers.
Common Lisp implementations don't need a JIT compiler to be fast. In fact, most of them don't use JIT compilation at all.
What makes Common Lisp implementations (especially SBCL) so fast is a combination of sophisticated static analysis, optional type declarations, and the fact that Common Lisp has been carefully designed to allow for high performance. I cannot stress how important the last point is. The Common Lisp standard is a contract between the programmer and the compiler writer that allows the former to write portable programs, yet gives the latter enough freedom to optimize.
In contrast, the language "standard" of Python doesn't clarify what portable programs may rely on. Instead, programmers tend to rely on the specific behavior of CPython. And reproducing the exact behavior of CPython is much harder than implementing a carefully designed standard.
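A concrete example of that reliance (my own illustration, not from the standard or any spec): CPython's reference counting runs finalizers deterministically the instant the last reference dies, and lots of code quietly depends on it, even though PyPy and other implementations may defer finalization to a later GC cycle.

```python
class Noisy:
    def __del__(self):
        events.append("finalized")

events = []
n = Noisy()
del n                       # CPython: refcount hits zero, __del__ runs right here
events.append("after del")
print(events)               # ['finalized', 'after del'] on CPython; PyPy may defer __del__
```

The language reference explicitly leaves the timing of `__del__` implementation-defined, so an alternative implementation that reorders these two events is conformant; it just breaks code written against CPython's behavior.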
> So it is a matter of having enough resources to throw at it.
No amount of resources can heal the design decisions of Python. The only way to get Python fast is by going through a painful standardization effort and by breaking some existing code. And I don't see that happening anytime soon. (Python 4 anyone?)
Common Lisp has several execution modes; for compiled code, the important ones are:
1) AOT-compiled, but fully safe, with optional debug info: safety = 3 and debug = 2
2) AOT-compiled, but fast and potentially unsafe, with little debug info: speed = 3, safety = 0, debug = 0
The usual goal is to be able to run much of the code in mode 1) and only compile portions in mode 2).
Thus much of the language is optimized around possible compilation. The language is very dynamic, but there is also a core which is not object-oriented and thus potentially easier to compile to fast code.
Also, incremental in-memory compilation in Common Lisp is AOT, not JIT.
Though stuff like dictionary ordering, the GC implementation, or the GIL shouldn't impact a JIT implementation.
However, going back to Smalltalk, which you didn't mention: not only is it as dynamic as Python, but at any given moment the image can change its contents, and via messages like becomes: an object can completely change its internal structure.
Another thing: in Smalltalk the idea of a "stack frame" is actually encapsulated in an object called Context, and you can always inspect the current context. It's just an object like everything else in the system. I don't think Python has something like that, but I could be mistaken.
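For what it's worth, CPython does expose its stack frames as first-class objects through the `inspect` module, though they are read-mostly snapshots rather than full Smalltalk-style Contexts you can resume or rewrite. A minimal sketch:

```python
import inspect

def inner():
    frame = inspect.currentframe()   # the frame of inner() itself
    caller = frame.f_back            # the frame of whoever called us
    return frame.f_code.co_name, caller.f_code.co_name, dict(caller.f_locals)

def outer():
    secret = 42  # a local we expect to see from inside inner()
    return inner()

name, caller_name, caller_locals = outer()
print(name, caller_name, caller_locals["secret"])  # inner outer 42
```

So introspection is there; what Python lacks is the ability to treat the frame as a resumable, mutable object the way a Smalltalk Context is.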
But these are not the parts where Common Lisp will be fast...
It seems like the outcome of the Unladen Swallow project was that there was no point in a general-purpose JIT, because the language semantics prevent the sort of accelerations possible in other languages. (I'm aware that PyPy and Numba are successful exceptions.)
Azul made it work for Java with what they said was ~20 man years of work.
"Python with LLVM has at least one decade of history. This session will be going to cover-up how python implementations tried to use LLVM such as CPython's Unladen Swallow branch (PEP 3146) or attempts from PyPy and why they failed. After that it will show what are the current python projects that use LLVM for speed, such as numba and python libraries for working with LLVM IR. In the end, it will mention about new ideas that would unite the powers of both LLVM and Python."
Which hasn't been updated in 5 yrs :(