
Pythran as a bridge between fast prototyping and code deployment - serge-ss-paille
http://serge-sans-paille.github.io/pythran-stories/pythran-as-a-bridge-between-fast-prototyping-and-code-deployment.html
======
rademacher
Looks pretty nice. Unfortunately, I never run into this issue because I only
get to write research code.

The Julia language was designed to target the two language problem and at
least from these benchmarks it looks pretty competitive [1]. I imagine over
time, pythran may fix some limitations and beat Julia in most benchmarks.

[1]
[https://github.com/fluiddyn/BenchmarksPythonJuliaAndCo/tree/...](https://github.com/fluiddyn/BenchmarksPythonJuliaAndCo/tree/master/JuMicroBenchmarks)

~~~
tomp
Note that the blog post is also about _deployment_, not just about
performance. Does Julia support statically compiled executables without
dependencies or GC?

~~~
keldaris
Unfortunately, not really. There is some community work in that regard [1],
but it doesn't seem to get as much attention as one would like. Some people
have gotten it to work, but official support (guaranteeing maintenance and
decent documentation) for static compilation and easy deployment would make a
huge difference.

[1]
[https://github.com/JuliaLang/PackageCompiler.jl](https://github.com/JuliaLang/PackageCompiler.jl)

------
zedr
Isn't this similar to the Nuitka project?

[http://nuitka.net/](http://nuitka.net/)

Nuitka has fantastic Python 3 support (up to 3.7 currently).

~~~
dagw
They're similar in concept, but very different in focus. Nuitka's main goal
is to be 100% compatible with CPython, which often means sacrificing
performance compared to Pythran.

Pythran's main aim is to be fast, and to achieve this it is willing to
support only a small subset of Python.

As Nuitka's performance gets better and Pythran starts to support more and
more of python, perhaps they'll converge at some point in the future.

------
fredsanford
Does Pythran work with things like opencv and sklearn as python modules or
does code have to be written to explicitly enable them?

It feels to me like Pythran + opencv would be a killer combination since it
can take 300+ lines of C++ to achieve what you can with 40ish lines of numpy,
opencv and python.

------
ktta
Was Cython given a consideration for this project?

I see that you are involved with the Pythran project, so could you tell us the
shortcomings of Cython? As I understand it, Pythran didn't support Python 3
before, but it seems that has changed.

~~~
ktta
Can't edit on my app, but the first question was for the blog post writer and
my second to the submitter

~~~
serge-ss-paille
> could you tell us the shortcomings of Cython

In order to achieve top performance in the context of numerical simulations,
you generally end up explicitly writing the loops that are implicit in
high-level numpy (less abstraction).
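A toy sketch of what "writing the loops explicitly" means (the function name and the `pythran export` signature here are illustrative, not from the post). The explicit loop below is equivalent to the one-liner `np.sum(np.asarray(xs) ** 2)`, and Pythran modules stay valid Python, so the same file still runs under CPython:

```python
# pythran export l2_norm_sq(float list)
def l2_norm_sq(xs):
    """Sum of squares, with the loop that np.sum(xs ** 2) would hide."""
    acc = 0.0
    for x in xs:
        acc += x * x
    return acc

print(l2_norm_sq([3.0, 4.0]))  # -> 25.0
```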

Cython does not perform any high-level optimisation on the code, while Pythran
does. For instance, Pythran computes whether an array index may be negative,
and generates wraparound handling only when needed. Cython, on the other hand,
requires a compiler directive to do so.
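A small illustration of that difference (the function is made up for this sketch). In a Cython `.pyx` file you would opt out of negative-index handling yourself with a directive comment like `# cython: boundscheck=False, wraparound=False` at the top of the file, whereas Pythran can prove the index below is never negative and skip the wraparound check on its own:

```python
# Under Cython, disabling wraparound requires a file-level directive:
#   # cython: boundscheck=False, wraparound=False
# Under Pythran, no directive is needed for this function: i is
# provably non-negative, so no wraparound code is generated.

# pythran export window_sum(int list, int)
def window_sum(xs, k):
    """Sum of the first k elements; the index i ranges over 0..k-1."""
    total = 0
    for i in range(k):  # i is never negative here
        total += xs[i]
    return total

print(window_sum([1, 2, 3, 4], 3))  # -> 6
```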

That being said, Cython can do plenty of stuff Pythran cannot: import native
libraries, wrap classes, mixed Python/native mode etc. It has a much stronger
codebase (more tested/validated) and a larger community.

~~~
jeanl
For me, the biggest shortcoming: Cython does not create independent C++ code
(independent of the python interpreter that is) that can be used in a separate
C++ code base. My main point is that pythran makes it possible to deploy
python/numpy code as C++ code.

~~~
ktta
I didn't realize you're the author of the blog post!

Thanks to both of you for the reply

------
kristofferc
I feel that a comparison to the handwritten C++ version would make the claims
a lot stronger. Making something 10x faster is not very hard if it is
incredibly slow to begin with and is, on its own, fairly uninteresting. On the
other hand, if the results here approached the speed of optimized C++ code,
then this workflow makes a lot of sense.

~~~
SubiculumCode
Seems there could be a cost/benefit analysis here. Ten times faster than
python might be sufficient for some applications given the potential for much
faster deployment, regardless of whether handwritten C might be faster.

~~~
jeanl
You're absolutely right: you don't necessarily need to be within 10% of pure
C++ if algo development is made far easier by using python/numpy. But it would
be good to have a hand-written C++ baseline to determine where the
cost/benefit point is (at least for this example).

