Hacker News

"...The benchmarks support the claim that LPython is competitive with its competitors in all features it offers. ... Our benchmarks show that it’s straightforward to write high-speed LPython code. We hope to raise expectations that LPython output will be in general at least as fast as the equivalent C++ code."

At least as fast as C++ is a bold claim, but this is an interestingly documented process. I'm keen.

I'm quite partial to Nuitka at this point, but I'm open to other Python compilers.




We put this sentence there to drive home the point that LPython competes with C++, C and Fortran in terms of speed. The internals are shared with LFortran, and LFortran competes with all other Fortran compilers, which are traditionally often faster than C++ for numerical code. I've been using Python for over 20 years and it's hard for me to imagine that writing Python could actually be faster than Clang/C++; somehow I always think that Python is slow. Right now we are still in alpha and sometimes we are slower than C++. Once we reach beta, if equivalent C++ or Fortran code is faster than LPython, it should be reported as a bug.


What happens when you include numba or pytorch, etc. in the scripts? GPU acceleration is one really nice way of getting decent speed in Python, but I would imagine it's difficult to carry over when doing this type of compiling. If the compiled program can use all available computational resources (with some logic in Python to determine what accelerators are available, what to allocate, etc.), running at C++ speeds on the CPU and using the GPU where appropriate, this will be astoundingly good.


Right, we currently support a subset of NumPy (just `from numpy import ...`) and SymPy (`from sympy import ...`), and some parts of the Python standard library. We want to support PyTorch, CuPy and other such libraries in a similar way, at least the subset that can be compiled ahead of time, which is quite large.
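For readers wondering what "ahead-of-time compilable" Python looks like in practice, here is a minimal sketch (the function name `dot` is just illustrative): plain Python with type annotations and no dynamic tricks, so the same file runs unchanged under CPython and, per the project's claims, can also be compiled AOT.

```python
# Hypothetical AOT-friendly style: explicit annotations, no runtime
# type changes, straight loops over indexable data.
def dot(a: list, b: list) -> float:
    s: float = 0.0
    for i in range(len(a)):
        s += a[i] * b[i]
    return s

print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
```

Because it is ordinary Python, you can develop and debug it in CPython and only switch to the compiler for the production build.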

Yes, we want to support offloading to GPU naturally via NumPy syntax. We will look at this very soon, most likely by annotating whether a given array lives on the GPU or the host; an array copy will then copy it from host to device, etc.


I thought that Fortran was traditionally faster than C++ for numerical code due to stricter aliasing rules in the language, which I wouldn't expect to carry over to an IR?


That, but also being simpler and higher level: having multidimensional arrays in the language itself, simpler semantics (for example, you cannot take a pointer to an arbitrary variable; it has to be marked with "target"), no exceptions, and so on.

What carries over to the IR today are all the language's semantic features: all the array operations (minloc, maxval, sum, ...), functions (sin, cos, special functions), and the other features, without any lowering. We then do optimizations at this high level, and only at the end do we lower (say, to LLVM). Python/NumPy can be optimized in exactly the same way, and that's what LPython does.

I think C++ could also be compiled this way, but the frontend would have to understand basic structures like `std::vector` and `std::unordered_map`, as well as arrays (say xtensor or Kokkos, whatever library you use for arrays), and lift them to our high-level IR. Possibly we would have to restrict some C++ features if they impede performance, such as exceptions --- I am not an expert on C++ compilers, only a user of C++.
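To make the idea concrete, here are pure-Python stand-ins for the Fortran whole-array operations mentioned above; the point is that a compiler which keeps `sum`/`minloc`/`maxval` as single IR nodes (rather than lowering them to loops early) can optimize or fuse them at that level.

```python
# Pure-Python analogues of Fortran's whole-array intrinsics.
a = [3.0, 1.0, 4.0, 1.5]

total = sum(a)                                    # Fortran: sum(a)
minloc = min(range(len(a)), key=lambda i: a[i])   # Fortran: minloc(a)
maxval = max(a)                                   # Fortran: maxval(a)

print(total, minloc, maxval)  # 9.5 1 4.0
```

In NumPy the direct equivalents are `a.sum()`, `a.argmin()`, and `a.max()`, which is why the same high-level optimization strategy applies.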


Nuitka is just a packaging system which places all dependencies inside of an executable. It doesn't compile into machine code. Why do you conflate it with compilers?


It is absolutely a compiler. I've certainly spent enough time waiting for it to compile! On the contrary, packaging dependencies is a side effect rather than its primary purpose. (Your sarcastic last sentence doesn't help your point, even if you were right.)

However, it really is different from projects like this one, in that it doesn't attempt to obtain C-like speed (though it does hope to do some optimisations). For example, `x += 1` will still dynamically dispatch depending on the runtime type of x, and (if it's an int) do the normal Python arbitrary-precision operation. But those operations will be called from machine code rather than interpreted bytecode.
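The arbitrary-precision point is easy to demonstrate: the same `x += 1` that a C compiler would turn into a single machine add (wrapping at 64 bits) keeps exact semantics in Python, so a compiler that preserves Python semantics cannot emit the cheap instruction.

```python
# In C, an int64_t at its maximum would wrap on +1. Python ints are
# arbitrary precision, so += 1 just keeps going.
x = 2**63 - 1   # INT64_MAX
x += 1
print(x)        # 9223372036854775808, no overflow
```

This is exactly why Nuitka's compiled `+= 1` still has to dispatch at runtime instead of emitting one add instruction.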

(Essentially, it unrolls the main loop of the CPython interpreter, which is written in C, for every bytecode operation, and eliminates every case of the switch statement inside except the one that corresponds to that operation. That's what gets compiled.)
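You can see the units this unrolling works on with the standard `dis` module: each opcode in the listing corresponds to one case of CPython's interpreter switch, and (on this description of Nuitka) each becomes a straight-line chunk of generated C.

```python
import dis

def inc(x):
    return x + 1

# One opcode = one case of the interpreter's switch statement.
ops = [ins.opname for ins in dis.Bytecode(inc)]
print(ops)
```

The exact opcode names vary between CPython versions (e.g. `BINARY_ADD` became `BINARY_OP` in 3.11), but the structure is the same.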


Just to be clear, the reason I didn't (and won't) accept Nuitka as a compiler is that it doesn't do what actual compilers do; it just plays around with bytecode. I experienced no speed difference when running large programs, but the startup is considerably slower. To me, it is just a Docker replacement that is a tenth as portable.


A compiler doesn't need to optimize. I think if it takes Python code, and translates it to something else, it's a compiler. An optimizing compiler is the one that will give you speedups.



Then it's an optimizing compiler for sure.


"Plays around with bytecode"? Even if that were so, it translates the whole program to C++, which is then compiled. It relies on libpython to implement a lot of the core language types, so if your program isn't very computationally intensive, or makes heavy use of core data types, you might not notice much of a difference, but it's definitely a compiler.


Since you're commenting from opinionated ignorance, you might want to read the Nuitka page itself:

> Nuitka is the Python compiler. ... It then executes uncompiled code and compiled code together in an extremely compatible manner. Nuitka translates the Python modules into a C level program that then uses libpython and static C files of its own to execute in the same way as CPython does.

I'm happy to stick by the tool's own description of itself. But if you don't accept that, and Nuitka still isn't a compiler to you, sure. Go ahead with your pedantic, strictest definition of 'compiler'.

Learn to read English before insisting on your pedantry as some sort of truth, in disregard of the tool's own intent and description. (And I do hope you take issue with everyone's description of TypeScript as a compiled programming language too.)


Let's be reasonable: Nuitka's sales pitch really isn't a good source of information. And note that it doesn't say the code is converted or compiled to C. It just says it loads Python code from C and runs it on the interpreter, which is basically what the Python interpreter does.

I think my comment being devoid of content might have caused some frustration. So to make the best of everyone's time:

You can look into Cython and Pythran to see what I'm talking about. Cython lets you optimize code step by step by generating an HTML page of your code that highlights the lines still requiring the Python runtime. It lets you add types and `cdef` function definitions to reduce your dependency on the Python runtime.

Another good example is Pythran, which takes your Python code and turns it into C++ code to be compiled by a C++ compiler. I understand this isn't a direct compilation to machine code, but a middle step whose output you then compile to machine code.
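A nice property of Pythran is that its export marker is just a comment, so the annotated file remains ordinary importable Python; a minimal sketch (the function `average` is illustrative):

```python
# Pythran reads this comment to know what to export and with what
# types; CPython simply ignores it, so the file runs either way.
# pythran export average(float list)
def average(xs):
    return sum(xs) / len(xs)

print(average([1.0, 2.0, 3.0]))  # 2.0
```

Compiling it with `pythran file.py` then produces a native extension module with the same interface.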

Then there are Numba and Taichi, which provide just-in-time compilation decorators. Taichi also provides a sophisticated runtime that lets you run parts of the code on a GPU.

Surprisingly, the best performance I've experienced among these was Numba + NumPy, even though Numba alone can sometimes apply optimizations that surpass all other compilation efforts, because it turns your loops into closed-form mathematical formulas and runs them in O(1) when it can.
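The closed-form rewrite referred to above (done by LLVM's scalar evolution pass under Numba) can be illustrated in plain Python; the compiler effectively replaces the O(n) accumulation on the left with the O(1) formula on the right.

```python
# What the source says: an O(n) loop.
def slow_sum(n: int) -> int:
    s = 0
    for i in range(n):
        s += i
    return s

# What the optimizer can replace it with: the O(1) closed form
# for 0 + 1 + ... + (n - 1).
def closed_form(n: int) -> int:
    return n * (n - 1) // 2

print(slow_sum(1000), closed_form(1000))  # 499500 499500
```

With such a rewrite the "benchmark" runs in constant time regardless of `n`, which is why naive loop benchmarks of JIT compilers can be misleading.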


Nuitka is a compiler. We list it at the bottom of https://lpython.org/, together with the other 23 Python compilers, now 24. :)





