I don't think either of these statements is true.
Fortran is hard to beat for performance, but from a usability perspective NumPy (and other Python libraries) has some nice features for array-oriented computation that you can't find even in modern Fortran. The most obvious examples are broadcasting and advanced indexing.
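A quick sketch of both, assuming NumPy is installed (the array values are just for illustration):

import numpy as np

a = np.arange(12).reshape(3, 4)

# Broadcasting: the shape-(4,) column means are stretched across all
# 3 rows of `a` without any copying or explicit loop.
centered = a - a.mean(axis=0)

# Advanced indexing: integer and boolean index arrays pick arbitrary
# elements in a single expression.
picked = a[[0, 2], [1, 3]]   # the elements a[0, 1] and a[2, 3]
evens = a[a % 2 == 0]        # all even entries, as a flat array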
NumPy itself contains only C, though some SciPy routines wrap Fortran code. Yes, NumPy drew heavy inspiration from Fortran, but it's hardly a wrapper.
Elementwise arithmetic is coded in C (and not very fast...). Linear algebra goes through the BLAS and LAPACK interfaces, of which only LAPACK is usually written in Fortran these days... but anyway, a Fortran matrix is a transposed C matrix and vice versa, and these interfaces all take a transpose flag as an argument, so there is no need for reordering. And if you do np.dot(A.T, A) there is no reordering either, just a change of flags passed to DGEMM.
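You can check the no-copy claim yourself; a.T is just a view with flipped flags (a minimal sketch, nothing version-specific):

import numpy as np

a = np.ones((3, 3), order='C')
at = a.T                          # no data movement, only strides change

print(a.flags['C_CONTIGUOUS'])    # True
print(at.flags['F_CONTIGUOUS'])   # True
print(np.shares_memory(a, at))    # True: same buffer

# So np.dot(a.T, a) copies nothing either; the underlying GEMM call
# simply receives a different transpose flag.
g = np.dot(a.T, a)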
I'm guessing LAPACK might not be too dependent on order though, but I haven't studied its performance, tbh.
But it is not really about the right vs. the wrong order; it is about using an algorithm that tiles the data efficiently. Google "Anatomy of High-Performance Matrix Multiplication" for the approach behind OpenBLAS.
This is where NumPy falls short -- "a.T + a" will be very slow no matter what you do, but it can be done efficiently with tiling.
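A sketch of the tiling idea in pure NumPy (a hypothetical helper; assumes a square array, and `block` would be tuned to the cache):

import numpy as np

def transpose_add_tiled(a, block=256):
    # Compute a.T + a one block x block tile at a time, so both the
    # row-wise and the column-wise operand are read in cache-sized
    # chunks instead of striding across the whole matrix.
    n = a.shape[0]
    out = np.empty_like(a)
    for i in range(0, n, block):
        for j in range(0, n, block):
            out[i:i+block, j:j+block] = (
                a[j:j+block, i:i+block].T + a[i:i+block, j:j+block]
            )
    return out

A real BLAS-style implementation would do this in C with tiles sized to the cache hierarchy, but even this Python loop shows the access pattern.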
For operations like adding two arrays or summing an array along an axis, this means that NumPy automatically iterates in the fastest order.
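A quick way to convince yourself (timings are machine-dependent, of course):

import timeit
import numpy as np

a = np.zeros((4000, 4000))     # C (row-major) order
f = np.asfortranarray(a)       # same values, Fortran (column-major) order

# The ufunc iterator walks each array in its own memory order, so
# both additions should take essentially the same time.
print(timeit.timeit(lambda: a + a, number=10))
print(timeit.timeit(lambda: f + f, number=10))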
edit: Cython was always great for Python 2 and apparently supports 3. http://cython.org
Python 3 is the platform I use for almost everything; going back to 2.7 is always a major pain. Ten minutes into some 2.7 work you have to deal with encoding, and you start wondering if it would be quicker to just upgrade to 3.5.
Of course, if you mainly do numbers, there's perhaps little to gain from Python 3.
So, nothing “hard”, really, you just have to do the work again (and prepare to support two versions for a long time, instead of virtually only one).
For someone to whom python is just a fancy pipe-wrench, why keep up with the new shiny?
For web stuff you have asyncio, and for hackers there are the venv improvements, but what is there for scientific users who have work to get done?
You don't need to freeze the language, but if the alternative is breaking backwards compatibility after all that time...
So far I have not heard anything like a real justification for why Py2 is still better.
I think it needs to be the other way around. If you want me to move from something that works to something that you want me to move to, the onus is on you to show what the new thing does better (or would likely do better in the future). For scientific loads the Python3 story is mostly "not worth the trouble".
We have PyPy, a JIT for Python that emits assembly at runtime, though its library support is not as good as CPython's.
The complexity class of this problem without type annotations, namely completely transforming an arbitrary, dynamic Python program into a static, partially inferred typed language (C++), is O(N): https://en.m.wikipedia.org/wiki/Hindley–Milner_type_system
The transformations are also not guaranteed to improve runtime performance; they might even do the opposite, or have side effects on precision, producing a program that is slower because of the generalizations the algorithm has to make.
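To illustrate why (my own toy example, not from any particular compiler): without annotations, even a trivial function has no single static type.

def widen(x):
    # The result type depends on the runtime value of x, so a
    # Python-to-C++ translator must either reject this function or
    # fall back to a boxed "anything" representation.
    if x > 1000:
        return float(x)   # sometimes a float...
    return [x]            # ...sometimes a list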
With type annotations... why, just why? Use C++ or C if you want types and the performance gains thereof. The potential for bugs and failures is lower than relying on an algorithm, if you realize that the fear and laziness around compiled languages is unfounded.
def dprod(l0: list[int], l1: list[int]):
    return sum(x * y for x, y in zip(l0, l1))
Wouldn't it be much nicer?
To the number crunchers, Python 3.* offers no compelling reason to move (except for threats: "you are on your own now, 2.7.* will not get updates"). I see two compelling advantages that 3.* offers: (i) better abstractions for asynchrony, and (ii) Unicode strings (which exact a cost even if you don't need them). If you don't need these two, I see no technical reason to move (political/social reasons yes, technical not really).
PR #1970: Optimize the power operator with a static exponent.
PR #1961: Support printing constant strings.
PR #1927: Support np.linalg.lstsq.
My point being that the developers have to go about re-implementing (in LLVM IR?) features that are already very mature in C libraries (e.g. numpy) and, similarly, in C++ libraries.
Could it be that one gain of going from Python to C++ will be that one could take advantage of all the mature libraries that C++ has access to, without having to do this re-inventing of the wheel?
I am not sure, and regardless, I'll be using numba for a long time to come, because... I like the devs, they work hard on it, and numba works like a charm for what I need to do, so I'm not looking for overkill.
Numba aside, though, it's strange to me that I see no mention of Cython on their project page. They ought to at least acknowledge that other fairly mature products exist in this space.
Also, maybe it's more of a curiosity for you than a practical tool, but Stan (which is becoming the de-facto standard DSL for Bayesian statistics) compiles to C++, where it takes advantage of libraries like Eigen and Boost.
We are currently building smart filters into it as well, so that we can do byte-code optimizations.
but utterly useless:
A Python integer would be translated into a C++ int. Where Python automatically starts using a wider type, this one would just wrap around.
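An easy way to see the hazard, using np.int32 as a stand-in for the C++ int such a translator would emit (illustrative only):

import numpy as np

x = 2**31 - 1
print(x + 1)              # 2147483648: Python transparently widens

y = np.int32(2**31 - 1)
print(y + np.int32(1))    # -2147483648: fixed-width wrap (NumPy warns)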
So if you write a Python program that is compatible with Pythran, then Pythran will almost certainly be much faster than Nuitka. If, however, you take a random Python program, it will almost certainly compile with Nuitka and almost certainly not compile with Pythran.
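For the curious, the Pythran-compatible style looks roughly like this (a sketch reusing the dprod example from above; the export comment is how Pythran gets its types):

#pythran export dprod(int list, int list)
def dprod(l0, l1):
    # Plain Python in the subset Pythran accepts; the export line
    # above supplies the concrete types needed to emit C++.
    return sum(x * y for x, y in zip(l0, l1))

# The same file still runs unmodified under the regular interpreter:
print(dprod([1, 2, 3], [4, 5, 6]))   # 32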