This was a few years ago, but here's what I roughly remember. First were the obvious ones: 0-based indexing, non-inclusive ranges, and row-major array ordering.
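To make those concrete (quick sketch, from memory):

    import numpy as np

    a = np.arange(10)               # 0, 1, ..., 9
    a[0]                            # 0 -- indexing starts at 0
    a[2:5]                          # array([2, 3, 4]) -- the upper bound 5 is excluded
    A = np.arange(6).reshape(2, 3)
    A.ravel()                       # array([0, 1, 2, 3, 4, 5]) -- rows are contiguous (row-major / C order)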
Then there are a few issues once you get past trivial indexing of 1D arrays like a[3:]. For matrices, numpy's semantics say that a 2D array behaves like a 1D array of 1D arrays. So if A is a 2D array, then A[3] is the 4th row of the array. Not great.
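Something like:

    import numpy as np

    A = np.arange(20).reshape(4, 5)   # a 4x5 matrix
    A[3]        # array([15, 16, 17, 18, 19]) -- the 4th (last) row, not the 4th element
    A[3, :]     # same thing, written out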
Another is that, given a 2D array A and index vectors r and s, A[r,s] returns a 1D array, while Matlab's A(r,s) returns a 2D submatrix: completely different semantics.
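Roughly (if I remember right, you need np.ix_ to get the Matlab-style submatrix):

    import numpy as np

    A = np.arange(16).reshape(4, 4)
    r = np.array([0, 2])
    s = np.array([1, 3])

    A[r, s]           # array([ 1, 11]) -- the element pairs (0,1) and (2,3), a 1-D result
    A[np.ix_(r, s)]   # the 2x2 submatrix [[1, 3], [9, 11]], i.e. what A(r,s) means in Matlab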
Another is negative indices, which wrap around to the end of the array instead of raising an error.
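E.g.:

    import numpy as np

    a = np.arange(5)   # array([0, 1, 2, 3, 4])
    a[-1]              # 4 -- counts from the end instead of complaining
    a[:-2]             # array([0, 1, 2]) -- silently drops the last two elements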
Then you have things like A[(1,2)] being different from A[[1,2]], and the whole zoo of special index objects like Ellipsis and np.newaxis, and their combinations like (Ellipsis, 1, np.newaxis).
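A small taste of that zoo:

    import numpy as np

    A = np.arange(12).reshape(3, 4)

    A[(1, 2)]              # 6 -- a tuple means a multi-dimensional index, same as A[1, 2]
    A[[1, 2]]              # rows 1 and 2, shape (2, 4) -- a list means fancy indexing
    A[..., 1]              # column 1, shape (3,) -- Ellipsis fills in the remaining axes
    A[..., 1, np.newaxis]  # the same column kept as a (3, 1) column vector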
Another is that basic slicing returns a view and not a copy (while fancy indexing returns a copy), so you have to keep track of which is which.
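Which leads to surprises like:

    import numpy as np

    A = np.arange(6).reshape(2, 3)
    B = A[0, :]      # basic slicing: B is a view into A's memory
    B[0] = 99
    A[0, 0]          # 99 -- writing through the slice changed the original

    C = A[[0], :]    # fancy indexing: C is a copy
    C[0, 0] = -1
    A[0, 0]          # still 99 -- the original is untouched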
This is just indexing. There are tons of other issues with numpy, like the multiply operator, the seemingly random way functionality is split between methods (as in A.sum()) and module-level functions (as in np.sum(A)), and other nonsense.
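For instance:

    import numpy as np

    A = np.ones((2, 2))
    B = np.full((2, 2), 3.0)

    A * B         # element-wise product, NOT matrix multiplication
    A @ B         # matrix multiplication (same as np.matmul or np.dot here)

    A.sum()       # method form...
    np.sum(A)     # ...and function form of the same reduction
    np.unique(A)  # but np.unique exists only as a function; there is no A.unique()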
Numpy is basically a library stuck on top of Python with Python not being aware of numpy or multi-dimensional arrays at all. It's ridiculous that numerical and scientific programming has devolved into working with numpy.
> Numpy is basically a library stuck on top of Python with Python not being aware of numpy or multi-dimensional arrays at all. It's ridiculous that numerical and scientific programming has devolved into working with numpy.
This! So much this!
Many people have lived through different phases of numpy appreciation. First, when it appeared, it was amazing to be able to access huge arrays from within a script. Then when it evolved, there was a slight suspicion that things were going a bit out of hand. Later it became a caricature of itself when it began to replace matlab. Today, we are in the tragic state where many young people think that "the only possible way to multiply two matrices on a computer is by using numpy.dot".