A lot more. Maybe they could be made similarly concise with more libraries, but the fact is that APL/J have the array as their native unit, and every function in the example comes straight from the core language. NumPy and Pandas are the progeny of APL/J. GPUs turned out to be the "vector/matrix processing" machines that APL and J had been waiting on for decades: a perfect match. I study APL/J because they are like math. You learn the symbols, and then you can abstract your thoughts into a program just as succinctly as you can convey them in mathematical notation. The interpreter is not fast, but it lets you learn and experiment quickly before optimizing. Interpreted APL code can run as fast as C, but in this paper they stay away from anything that is not vanilla APL. I have been reading books on neural networks since the late 80s, yet I never fully owned or grasped some concepts until I saw them expressed succinctly, rather than buried in pages of a book or hundreds/thousands of lines of code. To me it is yoga for the mind!
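
To give a flavor of what I mean (my own toy example, not from the paper): in Dyalog APL a sigmoid over an entire array is one expression, because every primitive already maps over arrays:

    sigmoid ← {÷1+*-⍵}      ⍝ 1÷(1+e^-⍵): * is exp, monadic ÷ is reciprocal, all elementwise
    sigmoid 0 1 ¯1          ⍝ 0.5 0.7310585786 0.2689414214
    sigmoid 2 2⍴0 1 ¯1 2    ⍝ the same definition works unchanged on a 2×2 matrix

The same line handles a scalar, a vector of activations, or a whole batch, which is exactly the "array as the native unit" point.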