> Anyway I agree with most of what you say, EXCEPT I think Perl's focus on text vs. Python's more general purpose focus can be seen from the creators' very early release announcements!
Oh, I agree with that part, too; Perl's growth into a general-purpose language was very uncomfortable and surprising. I just think they were about equally terrible at linear algebra to begin with.
What would make a language good at linear algebra? I think you'd want, as you say, efficient homogeneous vectors, and also multidimensional arrays (or at least two-dimensional), non-copying array slicing, different precisions of floating-point numbers, comma-free numerical vector syntax (maybe even delimiter-free, like APL), division by zero that produces NaNs instead of halting execution, control over rounding modes, arguably 1-based indexing, plotting, and infix operators that either natively have linear-algebra semantics or are abundant and overridable enough to be given them. Python didn't have any of those built in, and a lot of them can't be added with pure-Python code.
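As a concrete illustration of a few of the items on that list, here's a hypothetical sketch of how NumPy eventually supplied them on top of Python; nothing here is claimed about early Python, which had none of this:

```python
import numpy as np

# Homogeneous 2-D array with an explicitly chosen floating-point precision.
a = np.arange(12, dtype=np.float32).reshape(3, 4)

# Non-copying slicing: row is a view onto a's buffer, not a copy.
row = a[1]
row[0] = 99.0
assert a[1, 0] == 99.0  # mutating the view mutated the original

# 0/0 yields NaN instead of halting execution (warning suppressed here).
with np.errstate(divide="ignore", invalid="ignore"):
    q = np.zeros(3) / np.zeros(3)
assert np.isnan(q).all()

# An infix operator with native linear-algebra semantics: matrix product.
b = a @ a.T
assert b.shape == (3, 3)
```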
You'd also want flexible indexing syntax (that either does the right linear-algebra thing by default or can be overridden to do so), complex numbers, infix syntax for exponentiation, and a full math library (with things like erf, gamma, log1p, arcsinh, Chebyshev coefficients, and Bessel functions, not just log, exp, sin, cos, tan, atan2, and the like). Python 0.9.1 evidently didn't have any of those (you can do x[2:] or x[:5], but even x[2, 5] is a syntax error), but most of them were added pretty early, though its standard math library is still a bit anemic. Like Perl, though, Python had arrays and floating-point support (arithmetic, reading, printing, formatting, serializing) from its first version; unlike Perl's arrays before Perl 5, its arrays were nestable. (Perl 5, in 01994, also added a slightly simplified version of Python's module and class systems to Perl. I forget if "use overload" was already in there, but it seems to be documented in the 01996 edition of the Camel Book, so I guess it was in Perl 5 from pretty early versions.)
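For the record, modern Python did grow most of these: complex numbers and infix exponentiation are built in, erf and gamma landed in the stdlib math module (in 3.2), and x[2, 5] is now legal syntax (it indexes with the tuple (2, 5), which is exactly what NumPy overrides for multidimensional indexing). A quick sketch:

```python
import math

# Complex numbers and infix exponentiation, both built in now.
z = (1 + 2j) ** 2
assert z == -3 + 4j

# Beyond log/exp/sin: log1p, erf, and gamma are in the stdlib today.
assert math.isclose(math.log1p(1e-10), 1e-10)
assert math.erf(0.0) == 0.0
assert math.gamma(5) == 24.0

# x[2, 5] is legal syntax now; it indexes with the tuple (2, 5),
# which is the hook NumPy overrides for multidimensional arrays.
d = {(2, 5): "hello"}
assert d[2, 5] == "hello"
```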
Numeric and NumPy added most of these things to Python, and IPython, Matplotlib, and SciPy added most of the others. Adding them to Perl 5 would have been about the same amount of work and would have worked about as well, but the people who were doing the work chose to do it in Python instead. It isn't the choice I would have made at the time, but I'm glad they had better technical judgment than I did.
Nowadays, for a language to be good at linear algebra, you'd probably also want automatic differentiation, JIT compilation, efficient manycore parallelization, GPGPU support, and some kind of support for Observablehq-style reactivity. Julia fulfills most of these requirements, but they're hard to retrofit onto CPython.
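To make the automatic-differentiation requirement concrete, here's a minimal forward-mode sketch using dual numbers in pure Python; real systems (JAX, Julia's Zygote) are far more elaborate, but the core trick is just this:

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    """A dual number carrying a value and its derivative."""
    val: float  # f(x)
    dot: float  # f'(x)

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o, 0.0)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):  # product rule
        o = o if isinstance(o, Dual) else Dual(o, 0.0)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def sin(x):  # chain rule for a primitive
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x):
    return f(Dual(x, 1.0)).dot

# d/dx [x * sin(x)] = sin(x) + x*cos(x)
x = 1.5
got = derivative(lambda t: t * sin(t), x)
want = math.sin(x) + x * math.cos(x)
assert abs(got - want) < 1e-12
```

Overloaded infix operators (the requirement from earlier in the thread) are exactly what makes this retrofittable at all.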
A shell is sort of an "orchestration language", in the sense that a shell script tells how to coordinate fairly large-grained chunks of computation to achieve some desired effect. We've seen an explosion of such things in the last ten or fifteen years: Dockerfiles, Vagrant, Puppet, Chef, Apache Spark, Terraform, Nix, Ansible, etc. Most of these are pretty limited, so there's a lot of duplication of functionality between them. And most of them don't really incorporate failure handling explicitly, but failures are unavoidable in the kinds of large computations that most need such orchestration. I wonder if this situation is optimal.
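What explicit failure handling looks like when you bolt it onto a shell yourself might be a bounded retry loop like this sketch; `flaky` is a hypothetical stand-in for any large-grained step, here simulated as failing twice before succeeding:

```shell
#!/bin/sh
set -eu

# Retry a command up to $1 times; report failure only after the last attempt.
retry() {
    tries=$1; shift
    n=1
    until "$@"; do
        if [ "$n" -ge "$tries" ]; then
            echo "step failed after $tries attempts: $*" >&2
            return 1
        fi
        n=$((n + 1))    # a real script would also back off here
    done
}

# Simulated flaky step: fails on the first two calls, succeeds on the third.
state=$(mktemp)
echo 0 > "$state"
flaky() {
    count=$(cat "$state")
    count=$((count + 1))
    echo "$count" > "$state"
    [ "$count" -ge 3 ]
}

retry 5 flaky
echo "pipeline step succeeded"
rm -f "$state"
```

The point isn't that this is hard to write, but that every one of the orchestration tools above reinvents some ad-hoc version of it rather than sharing one.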