I posted this response in LTU, but it bears repeating.
Orthogonality (or representational independence), not size, is the relevant criterion. Harmonic analysis is the appropriate comparison. While every complete basis can represent any function in the relevant function space, it is better at representing some than others, notably the functions that are a finite linear combination of the basis functions. So trigonometric series represent one kind of function very efficiently, wavelets another. It's similar with programming languages.
Of course the orthogonal core may be mostly of interest to the language implementer, as orthogonality should, in the most faithful sense of the word, make the implementation of one construct independent of the implementation of another, particularly in optimization.
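To make the basis-efficiency point concrete, here is a minimal numpy sketch (the signal choices, the threshold, and the helper name are my own, not anything from the thread): a pure sine is a finite linear combination of the trigonometric basis, so its Fourier spectrum is sparse, while a square wave needs many terms.

    import numpy as np

    n = 1024
    t = np.linspace(0, 1, n, endpoint=False)

    sine = np.sin(2 * np.pi * 5 * t + 0.1)  # a single trigonometric basis function
    square = np.sign(sine)                  # discontinuous: needs many terms

    def significant_terms(signal, threshold=1e-3):
        # Count Fourier coefficients carrying non-negligible weight.
        coeffs = np.fft.rfft(signal) / n
        return int(np.sum(np.abs(coeffs) > threshold))

    print(significant_terms(sine))    # 1: the basis matches the function exactly
    print(significant_terms(square))  # dozens: coefficients decay only like 1/n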
The "orthogonal core" is not only of interest to the language implementer, it is what determines the suitability of the language for the particular task. To use your analogy of harmonic analysis, a complete basis is not only useful as a method of representation, but also a method by which to ease the process of calculation.
Regardless, I think the post is postulating that a large, overcomplete "basis" is the new beauty du jour.
Languages should be small, libraries should be big.
The standard library should be comprehensive and well documented so that it's easy to (1) decide if there's a package that does what you want, and (2) work out how to use it. Simply documenting all the functions/methods on an API is not enough; documentation should start by saying what problems the module is trying to solve, then give some worked examples. If the worked examples don't give a bloody good idea of how to use it, then probably either the package is badly designed or the documentation badly written.
There are libraries in Python that I have given up on using simply because I've not been able to figure out how to use them.
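For what it's worth, here is a sketch of what that problem-first documentation style can look like in Python. The module and its API are invented purely for illustration:

    """retryutil (hypothetical module): retry flaky callables with backoff.

    Problem this module solves
    --------------------------
    Network calls fail transiently, and ad-hoc try/except retry loops are
    repetitive and easy to get wrong. This module wraps a callable so that
    transient failures are retried with exponential backoff.

    Worked example
    --------------
    >>> fetch = retry(times=3, delay=0.01)(lambda: "ok")
    >>> fetch()
    'ok'
    """
    import functools
    import time

    def retry(times=3, delay=0.5):
        """Return a decorator retrying `fn` up to `times` times."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                for attempt in range(times):
                    try:
                        return fn(*args, **kwargs)
                    except Exception:
                        if attempt == times - 1:
                            raise
                        time.sleep(delay * 2 ** attempt)
            return wrapper
        return decorator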
It is not very clear, in a conceptual sense, what is "language" and what is "library". For example, you can build Common Lisp from a handful of special forms, but the ANSI standard is very large, because it wanted to standardize a lot of things that could be built in many different ways (e.g. the loop facility).
Basically, the difference between "language" and "library" is normative. What you put in "language" are requirements for conforming implementations. It is a political decision more than a technical one.
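The loop point generalizes beyond Lisp. As a hedged Python stand-in (the function names are invented), here are two interchangeable constructions of an enumerate-like facility from different primitives, which is exactly why a standard pins down behavior rather than construction:

    import itertools

    def enumerate_a(xs, start=0):
        # Built from zip and an infinite counter.
        return zip(itertools.count(start), xs)

    def enumerate_b(xs, start=0):
        # Built from a plain loop and yield.
        i = start
        for x in xs:
            yield (i, x)
            i += 1

    print(list(enumerate_a("abc")))  # [(0, 'a'), (1, 'b'), (2, 'c')]
    print(list(enumerate_b("abc")))  # identical observable behavior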
> What you put in "language" are requirements for conforming implementations.
That's a language definition, not a language. Consider a language L which is implemented in implementation language M.
Stuff that's written in L is part of the standard library. Stuff that could have been written in L but is written in M (e.g. for speed) is part of the language. And stuff that can only be written in M is, of course, also part of the language.
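Taking CPython as a concrete case (L = Python, M = C; the function name below is invented), the builtin map is written in C for speed but could have been written in Python, which puts it in the "language" bucket by this taxonomy, while a pure-Python module like textwrap sits in the standard library:

    # A pure-Python stand-in for the C-implemented builtin map.
    def my_map(fn, iterable):
        for item in iterable:
            yield fn(item)

    print(list(my_map(str.upper, ["a", "b"])))  # ['A', 'B'], same as map()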
I like the idea of many small special-purpose languages. In that view, a language and an API (to some specific library) have the same goal: expose some functionality to the programmer. The nice thing about a specific language is that it can be tailored, syntactically, to the purpose of the 'library'. It's a bit more work to wrap that functionality in a language instead of just an API, but I believe the amount of learning required would tip the balance in favor of the language approach.
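As a sketch of that trade-off (all names invented): the same filtering functionality exposed once as a plain API call and once as a tiny embedded "language" whose syntax is tailored to the domain:

    # Plain API: functionality behind an ordinary function call.
    def filter_ge(rows, field, value):
        return [r for r in rows if r[field] >= value]

    # Tiny embedded DSL: Field("age") >= 18 reads like the domain itself.
    class Field:
        def __init__(self, name):
            self.name = name
        def __ge__(self, value):
            return lambda row: row[self.name] >= value

    rows = [{"age": 17}, {"age": 21}]
    age = Field("age")
    print(filter_ge(rows, "age", 18))           # [{'age': 21}]
    print([r for r in rows if (age >= 18)(r)])  # same result, DSL syntax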
Small is nice until you've learned all the nooks and crannies of the language and start craving the ability to do more with it. Then more gets added until it isn't small anymore.
I beg to differ. I want to completely understand the tools I use. Exploring a new language may be a fun process, but having an easy understanding of the complete language specification allows you to adequately frame a problem in the context of that language.
Having said that, I do use big tools in my day job.
Functional. Functions. And the only important "library" is a small and beautiful compiler, invoked with a built-in function called compile that you can change, swap out, and completely remove, of course. Anything else seems like big globs of crufty glue.
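Python offers a loose analogue of that picture, though not swappable in the deep sense the parent means: compile is an ordinary builtin that turns source text into code objects, and it can be wrapped or shadowed (the wrapper name below is invented):

    import builtins

    # compile() is just a builtin turning source text into a code object.
    exec(compile("print(6 * 7)", "<string>", "exec"))  # prints 42

    # "Swapping out" the compiler, crudely, by wrapping the builtin.
    def chatty_compile(src, filename, mode):
        print(f"compiling {filename!r}")
        return builtins.compile(src, filename, mode)

    exec(chatty_compile("print('hello')", "<string>", "exec"))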
I remember taking a look at the list of operators commonly used in Haskell and musing at how long it is. The darling child of the functional community is far more complex than its fans think it is.
Not that that's stopping me from hacking in Haskell.