But I think a lot of the value of a large standard library is that it makes it possible to write more programs without needing that first third-party dependency.
This is particularly good if you're using Python as a piece of glue inside something that isn't principally a Python project. It's easy to imagine a Python script doing a little bit of code generation in the build system of some larger project, where it needs to parse an XML file.
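To make that concrete, here's a minimal sketch of such a glue script using only the stdlib's xml.etree.ElementTree; the file name and element names are made up for illustration:

```python
# Stdlib-only build glue: parse an XML config and emit a generated header.
# "build_config.xml" and the <constant> elements are hypothetical.
import xml.etree.ElementTree as ET

tree = ET.parse("build_config.xml")   # no pip install required
for const in tree.getroot().iter("constant"):
    # trivial code generation: one #define per <constant name=... value=...>
    print(f"#define {const.get('name')} {const.get('value')}")
```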
When you have no third-party dependencies, adding the first one means picking amongst trade-offs, and lots of work. A subset of the choices: using virtualenv, using pip, using higher-layer tools, vendoring the code into the project, using a Python distribution that bundles it, writing code to avoid needing the first dependency ...
* You have to document, both for humans and for the computer, which of the approaches is being used
* Compiled extensions are a pain
* You have to consider multiple platforms and operating systems
* You have to consider Python version compatibility (e.g. the third-party package could support fewer Python versions than the current code base)
* And the version compatibility of the tools used to reference the dependency
* And a way of checking license compatibility
* The dependency may use different test, documentation, type-checking, etc. tools, so those may have to be added to the project workflow too
* It makes things harder for collaborators, since there is more complexity than "install Python and you are done"
I stand by my claim that the first paragraph (adding another dependency) is way less work than the rest, which is adding the very first one.
pip-tools: https://github.com/jazzband/pip-tools
pipenv: https://docs.pipenv.org/en/latest/
tox: https://tox.readthedocs.io/en/latest/
I do think there's a lot of low-hanging fruit here: Python could bake in a way to auto-set-up a virtualenv for a script entrypoint, have the developer just list the top-level dependencies, and keep the frozen dependency list under version control as well (and if the virtualenv and the frozen, version-controlled dependency list disagree, rebuild the virtualenv).
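A rough sketch of what that could look like, done today in userspace rather than baked into Python; the names (DEPS, requirements.lock, .venv) are invented for illustration:

```python
# Hypothetical bootstrap shim: the script lists its top-level dependencies,
# and the shim (re)builds a virtualenv whenever the declared list disagrees
# with the stamp recorded alongside the frozen, version-controlled lockfile.
import hashlib
import subprocess
import sys
from pathlib import Path

DEPS = ["requests>=2.0"]          # top-level dependencies the developer lists
VENV = Path(".venv")
LOCK = Path("requirements.lock")  # frozen list, kept in version control

def ensure_venv():
    stamp = VENV / "deps.sha256"
    digest = hashlib.sha256("\n".join(DEPS).encode()).hexdigest()
    if not stamp.exists() or stamp.read_text() != digest:
        subprocess.run([sys.executable, "-m", "venv", str(VENV)], check=True)
        pip = str(VENV / "bin" / "pip")  # Scripts\pip.exe on Windows
        subprocess.run([pip, "install", *DEPS], check=True)
        freeze = subprocess.run([pip, "freeze"], check=True,
                                capture_output=True, text=True)
        LOCK.write_text(freeze.stdout)   # frozen list to commit
        stamp.write_text(digest)

ensure_venv()
```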
I used it to distribute dependencies to Yarn workers for PySpark applications and it worked flawlessly, even with crazy dependencies like TensorFlow. I'm a really big fan of the project; it's well done.
It's also a nice API for dealing with XML.
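Assuming this refers to the stdlib's xml.etree.ElementTree, a quick taste of the API (the feed structure here is made up):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<feed>
  <item date="2019-06-01"><title>first</title></item>
  <item date="2019-06-02"><title>second</title></item>
</feed>
""")
for item in doc.findall("item"):          # XPath-style child lookup
    print(item.get("date"), item.findtext("title"))
```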
I had enough trouble using it efficiently that I went and wrapped Boost.PropertyTree, and now I can happily churn out all sorts of data queries (including calling into Python for the sorting function from the C++ lib) in almost no time.
I was taking daily(ish) updates of an RSS feed and appending them to a master RSS file, but sorting was pretty slow using list comprehensions, so now I convert the feed automagically to JSON and append it as is. No more list comprehensions either: I just hand it a lambda and it outputs a sorted C++ iterator.
Though I probably should've just thrown the data into a database and learned SQL like a normal person...
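For reference, the pure-stdlib version of the sort being described is only a few lines (this is the approach that reportedly proved slow on a large feed); the tag names are just the conventional RSS ones:

```python
# Parse an RSS file and sort its <item>s by publication date with a lambda
# key; email.utils handles the RFC 2822 dates RSS uses. Assumes every item
# has a pubDate, since this is only a sketch.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

channel = ET.parse("master.rss").getroot().find("channel")
items = channel.findall("item")
items.sort(key=lambda i: parsedate_to_datetime(i.findtext("pubDate")))
for item in items:
    print(item.findtext("pubDate"), item.findtext("title"))
```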
If only that were the norm amongst long-tail Python users. Heck, I don't do it; I have the Anaconda distribution installed on Windows, and when I need to do a bit of data analysis I just hope I have the correct versions of the packages installed.
Making this core to the Python workflow (bundling virtualenv? updating all the docs to say "set up a virtualenv first"?) is the first required change, before thinking about unbundling the stdlib.
And it is a totally miserable experience on Windows, every single time.
But I do wonder whether it needs to grow. Python is at a stage where adoption of new standard library features is inherently slow, not only in third-party libraries like Twisted, but also in applications, even when they use newer Python versions.
What kills Python here is that it is most commonly bundled with the Linux distribution or the OS (true also on the Mac). This drastically slows the upgrade cycle. Compare this to newer language platforms, whose users happily install newer versions on older platforms quite regularly.
Some recent additions would be fine at an early stage of a language's development, but surely not for Python.
It doesn't kill it; quite the contrary, it makes it ubiquitous. If you want another version, you just install virtualenv. It's the same with Perl: we use the Perl version shipped with the distribution (openSUSE) and deploy to that. It's older, but it's stable and it works. On our dev environment (Mac) we have the same version, with all of the modules installed in plenv. We also chose a framework with as few dependencies as possible (Mojolicious). It looks like it was a great choice.
So all you need is one dep that needs Guava version X, with a method M that is removed in version X+2 (say), and another dep that needs something newly introduced in version X+2, and you have a Guava version conflict. That is, Guava releases are not backwards compatible, due to the removal of classes and methods.
You can sometimes fix this with a technology like the Maven Shade plugin or OSGi to allow private copies, but it does not always work.