
“Python's batteries are leaking” - narimiran
http://pyfound.blogspot.com/2019/05/amber-brown-batteries-included-but.html
======
dragonwriter
Note that similar issues were raised with Ruby's stdlib, which is being
addressed in part with the “Gemification” of stdlib: all of stdlib is being
moved out to externally-updatable packages that are included by default
(default and bundled gems), targeted for 3.0 though the work has been going
on since 2.4 [0]. It is still “batteries included”, but the batteries are at
least replaceable.

Amber's suggestion seems to be in the same direction (though perhaps not as
extreme.)

[0] [https://www.slideshare.net/mobile/hsbt/gemification-for-ruby-2530](https://www.slideshare.net/mobile/hsbt/gemification-for-ruby-2530)

~~~
bsder
Python cannot be atomized effectively, and the issue is political.

The problem is that I cannot count on being able to install new software in
many environments.

If I fight the battle to get centralized IT to install Python, I now have a
guaranteed set of standard libraries as well. I'm _never_ going to get
permission to install anything other than default. Ever.

Consequently, the standard libraries need to be very complete and very useful.

And, while people seem to love the Rust approach to libraries, I'm not
necessarily a fan. _Far_ too many times I have pulled a library that is
"obviously" something that a language should consider to be "standard library"
and gotten bitten because it was broken. Only _VERY_ core libraries in Rust
are guaranteed to work across multiple architectures and OS's.

I think Rust is probably doing the right thing _for Rust_ as "batteries
included" is _NOT_ one of its tenets. However, that doesn't make it right for
everybody else.

~~~
hn_throwaway_99
> If I fight the battle to get centralized IT to install Python, I now have a
> guaranteed set of standard libraries as well. I'm never going to get
> permission to install anything other than default. Ever.

Can you explain this more? What kind of place do you work at? I've had some
experience with large, bureaucratic companies, but never anything as far as
"you can't install any other libraries."

~~~
postgeographic
Not who you asked, but I work for a large international company, a Big 4
professional services firm. I wanted to install Anaconda and Jupyter on my
machine (data science is not part of my 'official' job description, but I
wanted to see how much of my data exploration workflow I could speed up or
automate). I had to go up three separate hierarchy ladders to get sign-off:
first my own team, then IT, then our risk and quality team. (Thanks to our
audit practice and a few historical issues with whistleblowers and leaks, the
risk guys are pretty much the final arbiter of... everything.)

After about 9 weeks of emails, meetings, and pitches, I finally got Anaconda
up and running. A week later, I tried to upgrade the 3rd-party packages... no
dice. Blocked by the corporate VPN. I'd need sign-off every time I wanted to
`pip install --upgrade` anything.

Needless to say, I do not bother anymore.

~~~
sebazzz
I also work at a Big 4, perhaps the same one as you, and I assure you, there
is a way. There is always a procedure you can follow; you just need to know
where to look.

We have our own development team, our own servers, our own freedom to deliver
to clients fast without the hassle of the main corporation. How? We talked to
the right people.

~~~
lugg
Big 4?

~~~
javagram
[https://en.m.wikipedia.org/wiki/Big_Four_accounting_firms](https://en.m.wikipedia.org/wiki/Big_Four_accounting_firms)

------
mjw1007
If your project has any third-party dependencies, and so (nowadays) you're
going to set up requirements.txt and virtualenv and whatever anyway, I can see
that you're going to think things like "this XML parser in the standard
library is just getting in the way; I can get a better one from PyPI".

But I think a lot of the value of a large standard library is that it makes it
possible to write more programs without needing that first third-party
dependency.

This is particularly good if you're using Python as a piece of glue inside
something that isn't principally a Python project. It's easy to imagine a
Python script doing a little bit of code generation in the build system of
some larger project that wants to parse an XML file.
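
For instance, a minimal sketch of that kind of glue, using only the stdlib
(the file name and element names here are made up for illustration):

        import xml.etree.ElementTree as ET

        # Parse a hypothetical build manifest and emit Python constants.
        root = ET.parse("messages.xml").getroot()
        for msg in root.iter("message"):
            print("MSG_{} = {!r}".format(msg.get("id"), msg.text))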

~~~
porker
> and so (nowadays) you're going to set up requirements.txt and virtualenv and
> whatever anyway

If only that were the norm amongst long-tail Python users. Heck, I don't do
it; I have the Anaconda distribution installed on Windows, and when I need to
do a bit of data analysis I hope I have the correct versions of packages
installed.

Making this core to the Python workflow (bundling virtualenv? updating all
docs to say "Set up a virtualenv first"?) is the first required change, before
thinking about unbundling the stdlib.
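
For what it's worth, Python 3.3+ already bundles venv in the stdlib; a
minimal sketch of creating one programmatically (the directory name is just
an example):

        import venv

        # Equivalent to `python -m venv .venv` on the command line;
        # with_pip=True bootstraps pip into the new environment.
        venv.create(".venv", with_pip=True)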

~~~
schlenk
I've been developing in Python for 10+ years, and setting up a virtualenv
only happens when I'm developing patches for 3rd-party packages (the
commercial environment I otherwise work in doesn't play well with venvs).

And it is a totally miserable experience on Windows, every single time.

------
avar
She seems to be advocating that Python do pretty much what Perl has ended up
doing, which is "we have some batteries, but we haven't been adding new ones
for a decade or more".

The reasons are similar, it's a constant drag on core compiler development to
need to support various batteries included that most core contributors aren't
going to care about, so it's easier to tell people "use CPAN".

There was even talk of "distros" for the interpreter, where the core bits
would be similar to what Linux is, and all the batteries would be provided as
collections of add-on packages.

Strangely enough these efforts seem to stop at OS distributors. They really
seem to like to install just the one "compiler", and wouldn't stand for a
project like Perl or Python telling them "we mean for you to distribute the
core compiler plus these 100 packages, because that's what forms our
'language'". "Strangely" because you'd think they'd be the best positioned to
make easy work of packaging up such a thing, and it shouldn't in principle
make a difference if you need to install 100 RPMs / APTs by default.

~~~
gcb0
And Perl solved that perfectly: just let the OS/distro handle the hundreds of
packages. And it has been solved, despite your claiming otherwise in your
last paragraph.

When did you last have to use CPAN on a modern system? Compare that to how
many times you've had to use pip.

Now, if you use a crappy OS or distro (or, god forbid, some container built
by you-have-no-idea-who on top of nobody-knows-what) then yeah, you are bound
to do the legwork yourself, but you will be doing that regardless of the
language/subsystem you are trying to use in that case.

Not to mention that it is the _only_ way to do things professionally. For
example, suppose you must have a system that parses XML but, by company
policy, is not allowed to have even the means of performing a network
request. With Python you get both xml and an http library and whatever else
included, so you either have to build a special package with a stripped-down
Python+xml only or get a corporate exception. In other languages you can
install only the XML parser package, and your code will run happily and be
compliant with company policy.

~~~
thaumasiotes
> suppose you must have a system that parses XML but, by company policy, is
> not allowed to have even the means of performing a network request. With
> Python you get both xml and an http library and whatever else included, so
> you either have to build a special package with a stripped-down Python+xml
> only or get a corporate exception. In other languages you can install only
> the XML parser package, and your code will run happily and be compliant
> with company policy.

...wouldn't the company policy involve removing the means of performing a
network request from the _computer_, making the notional capabilities of the
_software_ irrelevant?

Python will let you drive network requests through the OS. It's just that you
wouldn't normally want to.

~~~
lalaland1125
+1 for this. Trying to ban programming languages that support network
connections is both foolish and impossible. Any Turing-complete language that
allows any sort of OS interaction can be used to communicate over a network,
even if you have to do the syscalls manually yourself.
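
To illustrate the point: strip out every HTTP library you like, and a few
lines of subprocess still get there (a sketch; assumes Python 3.7+ and that
curl happens to be on the PATH):

        import subprocess

        # No http library needed: drive the OS's own tooling instead.
        result = subprocess.run(["curl", "-s", "https://example.com"],
                                capture_output=True, text=True)
        print(result.stdout[:200])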

------
stochastastic
The Python standard library has been a huge help for me. Evaluating which
third-party packages to trust and handling updates is a hassle. (Would love a
solution for this. Does anyone have a curated version of PyPI?) I'm surprised
that people want to slim it down, other than for performance on a more
constrained system.

As an aside, why doesn’t the Python standard library extend/replace features
with code from successful packages like Requests? Tried it and it didn’t work?
Too much bloat? Already got too much on the to-do list?

~~~
aasasd
> _Does anyone have a curated version of PyPI?_

PyPI has thrown out the downloads counter, a huge disservice to coders. Like
I've got all day to figure out the best libs for ten different features which
I only need in passing, so my primary concern is to not pick complete
garbage.

So, my solution now is to look up the GitHub pages for the libs and choose
the one with the most stars. As much as I dislike GitHub for its occasional
typical proprietary behavior, GitLab doesn't help in this case.

~~~
j88439h84
PyPI still has a download counter, but it tends to reflect which libraries
are used in CI, so it's a biased estimate.

~~~
jancsika
If only there were a way to filter machines on the internet by some kind of id
number, and to subtract numbers from other numbers at scale.

~~~
orf
And what about caches and proxies?

------
sametmax
Hawk Owl is a _fantastic_ dev. She was the main force behind the Twisted 2->3
transition. But precisely because she is, she is missing the point of
batteries included.

Asyncio is in the stdlib so that we have an official lib and API. The main
benefit is that most people now, when looking for async, are not wondering
about Twisted or gevent or Tornado. Most just go with asyncio. Most dev
efforts go to asyncio. It's the end of the great async war. Is it perfect?
No. And I don't care. It's one thing less to worry about. For those who know
what they are doing, you can still choose and pip install Twisted, but most
people don't, and that's solved. Before that, just choosing the lib was a
nightmare, as it's basically a definitive call. Put it on PyPI, even with a
"stdlib" tag, and we go back to the 200X era. And it was not fun.

And having things like xml/sqlite/ssl without installing anything makes
Python very useful in loads of situations where you can't install stuff.
Sometimes you are offline. Sometimes you are in a restricted env. Sometimes
you are not on your machine. Sometimes your security protocol is hell. Don't
assume people use Python as we do, from our comfortable dev laptops, driven
by the knowledge of our craft. Python is used in banks, by scientists, in
schools, by kids, by poor people in the third world, by geographers and
pentesters. The Python user base is incredibly diverse; that's why it's so
popular: it fits a lot of use cases.

So I see the benefit of having a side version of official modules we can pip
install that can move faster. I see the benefit of cleaning the stdlib of old
stuff, like the wave module, Template or @static.

But I'm glad I don't have anything to install to generate a UUID or unzip
stuff. I'm glad I don't have to worry about Twisted anymore (despite having
written a book on the topic!).
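
Both of those really are zero-install one-liners (the archive name below is
just an example):

        import uuid
        import zipfile

        print(uuid.uuid4())  # random UUID, straight from the stdlib

        with zipfile.ZipFile("archive.zip") as zf:  # hypothetical file
            zf.extractall("unpacked/")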

Also, pip install is NOT simple when you're learning the language. I have to
spend some time in the classroom, even with adult professionals, to explain
the various subtleties of site-packages, import paths, py -x on Windows,
python-pip on Linux, -m, virtualenv, header files, etc. before my students
become autonomous with it. Without a teacher, this turns into months of bad
practices and frustration.

You'd have to fix that first, way, way before moving stuff to PyPI. I do
think it should be a high priority actually: it affects way more than pip.

~~~
1_player
Having a huge standard library also kills analysis paralysis and lets people
be more productive.

If you're in the flow and trying to hack together something, the last thing
you need is to lose all momentum picking a date-time library. I've had this
issue tons of times with Node and Rust, where I'm not up to date with the
current meta and my 30-minute hack job is interrupted 5 minutes in by having
to google which library I should use to do an HTTP request. (I've actually
lost interest in whatever I was doing a few times because of this.)

Python's stdlib is nobody's favourite, but by the time you start to hit its
limits, you're probably past your flow state, you've written most of the
logic, and you can spend some time replacing http.client with requests
because the latter is much better.
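
As a sketch of that trade-off, the stdlib version of a GET is wordier but
perfectly serviceable while you're still in the flow (host and path below are
placeholders):

        import http.client

        # Stdlib-only GET; with requests this whole block is one line.
        conn = http.client.HTTPSConnection("example.com")
        conn.request("GET", "/")
        response = conn.getresponse()
        print(response.status, response.reason)
        conn.close()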

On a tangential note, I've been trying to find another scripting language to
replace Python because I'm not a fan of it anymore (I won't get into it right
now), and considering what I just wrote, there's not much that can replace
it, as most languages have a bare-bones standard library, and if you're not
up to date with the current best library to do X, you'll never achieve great
productivity.

~~~
agumonkey
I think Go did this well: they provide a very solid toolset that is nothing
fancy, but you can forget about it right away and start producing solutions.

~~~
sametmax
Give it 25 years.

------
peterwwillis
It's funny to me that they're making the point that PyPI is better than core,
because I actually think PyPI has created a rather crap ecosystem. The
non-hierarchical organization of packages, the lack of curation, the lack of
inheriting past functionality and extending it as more standard
functionality, etc. have resulted in a confusing sprawl of packages with
duplicate, incompatible, buggy functionality. It's a bit like Linux
internals; it's grown haggard over time, isn't organized well, is badly
documented, and so it's difficult to pick it up and use it without stumbling
over a decade or more of stale documentation and obsolete software.

Perl has a _much_ better set of modules that extend standard functionality,
which, considering how much flak Perl gets for being hard to read, is rather
funny. Rather than every new feature being its own independent project, most
of the useful modules inherit from a parent and follow the same convention,
leading to _very_ simple and easy-to-use extensions. And Perl core isn't all
that great, but it does have _some_ batteries included, and everything else
is extended easily and in a more standard manner by CPAN.

~~~
j88439h84
What functionality would you like to have on PyPI, in addition to curation?

~~~
peterwwillis
The big "function" I would like is just organizing the packages differently to
get people to think about and use them differently.

Search engines are a "cool" technology that have become the de facto way to
find what you're looking for. But if there's a lot of content related to what
you're looking for, they can suck.

Go to PyPI and search for "semantic version". _10,000+ projects for "semantic
version" found_. As you go through page after page of different modules
related to versioning, the one module you _won't_ find immediately is Versio
([https://pypi.org/project/Versio/](https://pypi.org/project/Versio/)), a
well-documented and useful module which I ended up using. I have no idea how
I found this module, but it certainly wasn't from PyPI's search engine.

Now go to CPAN (really metacpan) and search for "semantic version". Yes,
you're still looking at thousands of results - but wait! There are only two
modules here that look useful: Version::Dotted::Semantic, and SemVer. And the
description comes straight from the docs' README, rather than being a short
uninformative blurb. The first module, Version::Dotted::Semantic, inherits
from a _separate_ module, Version::Dotted, and adds some extra
functionality. Not only does the search page give more information about the
module, but the hierarchy makes it easier to find (and later extend) useful
modules in an intuitive way. Since the base module's functionality is boring,
generic, and simple, it's less likely that people will make 20 different
versions of it, so it'll be reused more often and thus remain stable for a
long time.

A lot of CPAN's module names have sprawled over time and gotten less useful,
but there's still a general convention that you name your module as a
hierarchy of what it does (even if it's kind of verbose) and make small,
reusable modules, rather than giant modules that are hard to extend. Not all
modules measure up to this standard, and there's definitely room to improve,
but I think Python modules could benefit greatly from a system like this.

As far as curation goes, PyPI is often filled with cruft. While searching for
Jenkins packages, you will come across lots of entries like this:
[https://pypi.org/project/jenkins2api/](https://pypi.org/project/jenkins2api/).
The homepage leads to a GitHub 404, it's only ever had one release, and it has
no documentation. This project should probably not have been listed on the
main search page, or at least sorted well down the list by default with
intelligent filters and marked accordingly. (The "date last updated" and
"trending" sorting just results in having virtually no Jenkins-related modules
in the results at all)

------
yingw787
I agree with Amber’s point that more stuff should be moved from the standard
library to PyPI. I made my first pull request to CPython during the
development sprints this year, and it’s honestly not the best experience.
Everything is built from scratch in CI after every commit, even a
documentation change. There’s nowhere near enough CI builds and pipelines for
everything Python supports. Pull requests sit open for several months, and
there were at least a thousand PRs open when I checked this morning.

I’m not sure if Python’s ideal solution is to reduce stdlib and have endorsed
packages in PyPI, but it would be an improvement over the current process.

------
twblalock
The story of Python 2 to Python 3 migration, in a nutshell:

> Van Rossum argued instead that if the Twisted team wants the ecosystem to
> evolve, they should stop supporting older Python versions and force users to
> upgrade. Brown acknowledged this point, but said half of Twisted users are
> still on Python 2 and it is difficult to abandon them. The debate at this
> point became personal for Van Rossum, and he left angrily.

~~~
someguydave
Hopefully the “Python foundation” will declare Python 2 deprecated soon so
that it can be handed over to responsible maintainers.

~~~
Groxx
That's happening: [https://pythonclock.org/](https://pythonclock.org/)

To ensure things move along, pip has been printing highly visible "Python 2.7
will be deprecated soon" warnings for a couple of months now.

~~~
schlenk
And backing out of it when running on PyPy, as that does not deprecate Python
2 compatibility...

~~~
Groxx
Sure. PyPy is a separate implementation; they only control CPython. That's a
pretty normal arrangement: the official implementation moves on, and other
forks might backport fixes for longer or focus on stability or some other
realm of performance or something.

------
resoluteteeth
When I first used Python, like 20 years ago, I was blown away by how much
functionality was built in, and it can be annoying using languages where even
the most basic functionality involves downloading 50 packages from the
internet, but on the other hand the standard library does seem to be a mess
now.

~~~
new4thaccount
Yea. I'm using Rust at the moment for fun and the amount of things not in the
standard library is crazy to me. What? There is no built-in dictionary? What
do I use instead and where is it?

Edit: based on all the replies below, everyone understands the validity of
what I'm trying to say, but they have also fortunately pointed out my
admittedly grievous error of not knowing you can just import HashMap from the
stdlib. The extra step is pretty minimal and not a problem. I'm hoping this
is covered in the Rust book.

~~~
nicoburns
Dictionary in the Python sense? There are two!

[https://doc.rust-lang.org/std/collections/struct.HashMap.html](https://doc.rust-lang.org/std/collections/struct.HashMap.html)
[https://doc.rust-lang.org/std/collections/struct.BTreeMap.html](https://doc.rust-lang.org/std/collections/struct.BTreeMap.html)

~~~
StavrosK
Oof, I wasted a good hour+ trying to convert from one to the other. Maybe
there's an easy way to do this, but not many people on IRC knew (or were
available at the time).

~~~
edflsafoiewq

        use std::collections::BTreeMap;
        use std::iter::FromIterator;
        // collect the HashMap's entries into an ordered BTreeMap
        let btree_map = BTreeMap::from_iter(hash_map.into_iter());

~~~
steveklabnik
Or .into_iter().collect(), I believe. No import needed.

------
nerdwaller
> She thinks that some bugs in the standard library will never be fixed.

This is actually an interesting paradox to be in, and one that Linus Torvalds
recently commented on. His focus, like Guido's, is the user, and even fixing
a bug can break the user.

[https://lkml.org/lkml/2018/8/3/621](https://lkml.org/lkml/2018/8/3/621)

~~~
notatoad
This isn't a paradox. Once it's released, it's not a bug anymore; it's just
behaviour. Document the behaviour, but breaking compatibility with previous
versions is a bug. It doesn't matter how obviously wrong the previous
behaviour is.

~~~
nnq
> but breaking compatibility with previous versions is a bug

That's how you get an inconsistent mess that never evolves. There's something
called _semver_: increase the version number and do the fix / refactors /
radical redesign / whatever. People will see that you've gone from version
1.0 to 87.3 in one year and they may choose not to use your thing because
you're moving too fast for them, but that's life...

~~~
cameronbrown
Better than breaking software. ABI stability matters far more for Linux than
for Python here, though.

~~~
adontz
Linux has no stable ABI :-)

~~~
cameronbrown
For driver developers, sure, but its userspace ABI is very stable.

------
jMyles
Guido is a good dude, through-and-through, despite his perhaps bad behavior
here.

Amber is nothing short of an open source hero, having brought Twisted, one of
the best open source projects in the world, to new heights. Her insights are
as important as anyone's in the Python community, and after six consecutive
PyCons sprinting at the Twisted table (including literally in a chair with
Amber to my left and Glyph to my right earlier this month), I consider
Amber's voice to be one of the truest and clearest among the leadership of
the language into the future.

Amber and Guido are both beautiful human beings.

In the dispute that is the topic of this blog post, Amber is basically
totally right. Moreover, the distinction has less to do with any kind of
nagging Python 2 holdover than this article suggests. The standard lib's role
as a place where code goes to die is a view that is widely held and accurate
in many cases.

The following question went unanswered during the Steering Council Q&A:

"Every feature request has a constituency of people who want it. Is there a
constituency for conservatism and minimalism?"

...and that's really what this whole thing is about.

~~~
alexdong
The problem with rants is that they sting and they divide.

When it comes to constructive criticism, I think Amber did a good job with
the criticism but could do better on the constructive front. Her problem
statement was spot on, and I agree that the direction she proposed is a good
one.

However, separating the standard library from the core is probably even more
dramatic than the Python 2 to 3 migration. Is that something the community
can afford at the moment? What's needed to make the transition? What are the
opportunity costs, i.e. what other development could we do for a bigger
impact? What are the pros and cons?

~~~
tptacek
Is "embrace PyPI and move things like asyncio there" not a constructive
suggestion, or is she sort of being penalized because the most reasonable
solution to the problem can be described in less than half a sentence so it's
seems like there's more complaint than solution?

~~~
alexdong
Yup. Totally agree with you on that. And yes this one is a constructive
proposal.

My problem is with the _like_ part.

Where shall we draw the line and how do we decide? To me this is a far more
interesting discussion. (Maybe it has happened; I don't go to many
conferences these days, so I might be missing something here.)

She mentioned http.client vs. requests, datetime vs. moments, etc., which
also seem quite right to me. How about the cgi libs? Or pickle? Or
collections? Or unittest? Stay or go?

Lastly, the title of the talk could be tempered a bit, no? We all know what a
leaking battery means, right? Toxic.

~~~
dragonwriter
> Where shall we draw the line and how do we decide?

Why does there need to be a line? As long as the package manager is part of
the core distribution (even if it is itself an upgradable package), why not
move everything into packages, even if some are maintained by the core team
and have the stable version at time of distribution release included with the
core distribution, but perhaps installed only on demand?

~~~
peteradio
How would you get standards like unittest?

pip install unitest? Oops, I spelled it wrong; wonder what I just installed?

~~~
dragonwriter
> How would you get standards like unittest?

“have the stable version at time of distribution release included with the
core distribution”

Ruby, for instance, has both “default” and “bundled” gems in the core
distribution.

[https://stdgems.org/](https://stdgems.org/)

------
nurettin
It took me an hour to create a program that tails some logs and alerts when it
doesn't receive any logs for a given amount of time. For this task I did not
even need to leave the asyncio module. It lets you create subprocesses and
execute call_later on the event loop in order to simulate a heartbeat while
reading the output of tail at the same time.

Did the asyncio module feel bloated? It certainly did. It seems like every
module from subprocess to networking to io is crammed into it.

On the other hand, did it get the job done without resorting to any packages
or threading? Yep, and that is pretty powerful and rare.
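
Roughly the shape of it, as a sketch rather than the exact code (`tail -F`,
the log path, and the 30-second threshold are assumptions here, and wait_for
stands in for the call_later heartbeat):

        import asyncio

        async def watch(path, timeout=30):
            proc = await asyncio.create_subprocess_exec(
                "tail", "-F", path, stdout=asyncio.subprocess.PIPE)
            while True:
                try:
                    line = await asyncio.wait_for(proc.stdout.readline(), timeout)
                except asyncio.TimeoutError:
                    print("ALERT: no log output for %ss" % timeout)
                    continue
                if not line:
                    break  # tail exited
                # ...process the log line here...

        asyncio.run(watch("/var/log/app.log"))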

~~~
Waterluvian
Asyncio is an amazing tool that makes me not hate doing async with Python.
There was quite a learning curve with the library. A lot of Lego pieces. But I
found the handful I need and then learn new ones occasionally.

I think bloatedness of the stdlib isn't actually a practical problem. It's
just an inelegance that you kind of have to learn to tolerate.

------
jteppinette
> six is non-optional for writing code for Python 2 and 3

I maintain a Python 2 & 3 compatible project that has no external
dependencies.

~~~
mehrdadn
I do the same as you (maintain 2/3 code without dependencies), but every time
I have to do string encoding/decoding it kills me to find a way that half
works, and I don't have a ready solution in mind for these that doesn't break
half the time. How do you handle non-ASCII in a compatible manner? Like
Unicode stdio? Unicode file paths? Unicode sys.argv?
string_escape/unicode_escape? I feel like Python 3 completely wrecked strings
instead of making them better.

~~~
deathanatos
For the most part, when you do I/O to some external system, if it's text, you
encode/decode at the border to that system. Internally, all text data is
`str` (or `unicode` in 2) and all binary data is `bytes` (in both).

In some cases of common OS-induced pain, I'd say "do whatever 3 does" in 2,
since that'll make migration easier in the long run. (But I understand that
can be hard, and I think my responses to your examples below even demonstrate
that it is hard.)

To your specific pain points:

> _Unicode stdio?_

Mostly, `io` should handle this in both 3 and 2. You might need to help it
get the right encoding in 2.
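
For example, a sketch of "helping it" that runs on both (UTF-8 is an
assumption; substitute your real encoding):

        import io
        import sys

        # Wrap the underlying fd so writes take unicode on 2 and 3 alike.
        out = io.open(sys.stdout.fileno(), "w", encoding="utf-8", closefd=False)
        out.write(u"caf\u00e9\n")
        out.flush()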

> _Unicode file paths_

This is going to be a mess in _any_ language, because file paths really
aren't text. On *nix, they're byte strings that don't contain NULs.
_Hopefully_ they're encoded according to LANG, and _hopefully_ LANG is a
UTF-8 variant, but that isn't required, and it isn't required that two users
on the same system use compatible LANGs, so you get a Tower of Babel. I
_really_ wish OSs would just start enforcing (a) Unicode filenames and (b) no
newlines in filenames; those two alone would make life so much easier.

Hopefully you've seen os.fsencode / fsdecode, but alas those aren't in 2, so
I'm not sure they really help you. Often one is not really munging paths that
much, and can just pass through whatever value/type you get, but it does
happen, of course. (E.g., adding or removing extensions)

> _Unicode sys.argv_

This is also a pain point, since again, the underlying type on *nix is a byte
string without NULs. I'd hope it decodes with the LANG encoding, but since
the user could easily tab-complete a filename, fsencode/decode might be more
appropriate. I think I'd say "do whatever 3 does".

1 Jan 2020 is nearly here. Forget about 2 / assume UTF-8 in 2 and don't
support anything else?

> _I feel like Python 3 completely wrecked strings instead of making them
> better._

A clear separation of text and binary is needed in the long run, and makes
other operations much clearer and saner. The pain you're feeling is introduced
from the OS not having the same clarity.

~~~
mehrdadn
>> Unicode stdio?

> Mostly, `io` should handle this in both 3 and 2. You might need to help it
> get the right encoding in 2.

Does io handle _stdio_? I was referring to standard input and standard
error/output. How do you read/write Unicode in a cross-2/3 way from/to
standard input/output/error without adding your own translation layer?

>> Unicode file paths

> This is going to be a mess in any language, because file paths really aren't
> text.

Sorry, I need to be clearer: that general mess is not the aspect of it I was
referring to. I'm specifically referring to a 2/3 compatibility mess.

I meant that, for example, to have any semblance of Unicode handling, in
Python 2 you do os.listdir(u"."), whereas in Python 3 you do os.listdir(b".").
I know how to handle it in both Python 2 and Python 3 in a way that's Good
Enough (TM), but how do I even get that with cross-compatible code? I'd need
to write a translation layer of sorts for every single I/O function I might
use.
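
(For what it's worth, that per-function translation layer tends to look
something like this; a hypothetical shim, not a recommendation:)

        import os
        import sys

        def listdir_unicode(path=u"."):
            # Hypothetical shim: feed each version the type it wants back.
            if sys.version_info[0] < 3:
                return os.listdir(unicode(path))  # 2: unicode in, unicode out
            return os.listdir(str(path))          # 3: str in, str out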

>> Unicode sys.argv

> I think I'd say "do whatever 3 does".

Hmm, okay, thanks, I'll need to try it and see what the implications are
again. I think the problems I recalled from this may have just been a result
of the other issues, not sure.

I can't "forget about 2" though, it's still on Ubuntu LTS systems and there
are still packages in 2 that haven't been ported to 3.

>> I feel like Python 3 completely wrecked strings instead of making them
>> better.

> A clear separation of text and binary is needed in the long run, and makes
> other operations much clearer and saner. The pain you're feeling is
> introduced from the OS not having the same clarity.

Again, I think this is "cleaner" in theory, not in practice. What happened to
the string_escape/unicode_escape nonsense I pointed out with the new system?
Any rebuttal to that one? ;-)

------
tanilama
> Brown called out the XML parser and tkinter in particular for making the
> standard library larger and harder to build, burdening all programmers for
> the sake of a few

Tkinter needs to go... There is very little reason, except for legacy ones,
why it still needs to be there...

~~~
liability
If Tkinter goes, then Python is dead to me. I use Python for small, dep-free,
single-file GUIs for projects meant to be used by people who have Python
installed but can't be bothered to go through an installation checklist.
Tkinter is great for that and is the only reason I bother using Python for
anything.
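
The whole pitch fits in a few lines; a minimal sketch of that kind of
dep-free GUI:

        import tkinter as tk  # stdlib; no pip install required

        root = tk.Tk()
        root.title("Hello")
        tk.Label(root, text="Zero dependencies").pack(padx=20, pady=20)
        tk.Button(root, text="Quit", command=root.destroy).pack(pady=(0, 20))
        root.mainloop()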

~~~
zbentley
"pip install --user -r requirements.txt" is not a terribly odious
"installation checklist", and I don't think it's the start of a slippery slope
towards one either.

If you're worried about people not RTFM when using your projects, you could
always start your scripts with the standard try/import/except wrapper around
the required package and tell them to run pip install when it's not found, or
(and this is a terrible idea) run it in a subprocess for them.
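
Something along these lines (a sketch; `requests` is just a stand-in for
whatever the script actually needs):

        import sys

        try:
            import requests
        except ImportError:
            sys.exit("This script needs requests; run: pip install --user requests")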

~~~
liability
Asking my users to use the command line is odious.

------
KaiserPro
I clicked on this link thinking "uh oh, standard ill-informed rant post."

However, Brown has solid points.

The brilliant selling point of Python is the massive standard lib. If the
quality of the libraries falls, then Python's use as a tool drops
dramatically.

One of Node's massive failures is that it has no standard lib.

~~~
mceachen
Coupled with weak infrastructure around third-party library selection.
npms.io has a "quality" score (which npm pulled in recently), but that magic
number includes things like download counts, whether the homepage is on a
custom domain (!?), and whether the readme has badges (!!?). It doesn't
include code-quality metrics, whether the package or git repo is using GPG
signing, or clear signs of abandonment, like N ignored PRs or N,000 ignored
open issues.

I've been astounded by how few non-trivial packages are actually in a
consumable state, and how many seemingly simple packages have N dependencies
that pull in M more. By and large it's a zombie wasteland of cruft.

I wouldn't really trust only-crowdsourced ratings, but I think that might be a
nice component for npms.io to include, perhaps. Stackoverflow answers, for
example, seem to be directionally correct if you sort by upvote count.

------
i386
> The debate at this point became personal for Van Rossum, and he left
> angrily.

Guido should stop acting like a child. Listening to people, hearing them out -
even when it’s uncomfortable - is the mark of a good leader.

I tell new PMs “this is the best job in the world 90% of the time but the
other 10% is eating shit with a smile”

~~~
sametmax
He is not. He has been working on it for 25 years, and he is just fed up with
having to explain the same things again and again. He doesn't want to see the
careful, long work he did dragged down to a standard he considers lower.

There is always a new person coming along with a new idea. It's exhausting,
because that's what's required to make the language evolve, but it's also a
new opportunity to screw things up every time.

And if he had let most people get their way over the last 2 decades, Python
would have ended up just meh.

Of course, every time a new debate starts, everybody thinks that this time,
just this time, he is wrong and they are right. I did too. We are all part of
it.

I get the reaction. There is a limit to what a person can take, and it's why
he stepped down as BDFL.

But seeing your baby and your reputation at stake is hard.

~~~
mruts
I dunno, Python is the epitome of a "meh" language. Guido has held Python
back with his antiquated "get off my lawn" attitude since he created it.

The language itself is inferior in expressiveness and performance to almost
any other modern language. The only reason anyone uses it anymore is that the
network effects of the libraries are very strong, especially in fields
related to ML and data science.

~~~
sametmax
Those libs did not come out of nowhere, and the language did not rise from
1991 to today without inherent qualities that draw people to it.

Python didn't have a specialty like PHP, or an accidental monopoly like JS.
It didn't come with a killer app like Ruby. It wasn't made by a giant company
like Go.

It's pretty much a self-made language.

~~~
anderspitman
I'm not sure if I'd agree. Python was pretty lucky to have numpy/pandas when
"data science" started taking off a few years ago. It easily could have been
another language.

~~~
sametmax
Numpy arrived in Python because the language allowed mathematicians who were
not programmers to get their job done, and then to do stuff that was not
math-related as well.

They don't have the desire to learn what a monad is, they don't want to type
variables when exploring a badly formatted heterogeneous data dump, and they
do want to be able to read the code of their intern, freshly out of school,
once he leaves, without any docs having been written.

------
codr7
Isn't part of the issue mixing general purpose code that doesn't change very
often (the kind that belongs in a standard library) with code that changes all
the time (the kind that belongs on PyPI)?

I remember one of Go's core devs voicing some of the same concerns regarding
the SMTP library [0] a while back.

[https://golang.org/pkg/net/smtp/](https://golang.org/pkg/net/smtp/)

------
pariahHN
I may be terribly wrong about this, but I would think that, in general, if
someone makes an improvement to something you make, then you would want to
integrate that person and their improvement in some way. Treat it like any
other update: mention it in the version notes and warn about compatibility of
code using the previous version. I know that renovation can suck, but it's
something that we need to be doing. A comprehensive stdlib means that once
you've got it, you don't need to worry about being able to access packages
along the way: how are you going to download a package if you can't connect
to the repository? How much can you trust a third-party dev vs the core team?

If a package is really niche, it may not make sense to put in the integration
work. But for a package that is used by a significant majority in a general
application - why would you want to keep it separate if it is so much better?

I am ignoring human interaction here; there are probably dozens of answers to
that question if you count personal motivations.

~~~
latortuga
> How much can you trust a third-party dev vs the core team?

Seems like a false dilemma to me. The core team could still maintain "blessed"
packages that don't ship with the default installation.

> why would you want to keep it separate if it is so much better?

This is addressed in the article, most of the 5th paragraph is dedicated to
it.

------
_hardwaregeek
I've wondered about standard libraries for a while now. What happens if you
discover a security vulnerability in your stdlib? Presumably you'd have to
bump the language version, deploy it, and beg users to upgrade. Except users
don't upgrade stuff. Whereas if you version the standard library, every new
project will get the newer version of the standard library. Sure, there are
space tradeoffs, though you could offer a manual linking option. But at the
very least, the number of new projects with the vulnerability will be next to
nil.

And what if the standard library just gets dated? Take Node, for instance.
The fs module has a whole bunch of outdated callback-based functions. Sure,
you can wrap them in promisify, but it sucks that these outdated functions
are stuck around forever.

There's definitely tradeoffs with package/dependency multiplication, but I
don't think standard libraries are as clear cut as people make them out to be.

~~~
perlgeek
> I've wondered about standard libraries for a while now. What happens if you
> discover a security vulnerability in your stdlib?

That depends on what you are writing.

For an application that is deployed stand-alone, you'll likely fat-package it
with Python and all the libraries. In case of a security issue, you create a
new version of your application that bundles the fixed Python.

For an application that is deployed on the system python (more typical on
Linux), it's the system admin's task to update the system python.

~~~
_hardwaregeek
I'm talking from a language maintainer perspective. From a user perspective,
sure, you can upgrade. But that doesn't mean _everybody_ will upgrade. In
general, people don't like upgrading. That's why browsers all automatically
update these days.

------
jteppinette
IMO, Golang does the best job of maintaining a high-quality standard library.
I disagree that modules should be moved into the external package ecosystem.
However, Go isn't preinstalled on most systems like Python is.

I have to develop enterprise software that runs across a wide range of
platforms, and being able to take advantage of the fact that Python is pre-
installed on all of these systems with its standard library is a godsend.

~~~
nerdwaller
It’ll be interesting to see how Go (Rust, and other new languages) evolve and
if they can avoid some level of package decay when they reach the age of
Python, Java, etc.

~~~
pixelrevision
I've been working with Go a lot lately, and they seem really focused on not
letting this happen. Every single thing in the language and standard library
seems completely focused on minimalism and compile time. The standard lib is
unlikely to change all that much, and people are not picking the language for
a bunch of convenience features.

Third-party package problems will be an issue at some point, but that's more
because they're so focused on minimalism that they don't have clear guidance
on setting up and maintaining packages.

~~~
teek
3rd-party packages are already a problem because a GitHub repo shouldn't be
treated as a dependency source. Go modules solve some problems but still use
git repos as the source.

The primary reason Go can get away with this strategy is that the Go
community actively promotes fewer dependencies = better. So if you write Go,
you often have to accept that the second you add a 3rd-party dependency,
you're officially on your own if that dependency breaks or becomes
unsupported.

This is not necessarily a bad thing. But in order to move software forward I
still think we can do better than to push this responsibility to all
individual end users.

This is one area where I feel most popular languages today still fail
compared to CPAN. CPAN's value was not just packaging and distribution; it
was an integrated test-report pipeline and infrastructure, active management
and gatekeeping of library maintainers, CPAN mirroring functionality, and
easy acceptance of bug reports and user feedback against a library.

------
girlsrule1234
Some of her concerns do make sense, but using “a lot of our users still use
Python 2.x” as a justification, in 2019, is ridiculous. Those same users have
had years to adopt/change their code base.

~~~
altmind
Don't put all the blame on users. There's a ton of software that is Python 2
only, for example gyp.

------
nbAYT
I think it is important to notice that a lot of the tension here is also
Twisted vs. asyncio.

I've never liked asyncio, while Twisted felt natural to me. So I would agree
that an inferior solution has been pushed heavily in the stdlib and also to
the syntax level.

Moving the entire stdlib to PyPI is of course entirely foolish and would
destroy Python.

------
znpy
I have mixed feelings about this because I've seen both sides of the same
situation.

In certain situations, I've been working with a Python interpreter on a RHEL
machine where pip was not installed (and I was not allowed to install it or
make other modifications: the machine was owned by the client, and I had to
work with what was available).

- Having some basic functionality in the core libraries was a godsend,
because I could work with that, even though it was not "ergonomic".

- Not being ergonomic, it was a "poor experience" (and certainly not
optimized or anything nice to see).

------
killjoywashere
> Standard Library Modules Crowd Out Innovation

This heading is the essential problem in innovation writ large: some giant can
ignore you and squash you without any effort at all, without even considering
your existence.

------
rmtech
There's an important tradeoff here between library code that is (in theory)
trusted and library code that is less trusted but has other advantages, like
solving a problem better or solving problems that the most trusted code
can't.

Right now the equilibrium in this tug-of-war is that a certain set of
functionality comes by default in the Python standard library and everything
else is just a package that you can install.

Obviously, from a dev point of view it's a hassle to have to decide which of
two or more packages for X is best; the Pythonic way would be that there
should be one and only one package for X. Of course, at the cutting edge
there have to be competing packages, because there needs to be room for
innovation.

But obviously not everything can be in the Python standard library.

Not really sure what the solution is, but maybe there should be tiers of
packages, with "Tier 1" being the standard library and "Tier 2" having some
kind of official stamp certifying that it has been security-audited to a
certain standard, that Python has some control over who gets to modify it and
why, etc. Then maybe "Tier 3" could cover everything else, i.e. any random
Bob can go make a package on PyPI and it's Tier 3.

In addition, the process of going from Tier 3 to Tier 2 would give people a
chance to winnow libraries down to one way of doing each thing at the Tier 2
level.

This might not be realistic but it's what my gut is telling me. C&C welcome.

------
isuckatcoding
Wow, this is not the conduct I expect from a language creator. I don’t care
if you’re Albert Einstein. Humility and being able to take criticism are far
more admirable to me.

~~~
hjk05
He asked her twice to be more specific about what she was arguing, which
seems fair after a long tirade of shitting on anything and everything
non-optimal about the stdlib without any specific point. And then he left
during the Q&A, which is totally fair; he's his own person and it wasn't his
Q&A. People are blowing this out of proportion.

------
hermanradtke
> Brown went further adding that because few Python core developers are also
> major library maintainers, library authors’ complaints are devalued or
> ignored.

PHP has a similar issue to this. The people writing C were not using the
language. The best example is PDO. A lot of C was written, but it was
essentially abandonware because the PHP users could not make any changes
without getting the C maintainers to both agree and have the time.

------
sam0x17
A lot of this is a side effect of the Python 2 vs. 3 schism, IMO. If it
weren't for that situation, practically everyone would be on 3.x, and
supporting older versions wouldn't be important, so package maintainership
wouldn't be as difficult.

Put another way, the whole Python universe from my point of view has become a
cautionary tale about breaking changes. Given Python's popularity, this might
be an unpopular opinion, but I have yet to find someone who loves Python who
still loves it as much when they discover other newer languages (I'm sure you
exist, I just haven't met you!).

Python is having its time in the sun really because it is a default install
for most Unix distributions, so even people stuck in government labs can use
it because 2.7 is already installed. Even apt depends on it via the Debian
software-properties package, so it isn't going anywhere any time soon.

The real question is how many people would use Python if it were as little
known as, say, Elixir.

~~~
newen
Yep, Python is just a plain, boring interpreted language from the 80s that
was designed as a reaction to Perl's syntax, and it unfortunately contains so
many dynamic properties that designing a JIT compiler for it is a massive
amount of work. It can be seen in the context of interpreted languages of
that time such as Perl, Tcl, awk, sed, etc. It became popular because it was
baby's first language, taught in universities to both CS and non-CS majors.
Python is simply not competitive in terms of language features with modern
programming languages. I tend to assume most of the people praising Python
are amateur or new programmers.

------
mattbillenstein
The stdlib acts as the foundation upon which a lot of the 3rd-party stuff is
built; it's a feature to have it move slowly and not break often.

------
will4274
What's the point of saying that a standard library feature was not added soon
enough? Nobody can go back in time and add it earlier. I can complain to death
about features missing from C++11 or I can start using C++14.

~~~
pfranz
I think the point is that if those things in stdlib were external packages
then older versions of the language would be easier to support because you
could just update the package. I think the way it was phrased made it really
easy to misunderstand.

I think this goes to her point of Twisted wanting to support really old
versions of Python and they would be a lot more comfortable not supporting
really old versions of packages.

~~~
llukas
So they can keep python 2 on life support longer? That is not a good idea.

------
hashhar
I think the best model I have seen for a lean stdlib is Golang's. You have
the standard lib, and then you have the packages under golang.org/x/, which
are experimental packages that sometimes end up being merged into the core
language. The stuff which is not in the stdlib (TOML, YAML, etc.) has been
supported very well by community packages.

------
pm24601
I got lost on the "we have to backport to Python 2.7" argument.

Force the upgrade already. Code still on 2.7, if still useful, can be
upgraded.

------
raverbashing
Good points; some things in the standard library are just painful.

My "favourite" library quirk: socket.fromfd is only available on Unix in
Python 2.x; that was fixed in Python 3.x.

The worst offender is the logging library. It's the least Pythonic thing in
the whole std library (OK, maybe ABC is worse, but oh well).

------
esotericn
Most of this post seems to revolve around the idea of py2 having an outdated
standard library.

[https://pythonclock.org/](https://pythonclock.org/) has Py2 reaching EOL in
7 months. Realistically, I'd say the time passed years ago.

py3 is over ten years old now. We're not talking about some new, unstable
piece of kit; I'd imagine that a large percentage, perhaps even 50%, of the
HN audience started their careers after the transition had already started.

~~~
detaro
You read that wrong. Most of the points apply to Python 2 and 3 equally, or
even only to Python 3. (Obviously with the exception of the point about
Python 2 not having gotten some useful bits, but the pattern continues.)

------
axaxs
Why, time and time again, does Guido seem incapable of reasonable debate, or
of entertaining ideas that challenge his own? It's completely rude to
interrupt a presenter with 'what is your point?'

Years ago I was in contact with the author of Nuitka, who was very excited to
share his work thus far. During his presentation, Guido kept huffing and
making snide comments under his breath. All because he disagrees with the
premise behind Nuitka.

I like Python, and I can appreciate his work and contributions. That said, I
can't help but think the community can become less toxic without him as BDFL.

~~~
pavlov
Maybe he’s simply burnt out? The pressure of being in charge of something as
big as Python must be intense. People make demands on his time and expect the
“benevolent dictator” to cunningly solve everything like a modern-day
Solomon.

I know he gave up the title, but as he personifies Python, I imagine the
influx of requests for his attention may not have subsided much.

~~~
agumonkey
Was he like that before? Some say he was always balanced, even when Python 3
came out.

Let's hope everything settles smoothly :)

~~~
DannyBee
My only experiences with Guido when he was a Googler were incredibly
unpleasant.

(This was years ago, and I don't have the email exchange anymore, so I'm
doing my best to describe it from memory.)

I reported a bug I had debugged pretty heavily and believed was likely a bug
in the appengine datastore (this was before it was publicly available, IIRC),
with a fairly detailed repro recipe, etc.

(I was not on the appengine team, just building an app)

I had debugged all the client-side code all the way down to the RPC to the
datastore server and was positive there was nothing weird going on there at
all.

Within a minute or two of me sending the email to the email alias, he replied
with "This must be a bug in your code, that can't happen". He didn't even
look at it (I checked the logs).

I replied with "I agree it should be impossible, but if you could look at it
for a second, I think you'll see that it's not, and it is actually happening.
The code is very simple, etc."

He said, "I don't have time to fix your code".

So I spent the time and reduced it to a simple, 20-line piece of completely
obvious code (IIRC it had no real code except to instantiate the class and
store it in the datastore) with no dependencies,

and say "here, i took the time to try to make it as clear and obvious that
this is really not a bug in the code, because there is no real code here"

He replied with something else abrasive and dismissive.

Then, about 20 minutes later, one of his teammates replied with basically,
"oh shit, this is bad".

(Because what I had discovered turned out to have caused data loss that they
couldn't automatically fix. They could get the data back from restores, but
it wasn't clear what to do with it; you needed intervention from a user.)

Honestly, it would have been better for him to not respond at all. (He was
not the on-call team member at the time anyway.)

We all have our bad days/times of course (I definitely did!), so I do hope
he's found more inner peace than he had back then. But yeah.

~~~
zzzeek
Whether you gave him the most succinct demonstration up front or not, it
doesn't excuse Guido's behavior and dismissiveness towards you _at all_. Guido
should have found a way to be patient with you and ask for additional
information if he felt he didn't have a clear enough explanation, and should
have taken the time to prove his hunch (which turned out to be wrong) that the
problem was on your end.

I did observe that you first provided a "fairly detailed repro recipe", and
then after he rebuffed you, you provided a "simple, 20 line piece of obvious
code". What Guido should have done was ask you to make the extra effort to
provide the latter case if the "recipe" you provided was in fact too much
effort for him to look at, for something that didn't seem to be a bug to him.
That is, he wanted you to "spend the time" as you said, and if he had more
experience with this kind of thing perhaps he would have known to just ask for
it rather than dismissing the whole thing.

What happens to me a lot, at least, is I get bug reports that are like
"here, just unzip this 10M attachment, install these libraries and datafiles,
and then watch the log output for the thing I spent five paragraphs not
really describing". These are not coworkers or customers of mine, whom I
might be obligated to go through all those steps for; they are regular users
who have downloaded my software for free (as they should), and are likely
saving their company thousands or even millions of person-hours of work by
doing so. The thing I ask these people in return is that they A. report
issues to me and B. do as much work as they can to help me fix the problem.
I'm not a concierge; the help process is part of how the give-and-take of
open source software is supposed to work between parties.

I encourage these people to please pass along an MCVE, i.e. the most succinct
demonstration script possible, and when I get the sense that they don't
really have the experience to know what I'm looking for (despite my sending
them the link explaining what an MCVE is), I will often read their verbal
description, write my own MCVE in about one minute showing that what they are
asserting is not true, and then paste that into the issue; I have a Python
fragment script that I use as a starting point for writing 90% of these test
cases. I ask them to please modify the MCVE to show the thing they are
actually trying to do. That's how I get them to send me a succinct problem
description that isn't a huge waste of my time.

I understand this is likely not at all what happened in your case as you were
both at Google and I'm sure folks there are more sophisticated than this. I
just had the thought based on how you had sent two versions of the issue.

~~~
DannyBee
I don't disagree with your assessment at all. There is never a reason to act
like that, and I didn't take it as "the normal course of business".

FWIW: the detailed repro recipe was hermetic and single-binary (and
guaranteed to work on his machine for various reasons); it just had more code
than strictly necessary (it was still <500 lines; it was a very simple
webapp).

The actual recipe was closer to "run this binary, click on new record button,
click save, observe results in datastore".

I actually reduced it not because the recipe was too detailed, but to remove
the argument that it was my code.

------
killjoywashere
At this point my MVP version of Python is Anaconda-latest.

~~~
orbifold
For me it is Miniconda + a list of package requirements. Then you can just
create an environment for each project and install new dependencies as you go.

------
VectorLock
It seems weird to me that they specifically call out only Guido's behavior. I
feel like there was probably a lot of context lost in "he left angrily", and
it's a bit unfair, bordering on disingenuous.

------
3327
It's never too late to rewrite Python.

------
CodiePetersen
I like Python; it's a nice, simple language you can use to pump out a proof of
concept real quick with little hassle, but for full-on production I avoid it.
I think this is part of a larger trend in programming: in my opinion, the
majority of programmers are super lazy. Everyone is in a mad dash to get the
cool new thing out, so they just slap a bunch of dependencies on it and damn
the consequences of technical debt down the road. More people would rather
roll with an MVP as the final product than build something from scratch that's
more robust, resilient, and efficient. Then after a while you have this huge
mess of old broken code that can't be fixed anymore and just needs to be
redone from scratch.

Sure, you are going to need to redo code from scratch eventually anyway. But
programmers like the ones I mentioned, who are surprisingly a huge chunk, make
problems worse for themselves throughout the lifetime of the code by being
short-sighted and stamping their approval on code they're too lazy to rewrite,
because their boss doesn't know any better.

~~~
tomrod
> More people would rather roll with an MVP as the final product than build
> something from scratch that's more robust, resilient, and efficient.

Well, yeah, of course. It's expensive to reinvent the wheel.

~~~
CodiePetersen
It's more expensive trying to fix broken code and working around other
workarounds that should not be in production. It slows you down, bloats your
code, makes it harder for new employees to learn the system when they come on
board, etc.

~~~
Felz
Is the quality of the dependencies you use really so bad that it'd take you
more time to fix them than to write your own code and then fix that?

Like I can see where I'd make that tradeoff, but it'd have to be a small
function in a really niche use case, which isn't really that common. (I work
on the JVM though and don't know how good/bad Python libraries would be in
general.)

~~~
CodiePetersen
Well, it matters when you roll it out at scale. I suppose client-side you
don't have to worry too much; I think it's still a bit lazy, but it doesn't
have much of an impact on small projects. As an example, at the last company I
worked at we made a neural network engine from scratch because we were just
not happy with the available frameworks, their heavy dependency usage, and
their very specific requirements for OS and language versions. In our tests
while I was there, we got three times the speed of the fastest framework of
the ones we tested, which was TensorFlow. A lot of the speedup came from just
taking good programming practices into account and knowing the system inside
and out, including its potential weak points. That's how much the dependencies
were killing the speed. Since I've left, that's one of their main things now:
making high-quality machine learning modules that get the best bang for their
buck.

So yeah, I'd say the dependencies are pretty bad. It's a process outside of
your control, and you kind of just have to deal with whatever is under the
hood. More importantly, what's under the hood was likely meant to be
general-purpose, and there is more than likely a better way to do it for your
particular situation. And as she mentions, a lot of the bugs stay there
indefinitely, so you just have to keep permanent workarounds, which is never
good.

------
epx
Why is it so difficult to admit Node.js did the package thing right, by
keeping a local folder just for the app, giving isolation from other apps with
zero effort?
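
For comparison, the closest stdlib answer on the Python side is venv (PEP
405), though it needs an activation step rather than being zero-effort. A
minimal sketch, with "app_env" as a placeholder path:

    # Per-app directory of dependencies, similar in spirit to
    # node_modules; "app_env" is just a placeholder path.
    import venv

    venv.create("app_env", with_pip=True)
    # then: source app_env/bin/activate  (POSIX)
    # and pip installs land in app_env only, not globally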

~~~
zbentley
I think people conflate NPM-as-package-installation-system, which I would
agree works very, very well (though a lot of that is the JavaScript
module/import system in general and not NPM specifically), with two other
things: NPM the web platform (which has had a lot of pretty severe
security/community issues), and the JavaScript community's tendency to
proliferate lots and lots of modules, many in competition, to solve problems
that other languages' communities tend to solve either via reimplementation or
via the standard library.

I think any discussion about NPM or JS packaging compared to other package
managers needs to discuss those things as orthogonal, largely unrelated
concepts. Otherwise everyone just picks a favorite punching bag (e.g. left-
pad) and talks past each other.

------
wirrbel
It's not only that Python's stdlib libraries are getting old and sometimes
appear to be unmaintained; some of the more recent additions and changes are
lacking as well.

------
kentm
I was rather shocked to find that Python didn't have a full-featured crypto
library included in its standard lib. The alternatives all ended up being
unmaintained or maintained by small groups (which makes trust in their
soundness difficult). I tapped our security team, who were in disbelief, but
ultimately they gave up too, and I wrote the software in Go instead.

~~~
tptacek
You want pyca/cryptography. The last thing in the world you want is a standard
crypto library that no experts are enthusiastic about maintaining. Golang had
an unfair advantage here, because the language team included cryptography
engineers. It would be weird if most languages had the same kind of crypto in
their stdlibs.

I think there is in general nothing wrong with a language ecosystem where key
parts of the whole platform are in well-maintained third-party libraries
rather than the standard library. Which is also something Amber Brown is
saying here.
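
The common case with pyca/cryptography's high-level Fernet recipe is only a
few lines. A minimal sketch (the key handling here is illustrative only; a
real app needs real key management):

    # Authenticated symmetric encryption via the Fernet recipe.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()           # url-safe base64-encoded key
    f = Fernet(key)
    token = f.encrypt(b"secret message")  # ciphertext + timestamp + auth tag
    assert f.decrypt(token) == b"secret message"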

~~~
TheOtherHobbes
The problem with third-party libraries is that it's impossible to know which
ones are "standard" and which ones are copycat hobby projects in various
states of disrepair. And in the worst case you can pick a library, get a day
or two into using it, and discover it has a show-stopper limitation.

I recently hacked together some Python to process MIDI files. Should I have
used this:

[https://pypi.org/project/MIDIUtil/](https://pypi.org/project/MIDIUtil/)

or this:

[https://pypi.org/project/mxm.midifile/](https://pypi.org/project/mxm.midifile/)

or this:

[https://mido.readthedocs.io/en/latest/](https://mido.readthedocs.io/en/latest/)

or this?

[https://github.com/vishnubob/python-midi](https://github.com/vishnubob/python-midi)

A well-maintained and thoughtfully curated stdlib makes these choices for you
- which is one less thing to worry about, and can be a significant time saver.
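
For what it's worth, once you've picked one, the mido version of "read the
notes out of a file" is only a few lines. A sketch, with 'example.mid'
standing in for a real file:

    # Iterate over tracks and print note events with mido.
    import mido

    mid = mido.MidiFile('example.mid')
    for i, track in enumerate(mid.tracks):
        print(f'Track {i}: {track.name}')
        for msg in track:
            if msg.type == 'note_on':
                print(msg.note, msg.velocity, msg.time)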

~~~
tptacek
In a perfect world everything would be in the stdlib, everything would be
well-integrated, everything would serve every use case, and everything would
be well-maintained by engaged and motivated maintainers.

For some kinds of libraries, you can sacrifice a whole bunch of those
constraints and still have it make sense to host it in the stdlib. I'm fine if
the JSON library is slow or inflexible if it covers a bunch of use cases and
doesn't impede people from writing better ones. You can see this to some
extent with the state of "router" HTTP libraries in Go.
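
A sketch of that escape hatch: third-party JSON libraries mostly mirror the
stdlib's dumps/loads API, so swapping a better one in is nearly free (ujson
here is just one example, assuming it's installed):

    # A slow stdlib json doesn't lock you in, because drop-in
    # replacements keep the same dumps/loads surface.
    try:
        import ujson as json  # optional faster drop-in, if installed
    except ImportError:
        import json           # stdlib fallback

    print(json.dumps({"covered": ["most", "use", "cases"]}))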

But for some things, most notably cryptography, it's worse to have a
suboptimal version in the stdlib than to have none at all.

~~~
Rapzid
To add to net/http not being very competitive performance-wise within Go's
language peer group: the stdlib's file-system walking functions baked in
assumptions, without escape hatches, that unnecessarily nerf performance for a
lot of heavy-lift use cases. Go definitely has a few shortcomings in an
otherwise fantastically useful stdlib.
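
Python's stdlib went through the same arc, for what it's worth: os.walk
stat()ed every entry until PEP 471 added os.scandir. A rough sketch of the
trade-off:

    # listdir + per-entry stat() vs. scandir (PEP 471), which reuses
    # file-type info the OS already returned with the directory entry.
    import os

    def walk_stat_heavy(path):
        for name in os.listdir(path):
            full = os.path.join(path, name)
            if os.path.isdir(full):              # extra stat() per entry
                yield from walk_stat_heavy(full)
            else:
                yield full

    def walk_scandir(path):
        with os.scandir(path) as entries:
            for entry in entries:
                if entry.is_dir(follow_symlinks=False):  # usually no syscall
                    yield from walk_scandir(entry.path)
                else:
                    yield entry.path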

------
banachtarski
I had to code in Python recently and had PTSD over all the syntactically
significant whitespace. I have no idea how I ever found productivity in the
language; coming back to it now with fresher eyes, refactoring, editing, and
writing new code all feel like such a drag.

~~~
mruts
I don't think Python is worth using for most new non-data-science projects
nowadays. The language is ugly and hard to work with, and the performance is
terrible.

~~~
mixmastamyk
There’s no accounting for poor taste.

