
PEP 594 – Removing dead batteries from Python's standard library - jorshman
https://www.python.org/dev/peps/pep-0594/
======
imglorp
Nobody talks much about the original goal of "There should be one-- and
preferably only one --obvious way to do it." -- PEP 20 (!)

Back when, I lurved me some Perl TMTOWTDI as much as the next guy--plenty of
uses for such a thing in a Swiss army chain saw--but then Python had a good
(or at least different) answer to that by reducing the language load and
focusing more on the problem.

So tell me, how many string formatters do we have now? Could we please decruft
and dump some of them while we're at it? % or f' or .format or whatever, but
please just pick one and get the rest out of my face.
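To be fair about the count, here's the current menu side by side (a quick sketch; f-strings need Python 3.6+, the rest work everywhere):

```python
apples = 3

# 1. printf-style, inherited from C
s1 = "I have %d apples" % apples

# 2. str.format, added around Python 2.6/3.0
s2 = "I have {} apples".format(apples)

# 3. f-strings, added in 3.6 (PEP 498)
s3 = f"I have {apples} apples"

assert s1 == s2 == s3 == "I have 3 apples"
```

(string.Template makes it four, if you're counting $-style too.)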

[https://www.python.org/dev/peps/pep-0020/](https://www.python.org/dev/peps/pep-0020/)

[https://en.wikipedia.org/wiki/There%27s_more_than_one_way_to...](https://en.wikipedia.org/wiki/There%27s_more_than_one_way_to_do_it)

~~~
simonh
It's not that it's never been possible to do things other ways in Python. The
obvious way to do stuff is to use the latest module or feature. The obvious
way to do string formatting is now f-strings.

~~~
imglorp
Until the newer way comes along and then there's n+1 ways.

~~~
pas
The new ways come for a reason. And the new ways should respect PEP20, meaning
they should work for the old use cases, so that everyone can start using the
new ways as soon as they can upgrade to that version.

Now, of course Python is interpreted and packages are shipped as raw source
code and this makes things harder for library authors, because were there a
compile/transpile step, they could write using the new and shiny obvious ways,
all the while providing compiled blobs (or transpiled libs) targeting multiple
versions.

~~~
wund3rb4r88
So what reason is there for adding a new string-formatting syntax that looks
nothing like the string-formatting syntax used in any other language, and
obliges a not-insignificant userbase to adapt?

Technological purity or scratching some philosophical itch would be a poor
reason, IMO, given how broad the externalized costs run.

So is it provably easier to reason about the new style? Does it reduce
computing time by an order of magnitude (in an era where compute cycles and
memory are hella cheap)?

Code is to be written so it’s understandable and useful for humans first,
right?

How does a third syntax for a solved usecase offer real value to the user
base?

Or are we just being sycophants of so-called experts, peddling some novel
“look”. Experts who largely built a rep on first mover advantage, but have
since seen that excuse for being deified evaporate as the rest of world
learned all the same tricks?

~~~
simonh
>that looks nothing like string formatting syntax used in any other language

PHP echo "There are $apples apples and $bananas bananas.";

Ruby puts "I have #{apples} apples"

TCL puts "I have $apples apples."

Typescript console.log(`I have ${apples} apples`);

Python print(f"I have {apples} apples")

Looks pretty similar to me, and if you value understandability by humans so
much, I think f-strings win hands down. But if you disagree, there's plenty of
code out there still using % strings or .format(), and they're not going
anywhere any time soon.

~~~
olooney
Python's string format is also very similar to the "double mustache"
convention widely used in web development. The following template string:

    "I have {{apples}} apples"


To my certain knowledge this works in Jinja2, Django templates, Vue.js,
mustache.js, handlebars.js, and probably many others.
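The resemblance is close but not exact: str.format uses single braces, and doubling a brace is the escape. A small sketch:

```python
# Python's str.format: single braces name the field
assert "I have {apples} apples".format(apples=3) == "I have 3 apples"

# doubled braces are literal, so a mustache-style template survives .format() intact
assert "I have {{apples}} apples".format() == "I have {apples} apples"
```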

PEP 498 mentions that they did look at other languages to see what was
supported:

[https://www.python.org/dev/peps/pep-0498/#similar-support-in...](https://www.python.org/dev/peps/pep-0498/#similar-support-in-other-languages)

And they link to this Wikipedia article, which lists many examples:

[https://en.wikipedia.org/wiki/String_interpolation](https://en.wikipedia.org/wiki/String_interpolation)

Scanning through that, my impression is that "there is nothing new under the
sun": aside from an occasional "$" or "#" prefix, or a willfully arbitrary
deviation like ColdFusion, the conventions can all be traced to the Bourne
shell /bin/sh, which was released in 1979. Prior to that, the printf() syntax
was presumably the most common, but the earlier history is obscure.

Does anyone know if the ${variable} convention was original with the Bourne
shell, or if it can be traced back further?
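Whatever its origin, the Bourne-style convention survives in Python itself via string.Template, e.g.:

```python
from string import Template

# shell-style "$" interpolation, straight from the stdlib
t = Template("I have $apples apples and ${bananas} bananas")
assert t.substitute(apples=3, bananas=5) == "I have 3 apples and 5 bananas"
```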

~~~
fanf2
I am surprised to find that the 6th edition shell had only $1 etc. positional-
parameter expansion; named shell variables are not described in its man page:

[http://man.cat-v.org/unix-6th/1/sh](http://man.cat-v.org/unix-6th/1/sh)

------
k4ch0w
Someone linked to Amber Brown's talk and I thought this point would be
important for discussion. I think there is something that isn't being
considered here when it comes to relying on PyPI.

3PP (third-party package) issues are responsible for a lot of application
security vulnerabilities today. No large enterprise organization really knows
what packages a developer is pulling onto their laptop and into their
codebase.

As a security engineer, I like having a core team and a standard library that
have gone through a long, mature process with experienced developers, instead
of someone who just git-pushes code every night. You have no idea who is on
the other end of that push, either. It's too hard to keep track of changes,
and it forces us to pin packages and versions that have been OK'd for use
rather than simply trusting a new Python release. And you have no guarantee
the security engineer who reviewed the code didn't miss anything.

~~~
jahewson
There are really two separate concepts here: review and the standard library.
Why not separate concerns and have a review process that isn’t coupled to the
standard library? There’s no reason why any PyPI package can’t have meaningful
reviews published.

The question that you’re ultimately seeking the answer to is “what code has
been reviewed and which reviewers do I trust?” - lots of ways to solve that.

~~~
anoncake
One reason for this proposal is to save manpower. Separating concerns doesn't
help with that.

~~~
kgwgk
It does save manpower for Python maintainers if reviewing non-core libraries
becomes someone else’s problem.

~~~
anoncake
Either the reviewer is trusted and might as well review the libraries as part
of the standard library or they are not trusted and reviews by them are
useless.

------
korethr
Hmm, should perhaps this be worthy of a major version number bump? Yes, it's
not as large or as breaking of a change as the 2.7->3.0 transition, but by
removing modules from the standard library, backwards compatibility _is_ being
broken. Now, perhaps most people don't use those features anymore, but for
those people who do use and rely on these modules, a major version number
change would be a welcome signal that their code will no longer work with the
new version.

But, after the initial pain of the 2.7->3.0 transition, I doubt we'll ever see
another major version number jump, even if a logical use of version numbers
would merit it.

~~~
m463
> but by removing modules from the standard library, backwards compatibility
> is being broken.

API 101 - there is very little cost to leaving them in, but a hidden major
cost to having them disappear, usually for non-developers.

--

By the way, "batteries included" is one of the _BEST_ features of python.

Have you tried to fix something in your house with a "homeowner's toolkit"
which is usually something like a hammer, pliers, 2 screwdrivers, a putty
knife and a few more basic tools?

It is REALLY tedious, like writing a C program with a few basic tools like
stdio and ctypes.

More languages need "batteries included", maybe like Perl.

I think if the cost of deploying a script is 1, deploying it + a dependency is
literally something like 100x. You have to make assumptions about all the
environments the script will run in, and they are usually wrong.

~~~
arcticbull
> By the way, "batteries included" is one of the BEST features of python.

None of these libraries need to be nuked from existence as a result of this
change. I'd wager they'll move into PyPI modules so that teams relying on them
could safely continue to do so.

> Have you tried to fix something in your house with a "homeowner's toolkit"
> which is usually something like a hammer, pliers, 2 screwdrivers, a putty
> knife and a few more basic tools? It is REALLY tedious, like writing a C
> program with a few basic tools like stdio and ctypes.

That's one extreme, but I don't think that's what's being proposed. The
proposed model is closer to what Rust does today, where the core is slim,
opening up new potential use cases, and the more complex functionality built
on top of it is left to the community to maintain.

Take a look over some of the modules they're deprecating, like smtpd. What
kind of standard library needs an SMTP daemon built in? That's akin to a
homeowner's toolkit including a planishing hammer [2] for some reason.

> I think if the cost of deploying a script is 1, deploying it + a dependency
> is literally something like 100x. You have to make assumptions about all the
> environments the script will run in, and they are usually wrong.

Depends on how it's done, honestly. Check out this Rust "scripting" system
[1]. It has full support for third-party crates.

[1] [https://github.com/DanielKeep/cargo-script](https://github.com/DanielKeep/cargo-script)

[2]
[https://en.wikipedia.org/wiki/Planishing](https://en.wikipedia.org/wiki/Planishing)

~~~
m463
Sorry, my "batteries included" comment was independent of the changes
proposed. I just love the rich library python provides (compared to other
scripting languages).

~~~
arcticbull
I hope you didn't take my reply to be aggressive in any way! Standard library
is always a point of contention in any environment and it's always good to
explore all angles. It's kind of the ultimate bikeshed in a lot of ways.
Python indeed is a very batteries-included language and there's a lot to like
about that. I wonder if there's a way of preserving that by differentiating
between a 'lite' and 'core' distribution so we can still meet the goals of the
embedded / small system community?

------
avar
Relevant discussion 4 days ago, "Python's batteries are leaking":
[https://news.ycombinator.com/item?id=19948642](https://news.ycombinator.com/item?id=19948642)

------
anoncake
> Modules in the standard library are generally favored and seen as the de-
> facto solution for a problem. A majority of users only pick 3rd party
> modules to replace a stdlib module, when they have a compelling reason, e.g.
> lxml instead of xml. The removal of an unmaintained stdlib module increases
> the chances of a community contributed module to become widely used.

Developers don't have a compelling reason to use 3rd party modules instead of
the standard library. Therefore they don't. You consider that a problem and
want to encourage them to use 3rd party modules more.

In short, you want developers to use 3rd party libraries because there is no
compelling reason to do so?

> A lean and mean standard library benefits platforms with limited resources
> like devices with just a few hundred kilobyte of storage (e.g. BBC
> Micro:bit). Python on mobile platforms like BeeWare or WebAssembly (e.g.
> pyodide) also benefit from reduced download size.

That's a silly reason. Just make a separate distribution with a stripped down
standard library.

~~~
int_19h
To give a specific example, there's a lot of headache that new Python users
could be spared if only they used requests instead of urllib2. But they won't
know that it's compelling if they never go look, and just stick to the stdlib.
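(In Python 3 the stdlib counterpart is urllib.request.) The gap in ergonomics shows even before any network I/O; a sketch, with the requests call left commented out since it assumes the third-party package is installed:

```python
from urllib.request import Request

# stdlib: building a GET request with a custom header takes some ceremony
req = Request("https://example.com", headers={"User-Agent": "demo"})
assert req.headers == {"User-agent": "demo"}  # urllib even recapitalizes header keys

# third-party requests, for comparison (assuming it is installed):
# import requests
# body = requests.get("https://example.com", headers={"User-Agent": "demo"}).text
```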

~~~
anoncake
Good point. However, there's a box at the top of the urllib.request docs
saying that requests is recommended:

[https://docs.python.org/3.8/library/urllib.request.html#modu...](https://docs.python.org/3.8/library/urllib.request.html#module-urllib.request)

~~~
ben509
Unfortunately, I think people usually search StackOverflow, or just type into
Google and get pointed to SO. If they type in "how do I download a web page in
python" they get:

1. urllib [1]
2. BeautifulSoup and a comment mentioning requests [2]
3. requests [3]
4. urllib, httplib [4]

Which is looking better than I expected... but there's no information on 2 or
3 about how to install those libraries. So 1 and 4 will Just Work.

[1]: [https://stackoverflow.com/questions/45717889/read-the-text-o...](https://stackoverflow.com/questions/45717889/read-the-text-of-an-web-page-in-python)

[2]: [https://stackoverflow.com/questions/26050064/automating-down...](https://stackoverflow.com/questions/26050064/automating-download-of-executable-which-is-within-several-nested-urls-by-listeni)

[3]: [https://stackoverflow.com/questions/44553348/how-to-download...](https://stackoverflow.com/questions/44553348/how-to-download-from-an-html-link-href-in-python)

[4]: [https://stackoverflow.com/questions/2646288/retrieve-some-in...](https://stackoverflow.com/questions/2646288/retrieve-some-info-from-the-web-automatically)

------
chubot
_Since the parser module is documented as deprecated since Python 2.5 and a
new parsing technology is planned for 3.9, the parser module is scheduled for
removal in 3.9._

Hm interesting comment. Does anybody know what the new approach to parsing
Python in 3.9 is ? I searched python-dev@ but couldn't find any references to
it.

I found an interesting tidbit about Rust "switching" from LL to LR here:

[https://www.reddit.com/r/ProgrammingLanguages/comments/brhdt...](https://www.reddit.com/r/ProgrammingLanguages/comments/brhdt2/notes_on_rusts_grammar_ll_vs_lr/)

And I noticed some rules in Python's grammar that are awkward in LL parsing
(set and dict literals, and comprehensions).

I wonder if those things motivated the switch? They certainly work though.

~~~
ivoflipse
They're having the discussion on Discuss:
[https://discuss.python.org/t/preparing-for-new-python-parsin...](https://discuss.python.org/t/preparing-for-new-python-parsing/1550/42)

~~~
chubot
Thanks a lot! That led me to find this November thread:

[https://discuss.python.org/t/switch-pythons-parsing-tech-to-...](https://discuss.python.org/t/switch-pythons-parsing-tech-to-something-more-powerful-than-ll-1/379)

And I posted here about it:

[https://www.reddit.com/r/ProgrammingLanguages/comments/brz2y...](https://www.reddit.com/r/ProgrammingLanguages/comments/brz2yj/switch_pythons_parsing_tech_to_something_more/)

It looks like the set and dict literals I noticed weren't so much the
motivating use cases, but even more fundamentally assignments and keyword
args!

------
jl6
> Times have changed. The introduction of the cheese shop (PyPI), setuptools,
> and later pip, it became simple and straight forward to download and install
> packages.

I’ve been out of the Python loop for a few years, but my last impression was
that packaging and distribution of Python modules was far from a solved
problem. Has this changed?

~~~
carlmr
No, it's still one of the worst package management systems out there.

I'm always wondering whether it has to do with the language itself or just the
package manager. Rust's cargo, Node's npm, and probably quite a few others
exist that work amazingly well.

~~~
bow_
Why do you think it is the worst? If you don't consider Python, what would you
then think is the worst?

Genuinely curious. I use Python daily and I rarely encounter problems with it.
There is some getting used to in the beginning, but I went through a similar
phase when I started using npm and cargo as well.

What I can say is that I had to go through a lot of experimentation myself to
arrive at the tools I use now (pyenv + Poetry). And if anything, maybe the
lack of one way that is adopted by everyone in the community is the problem.

~~~
bmn__
Tests are not automatically run and centrally reported as part of the
installation process.

There is no good heuristic for picking one package over another when they
occupy a similar problem space. This is mostly a cultural problem; the
community's efforts are lackluster.

virtualenv is not installed by default, making bootstrapping into a separate
prefix that is independent from the system installation unnecessarily
aggravating.

Semiautomatic packaging tools (e.g. PyPI into RPM) produce low-kwalitee
packages, and manual intervention is needed more often than with comparable
languages.

Worse are languages that simply don't have much manpower behind them in
absolute numbers, e.g. CL/quicklisp. Given Python's mindshare, the results are
subpar.

~~~
ptx
> virtualenv is not installed by default

It is, but it's called venv these days:

    
    
      python3 -m venv my_venv
    

(Unless you're sticking to a very old version of Python, but in that case
there's nothing the Python developers could do about it.)

------
nomel
> A lean and mean standard library benefits platforms with limited resources
> like devices with just a few hundred kilobyte of storage (e.g. BBC
> Micro:bit). Python on mobile platforms like BeeWare or WebAssembly (e.g.
> pyodide) also benefit from reduced download size.

Python, lean and mean? Seems like an incredibly niche use case to restrict the
python community to.

~~~
yjftsjthsd-h
At that point I'd expect to be using micropython anyways.

------
coldacid
Considering how many different types of data have been and can be stored in
IFF chunks (and their order-swapped RIFF counterparts), I'm almost insulted by
the PEP author considering IFF to be just "an old audio file format".

I think my Commodore/Amiga persecution complex is acting up again.

~~~
oblio
Wasn't the last Amiga sold in 1996? That's 23 (!) years ago.

~~~
coldacid
You can still buy Amiga platform hardware, although it's not the classic
Commodore-era stuff but rather the modern PowerPC based stuff.

------
kd5bjo
> 3.8.0b1 is scheduled to be released shortly after the PEP is officially
> submitted. Since it's improbable that the PEP will pass all stages of the
> PEP process in time, I propose a two step acceptance process that is
> analogous to Python's two release deprecation process.

Why should this be fast-tracked outside the normal process? I can’t imagine
any of these removals are urgent.

------
peterwwillis
If we're going to lean more on PyPI, it would be nice to clean it up some.
There's a lot of cruft that makes it very hard to find the right module to use
to write a new project with. And it would be nice if package naming convention
were more standard, and suggested extending existing modules rather than
writing entirely new ones. If you want HTTP functionality, you use the
"requests" package, which uses urllib3. Why not io::net::http::client,
extending io::net::http, and so on?

~~~
torlakur
Well, that sounds like the package names (and namespaces) that the Perl 5
community have landed on with CPAN modules :)

------
aitchnyu
Some batteries are surprising
[https://docs.python.org/3/library/](https://docs.python.org/3/library/)

difflib - Text diffs, even html output

textwrap - obvious no?

rlcompleter - autocomplete symbols and identifiers, used in interactive mode

pprint - for printing complex data structures with indentation

reprlib - repr with traversal depth and string size limits

fractions - Fraction('-.125') becomes Fraction(-1, 8)

statistics - averages and deviances

tempfile — Generate temporary files and directories

glob — Unix style pathname pattern expansion

gzip, bz2, zipfile, tarfile - obvious

configparser - ini-like format

secrets - use this instead of random for safe cryptography

sched — Event scheduler

turtle — Turtle graphics

shlex — Simple lexical analysis

webbrowser — Convenient Web-browser controller, for the antigravity module!
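A few of those batteries in action (a quick sketch):

```python
import difflib
import shlex
import textwrap

# difflib: unified diff between two lists of lines
diff = list(difflib.unified_diff(["a\n", "b\n"], ["a\n", "c\n"]))
assert any(line.startswith("-b") for line in diff)
assert any(line.startswith("+c") for line in diff)

# shlex: split a command line while respecting quotes
assert shlex.split('convert "my file.png" out.jpg') == ["convert", "my file.png", "out.jpg"]

# textwrap: wrap prose to a fixed width
assert all(len(line) <= 20 for line in textwrap.wrap("spam " * 10, width=20))
```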

~~~
jimbo1qaz
LZMA is better than gzip i think.

Also, secrets deserves to stay in the library; Python has hashlib, and it
should have a secure RNG by default.

glob too (or Path.glob).

And I've used shlex for command-line-escape parsing (it might not have been
the optimal solution?)

~~~
masklinn
> LZMA is better than gzip i think.

LZMA provides much better compression at much higher cost. Generally speaking
it's pretty strictly better than bzip2, but not necessarily than gzip
(DEFLATE, really).

In my experience, zstd can be considered better than gzip/deflate (almost
every time I tried it, it provided as-good-or-better compression at much
faster throughput).
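The trade-off is easy to measure yourself, since the stdlib ships gzip, bz2, and lzma side by side (zstd needs a third-party binding); a rough sketch:

```python
import bz2
import gzip
import lzma

data = b"the quick brown fox jumps over the lazy dog " * 2000

for name, mod in (("gzip", gzip), ("bz2", bz2), ("lzma", lzma)):
    packed = mod.compress(data)
    assert mod.decompress(packed) == data  # lossless round trip
    print(f"{name}: {len(data)} -> {len(packed)} bytes")
```

(Sizes and timings will vary with the input, which is rather the point.)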

~~~
maxnoe
Yes, zstandard turned out to be the best option in all our tests, on small to
really large data (a couple of MB to gigabytes in file size).

A few percent better compression than gzip and nearly 50% faster
decompression.

Gzip is pretty slow in the Python standard lib.

The Python zstandard bindings unfortunately do not allow seeking backwards.

------
monocasa
MSI support removal seems premature. Can you do unsandboxed stuff from an
AppX? Also can you install AppXs on Windows 7/8?

~~~
chungy
How many people actually used Python's MSI library?

It was Windows-only (presumably a wrapper around the native tooling), and was
primarily for creating Python's own installer, which apparently doesn't get
built as an MSI anymore.

~~~
ptx
I use it for packaging my Python application. But it seems pretty low-level,
so I suppose I could call the win32 API directly through ctypes when they
remove it.

------
Alex3917
The only thing I worry about is the uu module. Even though 99.99% of people
will never use it, there are also probably literally millions if not billions
of pieces of content that are encoded using that standard. I realize it's only
a few lines of code, but from an archival perspective it seems like there's
something distasteful about making it harder for future generations to access
content that may have historical value. At least send a quick note to the
Library of Congress or the archivist community first to get an idea of the
extent to which this stuff is still used.

~~~
roblabla
This module is pure Python. It looks like, if people rely on it, they could
easily put the code in a PyPI module and maintain it there. It doesn't really
make sense to keep it in the core if it has no dedicated maintainers.

~~~
takeda
That code doesn't need any maintainers, it is not like UU encoding is evolving
or really complex.
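Indeed; a uuencoded line round-trips through two binascii calls, and binascii is not on the removal list:

```python
import binascii

# encode one chunk (uuencoding allows at most 45 bytes per line) and decode it back
line = binascii.b2a_uu(b"Cat")
assert binascii.a2b_uu(line) == b"Cat"
```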

------
ehsankia
I honestly didn't realize there was so much random stuff in stdlib.

~~~
varelaz
There's much more of it. I think around a third of the standard library is
legacy and a burden for Python. I recall there was a PyCon talk about why you
may not want to be part of the standard library: the release process and
support of old versions are very strict and heavy, which can delay releases
and API changes dramatically. Look at the situation with urllib, for example:
there are three versions of it, but the most popular option is the third-party
requests module.

------
yingw787
This seems like a vast swath of things to deprecate with one PEP, but it looks
like the author of the PEP talked with a large number of core developers about
issues that they had with the stdlib and decided which packages to deprecate
that way, which I think bodes well for approval of the PEP.

I have a couple of questions:

- I realize there may not be a fork with this PEP implemented, but how might
this impact Python's local relative build time, and how might that carry over
to the build pipelines? Drastically faster build times would be really nice.

- Are Python 2 -> 3 migrations for stdlib packages mostly rewrites or mostly
tacking on compatibility layers like `from __future__ import
unicode_literals`? If stdlib packages were updated with Python 3 syntax, it
might indicate sustained demand for said package going forward. I'm not sure.

~~~
Znafon
> Are Python 2 -> 3 migrations for stdlib packages mostly rewrites or mostly
> tacking on compatibility layers like `from __future__ import
> unicode_literals`?

Packages in the stdlib don't need compatibility layers, since they are always
used with the Python version they were written for; it's mostly incremental
rewrites.

Regarding the build time, I don't think much of it is spent testing those
parts of the stdlib.

~~~
enedil
I'm not the OP, but I suspect what was meant was backporting features to
py2.7, which presumably use some new Python 3 features that don't exist in
the old language version.

~~~
simcop2387
I don't think that backporting features to 2.7 is on anyone's radar anymore,
given that it goes end-of-life in just over 7 months. Anything that isn't
already there is likely not going to benefit many people.

~~~
Dylan16807
Lots of people are on python 2, and if they didn't already migrate then an EOL
notice isn't going to be a big motivator.

~~~
dual_basis
You'd be surprised, I think. In my job, EOL notices are often all it takes to
push people onto the next version, even reluctantly. I imagine many situations
where Python 2 is still used are not developer-driven but, rather, business
value propositions. Many of my clients wouldn't understand a discussion about
upgrading because of Unicode strings, but if I were to mention that Python 2
is no longer receiving bugfixes and has officially reached EOL status, you can
bet they would all throw money at it.

------
jstimpfle
[https://docs.python.org/3/library/fileinput.html](https://docs.python.org/3/library/fileinput.html)

I've never had a strong opinion about "batteries included", but boy there are
some weird ones in there...

~~~
ehsankia
Especially when you start looking at the code for some of these

[https://github.com/python/cpython/blob/master/Lib/imghdr.py](https://github.com/python/cpython/blob/master/Lib/imghdr.py)

~~~
Znafon
To be fair, it dates from 1992. I don't think PyPI existed at the time; it's
rather impressive that the core team supported this for so long.

------
larkost
My one objection to this is the `imp` module. As far as I can tell it is the
only way (short of `sys.path` modification) to specifically load a module
found at a specific path. For testing systems I use this quite extensively...
Looks like I need to make some comments...

~~~
tgb
I use imp.reload(my_module) all the time when %run-ing things from ipython
that import my_module after editing it. What's the alternative for that use?

~~~
maxnoe
Using importlib.reload
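imp.reload was effectively an alias for it, so the migration is a one-line import change. A sketch, using json as a stand-in for my_module:

```python
import importlib
import json  # stand-in for "my_module"

# ...edit the module on disk, then re-execute it in place:
reloaded = importlib.reload(json)
assert reloaded is json  # reload returns the same, re-initialized module object
```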

------
mleonhard
I love this. I hope more and more cruft gets removed from our languages and
tools.

------
0xADEADBEE
This has been a long time coming and I'm surprised it's taken until 2019. A
better time to have done this would have been back at the 2-3 migration; I'm
not sure the community will be able to survive another transition like that,
but hopefully lessons have been learned from the previous schism. A step in
the right direction for sure!

------
saila
Just one data point here, but with the exception of the cgi module a long time
ago, I haven’t used any of these modules in the past 15 years of web
programming, ETL scripts, or data processing. In the keep list, I’ve used
fileinput once that I can remember.

------
Waterluvian
I think an 80/20 solution is documentation UX. Just take all these old modules
and put them into an "old modules" section and preface that section with a
short discussion on what this all means and what the wisdom is.

------
robobro
noo not the cgi module :(

~~~
zestyping
I was wondering about that too. CGI is still just about the easiest and most
universally supported way to serve dynamic web content from a Python script.

The complaint is that the module is designed poorly, which is fair; to remove
CGI support without any "batteries included" replacement seems a bit of a
shame, though.

~~~
mjw1007
The cgi module doesn't actually contain anything CGI-specific. It dates from a
time when "CGI" was a sensible shorthand for "handling http requests".

What's actually in the module is a rather clunky, but serviceable, system for
processing html form data.
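For the urlencoded case, the surviving stdlib equivalent lives in urllib.parse (a sketch; cgi.FieldStorage also handles multipart uploads, which this does not):

```python
from urllib.parse import parse_qs

# what form handling boils down to for an application/x-www-form-urlencoded body
form = parse_qs("name=Ada&lang=python&lang=c")
assert form == {"name": ["Ada"], "lang": ["python", "c"]}
```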

------
icodestuff
I'm surprised at the removal of aifc. AIFF is still used somewhat in macOS.

~~~
schrijver
AIFF is used widely in audio production software; for exporting uncompressed
audio, many applications default to AIFF. It’s the most universally useful
lossless format; WAV support is ubiquitous too, but AIFF has better metadata.
FLAC and ALAC are good alternatives today, but not on the same level of
support yet. For example, for purposes of price differentiation, Pioneer
reserves FLAC and ALAC support for their most expensive digital turntables.

If you’re writing a Python script that outputs an audio file, outputting to
AIFF seems like a safe bet. Then again, I’m not sure how many people are
actually using the module from the stdlib--but neither, it seems, is the
PEP’s author. His level of research was asking his friends on Twitter:

[https://twitter.com/ChristianHeimes/status/11302577994753351...](https://twitter.com/ChristianHeimes/status/1130257799475335169)

------
jedberg
My only concern with this is that the modules won't be made available on PyPI
for those who still need them. They will pull out the code but expect the
community to package it back up and make it available. Their hope is that the
community will improve it before making it available, but I suspect the
reality will be that the moment it is deprecated, someone will just go through
and publish all of them on PyPI exactly as-is.

~~~
JeremyBanks
Why is that a concern?

------
jancsika
audioop is an interesting little thingy.

For example-- why does "add" add two fragments, but "mul" multiplies one
fragment by a scalar value? Was the assumption that "mul" will probably just
be used for attenuation?

It would be fun to build a tk-based audio app around audioop for the sole
purpose of complaining about the deprecation of this module. :)
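That reading seems right: mul is scalar scaling with clipping, i.e. a gain/attenuation knob. A pure-Python sketch of what it does in the 16-bit case (the real audioop is C and uses native byte order; this fixes little-endian for simplicity):

```python
import struct

def mul(fragment: bytes, width: int, factor: float) -> bytes:
    """Scale every sample by a scalar, clipping to the 16-bit range."""
    assert width == 2, "this sketch handles 16-bit signed samples only"
    n = len(fragment) // width
    samples = struct.unpack("<%dh" % n, fragment)
    scaled = (max(-32768, min(32767, int(s * factor))) for s in samples)
    return struct.pack("<%dh" % n, *scaled)

# attenuate two samples by half
half = mul(struct.pack("<2h", 1000, -2000), 2, 0.5)
assert struct.unpack("<2h", half) == (500, -1000)
```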

------
dlbucci
This seems like a good idea to me, but what are the chances that something
like this actually lands? I know Guido did not seem to like this approach (to
put it mildly), although I'm not sure he still has veto power having stepped
down as BDFL.

Also, the changes sound like a major version bump to me, but I doubt people
want that to happen after Python 2 to 3...

~~~
stOneskull
can you step down from something that is for life?

~~~
knolax
Kings abdicate all the time.

------
leothekim
Serious question - Why put off DeprecationWarnings to 3.9? If these modules
are as dead as this PEP claims, then I'd wonder if Python scripts using such
modules would be actively looking to transition to 3.x in the first place.

~~~
puetzk
Because the 3.8 beta (and hence 3.8 feature freeze) is next week, and there's
basically no chance of completing the PEP in time to make that cutoff.

------
truth_seeker
Very practical move!

I think a few other legacy platforms -- Java (JDK), JS (Node.js), Ruby, etc.
-- are also in need of this kind of refactoring exercise to remove old cruft.

------
gtirloni
It feels like this PEP should have more practical examples of how these
modules are impacting the work of core Python developers. Some number of bug
reports or time spent dealing with them.

------
HelloNurse
Many ridiculous suggestions, like the whole "data encoding" group (obsolete
data formats don't need maintenance) and the various modules about old file
formats (ditto).

~~~
Blackthorn
They do need maintenance. Python updates cause code or tests to need updating
due to syntax or other changes.

------
analognoise
Again, Tkinter isn't on the hit list and I'm glad.

------
diydsp
Old people stuff below:

I guess we're getting to the point where people now say things like "it was
developed for TRS-80", where it is more correct to say "for _the_ TRS-80." :)

> The uu module provides uuencode format, an old binary encoding format for
> email from 1980.

While true, uu was also used extensively on Usenet, at least through 1997. But
as we learn later, no one has touched the nntp code in 5 years! and there is
no interest in porting nntplib to py3.

> originally introduced for Commodore and Amiga. The format is no longer
> relevant.

not disagreeing. Just saying "ouch."

> The module only supports AU, AIFF, HCOM, VOC, WAV, and other ancient
> formats.

Again, not disagreeing, but implies .wav is ancient. It's still in heavy use
(they're keeping "wave"). And ancient is hyperbolic and slightly insulting,
but I'm just going to let it go as humor in service of persuasion. It's hard
to let go sometimes when these formats provided so much joy and delight in
such a pure fashion. Particularly AIFF vs WAV.

AIFF was for Amigas and Macs. Lots of great early techno used that format,
while clumsy windows machines used .wav and lagged seriously in music
software. Life then was more natural and computers were these shiny,
productive boxes of creativity. We sat down at them for an hour at a time and
expressed wonderful songs. Then got up and lived life. I don't need to mention
that we now spend almost all day and all night inside a computer, at its
keyboard or tablet, its screen and increasingly its headphones. Instead of
augmenting our lives like a microwave, they "shelter" us.

I know the tools are objectively better now. They're more reliable, more
productive and have more features. But I disagree that they provide more of
what we actually need as human beings. Thinking like a designer, we need
nourishing human experiences, challenges and interactions. We need
opportunities to use our bodies physically, to run and cook, and especially to
work, play and function together. But our health is objectively decreasing.
Suicide is on the rise in several demographics. We're spending more time
e-socializing so we eat more fast food. Obesity is on the rise. We can't even
put the darn things down to drive.

Traditionally, human functioning meant organizing socially in groups that
combined strangers' abilities. It required us to have patience and instruct
each other. Increasingly, computers/apps/networks give us all the same
concentrated superpowers. Now I can find any recipe I want, but I don't have
anyone to cook it with. Our interactions are increasingly mediated, robotizing
our communication. All this pausing caused by http-to-sql operations after
every mouse click or sentence stilts us. It's almost as if our world is more
black and white and lower resolution because we're experiencing so much of it
through a densely mediated stack.

I'll also point out that in the old days, a library that supported five
formats was a _flipping miracle_. But time marches on.

Having watched and been part of this whole python and internet phenomenon,
experiencing it rise from nowhere and evolve from a neato thing into a
mechanized, hyper-official "means of production" this is somewhat emotional
for me. Guess I'm just getting old and I'll suck it up, but I just want to
write these notes for people who wonder about what life was like before, or
forget to wonder.

------
noname120
Dupe:
[https://news.ycombinator.com/item?id=19971131](https://news.ycombinator.com/item?id=19971131)

------
hyperion2010
This really, really, bothers me. The rationale seems absurd. Is there no more
uu encoded data in the world? Throwing this code away is a recipe for a future
where some data is never recoverable. Python is a swiss army knife. The
imagination that pypi is going to survive forever and be accessible forever is
absurd. A programming language that throws away the past is doomed to become a
passing fad. If the code is old who cares as long as it still works?

~~~
masklinn
> Is there no more uu encoded data in the world?

The uu codec is provided by the binascii module. The uu module is a "high-
level" interface for conversion between binary and uuencoded files ("-",
paths, or file-like objects).

Basically uuencode(1) / uudecode(1) reimplemented in python.
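To illustrate the point: the low-level uu codec really does live in binascii,
so removing the uu module doesn't orphan existing uuencoded data. A minimal
round trip (using the documented `binascii.b2a_uu` / `binascii.a2b_uu`
functions, which encode at most 45 bytes per line) might look like:

```python
import binascii

data = b"Hello, uuencode!"

# b2a_uu encodes up to 45 bytes into one uuencoded line
# (length character + 6-bit groups + newline).
line = binascii.b2a_uu(data)

# a2b_uu reverses the transform; the original bytes survive.
assert binascii.a2b_uu(line) == data
```

The uu *module* only adds the begin/end framing and file handling around this;
that's the part the PEP proposes to drop.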

> A programming language that throws away the past is doomed to become a
> passing fad.

That's complete nonsense.

> If the code is old who cares as long as it still works?

Keeping it working is a maintenance burden. Why keep it if it's worthless?

