There are a couple of promising tools written in Rust looking to replace Pip for most users.
Rip (https://github.com/prefix-dev/rip/issues), which is more of a library for other Rust tools to be built on top of, like Pixi (which is looking to replace both Pip and Conda).
And now uv, which seems to be looking to replace Pip, Pip-Tools, and eventually Poetry and PDM.
A lot of the explosion in tools in the Python world is coming from the desire for better workflows. But it has been enabled by the fact that build configuration and invocation have been standardized, so tool makers are able to follow standards instead of reverse engineering easy_install or Setuptools.
I know a lot of people are put off by there being so many tools, but I think in a few years the dust will settle and there will emerge a best-practice workflow that most users can follow.
As a primarily Python developer, and someone who occasionally contributes to Pip to solve complex dependency resolution issues, it does make me wonder if I should hang my hat on that and learn enough Rust to contribute to one of these projects eventually.
My experience with Rust developers who dwell in Python land is that they fundamentally disagree with most of the language that is Python and think they know better than incumbents what belongs and what doesn't.
The Rust ecosystem gets so much right that, even as a career-long Python developer myself (and a Rust developer for many years, but that's less of my point), they honestly probably do know how to build a good developer experience better than much of the Python ecosystem.
Put another way: the Python ecosystem has had 30+ years to figure out how to make packaging not suck. It has continually failed - failed less and less over time, sure, but the story is still generally speaking a nightmare ("throw it all in an OCI container" is an extremely reasonable solution to Python packaging, still, in 2024). I welcome advances, especially those inspired by tooling from languages that focused heavily on developer experience.
There's a lot of important work that happens in Python. Most of it isn't being done by software engineers. I think the idea of improving things for that group is plenty meaningful.
To be clear, Microsoft isn't directly funding Python, excluding any PyCon sponsorship.
Microsoft hired Guido in late 2020, giving him freedom to choose what project he wanted. Guido decided to go back to core Python development and, with Microsoft's approval, created a "faster-cpython" project; at this point that project has hired several developers, including some CPython core developers. This is all at Microsoft's discretion, and is not some arm's-length funding arrangement.
Meta has a somewhat similar situation: they hired Sam Gross (not the cartoonist) to work on a no-GIL Python and contribute it directly to CPython if accepted (which it has been), and they have publicly committed to supporting it, which if I remember right was something like funding two engineer-years of an experienced CPython internals developer.
The Python standard library is primarily managed by volunteers, and different sections have distinct maintainers, resulting in diverse choices.
When there's no strong advocate for a module, implementing changes becomes a challenging task. However, modules with dedicated champions, such as datetime by Paul Ganssle or pathlib by Barney Gale, may undergo significant modifications after consideration and discussion.
Not everyone in the broader Python community will be pleased with these alterations, but they aren't made hastily. I suggest we show empathy towards those who willingly take on the responsibilities of being open-source maintainers; it's often a thankless task.
While I've personally expressed dissatisfaction with the utcnow change on the Python discussion page, I also acknowledge that I'm not responsible for maintaining this module. Consequently, I've updated my code in a completely backwards compatible way: datetime.datetime.utcnow() -> datetime.datetime.now(datetime.UTC).replace(tzinfo=None).
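For anyone making the same migration, here's a minimal sketch (using timezone.utc, which also works on Python versions that predate the datetime.UTC alias added in 3.11):

```python
from datetime import datetime, timezone

# datetime.utcnow() returns a naive datetime (tzinfo is None) holding
# UTC wall-clock time; this is a drop-in, backwards-compatible replacement:
naive_utc = datetime.now(timezone.utc).replace(tzinfo=None)
assert naive_utc.tzinfo is None

# If downstream code can handle aware datetimes, skipping the .replace()
# and keeping the tzinfo is generally the better long-term option:
aware_utc = datetime.now(timezone.utc)
assert aware_utc.utcoffset().total_seconds() == 0
```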
No matter how many maintainers there are, the policy to eagerly deprecate and quickly remove things is common to all of the stdlib. And I think that's not a good policy for a programming language. If you look at Java, for example, only very few APIs were removed between Java 9 and 20, many of which were minor or broken/useless (Pack200, RMI): https://docs.oracle.com/en/java/javase/20/migrate/removed-ap...
There has been a recent push to remove completely unmaintained modules with PEP 594, but the steering council has made it clear that PEP was an exception and all future module deprecations will have to be done on a case-by-case basis. The PEP 594 modules were discussed for well over two years before the PEP was accepted, and it covered unmaintained modules that had technically been deprecated as of Python 2.0.
I also rarely see methods removed at all, which is why the utcnow one sticks out so sharply to me and others.
I can't reconcile this with the statement of "policy to eagerly deprecate and quickly remove things", but maybe you have some evidence?
Further, there is a clear path, discussed on the Python discussion board, for moving any pure-Python module to PyPI for any part of the community that wishes to maintain it. In the earlier years of Python it was not a serious option to ask everyone to use third-party libraries, but now for almost all use cases it is a reasonable option.
If having copyright were a prerequisite of training data this would be true.
But in the US this hasn't been tested in the courts yet, and there's reason to think from precedent this legal argument might not hold (https://www.youtube.com/watch?v=G08hY8dSrUY - sorry don't have a written version of this).
I would imagine if we use a very strict interpretation of copyright, then things like satire or fan-fiction and fan-art would be in jeopardy.
As well as learning, as a whole.
Unless there is literally a substantial copy of some particular piece of copyrighted material, it seems to be a massive hurdle to prove that analyzing something is copyright infringement.
Most people in the fanfiction community recognize that it's probably not strictly allowed under copyright. However, the community response has generally been to do it anyway and try to respect the wishes of the author. Hence you won't find Interview with the Vampire fanfiction on the major sites.
If anything, I think that severely hinders the pro-AI argument if fanfiction made by human authors is also bound by copyright.
ETA: I just tested it out and you can totally create Interview with the Vampire fanfiction with Bing Compose. That output is presumably subject to at least as strong a copyright standard as human authors' work, and is thus a copyright violation.
> Copyright protection is available to the creators of a range of works including literary, musical, dramatic and artistic works. Recognition of fictional characters as works eligible for copyright protection has come about with the understanding that characters can be separated from the original works they were embodied in and acquire a new life by featuring in subsequent works.
Creating a work using Harry Potter or Darth Vader or Tarzan ("As of 2023, the first ten books, through Tarzan and the Ant Men, are in the public domain worldwide. The later works are still under copyright in the United States.") is a copyright infringement.
Creating Interview with the Vampire fan fiction with Bing - Bing didn't have any agency. The question of copyright infringement (I believe) should only be applied to entities with the agency to ask (or not) for copyright-infringing works.
> if we use a very strict interpretation of copyright, then things like satire ... would be in jeopardy.
Satire, criticism, reviews and journalism are explicitly permitted under fair use.
If I wish to publicly express my disdain or praise for your art, it is necessary that I can show samples, pictures, or photos when I express whatever my deal is.
The difference is that when writing satire it's not strictly necessary to possess the work to do so. You can merely hear of something and make a joke or a fake story. Training data, on the other hand, uses the actual material, not some derivative you gleaned from a thousand overheard conversations.
The difference between this attempt and previous attempts is:
* The PEP has been announced to be accepted (Steering Council are still working on details for final wording)
* Many things are already landing on CPython main in preparation for this
Unless something absolutely show-stopping comes up in the next 12 months, it will almost certainly be released in 3.13 or 3.14 as an optional compiler flag.
My understanding is most of the low-hanging fruit was picked for Python 3.11, and the Faster CPython team has been looking at more mid-to-long-term goals, with the addition of a JIT and other accompanying infrastructure, for 3.13 and 3.14.
However, the recent no-GIL decision I think has sent a few things back to the drawing board to see what can and cannot be salvaged from their progress and plans so far.
What is or isn't "pythonic" is largely determined by the community not the language. Nothing is stopping the Python community from monkey patching everything like the Ruby community does.
The f-string changes arrived because there was a need to formalize the syntax for other Python parsers, to let CPython move off a special sub-parser just for f-strings, and to be able to decide whether weird edge cases were bugs or features.
Once formalized, it was decided not to put arbitrary limits on it just because people can write badly formatted code; people can already do that, and it's up to the Python community to choose what is or isn't "Pythonic".
FYI, one of the things I'm really looking forward to is being able to write: f"Rows: {'\n'.join(rows)}\n Done!"
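For contrast, here's the workaround needed before the f-string grammar changes in PEP 701 (Python 3.12); the new syntax simply lets the escape live inside the braces:

```python
# Before Python 3.12, backslashes were not allowed inside f-string
# expressions, so the escape had to be hoisted into a variable first:
rows = ["a", "b", "c"]
newline = "\n"
old_style = f"Rows: {newline.join(rows)}\n Done!"

# On Python 3.12+ the same thing can be written directly:
#   f"Rows: {'\n'.join(rows)}\n Done!"
assert old_style == "Rows: a\nb\nc\n Done!"
```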
The link you've given shows Google had the majority of market share by October 2004, which is in line with what the parent comment states (at least the way I parse it).
What were they paying to be default on prior to October 2004? Smartphones didn't exist, Firefox didn't exist, Internet Explorer didn't have a dedicated UI element that led you to a search engine.
Are we talking about software that made Google the default home page? IE Toolbars? Feature phones like the Blackberry?
I've been active on the Internet since the late 90s, but I only remember having to type "www.google.com" prior to 2005; then again, I'm by no means an expert.
That's a very under-specified criticism: what type of complexity do you think it has introduced, and how do you quantify "minor gain"?
Some people clearly think it is an appropriate complexity to solve for the gain it gives, hence the detailed proposal and long discussions that have already taken place: