The entitlement of users who call open source maintainers "user hostile" for trying to limit the surface area of support tickets to the systems that provably 95% of users are on is always sad to see.
The sociopathy of people who pretend that "it's user-hostile to be told you can never speak of running this on an OS version > 2 years old. Ever. All attempts will be deleted" amounts to "expecting endless support for free products that I'm not paying for or contributing to" is just as sad to see.
I reject your name-calling via a strawman, though I support the message.
Not quite what you're asking for, but Microsoft (my employer) has a free tool for checking web and Windows apps for accessibility best practices: https://accessibilityinsights.io/
Kivy's marketing seems to be targeting LOB apps. If I were going to develop one of those, I'd optimize for something standardized and easy to maintain (HTML/JS) rather than for the performance benefits of a native UX or a cross-platform framework.
The landing page is weird; it talks more about the funding for the framework than about the framework itself. There's only one image showing UI, and the way it's styled (cropped, tilted) makes me think it's a stock photo, not a screenshot. The stock photo of a train right underneath doesn't help that perception.
And that's one of the main show-stoppers for me with Kivy: it comes with very few built-in UI controls, so you have to code a lot of things yourself.
I much prefer Python to JS, but things like React Native win because the community libs you can install save you tons of time and produce a better result.
This is especially true when you use a lot of tooling. I love Jupyter, but installing it in a venv means pulling in a lot of deps, which heavily constrains what else I can install.
Fortunately, the Python community is much more serious about making deps that work together than the JS community is, and the fact that it works at all, given the Cartesian product of all the Python modules, is kind of a miracle and a testament to that.
Unfortunately, that's a problem that is unlikely to be solved in the next decade, so we all live with it.
The reverse problem is true for JS: I see many projects shipping very heavy frontend code because, despite all the tree shaking, they embed the same module five times with different versions in their bundle. That's one of the reasons for the bloated-page epidemic.
I guess it's a trade-off for all scripting languages: choosing between bloat and compat problems. Rust and Go don't care as much, and on top of that they can import code from 10 years ago and it still works.
However, while I do know how hard it is to ship Python code to the end user (at least if you don't use a web app), I don't think the version problem is the reason. We have zipapp, and it works fine.
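For pure-Python dependencies it really is that simple; a rough sketch of the workflow (the app name and layout here are made up):

    # vendor the (pure-Python) deps into the app dir, then zip it into a single runnable file
    pip install -r requirements.txt --target myapp/
    python -m zipapp myapp -o myapp.pyz -p "/usr/bin/env python3"   # myapp/ needs a __main__.py
    ./myapp.pyz   # runs anywhere a compatible Python is installed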
No, the main reason is that compiled extensions are very useful and popular, which means packaging is solving more than packaging Python: it's packaging a ton of compiled languages at once. Take SciPy: there's C, Fortran, and assembly in there.
This can and will be improved though. In fact, thanks to wheels and indygreg/python-build-standalone, I think we will see a solution to this in the coming years.
My ideal situation is that the system maintains an authoritative copy of every package version that is ever requested, so packages don't need to be shipped with each app. Multiple versions of a package should coexist.
When a package requests 2.1.1 of a dependency, it fetches it straight from that store, installing it from PyPI first if it isn't already there.
The same should be true of JS and even C++. When a C++ app's deb package wants libusb==1.0.1, it should NOT overwrite the libusb-1.0.0 that is already on the system; it should coexist with it and link against the correct one, so that another app that wants libusb-1.0.0 can still use it.
> Fortunately the Python community is much more serious about making deps that work together
This is very much not true, at least in ML. I have to create a new conda environment for almost every ML paper that comes out. There are so many papers and code repos I test every week that refuse to work with the latest PyTorch, and some that require torch<2.0 or some bull. Also, xformers, apex, pytorch3d, and a number of other popular packages require that the CUDA version included with the "torch" Python package matches the CUDA version in /usr/local/cuda AND that your "CC" and "CXX" variables point to gcc-11 (NOT gcc-12), or else the pip install will fail. It's a fucking mess. Why can't gcc-12 compile gcc-11 code without complaining? Why does a Python package not ship binaries of all its C/C++ parts for all common architectures, compiled on a build farm?
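For the record, the usual workaround before running those pip installs looks something like this (the versions and paths are examples, not a universal recipe):

    # make the build see the CUDA toolkit and compiler the extension expects
    export CUDA_HOME=/usr/local/cuda-11.8
    export CC=gcc-11 CXX=g++-11
    pip install xformers --no-build-isolation   # build against the torch that's already installed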
I'm assuming by system you mean OS, which is a terrible, terrible idea. Dev stack and system libs should not coexist, especially because system libs should be vetted by the OS vendor, but you can't ask them to do that for dev libs.
> I have to create a new conda environment for almost every ML paper that comes out
That's how it's supposed to work: one env per project.
As for the rest, it says more about the C/C++ community building the things below the Python wrappers.
That causes 50 copies of the exact same version of a 1 GB library to exist on my system, all obtained from the same authority (PyPI). I literally have 50 copies of the entire set of CUDA libraries, because every conda environment installs PyTorch and PyTorch includes its own CUDA.
I'm not asking the OS to maintain this; rather, the package manager ("npm" or "pip" or similar) should do so on a system-wide basis. "python" and "pip" should allow one copy per officially released version of each package to live on the system, with multiple officially released versions coexisting in /usr/lib. If a dev version is being used, or any version that deviates from what is on PyPI, then that should live within the project.
Actually, conda creates hardlinks for the packages it manages. I found this out a few weeks ago when I tried migrating my envs to another system with an identical hierarchy and ended up with a broken mess.
> but rather the package manager ("npm" or "pip" or similar) should do so on a system-wide basis.
I basically agree with this. With the caveat that programs should not use any system search paths and packages should be hardlinked into the project directory structure from a centralized cache. This also means that a dev version looks identical to a centralized version - both are just directories within the project.
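As a toy illustration of that layout (paths invented, GNU cp only): packages live once in a per-user cache and get hardlinked into each project, so a cached release and a dev checkout look identical from the project's point of view.

    # hardlink a cached package into the project's site-packages instead of copying it
    cp -al ~/.pkg-cache/torch/2.1.0/. .venv/lib/python3.11/site-packages/
    # -a: recurse and preserve metadata; -l: hardlink files rather than duplicating them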
Kind of, but not really. Nix is extremely complicated. Programs/projects bundling their own dependencies is exceedingly simple.
Also, Windows is my primary dev environment. Any solution must work cross-platform and cross-distro. Telling everyone to use a specific distro is not a solution.
It is complicated... but honestly I have found Claude 3.5 to just 'fix it', so you hardly have to spend any time spelunking. You just give it all your dependencies and tell it what you want, and it'll whip up a working flake in a few iterations. Kinda magic. So yeah, when you can abstract out the complexity, it moves the needle enough to make it worth it.
It’s not the researcher’s fault if the libraries they use make breaking changes after a month; proof-of-concept code published with a paper is supposed to be static, and there’s often no incentive for the researcher to maintain it after publication.
At this point, venvs are the best workaround, but we can still wish for something better. As someone commented further up, being able to “import pytorch==2.0” and have multiple library versions coexist would go a long way.
I install most tooling, including Jupyter, using pipx. The only thing I then need to install in the project venvs is ipykernel (which I add as a dev dep); I then create a kernel config that lets Jupyter run using that venv.
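Roughly this workflow, if anyone wants to replicate it (the kernel name is a placeholder):

    pipx install jupyterlab                         # Jupyter lives in its own isolated env
    python -m venv .venv && . .venv/bin/activate    # per-project venv
    pip install ipykernel
    python -m ipykernel install --user --name myproject
    # JupyterLab now offers a "myproject" kernel backed by this project's venv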
The problem I see a lot of JS developers having when they start using Python is that they try the "import the entire world" strategy of development that's common in JS, and there isn't good tooling for that because Python just doesn't have that culture. And that's because it's a bad idea: it's not a better idea in JS, it's just more a part of the culture there.
Pick one package source. Stick with it. And don't import every 0.0.x package from that package source either.
There are obviously reasons to use more than one package source, but those reasons are far rarer than a lot of inexperienced devs think they are. A major version number difference in one package isn't a good reason to complicate your build system unless there are features you genuinely need (not "would be nice to have", need).
And vanilla pyproject.toml + build (the official build tool) doesn't provide any way to point at another package repository. So if you want the CUDA or ROCm version of torch, for example, you have to add a direct link to the package. That means hard-coding a link to a platform-specific version of the package; there's no other way to make a package look at a non-PyPI repository to get the version you want.
So say you want to add PyTorch, with GPU acceleration where the platform supports it, and you want to make it multiplatform to some extent.
You can't add another index if you want to use vanilla build, as that's not allowed. You can add a direct link (that's allowed, just not an index), but that's going to be specific to a platform + Python version. PyTorch doesn't even provide CUDA packages on PyPI anymore (due to issues with PyPI), so you need to be able to use another index! You'd need to manually create a requirements.txt for each platform, create a script that packages your app with the right requirements.txt, and then do it again whenever you update. Otherwise, I think the most recent advice I've seen was to just make... the user download the right version. Mhmmmm.
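Concretely, the per-platform workaround ends up living outside pyproject.toml, along these lines (cu121 is PyTorch's index for CUDA 12.1; swap it for whatever matches your setup):

    # Linux + NVIDIA: pull torch from PyTorch's own index
    pip install torch --index-url https://download.pytorch.org/whl/cu121
    # CPU-only / macOS: the plain PyPI wheel is fine
    pip install torch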
The other option is to use Poetry or something like that, but I just want to use "python -m build ."...
But you can do that, obviously not with this syntax. It's non-standard, but I have built programs that install all their dependencies as a first step. It's pretty trivial.