
Why is Python so fragile? - craftoman
Python works on every machine, but every time you pack up some lines of code and release them in public, whether through GitHub or pip, many users will stumble upon bugs that drive them crazy. I have basic Python knowledge, but every time I try to install packages from other devs I *always* come across critical bugs, and sometimes I rage quit. JavaScript is considered an unstable language, but I have never experienced anything similar to what I see with Python (even with modules that have native bindings). Based on years of system administration and testing I've done on both platforms (Linux and Windows): of every 100 modules I install in Python, almost half fail and I have to start troubleshooting, compared to 5-10 out of every 100 JavaScript modules.
======
PaulHoule
A few causes:

1. The Python 2 / Python 3 split.

2. Quite a few Python release lines, such as 3.5 and the early 3.6 releases, have deadly bugs.

3. Diamond dependencies: that is, package A depends on package B1 and package C depends on B2, but you can't have both B1 and B2 installed at once.

4. Packages installed in a user's home directory will normally be visible in any virtualenv or conda environment, so if you install something there you are hosed.

5. The inverse not-invented-here syndrome: expecting pip, conda, pipenv, poetry, pyenv, etc. to solve the problem for you, as opposed to understanding what factors are at play.

The number of factors at play is finite and the problem is solvable, but you have to start with xenophobia and distrust of the platform: get complete control of part of the space first and expand the space you control, as opposed to trying to find the 20% of the whole that gets you 80% of the way there, because that last 20% is a doozy that will shift underneath you and make you doubt your sanity...
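
Point 4 is at least easy to check for from inside any environment. This stdlib-only snippet just reports whether the per-user site-packages directory (the one `pip install --user` writes to) is visible to the current interpreter:

```python
# Report whether the per-user site-packages directory is on sys.path.
# If it is, packages installed there can shadow what's in your env.
import site
import sys

user_site = site.getusersitepackages()
leaking = bool(site.ENABLE_USER_SITE) and user_site in sys.path

print("user site-packages:", user_site)
print("visible here:", leaking)
```

Running the interpreter with `python -s` (or setting `PYTHONNOUSERSITE=1`) is one way to keep the user directory out of the path regardless of how the environment was created.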

I feel like I've made it out the other side but I don't know if I could ever
communicate what I've learned to a neurotypical.

~~~
zeristor
What about Anaconda? Don't they test that things run together? I imagine
updates might be a bit slower, though.

I pity the neurotypicals

~~~
PaulHoule
Anaconda is part of the problem, not part of the solution.

Here are a few reasons:

1. Continuum Analytics doesn't have a real business model, and someday it will go away.

2. In the bad old days, before wheels, there really was a problem integrating C libraries with Python. Wheels work basically the same way conda packages do, so that problem is gone. You can get an "official Intel MKL" NumPy; the only trouble is that (officially) the Python packaging system has no way of saying "these N packages all satisfy the same dependency, so install whichever one is best for you".

3. "Testing together" is a wild goose chase, because the combinatorial explosion of different libraries you could test together is practically unlimited.

4. Anaconda has for a long time been unable to package a "tensorflow-with-gpu-that-just-works" because NVIDIA won't let them. Without that, the window is broken, and Anaconda is not giving you a problem-free experience.

5. Numerous things about the way Anaconda is implemented make it a lot slower than it has to be.

6. Since Anaconda doesn't confront all of the problems I've mentioned above, you will still need to implement stabilizers yourself, and if you're going to do that you can just build on wheels and have fewer things to understand and fewer things that can go wrong.
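
A "stabilizer" of the kind point 6 describes can be as small as a pinned-manifest check that fails fast when the running environment drifts from what you tested. The package names and pins below are hypothetical, and the version lookup is stubbed so the sketch runs anywhere; in a real environment you would pass the default `importlib.metadata.version`:

```python
# Sketch of a stabilizer: compare installed versions against a pin list.
from importlib.metadata import PackageNotFoundError, version

def check_pins(pins, get_version=version):
    """Return human-readable mismatches for a {name: pinned_version} dict."""
    problems = []
    for name, wanted in pins.items():
        try:
            got = get_version(name)
        except (PackageNotFoundError, KeyError):  # KeyError: stubbed lookups
            problems.append(f"{name}: not installed (want {wanted})")
            continue
        if got != wanted:
            problems.append(f"{name}: have {got}, want {wanted}")
    return problems

# Stubbed lookup so the example is self-contained:
fake = {"numpy": "1.26.4", "scipy": "1.10.0"}.__getitem__
print(check_pins({"numpy": "1.26.4", "scipy": "1.11.4"}, get_version=fake))
# -> ['scipy: have 1.10.0, want 1.11.4']
```

Wiring a check like this into CI or an application's startup is one way to "get complete control of part of the space" before expanding it.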

~~~
adamson
Can you expand on point 4? Conda-forge releases perfectly fine tensorflow-gpu
builds, with the caveat that they don’t ship stubs or the actual NVIDIA driver
with them so it’s not truly standalone, but the same can be said of pytorch or
really any GPU-enabled package.

~~~
PaulHoule
That's exactly what I mean.

A particular build X of TensorFlow requires version A of the CUDA library,
version B of the cuDNN library, etc.

It is a common situation, if you work on a data science team or want to play
with models you find on GitHub, that some of them require X1, A1, B1 and
others require X2, A2, B2.

The CUDA and cuDNN libraries are ordinary userspace libraries so if you
package them for anaconda you can install them into a virtualenv and have
different versions of the libraries sitting side by side and never get an
error because the library versions don't match -- and I've done that on both
Windows and Linux.

Anaconda can't ship conda packages like the ones I describe because NVIDIA
insists that you download the libraries from their website, register to get
senseless spam, screw around with installers, etc.
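
The side-by-side setup described above looks roughly like the following with conda-forge-style packaging (as adamson notes, conda-forge does ship such builds); the exact versions here are illustrative, not a tested matrix:

```shell
# Each env pins its own CUDA/cuDNN userspace stack; nothing clashes
# because the libraries live inside the env, not system-wide.
conda create -n tf1 -c conda-forge tensorflow-gpu=1.13 cudatoolkit=10.0 cudnn=7.4
conda create -n tf2 -c conda-forge tensorflow-gpu=2.1  cudatoolkit=10.1 cudnn=7.6

# Each env loads its own libcudart/libcudnn; only the kernel driver is shared.
conda run -n tf1 python -c "import tensorflow as tf; print(tf.__version__)"
```

Only the NVIDIA kernel driver remains a machine-wide install; everything in userspace can differ per environment.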

------
renoir42
Scripts != "dev" language. Between the lack of static types (mypy is an
improvement, but compared to TypeScript-style annotations in VS Code...) and
the packaging state we are in, you are pretty much forced to have one
virtualenv per key functionality / magic library dependency. So my guess is:
be prepared to use some "simple" web service (Flask?) and wrap your magic
library dependencies as pseudo web services in their own virtualenvs, if not
docker/gpu-docker containers. Multithreading is a failure anyway (it might
get better with the per-interpreter work targeting 3.8+). Less exciting than
the promises of the past, but at least "kubernetes + virtualenvs + web
services" will work and be testable. Or just use
javascript/scala/ocaml/go/erlang/elixir/... + c++/cuda/opencl (real
platforms) ;-)
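
The "wrap each magic dependency as its own web service" pattern can be sketched with nothing but the stdlib; `wsgiref` stands in for Flask here, and `predict()` is a placeholder for whatever heavyweight library is pinned in that service's virtualenv:

```python
# Minimal JSON-in/JSON-out service wrapping an isolated dependency.
import json
from wsgiref.simple_server import make_server  # stdlib WSGI server

def predict(payload):
    """Placeholder for the 'magic library' call isolated in this venv."""
    return {"echo": payload, "model": "stub-v1"}

def app(environ, start_response):
    """WSGI app: read a JSON body, return predict()'s result as JSON."""
    size = int(environ.get("CONTENT_LENGTH") or 0)
    payload = json.loads(environ["wsgi.input"].read(size) or b"{}")
    body = json.dumps(predict(payload)).encode()
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

# To actually serve: make_server("127.0.0.1", 8000, app).serve_forever()
```

Each such service carries its own pinned environment (or container), and the rest of the system only speaks HTTP to it, which is the testability the comment is after.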

