>tools like type-hints, mypy, (frozen) dataclasses, Pydantic, etc. are trying to address at least the latter two points here (type-driven design and making illegal states unrepresentable)
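(For the unfamiliar, a minimal sketch of the "illegal states unrepresentable" idea those tools enable; `Order` and `State` are made-up names for illustration:)

```python
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    PENDING = "pending"
    SHIPPED = "shipped"

@dataclass(frozen=True)
class Order:
    order_id: int
    state: State  # only the enumerated states are constructible

order = Order(order_id=1, state=State.PENDING)
# order.state = "shiped"  # typo'd strings can't sneak in, and mutation
#                         # raises FrozenInstanceError at runtime
```
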
At some point you have to admit to yourself that it's the wrong tool for the job. When is that going to be?
Dependency/environment management is also poor given how popular the language is. Its popularity reminds me of PHP's in the past: flooded with carefree users.
I would strongly advise against using Python for anything other than PoC/experimental endeavours.
It's a scripting language, and should be used as such.
> I would strongly advise against using Python for anything other than PoC/experimental endeavours.
I agree with your points, but this seems to lean a bit too far the other way. Yes, Python has plenty of limitations and cases where there are better choices in programming language. But saying that it shouldn't be used in real production systems when there are so many examples of it being used effectively for precisely that seems a bit hyperbolic, no? It's like saying Javascript shouldn't be used in production.
I work in data science and engineering, so not using Python is not really an option. I'd love to use Julia and Rust instead, but the ecosystems and users aren't there yet.
Python continues to earn its reputation as the second-best option for most problems.
It's always a matter of choosing the least-bad solution with Python, and of patching over the language's shortcomings with afterthought tooling.
I don't know. I like Haskell, but practically I would still choose a dynamic language "with benefits" over Haskell for BE development. [1] For example, Python frameworks like FastAPI can enforce type discipline at the system boundary, and frankly it feels like a development sweet-spot. Rigour at the API level, but fast-and-loose reasoning can proceed as normal in the implementation.
It's a worse-is-better approach, to be sure, but it has a very appealing effort:result ratio.
[1] with the normal caveats -- every project is different, everybody's notion of a "backend" is different, etc. Haskell might be the sanest solution to some backend challenges.
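(The boundary-rigour idea can be sketched in stdlib Python alone; FastAPI and Pydantic do this with far more polish -- coercion, nested models, structured error reporting -- and `CreateUser`/`parse` here are hypothetical names, not their API:)

```python
from dataclasses import dataclass, fields
from typing import Any, Dict

@dataclass(frozen=True)
class CreateUser:
    name: str
    age: int

def parse(cls, payload: Dict[str, Any]):
    """Type-check an untrusted JSON-ish dict at the system boundary."""
    obj = cls(**payload)  # unexpected keys raise TypeError here
    for f in fields(cls):
        if not isinstance(getattr(obj, f.name), f.type):
            raise TypeError(f"{f.name}: expected {f.type.__name__}")
    return obj

user = parse(CreateUser, {"name": "alice", "age": 30})
# parse(CreateUser, {"name": "alice", "age": "30"})  # raises TypeError
```

Inside the handler you're free to be fast-and-loose again; only the edge is strict.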
FastAPI is hard to work with for anything more than marginally complex without digging deep into its internals.
If you want performance, or to apply more rigorous (read: enterprisey) development practices, you really are better off looking elsewhere.
However, you can get a data-science Python developer to put their code behind an API with minimal ceremony.
Type safety does not exist in Python. Type hinting does not solve this problem, neither does Pydantic. These are bandaids for huge shortcomings of using the wrong tool for the job.
Sure, but there's a continuum at play here. Haskell leaves some correctness on the table, right? -- you should be using Idris 2; or better yet, proving your API in Coq and extracting the code. From some perspectives, Haskell is the "worse" in "worse is better".
Personally I find python-is-wrong arguments to be a bit naive. "It's a crappy language that's only good enough to build prototypes..." -- like YouTube, for example? :) Ultimately, smart developers are smart, and can get work done with whatever tools they have at hand.
Just continually saying Python is the wrong tool for the job doesn't cut it. I've gotten a lot of mileage out of type hints in Python and have caught my share of bugs statically, so I disagree with your contention that it's merely a "bandaid for huge shortcomings". It's a helpful tool that serves a purpose.
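For what it's worth, here's the shape of bug I mean (`find_user` is a made-up example); mypy would reject an unguarded call at check time because the return type is `Optional[str]`:

```python
from typing import Optional

def find_user(uid: int) -> Optional[str]:
    users = {1: "alice"}
    return users.get(uid)  # None when the user is missing

# Unguarded use -- mypy would flag this line, since None has no .upper():
# greeting = find_user(2).upper()

# The guarded version type-checks and is safe at runtime:
name = find_user(2)
greeting = name.upper() if name is not None else "unknown"
```
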
You could pick at every tool, every language. What's better than Python at (since you brought it up) data science? Julia? That's hardly any more type safe, if that's an issue for you. Haskell? Good luck getting non-CS types to buy into the restrictions, and good luck porting everything you need. R? Slower than Python, even! I mean, I dunno, if you have something in mind that's better in every way than Python then I'd love to hear it.
R is way better than Python for most (all?) data science things. It's better at data munging, has more packages, it's array based so way more terse while being easy to reason about, has super easy C++ FFI, etc...
I dispute at least some of that. R has more statistical packages, perhaps, but can't compete with Python in terms of sheer array of packages and developer mindshare in general. This is no different for data science, where major tools are either Python-only or are accessed in R only through Python (like TensorFlow).
R is also hardly more terse in my experience, though perhaps that depends on style; I'm a tidyverse fan but it's not particularly concise.
Finally ... RStudio. It's just okay. If you're willing to use a language-specific editor that may not get keybindings right (I hope you're not an emacs user), it works fine. I like the RMarkdown integration. I don't use it, though, and I don't feel like I'm missing that much.
BTW, you don't mention what actually does make R better than Python: lazy evaluation allowing something close to syntactical macros. You'll never get a magrittr or a dplyr in Python.