Hacker News
Writing Python like it's Rust (kobzol.github.io)
538 points by thunderbong 10 months ago | 350 comments



Lots of comments here are stating that typing is half-baked in Python, and that if you gotta use types, you should use another language.

But that's missing the point that Python is still not meant to be the best at anything, but good at most things.

And in this case, it's exactly what you get: optional typing, with decent safety if you need it.

You can write a quick script or design seriously, explore in a shell or commit a public API to a file, start with untyped code and add types later.
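For instance, the gradual path might look like this (a toy sketch; the function and names are invented):

```python
# Day one: quick and untyped -- good enough for exploring.
def total(prices, discount):
    return sum(prices) * (1 - discount)

# Later: the same function with hints added, so mypy can check
# callers. Untyped callers keep working; nothing else changes.
def total_typed(prices: list[float], discount: float) -> float:
    return sum(prices) * (1 - discount)

print(total([10.0, 20.0], 0.5))        # 15.0
print(total_typed([10.0, 20.0], 0.5))  # 15.0
```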

That makes it cumbersome sometimes, yes. mypy is slow, and the typing system + match / case have many rough edges.

But it also keeps Python incredibly versatile.

It's amazing what you can do with this (not so) little language, and the whole ecosystem is always getting better and better.

Every time I try another language, I keep coming back to Python, because the fields I need tools for are vast, and it's the only language I'm pretty sure will handle a problem decently in all its various forms, especially the ones I didn't consider when I started.

There is immense value in this, because it keeps opportunities on the table no matter what you do. I often find myself, during a project, adding something I could only add because Python made it so easy.

Python is quirky, yet it's still damn practical.


On the one side, yes Python is incredibly versatile.

On the other side, I have worked on many Python projects, some of them fairly high profile, and I have seen exactly two kinds of Python codebases:

1. a few were written by extreme professionals, plugging every single hole, with ~100% coverage, plus considerable maintenance because every dependency upgrade tends to break something;

2. many that feel cobbled together, with undocumented (and often inconsistent) invariants "because something will eventually throw a TypeError if I make a mistake", undocumented metaprogramming, and heroic levels of maintenance because this stuff works until it doesn't.

Option 1. feels like "writing Python like it's Rust", but of course without any of the benefits of Rust either on performance or on safety.

Option 2. feels like "experiments running out of control". That is the unfortunate price of this incredible versatility.

I don't have a clear conclusion to this, except perhaps that Python is really good for the initial experimentation (when versatility actually gets you to initial results much faster), but really bad for... well, let's call it industrialization.


To be fair, a division between hell and heaven will happen with any language.

The question is: is this particular hell worth the result?

There is no generic answer to that of course, it just happens it has been the case for me during those 20 years.

First, you have to get to the industrialization phase at all, and you have to get there under the constraints of time, budget, and talent.

Second, Python does have fewer benefits for that phase than Rust or Haskell, but it's not impotent. Industrialization is just hard, no matter the language.

Because we are an industry where a lot of self-taught people get a career, we tend to forget this last point. Creating a serious computing system is engineering, and this includes the project management part, and being rigorous about a lot of things.

Granted, the rigor needs to be higher with Python once you reach that scale, and at a certain point (which I wish everybody would reach), you may want to ditch it. It's a good problem to have though.

Yet we have to remember not all industrialization attempts are equal. Most are really tame. I'd argue you could replace half the website codebases out there with a bunch of bash scripts on a VPS and they would still make money. So even in this context, Python can deliver a lot.


This is roughly my experience, too.

Static type safety has interesting YAGNI characteristics. For the bits of the code that must work, and where defects and regressions due to type errors may be subtle and difficult to detect, it's indispensable.

But it can also be an impediment to iteration. Sometimes the code you're working on is still so experimental that you don't really know what the best structure and flow of data will be yet, and it's easier to just bodge it together with maps and heterogeneous lists for the first few iterations so you can let the code tell you how it wants to be structured. Having to start with static types subtly anchors you to your first idea. And the first is typically the worst.

Something I really like about Python for this sort of thing is that I have a lot more ability to delay this industrialization stuff to the last responsible moment, and be selective in where I spend my industrialization tokens. Here's a list of the languages where I've found selectively rewriting the important bits in Rust to work well: C, C++, Python. I know which of those I'd rather use for prototyping, scripting, and high level control.

Relevant Fred Brooks quote: "Plan to throw one away. You will, anyway."


> But it can also be an impediment to iteration. Sometimes the code you're working on is still so experimental that you don't really know what the best structure and flow of data will be yet, and it's easier to just bodge it together with maps and heterogeneous lists for the first few iterations so you can let the code tell you how it wants to be structured. Having to start with static types subtly anchors you to your first idea. And the first is typically the worst.

That's a very interesting observation.

Let me add a counterpoint, though. Indeed, you will refactor everything. But with strong, static typing, you have a much clearer idea of what you're breaking along the way. Cue hundreds of anecdotes by each of us, when we broke something during a refactoring because a dependency was not obvious and there were not enough tests to detect the breakage. I've seen two of these in Python code just this week.


That was what I originally thought, as someone who grew up on static typing and then migrated to Python.

But what I've discovered in practice is that, during those early iterations, I don't really need the compiler to help me predict what will break, because it's already in my head. The more common problem is that static typing results in more breaks than there would be in the code that just uses heterogeneous maps and lists, because I've got to set up special types, constructors, etc. for different states of the data such as "an ID has/has not been assigned yet". So it kind of ends up being the best solution to a problem largely of its own making.

I'm also working from the assumption here that one will go through and clean up code before putting it into production. That could be as simple as replacing dicts with dataclasses and adding type hints, but might also mean migrating modules to cython or Rust when it makes sense to do so. So you should still have good static type checking of code by the time it goes into production.
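That cleanup step can be as small as it sounds. A hypothetical sketch (field names invented):

```python
from dataclasses import dataclass

# Prototype phase: a bare dict, fast to bodge together.
user = {"name": "Ada", "email": "ada@example.com"}

# Cleanup phase: the same shape as a dataclass, so a typo in a
# field name becomes a type-checker (and AttributeError) failure
# instead of a silent KeyError at runtime.
@dataclass
class User:
    name: str
    email: str

typed_user = User(**user)
print(typed_user.name)  # Ada
```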


As someone who has primarily gone in the other direction, my anecdata supports the opposite conclusion: static typing tends to help me to prototype over more dynamic languages, even if it takes (slightly) longer to physically write things down. I think this is because of two things:

1. If I'm prototyping things, I find I spend a lot of time trying to figure out what sorts of shapes the data in my program will have - what sort of states are allowed, what sort of cases are present, what data will always exist vs what data will be optional, etc. If I'm doing that in my head, I may as well write it down at the same time, and voila, types. So I'm not usually doing much extra work by adding types.

2. If I change my code, which I often do when prototyping (some name turns out to be wrong, some switch needs extra cases, some function needs more data), then that is much easier in typed languages than untyped ones. Many times my IDE can do the refactoring for me, and if that isn't possible, I can start making the change somewhere (e.g. in a type declaration) and just follow the red lines until I've made the change everywhere. One of the big results of this is that statically typed prototypes are often immediately ready to be developed into a product, whereas in dynamic languages, the prototype already bears so many scars from refactoring and the natural back-and-forth that comes from prototyping that it needs to be rewritten more or less from scratch. (The corollary to that being that I have never once had a chance to rewrite a prototype before releasing it to production.)

I can imagine that some of this comes down to programming/architectural style. I tend to want to define my types up front, even in dynamic languages, because types and data are how I best understand the programs I work on. But if that's not how you work, the tradeoffs might not be the same. The other side is that the type systems I regularly use are almost exclusively modern ones, with decent support for things like product and sum types, so that I can use relatively simple types to model a lot of invariants.


> But what I've discovered in practice is that, during those early iterations, I don't really need the compiler to help me predict what will break, because it's already in my head.

Reading this, I have the feeling that you're talking mostly of single-person (or at least small team) projects. Am I wrong?

> The more common problem is that static typing results in more breaks than there would be in the code that just uses heterogeneous maps and lists, because I've got to set up special types, constructors, etc. for different states of the data such as "an ID has/has not been assigned yet". So it kind of ends up being the best solution to a problem largely of its own making.

There is definitely truth to this. I feel that this is a tax I'm absolutely willing to pay for most of my projects, but for single-person highly experimental projects, I agree that it sometimes feels unnecessary.

> I'm also working from the assumption here that one will go through and clean up code before putting it into production. That could be as simple as replacing dicts with dataclasses and adding type hints, but might also mean migrating modules to cython or Rust when it makes sense to do so. So you should still have good static type checking of code by the time it goes into production.

Just to be sure that we're talking of the same thing: do we agree that dataclasses and type hints are just the first step towards actually using types correctly? Just as putting things in `struct` or `enum` in Rust are just the first step.
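As a hypothetical illustration of that "next step" (picking up the earlier "ID has/has not been assigned yet" example, with invented names): model the two states as two distinct types rather than one class with an optional field.

```python
from dataclasses import dataclass

# Instead of one class with `id: int | None`, make "not saved yet"
# and "saved" separate types, so functions that need a saved record
# can say so in their signature.
@dataclass(frozen=True)
class DraftOrder:
    item: str

@dataclass(frozen=True)
class SavedOrder:
    id: int
    item: str

def save(draft: DraftOrder) -> SavedOrder:
    # pretend the database handed back an id
    return SavedOrder(id=1, item=draft.item)

def ship(order: SavedOrder) -> str:
    # no `if order.id is None` check needed: the type rules it out
    return f"shipping order {order.id}"

print(ship(save(DraftOrder("book"))))  # shipping order 1
```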


> I have the feeling that you're talking mostly of single-person (or at least small team) projects. Am I wrong?

Small teams. And ones that work collaboratively, not ones that carve the code up into bailiwicks so that they can mostly work in mini-silos.

I frankly don't like to work any other way. Conway's Law all but mandates that large teams produce excess complexity, because they're basically unable to develop efficient internal communication patterns. (Geometric scaling is a heck of a thing.) And then additive bias means that we tend to deal with that problem by pulling even more complexity into our tooling and development methodologies.

I used to believe that was just how it was, but now I'm getting too old for that crap. Better to push the whole "expensive communication" mess up to a macro scale where it belongs so that the day-to-day work can be easier.


> Small teams. And ones that work collaboratively, not ones that carve the code up into bailiwicks so that they can mostly work in mini-silos.

Well, that may explain some of the difference between our points of view. Most of my experience is with medium (<20 developers) to pretty large (> 500 developers) applications. At some point, no matter how cooperative the team is, the amount of complexity that you can hold in your head is not sufficient to make sure that you're not breaking stuff during a simple-looking refactoring.


Sure but at that point we're probably not on the first iteration of code anyway. Even at a big tech company, I find it most effective to make a POC first-iteration that you prove out in a development or staging environment that uses the map-of-heterogeneous-types style development. Once you get the PMs and Designers onboard, you'll iterate through it until the POC is in an okay state, and then you turn that into a final product that goes through a larger architecture review and gets carved up into deliverables that medium and large-scale teams work on. This latter work is done in a language with better type systems that can better handle the complexity of coordinating across 10s or 100s of developers and can generally handle the potential scale of Big Tech.

There's something to be said that the demand for type systems is being driven by organizational bloat but it's also true that large organizations delivering complex software has been a constant for decades now.


Do you work in an organization that does this? Because most organizations I've seen who don't pick the approach of "write it like it's Rust" rather have the following workflow.

1. Iterate on early prototype.

2. Show prototype to stakeholders.

3. Stakeholders want more features. At best, one dev has a little time to tighten a few bolts here and there while working on second prototype.

4. Show second prototype to stakeholders.

5. Stakeholders want more features. At best, one dev has a little time to tighten a few bolts here and there while working on third prototype.

6. etc.

Of course, productivity decreases with each iteration because as things progress (and new developers join or historical developers leave), people lose sight of what every line of code means.

In the best case, at some point, a senior enough developer gets hired and has enough clout to warrant some time to tighten things a bit further. But that attempt never finishes, because stakeholders insist that new features are needed, and refactoring a codebase while everybody else is busy hacking through it is a burnout-inducing task.


> Do you work in an organization that does this?

Yup! I'm at a company that used to be a startup and ended up becoming Big Tech (over many years, I'm a dinosaur here.) Our initial phase involved building lots of quick-and-dirty services as we were iterating very quickly. These services were bad and unreliable but were quick to write and throwaway.

From there we had a "medium" phase where we built a lot of services in more strictly typed languages that we intended on living longer. The problem we encountered in this phase was that no matter the type safety or performance, we started hitting issues from the way our services were architected. We started putting too much load on our DBs, we didn't think through our service semantics properly and started encountering consistency issues/high network chatter, our caches started having hotspotting issues, our queues would block on too much shared state, etc, etc.

We decided to move to a model that's pretty common across Big Tech of having senior engineers/architects develop a PoC and using that PoC to shop around the service. For purely internal services with constrained problem domains and infrequent changes, we'd usually skip this step and move directly to a strictly typed, high performance language (for us that's Java or Go because we find them able to deal with < 15 ms P99 in-region latency guarantees (2 ms P50 latencies) just fine.) For services with more fluid requirements, the senior engineer usually creates a throwaway-ish service written in something like Node or Python and brings stakeholders together to iterate on the service. Iteration usually lasts a couple weeks to a couple months (big tech timelines can be slow), and then once requirements are agreed upon, we actually carve out the work necessary to stand up the real service into production. We specifically call out these two phases (pre-prod and GA) in each of our projects. Sometimes the mature service work occurs in parallel to the experimentation as a lot of the initial setup work is just boilerplate of plugging things in.

===

I have friends who work/have worked in places like you describe, but a lot of them tell me that those shops end up in a morass of tech debt over time anyway, eventually find it very difficult to hire because of it, and end up mandating huge rewrites.


That's nice! Feels like your company has managed to get Python to work well for your case!

Most of the shops I've seen/heard of don't seem to reach that level of maturity. Although I'm trying very hard to get mine there :)


>impediment to iteration

The "aha!" moment that converted me from a Clojure guy to a Haskell guy was realizing that types aren't an impediment to iteration, they are an enabler of rapid design iteration. Once written, code has a way of not wanting to be changed. Types let me work "above the code" during that squishy beginning period when I'm not sure 'what the best structure and flow of data will be'. Emotionally, deleting types is a lot easier for me than deleting code.

This is in no way saying that types are the One True Way™ ^_^ Just that I've found them, given the way my brain is wired, to be a great tool for iteration.
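One hypothetical way to "work above the code" in Python terms (the commenter is describing Haskell, so this is only an analogy, with invented names): sketch the data shapes and interfaces as types first, with no logic attached, since reshaping type definitions is much cheaper than reshaping working code.

```python
from dataclasses import dataclass
from typing import Protocol

# Design sketch: only shapes, no behavior yet. Cheap to delete or
# reshape while the design is still squishy.
@dataclass
class Invoice:
    customer: str
    total_cents: int

class PaymentGateway(Protocol):
    def charge(self, invoice: Invoice) -> bool: ...

# Once the shapes feel right, implementations get written against
# the Protocol; until then, nothing but the types exists.
inv = Invoice("Ada", 1200)
print(inv)  # Invoice(customer='Ada', total_cents=1200)
```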


Yeah, types are a kind of (usually) easy-to-write, automatically enforced documentation which I find endlessly useful during experimentation, if sometimes constraining. I'm sure that there are other means to achieve similar results (some variant of TDD, maybe?) but I haven't experienced them yet.


> Something I really like about Python for this sort of thing is that I have a lot more ability to delay this industrialization stuff to the last responsible moment

This is one of my favorite aspects of Python. I can start with every module in prototype form and industrialize each module as its design firms up. I can spend my early development getting an idea fleshed out with minimal overhead.


> I can start with every module in prototype form and industrialize each module as its design firms up.

That is definitely the theory. And this flexibility is indeed very precious for some types of work.

However... does it actually happen as you describe? I can count on half of the fingers of one hand the number of Python codebases that I've seen that actually feel like they've properly been reworked into something of industrial quality. All the other codebases I've seen are of the "I guess it works, maybe?" persuasion, quite possibly because there is always something higher priority than quality.


The thing is type hints in Python are less a code quality feature and more a quality of life feature for developers. As long as I've got descriptive argument names and docstrings I can just tell you how to use a method. Your IDE can at least tell you argument names.

Type hints help reduce cognitive load when someone else (or you in the future) is trying to use some code. If you have strict type requirements you're testing that inside a method or with a decorator or something (and verifying with tests).

Even a big project can hum along happily without type hints. They're also something you can add with relative ease at any point.


> The thing is type hints in Python are less a code quality feature and more a quality of life feature for developers.

They are absolutely a code quality feature.

> As long as I've got descriptive argument names and docstrings I can just tell you how to use a method.

Yes, you can, but that doesn’t seem to be germane to the argument, since “it is possible to communicate intended use without typing” doesn’t support your QoL vs. code quality argument.

With typing, the type-checker can statically find potential errors in your description, or in my attempt to follow it—that’s a code quality feature. (Of course, that it does provide a description, and a better chance that the description is correct, is also a QoL issue.)


> Yes, you can, but that doesn’t seem to be germane to the argument, since “it is possible to communicate intended use without typing” doesn’t support your QoL vs. code quality argument.

Jillions of lines of quality Python were written before type hints. They're not strictly necessary for writing quality code. If you find modern code that's high quality it probably uses type hints but type hints don't automatically make high quality code.


Wandering offtopic, perhaps, but I've noticed that this kind of behavior seems to strongly correlate with Scrum.

The work starts getting rushed toward the end of the sprint. Every two weeks, people start furiously cutting corners to meet a completely artificial due date. And then there's basically zero chance that you'll be able to get the PO to agree to cleaning it up in the next sprint, because they can't see the problem.

Scrum of course prescribes all sorts of adornments you can add to try and counteract this effect. But I'm a firm believer that an ounce of prevention is worth a pound of cure.


The two week rush + everything is always broken failure pattern is solvable.

Given dubious project management schemes, do the refactoring and cleanup first. Then the functional change is easier to review as it's against a sane background, and there's minimal planned cleanup after in case that phase gets dropped. Call writing the tests characterisation if that helps.


Oh, interesting observation. Would you say it is scrum itself or just the existence of arbitrary deadlines?


I think it's the application of Scrum to work that doesn't primarily fall inside the (Cynefin) "simple" domain-type work that Scrum was designed for.

It's fine if the work is straightforward and easy to estimate. But, if it isn't, things get problematic. There are three variables that interact with each other when working on a project: time, scope, and quality. If you pin down both your acceptance criteria and the time you have to implement (which is basically what happens in a sprint planning meeting), then quality is the only remaining variable you have to manipulate when things aren't going according to plan.


Something that just about works has often met the threshold for "solves a real problem" while also meeting the other requirements a product needs to be successful.

It's easy to make something that works, is well designed, and either doesn't solve a problem someone has or nobody knows about it.


I realize that this is the common wisdom these days: write code that works just well enough that we have a chance to fix before it does too much damage whenever it breaks. I suspect that this approach is strongly fueled by the unlimited VC money available in tech, since it means that any company can employ an unlimited number of full-time developers (and PR) just to handle catastrophes.

We'll see how that wisdom holds if/when/as VC money dries up and/or moves to other sectors.


> I suspect that this approach is strongly fueled by the unlimited VC money available in tech,

Well, I suppose we could trade anecdotes and counter-examples, but my position largely comes from my own experience rather than received wisdom (though there's plenty of that).

Instead I'll just say that I disagree, largely because a business is a complex and shifting arrangement of various factors competing for limited time and resources. Even in a software business, software is only one of those.


I definitely agree with your premises. Just not with your disagreement :)


> Having to start with static types subtly anchors you to your first idea. And the first is typically the worst.

Not just that, but dynamic typing conveniently allows you to try multiple ideas in parallel. In static languages, you tend to refactor The One Representation for a thing and try multiple ideas sequentially, which may or may not be better.

Of course, none of this is really inherent to the type system -- plenty of Python folks try various shapes for their data sequentially, and you can have multiple representations of the same data in a static language too. But I feel like the languages encourage a particular kind of experimentation, and sometimes either is more helpful than the other.


That is a very good point.

However, the other side of the coin is that in many Python codebases I've seen, people keep using multiple antagonistic ideas/paradigms in the same code, way past the point where it should have become clear that these ideas are actually incompatible and no amount of heroic hackery will solve the issues.


One of the core values in the Python community is, "we're all adults here."

I think, though, that a lot of Python programmers - particularly less-experienced ones - fail to realize that that typically functions as more of an expectation, perhaps even an obligation, than a liberty.


All this discussion is kind of interesting to me since in the beginning Python seemed to be focused on being the anti-Perl.

Rejecting "There's More Than One Way To Do It" in favour of "There should be one - and preferably only one - obvious way to do it" (IMO they did not find the right balance there in terms of python3 strings)

Ultra minimalism on syntax to the point of introducing invisible errors...

I suppose this greater leniency on approaches now is simply due to broad adoption.


Wholeheartedly agree. I'm a self-taught programmer who programs to get things done rather than just doing programming for fun (I started in the ML/data science field). That way, my code often resembles the caricature of the hacky codebases alluded to above. And I'm well aware of it.

I often have arguments about the utility of Python with my more technically solid engineer friends who keep telling me "Python doesn't scale" etc. And I often come back to the same point you highlighted: is the particular hell worth the result? For 85-90% of the cases the answer is a resounding yes.

For nimble teams, startup projects, internal tools: most of the code is often thrown away eventually. Python's beauty is that the language quickly gets out of the way. Everyone then is forced to focus on the complexity of the business domain, "does the code solve the business problem". The architecture/scale/speed problems will eventually arrive with Python. But most purist engineers overestimate how soon they will come. At most of the failed startup attempts I was part of, the PMF problem was more pressing to solve than software constraints.

Early days of YouTube, Dropbox, Instagram (and maybe OpenAI too) are big testament to this. I've made my peace these days by not fighting to prove Python is the best language etc. If someone tells me "Rust beats Python" kind of argument: I say I agree, wish them luck and focus on shipping things.

tl;dr: Python is still the best choice to "quickly deliver value and test your business hypothesis".


Python scales fine as long as you know microservices.

The issue is people assuming that they don't need to learn anything new to use Python.

You got people stating that using static typing in a dynamically typed language like Python is a good or reasonable idea. It's not.

But people don't want to put in the effort to learn things like dynamic typing and microservices.


How do microservices factor in this conversation? Do you find that they actually reduce complexity?


I think you have to grade the code in your project, i.e. not every piece of code has the same value: high value => more strict, low value => less strict.

Less strict code is something that you can throw away and rewrite without too much effort.

This has the implication that you have bulkheads (like in a ship) to separate high value and low value code, thus you have to avoid code that is toxic (i.e. magic) and spreads between them, e.g. ORM entities.

When you use the type system correctly you don't need 100% coverage. Now, I'm not a Python programmer, so I don't know how feasible that is with Python's type system, but in PHP I use the type system extensively and I don't write that many tests anymore, except for legacy code or when I need to test something specific, like an algorithm. By using the type system you can catch most programmer errors, and if you combine that with routinely throwing exceptions for any unwanted state, you will detect most bugs.
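The "routinely throw exceptions for any unwanted state" idea, sketched in Python rather than PHP (an invented example):

```python
def withdraw(balance: float, amount: float) -> float:
    # Fail loudly at the boundary instead of letting a bad state
    # propagate and surface as a confusing error somewhere else.
    if amount <= 0:
        raise ValueError(f"amount must be positive, got {amount}")
    if amount > balance:
        raise ValueError(f"insufficient funds: {amount} > {balance}")
    return balance - amount

print(withdraw(100.0, 30.0))  # 70.0
```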

The way unit testing has been interpreted and implemented, it has become more of a nuisance than a help in fast-moving projects. Usually what happens is that the team refactors some unit, multiple tests break, and time then needs to be spent rewriting those tests. But that means the tests were actually useless from the moment they were committed: if you can change them as you like, then when you change a test you are no longer testing the same thing.

Thus tests can also be graded, from high-value (i.e. stable) tests to low-value tests. Depending on how you organize your code, high-value tests will usually test a larger set of functionality in one run, like a system (or integration) test from call to finish; these should never change regardless of how much you refactor. High-value tests should only change when your business logic changes.

Low-value tests typically involve a lot of mocking and stubbing; those are pretty much pointless to commit after you've implemented your unit. Too much mocking and stubbing is a code smell.
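The grading idea can be made concrete: a high-value test asserts on observable behavior from call to result and never mentions internals, so it survives refactors (a toy example with invented names):

```python
# Implementation detail: free to be renamed, inlined, or replaced.
def _normalize(word: str) -> str:
    return word.strip().lower()

def count_words(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for word in text.split():
        w = _normalize(word)
        counts[w] = counts.get(w, 0) + 1
    return counts

# High-value test: input in, result out. It keeps passing even if
# _normalize disappears in a refactor -- only a change in business
# logic (what counts as a word) should break it.
assert count_words("The the cat") == {"the": 2, "cat": 1}
```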


This is my experience as well. There are very few codebases with 100% coverage and strict development practices. Moreover, if average developers get their hands on one of them, it will soon degrade.

The problem comes from the very "top", since CPython developers are in camp (2) and celebrate their development practices.

Other languages also have this divide of course but not to the extent that Python has.

As for maintenance: This is also true. I have rewritten a Python code base in C++. Even in C++ it is easier to maintain, because the C++ compiler does not break between versions and there are no new bugs. And modern C++ can look quite like Python.


>Option 1. feels like "writing Python like it's Rust", but of course without any of the benefits of Rust either on performance or on safety.

...but with the benefits of Python. You might not see them/need them but they are there :)


Fair enough.


I would like to see a codebase of the first kind. All larger python projects I have seen are of kind 2...


The Firefox build system feels to me like kind 1. The Synapse Matrix server also feels pretty close (although I don't understand why it doesn't use Pydantic or something like Pydantic).


I think this is a form of selection bias. Certain other languages wouldn't even allow a project like your point 2. to get started. In Python, the bar for something to start kind of working is quite low, even if you're a beginner, a scientist of another discipline who needs a little script to manage some data, a teen who's learning to code, etc. So of course you see a lot of half-baked stuff. It's important we distinguish between 1. and 2., but 2s might sometimes be better than nothing.


Absolutely. In some domains, the choice is between 2. and nothing – and 2. is better. That's true for Python or JavaScript, for instance (in each their domain).

I believe that we agree that there are actually many domains in which you really want 1., and if you can't have it, then "nothing" is generally preferable to 2.


How are any of these points exclusive to Python among the commonly used languages?

Everything you said applies equally to many C++ projects I work with as well


You are right, this is probably not specific to Python. It just happens that:

1. we're currently discussing Python;

2. anecdotally, the C++ projects on which I have worked were all case 1. ("code it like it's Rust") – I make no claim that this is representative, though.


I see exactly this with Ruby. In that regard, Ruby and Python too, are very alike.


> But that's missing the point that Python is still not meant to be the best at anything, but good at most things.

The most important thing about Python is readability. It's part of its syntax, and Python is one of the best languages out there for readability.

Zen of python:

  >>> import this
  The Zen of Python, by Tim Peters

  Beautiful is better than ugly.
  Explicit is better than implicit.
  Simple is better than complex.
  Complex is better than complicated.
  Flat is better than nested.
  Sparse is better than dense.
  Readability counts.
  Special cases aren't special enough to break the rules.
  Although practicality beats purity.
  Errors should never pass silently.
  Unless explicitly silenced.
  In the face of ambiguity, refuse the temptation to guess.
  There should be one-- and preferably only one --obvious way to do it.
  Although that way may not be obvious at first unless you're Dutch.
  Now is better than never.
  Although never is often better than *right* now.
  If the implementation is hard to explain, it's a bad idea.
  If the implementation is easy to explain, it may be a good idea.
  Namespaces are one honking great idea -- let's do more of those!
  >>>


I don’t know how people reconcile “python is beautiful and elegant” with “name your file __init__.py or __main__.py” while keeping a straight face.


Heck, the package manager having to execute random code in every package (setup.py).

And speaking of beautiful and elegant, dunders everywhere, really?


> Heck, the package manager having to execute random code in every package (setup.py).

Nope! Not since the mid-2010s. It took a long time, but with wheels[1], install-time actions are finally separate from packaging-time actions, and the former do not include any user-defined actions at all.

Just about every sufficiently general system allows for arbitrary code in the latter, be they in debian/rules or PKGBUILD or the buildPhase argument to mkDerivation or—indeed—in setup.py. (Most systems also try to sandbox those sooner or later, although e.g. the Arch Linux maintainers gave up on cutting off Go and Rust builds from the Internet AFAIU.)

Don’t forget to `python setup.py bdist_wheel` your stuff before you upload it!

> And speaking of beautiful and elegant, dunders everywhere, really?

It’s the one reserved namespace in the language, so its usage for __init__.py and __main__.py seems—perhaps not beautiful, but fairly reasonable?

[1] https://packaging.python.org/en/latest/specifications/binary...


> Just about every sufficiently general system allows for arbitrary code in the latter, be they in debian/rules or PKGBUILD or the buildPhase argument to mkDerivation or—indeed—in setup.py. (Most systems also try to sandbox those sooner or later, although e.g. the Arch Linux maintainers gave up on cutting off Go and Rust builds from the Internet AFAIU.)

Most other programming language package managers don't; see Maven for Java.

But then again, NONE of the scripting languages ever wanted to learn stuff from Java, especially regarding packaging (lack of packaging namespaces is another major blunder re-created by Python and several others, including Javascript).


> Most other programming language package managers don't, see Maven for Java.

So it’s not able to express native extensions or even codegen then? I suppose that’s OK for something that’s not the only available solution, but I don’t think I’d love it, either.

> But then again, NONE of the scripting languages ever wanted to learn stuff from Java, especially regarding packaging[.]

Both Java and its docs are just extremely tedious to read, to be honest. I say that with all due respect to its principal designers and acknowledging my ever-hopeless crush on Self where a lot of the tech originated. (And I don’t only mean 2000s Java—it took me literal days to trawl through the Android frame pacing library to find[1] the two lines constituting the actual predictor, commented “TODO: The exponential smoothing factor here is arbitrary”. The code of the Go standard library, nice as that is, gives me a similar feeling with how... vertical it is.)

That’s not an excuse for ignorance, but it’s a centuries-old observation that you need to avoid inflicting undue pain on your readers if you want your ideas to survive. The flip side is that there may be untapped ideas in obscure or unpleasant texts written by smart people.

So if you can point to things one could learn from Java, I’d very much be interested.

(And no, literature searches are not trivial—I’ve occasionally spent months on literature searches, sometimes to realize that the thing I wanted either isn’t in the literature or is in someone’s thesis that’s only been published a year ago.)

[1] https://android.googlesource.com/platform/frameworks/opt/gam...


Well, they didn't need to read much about Java in this case, frankly. Just creating a simple Java project with Maven would have showed the groupId concept (namespaces preventing top level squatting), for example.

https://maven.apache.org/guides/getting-started/index.html#h...

Anyway, too late now, now Python & co. are finding their own, alternative, ways to retrofit stuff like this.


Lack of namespacing is a trap most language package managers have fallen into sadly: even Cargo sees it as an “advantage” not to have them, apparently.

I think that is extremely short sighted, and that all packages including standard ones should be namespaced.


That applies to Python 2.7 and the code that Tim Peters writes. It does not apply to current Python and the coding styles that most people employ.

Current coding styles are either:

- Java-like ravioli, with class hierarchies that no one understands.

- Academic functional and iterator spaghetti, written by academics who think Python offers the same guarantees as Haskell.

Both styles result in severely broken code bases.


Are you really saying that these are the two coding styles of Python? Any source for this claim?


and classes are most of the time not needed anyway, it's just Java programmers that aren't used to the idea that code can be perfectly correct and readable without a single class


Everything you just said is true for Typescript, as you can set it up as strict or as forgiving as you like, and writing one off scripts for node is just as easy as for Python. But unlike Python, it has a great type system.


Not really.

First, the JS ecosystem is very web oriented, so if you want to dabble outside of that, you're often going to fall short.

Secondly, JS packaging has very poor support for compiled extensions, which means everything that needs a perf boost is unlikely to get good-quality treatment.

Finally, the community makes it a constant moving target. After 20 years of writing both JS and Python, I can still install old Django projects that use 2.7 (did it 2 months ago), but JS projects from even 5 years ago are very hard to build.

Bottom line, I use JS for the web because I have to, given it has a monopoly on the browser (and now a lot of GUI work), but if I want to keep my options open, I would rather go Rust or Go than JS.


We gradually wanted to move our Java, C# and C++ into a more “generalised purpose” language because it would be easier to maintain and operate a small team with one language in what was becoming a non-tech enterprise. Python was our first go to, because well, it’s just a nice language that’s easy to learn, but we eventually ended up in Typescript and our story was basically the polar opposite to what you mention here.

We found the package support to be far superior in JavaScript. Even a lot of non-web things like working with solar inverters and various other green energy technologies (which at least here in Europe are very, very old school) were significantly easier with JavaScript than they were with Python. I guess FTP is web, but it's the example I remember best because I had to write my own Python library for it, because the most used packages couldn't handle a range of our needs. And that's FTP, not me shortening SFTP; no no, plain FTP is just what solar plant engineers figured would be practical. Sure, they recommend in their manuals that you don't put the inverters directly on the internet, but who reads those? Not solar plant installers, at least. Anyway, I fully expected Python to be king for those things considering it's what runs a lot of IoT, but JavaScript was just plain better at it.

Which is generally the story of our migration of basically everything to Typescript. Some of our C++ code ended up as Rust, and I really, really love Rust, but most of our code base is now Typescript. It might all have been Rust if Rust were a frontend web language, but it would never be Python. The reason for this is the environment. Not just the package and module systems, but also how you can set up a very "functional programming" style (in quotes because it's not really FP but JSFP) to cut down on the usage of OOP unless it's absolutely advantageous, plus type safety, specific coding conventions, and how easy it is to actually enforce all of those things automatically. It's just a much better experience with Typescript compared to Python, in our experience. I think you could very likely achieve some of the same with JSDoc and JS rather than Typescript, but we couldn't get that to work (as well) in our internal tooling.

Somewhat ironically, the two areas JavaScript wasn’t better than Python were in what I’d consider web-related parts of your tech stack. There aren’t any good JS SQL ORMs, Prisma is ok and so is MikroOrm but none of them compare to what’s available in Python. The other is handling XML, which is soooo easy in Python. I mean theoretically it should always be easy to handle XML, but that would require the XML you need to handle to actually adhere to any form of standards, which I guess is utopia.

But I guess you can have very different experiences.


I find Python works very well to keep my options open, but eventually there is no replacement for doing exactly what you did: your job as an engineer.

You evaluated the needs and figured out what tools you needed for the specific job you are doing.

That's what we are supposed to do.


I would have just kept Java and updated the language version and style.

Everything Java 11+, modern libraries (no Spring, no Hibernate).

The Java ecosystem is so deep that you can avoid the top 2 libraries/frameworks in any major domain and #3-#5 would be highlights in other ecosystems.


Java was never really an option because of how hard it is to hire for in my region of Denmark (and maybe the field of green energy). Java certainly has some presence at some of the larger tech focused enterprise orgs, but most developers we come in contact with aren’t interested in working with it. Not sure why considering C# is quite popular among them, but it is what it is.


Python has really poor support for compiled extensions. I know this sounds weird to say, given that they are used everywhere, but this is the number one pain point in Python. It’s really awkward to say, develop on Mac and deploy on Linux.


Might I ask what scripting-like language does have good support for compiled extensions? Such that you can easily develop on Mac and deploy on Linux?

Because it seems to me that once you compile something you are in the awkward world of ABI and CPU differences. And binary portability has been a pain point for programmers since before I was born (and I'm not that young).

So if there is a programming language that neatly gets around this problem, me and a lot of other folks would really like to know about it.


C# does. It's not exactly a scripting language, and authoring a native dependency NuGet package isn't exactly an obvious task, but once you learn how, it's a straightforward solution:

- a NuGet package with native deps: (win, linux, osx) cross join (x86, arm)

- a P/Invoke package that depends on the native one

- actual software that uses the dependency

When you publish the actual software, it pulls the deps and includes the correct native build.


F# by extension has this. It’s a pretty good scripting language although not perfect.


That's a characteristic of the language you write the extension in, not Python.

E.g., if you use Rust through maturin, cross-platform compilation is pretty decent: https://www.maturin.rs/distribution.html


I’m talking about the ecosystem. I was unable to get a small Python project with some mainstream native libraries to compile for Linux on a macOS host without Docker.

This works far better in Rust, for example.

Of course if Python wasn’t so dog slow we wouldn’t need so many native packages.


TypeScript doesn't have the fantastic numpy/scipy/torch ecosystem though.

I wish I could have TypeScript's block scope and type system but have access to Python's ecosystem. That would be a great combination.


With Typescript I have nothing comparable to Django.


It’s mad how there isn’t a Django clone in the JS world. They just stitch together half finished, buggy ORMs, migration tools and web frameworks? After all this time? Something like Django requires focus and concerted effort over a period of many years, so I guess it makes sense.

I get the impression JS devs would rather have a new framework with bugs and cool emojis in the commits than something more stable and less buggy.


I'd like a Django in Go! But so far I haven't found anything that productive. I guess the "traditional" web framework for multi-page websites is not trendy enough.


In the JS world it's more focused on NestJS and AngularJS.

At work we are moving to nestjs and I love it.


Typescript is way way better for one off scripts than Python - using Deno you can have single scripts that can actually import third party and first party dependencies reliably.

Neither of those work well in Python. To import third party libraries you need to use Pipenv or Poetry or one of the many other half baked options.

Importing first party dependencies (e.g. code from another file) is also a nightmare because the path searching strategy `import` uses is insane. It can even import different code depending on how you run Python!


Absolutely agree. I've been writing Typescript for years for work, but I constantly try and explore other languages. The only language I'm never 100% sure how to handle in terms of importing both local and third party code is Python. Virtualenvs are a joke, and version management is terrible. Local imports don't make sense too.


I’ve been a Python developer for about 15 years and it isn’t good at most things. Performance is bad, package management is bad, typing is bad, async is bad, etc. Mostly it shines these days because of early mindshare in an exploding niche (AI/ML/numeric computing).


This, after eight years with py I’m sick of it and of the stupid direction it’s taking


Hey now. Async in Python ain't that bad :)


I’ve seen a lot of outages in production which were very hard to debug because someone blocked the event loop with a sync call or some CPU-intensive thing. The failures weren’t in the route that was blocking the event loop, but all over the place, including health checks, which would cause instances to be bounced until the whole service fell over.

Go doesn’t have problems like this—you could theoretically block the event loop with something sufficiently CPU-intensive, but Go schedules work across all cores (and moreover, Go gets hundreds or thousands of times more work done per core than Python, so it’s far less likely to run into these problems) so this becomes highly unlikely.


Python can only be second-best at anything if you take advantage of its strengths and avoid its weaknesses. If you insist on taking advantage of its weaknesses, it will suck completely.

The typing system is a huge weakness. If you insist on focusing on it, it won't be a good language.


>Lots of comments here are stating that typing is half baked in Python, and that if you gotta use types, you should use another language.

Its weird that so many conflate "this would be nice to have" with "this is the most important factor when deciding on a programming language". I think, obviously, very few Python users have type safety as their most important factor when deciding on a programming language.


> very few Python users have type safety as their most important factor

This is a text-book example of selection-bias though.


The main drawback of Python is that it ruins you for other languages.

Python is a scripting language, it's great for scripts and one-offs, etc. But once you get above, say, 100K LoC it starts to get out of its sweet spot and into more direct competition with languages like, e.g. OCaml.


I agree.

In some cases I hate when I have to use Python for a task that really deserves full typings because Python’s type hints are so half baked. But all other times, I like just sprinkling them in where relevant. Python is great at being okay at everything. That’s a real power.


Hey!

This comment inspired me because I was working on data types at my company at the time, and I realized how static typing would have really benefited us.

So I made a library that does just that!

https://github.com/6r17/madtypes

I hope this will help some people!


Plus excellent testing support.

If you eg interact with remote apis a lot, you can write the natural code: some api calls for setup, then your business logic, then more api calls... and then test this. In one linear function that is straightforward to understand. With great mock support, no need for code generation for mocking, etc.

Ruby is one of the few languages sharing this excellent support. All the other common languages require some combination of dependency injection, code generation for mocking, spewing unnecessary interfaces all over your code and turning it into a tangled mess of logic smeared across many functions, etc.


> Python is[] not meant to be the best at anything, but good at most things.

I haven’t yet had the time to look at dataclasses and pattern matching in recent versions, so serious question: do we finally have a standard solution for doing algebraic-datatype-style stuff? I do have a generic visitor implementation lying around somewhere, but that doesn’t change the fact that every time I see any writeup of the type “Let’s do <a thing involving syntax> in Python!”, I spend most of my reading time wishing they had used SML or OCaml—and I don’t even know OCaml.


> Lots of comments here are stating that typing is half baked in Python, and that if you gotta use types, you should use another language

... I thought this is how most of us use Python? I find it repulsive to use without types. type_hints enable me to use it for Software Engineering, and I get to keep the Data Science stack, which is solid gold.


> the fields I need tools for are vast, and it's the only one that I'm pretty sure will handle a problem decently in all it's various forms

Whilst I agree with this stance to some extent, we should be clear that Python is not in fact the panacea you've made it out to be here. It is very possible that a particular problem requires performance Python can't match, so that any Python solution will be too big/ slow/ clumsy and must be rewritten in a better language. It is also very possible that a particular problem requires safety Python can't match, so that any Python solution will be too dangerous and must be rewritten in an appropriate safe language to avoid unacceptable losses.


> the panacea you've made it out to be here.

This is misreading my comment, at best.

> It is very possible that a particular problem requires performance Python can't match, so that any Python solution will be too big/ slow/ clumsy and must be rewritten in a better language.

This has been discussed again and again on HN, and the answer to it still stands to this day. So now I'm not giving you the benefit of the doubt.


> This is misreading my comment, at best.

If you don't like the characterization as a panacea, what would you prefer - jack of all trades maybe? All-purpose language ?

> This has been discussed again and again on HN, and the answer to it still stands to this day. So now I'm not giving you the benefit of the doubt.

I'd guess you're thinking you'll measure and just rewrite the hot code paths. The problem is in too many cases when industrialising software it's basically all hot, the heatmap just all glows red. Google people did talks about this maybe a decade ago, it's why Go ended up getting internal support, because if you write software the first time in a language with better performance you don't need to do the rewrite.

I think Python's actual strength is as a language for people whose job isn't primarily to write software. Let me give an example of a choice Python made (admittedly not for years) that is exactly what you should do for that audience, and then the opposite:

Ordered Dictionaries make dict have reasonable performance and yet also behave how naive users who have only a limited understanding of how the machine works would expect, which means they produce less buggy software in practice

Multiple Inheritance is too complicated to teach to a class of say, Geographers, and yet it's not really crucial for the underpinnings of the language, so why go to such lengths to support this feature ?
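The dict point above is easy to demonstrate; insertion order has been guaranteed since Python 3.7 (and was an implementation detail of CPython 3.6):

```python
# Naive counting code iterates the dict in the order items were
# first inserted, which is what a beginner would expect.
counts = {}
for word in ["sun", "rain", "sun", "wind"]:
    counts[word] = counts.get(word, 0) + 1

print(list(counts))  # ['sun', 'rain', 'wind']
```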


Who's using Hypothesis these days? Or any advanced testing paradigm, even?


It could have a better type system and be just as versatile.


But you can't overload a function name.


> the whole ecosystem is always getting better and better.

That is debatable. In many ways the ecosystem is getting more enterprisy, more ceremonious, overall "heavier". I feel Python ecosystem is rapidly losing its winning characteristic lightness that was so defining for it 10ish years ago.

I don't think "modern" Python code like in the article would inspire comics like this https://xkcd.com/353/


You are allowed to include more than one sentence in a paragraph here. It would substantially improve your comment.


I can’t get over how clunky Python 3 is getting. Does anyone like working with type hinting in Python? All of my code is typed but compared to basically every other type system the process was far more painful than it should have been.

Even years later I still keep printouts of the typing documentation next to me so I can save time when I invariably need to look up the multiple ways you can define ‘T’ when it would just be ‘class Foo<T>’ in almost any other language. I never have this problem C++/C#/TypeScript. I’d love to see type hinted Python look more like TypeScript.

I use Python every day and I like it less and less as it gets more of the features I want/need to use. I want types, I want generics, I want abstract classes, I want enums, I want interfaces. But I don’t want to have to import all of those features from modules when they should be built into the language itself.. When they’re built into the syntax of practically every other language that supports them.

When was the last time you saw a `Protocol` in the wild? Just add interfaces.

Teaching Python to a new developer is a ridiculous experience. There are just too many, usually half-baked, ways of doing things. Even this article, which does a great job going over some of Python’s newer/more advanced features, doesn’t use the latest version of the syntax or the generic versions of most types.

If I didn’t have such a massive investment in Python I would have moved to Rust by now. And don’t get me wrong, I love Python, I want to use it. I just wish it would stop eating its tail.

Let’s just do a Python 4. Even if it means another decade long 2/3 adoption foot dragging.
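For reference, `typing.Protocol` (3.8+) is the closest thing Python has to those interfaces today; a minimal sketch with made-up names:

```python
from typing import Protocol

class Greeter(Protocol):
    def greet(self) -> str: ...

class English:
    # Note: no inheritance from Greeter. The match is structural,
    # like a Go interface, and verified by mypy/pyright.
    def greet(self) -> str:
        return "hello"

def shout(g: Greeter) -> str:
    return g.greet().upper()

print(shout(English()))  # HELLO
```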


> [...] when it would just be ‘class Foo<T>’ in almost any other language.

Good news on that front at least. PEP-895 [1] removes the need for `T = TypeVar(...)` boilerplate in Python 3.12.

[1] - https://peps.python.org/pep-0695/
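A before/after sketch; the PEP 695 form needs 3.12+, so it's shown as a comment here:

```python
from typing import Generic, TypeVar

# Pre-3.12 boilerplate:
T = TypeVar("T")

class Box(Generic[T]):
    def __init__(self, item: T) -> None:
        self.item = item

# With PEP 695 (3.12+) the TypeVar line disappears and this becomes:
#
#   class Box[T]:
#       def __init__(self, item: T) -> None:
#           self.item = item

print(Box(42).item)  # 42
```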


Yes. Incremental change is good imo. They may actually be moving too fast.


Incremental change is good as long as the increments can themselves be changed. When you're promising indefinite backward compatibility, it's important to be VERY confident of each change before committing to it.

It seems like the price of all these small incremental changes in Python is that the language as a whole keeps falling further and further away from the 13th item in PEP20: "There should be one - and preferably only one - obvious way to do it." Also, for that matter, "Never is often better than *right* now."


I think you're right, but maybe it's time those mantras change.


A good IDE like PyCharm makes it pretty simple, will offer to write all the import statements for you when you reference something that's new to the file you're working on. I'm taking a pretty decent course on Udemy right now, but the instructor said they didn't see the value in PyCharm and they're writing all this extra boilerplate by hand in VSCode.

I started using copilot recently and that's accelerated the process even more.


These are not complicated tasks that should require a plugin or an AI assistant to make bearable. Just add the syntax to the language.


> These are not complicated tasks ... Just add the syntax to the language.

Hahah, I've never designed a language, but I've been around long enough to know these 2 phrases are incredibly at odds.


I've never seen another language with generic syntax as poor as Python's. What's a better solution: adding sensible syntax to the language, or requiring every programmer to use an AI agent to handle the byzantine imports?


Oh I totally agree with you on that. It's just that "just add syntax" is incredibly difficult even as a greenfield effort, much less trying to fix an existing language, especially one as old and used as Python.

If the (studied) metric that "if over 20% to 25% of a thing needs to be redone, restart it from scratch" is true, then a new language altogether is probably the way and let Python just be Python.


Personally, I switched to Go.


Personally, I switched to Go and soon after to Nim. https://nim-lang.org/ It's great!


Same, I love it! Having done ruby, javascript, python, I'm allowed one niche language (although I really hope it grows in popularity!)


I've started this transition as well. I love Python, but loathe its typing and anything to do with asyncio.


I've been thinking about getting into go - do you mind giving some pros/cons?


Some pros:

* Go is "simple" (the language has a "small surface area", if you will)

* Go has a broad and deep standard library that is generally well-documented

* Go comes with a reasonably good general-purpose build tool (it builds, formats code for you, runs tests, etc.)

Some cons:

* Go's structural subtyping is a pretty terrible approach to handling the problem it's meant to solve. It's not quite as bad as a runtime failure when a 'type' doesn't implement a method (since that would be caught at compile time in Go), but it leads to what I consider an abusive over-reliance on ad-hoc interfaces that just get in the way of understanding the code (runtime or test code)

* Go's error handling method is inefficient and generally awful. I'm not talking about the verbosity (it's a relatively small issue, in my opinion), I'm talking about that in Go errors are just strings with a gross, inefficient library and system for adding context

* Go's generics implementation is, to put it mildly, inadequate; doing anything mildly interesting with it will be a cumbersome chore

* Go's handling of dependencies...leaves something to be desired


>it leads to what I consider an abusive over-reliance on ad-hoc interfaces that just get in the way of understanding the code

I actually like it, from the architectural angle. Say, I have an entity (class, service etc.) which does only one thing X and it does it well. For a certain scenario, it wants also to do Y, and it wants to delegate it to a different entity, because it's not its responsibility. It doesn't really care how it's done and who will do it, it wants to just delegate it. Why should it know or care if there's an existing interface in some package? That's an implementation detail. The consumer specifies what it wants by declaring an ad hoc interface close to its own definition, and as a result, there's no explicit dependency on a different package. Sure, there will be duplication if several entities in different packages want similar interfaces but, as they say, duplication is cheaper than the wrong abstraction.

>in Go errors are just strings with a gross, inefficient library and system for adding context

Not quite true: errors are interfaces, there's a common pattern to construct an error from a string (because most of the time, that's all you need), but no one stops you from using other ways to construct an error. What is inefficient about it? It's no different from constructing an exception in Java/C# etc.

>Go's handling of dependencies...leaves something to be desired

Can you elaborate?


"duplication is cheaper than the wrong abstraction" is only true if the duplication: a) isn't also an abstraction (which, in go, it is in this case) and b) is "contained" (i.e. not overly used), which it often is not in non-trivial go code.

Go's errors package is riddled with `reflect` and other inefficient code constructs. Printing an error (e.g. to stdout or the log) is fine. But if you want to actually do anything with the error in the code (e.g. to discriminate/branch based on the error), you have to resort to bloated types that implement a poor interface or else rely on string inspection. In either case, you are using the underlying errors package. In small scale applications it's fine, but when you're doing anything at even modest scale and your application encounters errors, it's going to introduce measurable performance degradation.


>only true if the duplication: a) isn't also an abstraction (which, in go, it is in this case)

Can you give an example? I don't follow. I generally don't like abstractions for the sake of abstractions. I use Go's interfaces for a very specific reason: when I want dynamic dispatch, but with nice static typing guarantees. Interfaces in languages without structural typing force you to design a rigid, unflexible hierarchy/ontology well in advance, which only gets in your way when requirements change.

>is "contained" (i.e. not overly used)

Can you show why "containment" is necessary and why "overuse" is a bad thing?


We have a lot of Go services in production and errors have never been a performance issue. In the happy path, when there're no errors, errors are basically no-op. We don't use errors for control flow, though; only for exceptional situations, which aren't triggered often. If we want to branch based on an error, we use errors.Is. I don't remember ever having to inspect an error's string, that sounds like a hack. Usually, branching on an error's type is a rare scenario, even if it uses reflection, you usually just bubble up the error. In practice, at runtime, Go's error handling is just a bunch of TEST RAX, RAX instructions. Do you have benchmarks to show otherwise?


More Pros:

* Compiling to a single binary is awesome

* Rich library ecosystem at this point

* Backwards compatibility is a priority so you can be pretty sure code you wrote yesterday will work tomorrow

More Cons:

* Unused variables and imports are a compilation error which is INCREDIBLY annoying during development (number one frustration with the language)

* If you work with a team or a legacy project, you will almost certainly encounter panics from a nil dereference at some point (or in my experience basically all production bugs were the result of one)

* If you work with a legacy project, early lack of generics encouraged copy/paste spaghetti piles

* Due to early lack of real generics, many popular libraries (such as ent) used codegen to make generic behavior possible. Generated code balloons PRs (unless you take care to quarantine them to their own commit and share commit ranges for diffs), and gets in the way of understanding code you care about.


>Unused variables and imports are a compilation error which is INCREDIBLY annoying during development

It's easily solved with:

  _ = unused_variable

  _ "unused_import"
Sure, it's annoying, but not in an incredible way :) Before I knew about this trick, I used to temporarily delete or comment out all related code, which is indeed incredibly annoying.


I know about it. Still incredibly annoying. I shouldn't need to do anything, it should just be completely ignored unless I run a linter or compiler in pedantic mode.


Unused variables are an indication that the author hasn't completed their thought, so to speak, in the best case. In the worst case, it's a mistake and indicates the code is likely implemented in a way that it does something other than what it intended. I think making it a compiler error is the right way to do it. Other languages should adopt it.


The thing about software in development is that it isn't complete, practically by definition. I don't litter my code with unused imports and variables, I just have some stuff hanging out below while fixing it above. This is what linters are for, and unused variables and imports weren't the thing making software unmaintainable. Could even have a compiler flag that errors on unused for prod builds. There are a bunch of ways to skin a cat.


> Go's handling of dependencies...leaves something to be desired

Can you give specifics? I think publishing module versions as entire subtrees is very verbose and can be cumbersome, but otherwise I enjoy everything about Go modules. I find most people's complaints are that it's not like <npm|pip|maven>


That it's not like npm, pip, or maven is a plus in my view: I despise all three of them.

I don't like the proxy system at all. It is not, in fact, easy to set up a private proxy that "just works." Also, it is quite trivial to have "sync" issues with the go.mod and go.sum files that manage/"lock" dependencies. `go mod tidy`, `clean -modcache`, etc., are required far too often. And it has the same problem with dependencies-of-dependencies that python does. It leads to bloat and sometimes inconsistent behaviors in applications.


> And it has the same problem with dependencies-of-dependencies that python does. It leads to bloat and sometimes inconsistent behaviors in applications.

I haven't run into that first hand. Is there a dependency management tool that prevents this? Cargo? (I don't have much experience with Rust yet FWIW)


I don't have a ton of go experience but it seems like there isn't a way to declare tool/binary dependencies that aren't imported anywhere. We ran into this problem for codegen libraries.


I recommend you consider Rust if you can. I'm not going to lie: it's definitely harder to learn than Go, but the benefits are immense, as Rust is serious about C-level performance and encompassing the ample capabilities of C++ while bringing some very commendable goals (i.e. safety) into the binary-compiled language arena.

And believe me: once you pass the first slope of learning (borrowing, lifetimes...), everything clicks just right and writing code becomes a very delightful experience, similar to what you get with [pick a trendy dynamic language of your choice].

And I think it's safe to say that Rust in 2023 can be considered mature and non-niche.

If you'd like sheer enthusiasm to help you make your mind, I recommend this guy's YT series on Rust:

https://www.youtube.com/watch?v=Q3AhzHq8ogs&list=PLZaoyhMXgB...


I do. It works well.


I went through a similar journey, without the Rust part. Started using type hints, data classes, pydantic to get the benefit of static typing after dealing with the pain of refactoring dynamically typed projects.

It was better, but it _feels_ like lipstick on a pig. I love Python, but types are not what it's best at. It's missing features that make typing easier.

I've realized that if I was going to write typed Python, I might as well switch to a language with better support for that, and switched to .NET.

I can do almost all the dynamic things that I can do in Python with C#, but in a type-safe way. Expressions and extension methods, plus reflection with `nameof`, open up a lot of possibilities.

One thing that kept me using Python was SQLAlchemy, an ORM, and a damn good one. Entity Framework Core, after v6, is a better one, especially with LINQ.

If you have the opportunity, give it a try. You might be surprised.


Another difference you might be surprised by is that the .NET tooling by default collects various data from your system and sends it to Microsoft [1].

If you want to avoid this (and still want to use .NET) you'll have to make sure that the environment variable DOTNET_CLI_TELEMETRY_OPTOUT is set to 1 in all contexts before touching anything.

[1] https://github.com/dotnet/sdk/issues/6145


> DOTNET_CLI_TELEMETRY_OPTOUT

I didn't know about this. I wonder how that sits legally with local data protection rules (EU).


I doubt the way they approach it is legal, but I don't think any of the DPAs have time to look into it.

For what it's worth, the tool prints a clear warning the first time you invoke it. You shouldn't need to opt out regardless, but they do at least communicate their stalking.


> I can do almost all the dynamics things that I can do in Python with C#, but in a type safe way.

If you like flexibility with a good type system, give TypeScript a try; it's miles ahead of mypy, and it's designed by the same person who's behind C#.


Typescript is of course nice and is a great type system but can’t totally paper over the absolutely impoverished base language that is JS. When you use Python and things like equality do what almost anyone wants instead of checking object identity, it’s real annoying to futz around in TS


  [1, 3, 12].sort()  // [1, 12, 3] (default sort compares elements as strings)

  5 in [5]           // false ("in" checks keys/indices, not values)


Even setting the sort order aside, just the fact that it both returns the sorted array and also sorts it in-place is not great. I prefer Python's distinction between sorted (returning a sorted copy) and list.sort (sorting in place and returning nothing).
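To make the contrast concrete (standard Python, nothing beyond the stdlib):

```python
nums = [1, 3, 12]

# Python compares numbers as numbers, not as strings like JS's default sort
print(sorted(nums))  # [1, 3, 12]

# sorted() returns a new list and leaves the input alone...
shuffled = [12, 1, 3]
print(sorted(shuffled))  # [1, 3, 12]
print(shuffled)          # [12, 1, 3]

# ...while list.sort() mutates in place and returns None,
# so you can't accidentally rely on both behaviors at once
print(shuffled.sort())   # None
print(shuffled)          # [1, 3, 12]

# And membership does what you'd expect
print(5 in [5])          # True
```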


I don't think this is a completely accurate comparison, as TypeScript is transpiled to JavaScript and this provides restrictions to the expressibility and usability of types at runtime, which I've sometimes hit my head against. C# has much more type information available at run time, as does Python.


I'm a full stack dev so I'm also using TS at the same time. While the flexibility is nice, it's too flexible for my taste. To give one basic example, I want to go to the definition of a symbol with a single action. Not possible with the current TS tooling and all the crazy things you can do with its type system.

I want the type system to be predictable, so it can support whatever I want to build on top of.

There are features I'm missing with C#, but still I'll choose it over TS any day.


Even better would be to try F#, it will be able to use all the .NET libraries, has Python's succinctness, but with a strong type system (and inference).


I may switch to it eventually, but at this point in my career (16 years in), I try to avoid seemingly niche things when I just want to get stuff done. "Boring" is good. I know it's the same platform under the hood, but I decided to spend my "innovation tokens"[0] on the stuff I'm building.

[0]: https://mcfunley.com/choose-boring-technology


I call it "novelty budget" and I completely agree.


I'm also 100% convinced most people who use mypy don't realize the myriad ways it just silently stops typing things or silently crashes with a 0 exit code. Even if you configure it to warn on untyped functions etc., it will still just not work properly in some circumstances, and you will literally never know until you debug a bug that just happened to trigger it. There are over 1.4k open bug tickets; it's such a broken piece of software: https://github.com/python/mypy/issues?q=is%3Aissue+is%3Aopen...

The involvement of Guido in mypy is such a tragedy.


I think it's probably no mere coincidence that the article recommends pyright, not mypy.


Seconded, C# is a great language these days. LINQ is just magic (in a good sense of the word), doubly so when it’s seamlessly translated into SQL by Entity Framework.


Django has kept me chained to Python. Just can't beat admin/migrations/ORM. I've often wondered why a true Django-like project has never been born out of the JavaScript/TypeScript ecosystem.


> One thing that kept me using [language X] was [insert awesome library here]

And that's why I keep [language X], because sometimes the equivalent in [language Z] is not as good.


If [awesome library] is the primary thing your code does, then sure, you might be stuck with [language X]. But if it isn’t, then perhaps the [language Z] replacement might be good enough (or better but not yet well understood, as in the case of the OP), or you might keep [language X] just for that small component.


A typical pain point where type hinting falls short in Python is with lists of some element type: it can't be enforced by the language at runtime without extra checks.

For example, you can't do:

  if isinstance(x, list[int])

though it would be super useful
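Concretely, Python 3.9+ rejects it at runtime, so you end up writing a manual helper instead (the name `is_list_of_ints` is made up):

```python
x = [1, 2, 3]

# isinstance() refuses parameterized generics outright
try:
    isinstance(x, list[int])
except TypeError as exc:
    print(exc)  # isinstance() argument 2 cannot be a parameterized generic

# The manual check you end up writing instead
def is_list_of_ints(value) -> bool:
    return isinstance(value, list) and all(isinstance(v, int) for v in value)

print(is_list_of_ints([1, 2, 3]))   # True
print(is_list_of_ints([1, "two"]))  # False
```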


You need mypy or a similar tool, which checks this statically rather than at runtime:

    els: List[int]
To check it at runtime you can use typeguard's @typechecked decorator. For convenience you can also use Pydantic.


I didn't know about typeguard. Thank you for the recommendation!

How do you do this in Pydantic though?


For this example pydantic would be overkill, but you can use it with class definitions that follow standard Python type hints, with additional utilities like YourClass.parse_obj(d) that parse a dict and raise a validation error if it doesn't match, using mostly standard type annotations.

It can also be used to generate openapi swagger files from the types and things like that.
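To show the shape of the idea without installing pydantic, here's a rough stdlib-only sketch; `Point` and its `parse_obj` are hypothetical stand-ins, and real pydantic does far more (coercion, nested models, detailed error reports):

```python
from dataclasses import dataclass, fields

@dataclass
class Point:
    x: int
    y: int

    @classmethod
    def parse_obj(cls, d: dict) -> "Point":
        # Poor man's pydantic: build the object, then verify field types
        obj = cls(**d)
        for f in fields(cls):
            if not isinstance(getattr(obj, f.name), f.type):
                raise TypeError(f"{f.name} must be {f.type.__name__}")
        return obj

print(Point.parse_obj({"x": 1, "y": 2}))  # Point(x=1, y=2)

try:
    Point.parse_obj({"x": 1, "y": "oops"})
except TypeError as exc:
    print(exc)  # y must be int
```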


Yes I know, I'm just pointing out that you need additional tool to check that.


What can .NET do that Python+mypy cannot?


At some point you gotta ask yourself: Why am I still writing Python then, if I want to write Rust? If I want Rust's safety, why not write Rust then, with battle proven tools and better type system from the start?

Often the answer to this question unfortunately is not a technological merit, but knowledge of a team and willingness to learn. You might want to write actual Rust code and there can be any number of benefits, but if your team does not want to learn Rust, you got a problem and cannot move on.

Here it becomes visible how important it is to learn multiple programming languages. It makes teams flexible and allows choosing technologically better solutions, instead of having to shoehorn things into the one language the team knows. Programming languages matter. Each has its strong areas and flaws. Some languages even allow you to change the language itself, extending what you can do with them comfortably and safely.

All this assumes, of course, that you will have suitable replacements for any libraries you need. Such replacements can include writing some code oneself, using the language's facilities. For example, if the language is powerful enough or includes the right batteries for it, you can easily write a little parser for a file format yourself, using a grammar that is available in some standard. But keep maintenance in mind: if the file format is still developing, you might not want to do this.

Something that in one language might be an external dependency, a library, can in another language be a concise self-written code, that one can look at and quickly understand.


> At some point you gotta ask yourself: Why am I still writing Python then, if I want to write Rust? If I want Rust's safety, why not write Rust then, with battle proven tools and better type system from the start?

Garbage collection


To what end? I think avoiding manual memory management is great, but I never had issues in Rust either, since I don't manage memory manually there either. I think that is kind of the point of Rust: it avoids that whole class of bugs common in C programs.

But to what end do you want garbage collection (GC)? Just for having GC? Or a more specific purpose, that is difficult to attain with Rust's model?

For example: I usually want tail call optimization in languages I use. But not for it in itself, but for being able to write functions nicely, not having to worry about recursion depth, expressing things more declaratively, avoiding mutation (at least of an explicitly managed stack or any loop variable) and probably other things that don't come to mind right now.


GC is nice because you don’t have to constantly think about ownership and lifetimes.


The amount of time spent worrying about this is vastly overestimated. You don't really have to worry about lifetimes in your common day to day coding unless you're doing some seriously performant or low level stuff. At which point you probably don't want a GC anyway.


I’ve written a fair bit of Rust but it was ~5 years ago. I understand some ergonomics have improved but I think this somewhat depends on the type of application you are developing.


Just for having GC. When I'm writing Python, I am fine with not saving every CPU cycle and every byte of RAM, but having types saves my productivity.


I left Python once I found Scala. There are a fair few ML-family languages (and I'd argue that the ML-family functionality is most of what people love about Rust) with garbage collection; OCaml is frequently mentioned.


You're absolutely right that:

> Often the answer to this question unfortunately is not a technological merit, but knowledge of a team and willingness to learn.

And for the software engineer, this is also true:

> [it is] important ... to learn multiple programming languages

But I don't think the majority of python users are software engineers. They're data analysts, they're scientists, they're students. They'll never face a choice between Python and Rust, let's just be happy they chose Python over Matlab or Excel.

And with that in mind, writing Rust-shaped python so we can interoperate with our less-softwarey-brethren--while not the nirvana that we crave--is not a bad compromise.


100% this. As one of those people graduating from "Maker of Excel-abominations", I absolutely love that python can shape itself to my learning curve, while being useful the whole way through.

Articles like this expose people like me to concepts of typing that I never would get otherwise, and by practicing the concepts in python I might eventually be able to make the leap.


It's worth using a type checker for long enough to get a feel for how it can find bugs that would take you much longer to find at runtime. From there it's pretty easy to imagine how languages designed from the get-go to do this might do it even better.

It's also worth asking if the people who read your code are going to disengage when they see a big pile of type hints. It can be "better" in some abstract sense and still worse for the task at hand.


Please don't.

This is about people trying to use the wrong typing system "static typing" in a language that already has a better typing system "dynamic typing".

This is about bad programmers who have come from Java and want all other programming languages to look like Java.


Not better. Different.

Languages make different choices to suit different domains, and with perfectly good reasons.

Don't complain, embrace the richness and diversity of programming languages. And choose the right one for each task.


It's better in the context of a programming language designed to use it.


If you're scaling Python to the point where you need everything fully-typed, your next stop shouldn't be Rust, it should be something like Java or C#. You just don't need Rust's memory semantics, and the GC will be good enough.


Because Python enables faster iterations. By the time you manage to produce a Rust program that even compiles, you can do several edits with tests in Python.


`cargo new --bin cli`. There, it compiles.

I use Rust because I want to move fast. Python would slow down my iteration significantly. I have a lot of experience using it and I prefer not to whenever possible.


There are no high productivity web frameworks for Rust. That's what Python is used for (Django). Plz don't bother to list actix or axum, or askama. That's a joke compared to Django.


That's only until you've grokked Rust. I find iterating in Rust to be faster than Python.


Iterating in Python is always faster than iterating in Rust. If you really find what you are describing, then quite simply you aren't able to write Python code.


I thought this too before my current python project.

It doesn't even use type hints and has no tests.

When I look at something, I don't know what it is or what it does, and I have to debug (mentally or actually) just to find out what's happening. If I change something, a few runtime errors pop up right at the start. Others take 30 minutes.


This may shock you but there is effort involved in learning how to use Python effectively.


What has this to do with my comment?


"Others take 30 minutes"

If you knew what you were doing with Python, you wouldn't have this problem. Obvious solutions include unit tests and pickle.


Unit tests wouldn't help with that, because they don't cover the interactions of all the classes when they pass data on to each other, and they would probably take me months to write.

I would need integration tests, for which I first need to understand how the code works together


"all the classes"

Write simpler code, let go of the complex Java class hierarchies.

No one can understand them, it's hard to reason about them, and they are mostly a mess.

The point of Python duck typing is you do more by writing less code.


Data flows through functions as well... Or between classes that are injected, no hierarchies required. Anyway, I can't make wishes about how the code was written.


No, because to iterate in Python, you have to run the program and get it into the right state. For many things in Rust, I can just use `cargo check`.


From that statement, I'm not convinced you know Python nor Rust.

I'm going to suggest you read the following: https://docs.pytest.org/en/7.3.x/ https://doc.rust-lang.org/rust-by-example/testing/unit_testi...


Are you saying the only quick way to iterate is unit testing? My point is that there's a whole class of things that require unit testing in Python that are just automatically caught in Rust.

If you don't know Rust well, it's going to be slow going. If you've internalized the way it works, it can be very fast to iterate. Especially when you learn to start structuring types in such a way that bad states aren't even representable. You can often make it impossible to construct inputs that would require debugging in Python.
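The same idea carries over to typed Python. A small sketch with hypothetical `Success`/`Failure` classes: a tagged union means the "status says ok but there is no value" state simply cannot be constructed:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Success:
    value: int

@dataclass
class Failure:
    reason: str

# Instead of one class with optional fields that are only sometimes valid,
# each variant carries exactly the data that makes sense for it
Result = Union[Success, Failure]

def describe(r: Result) -> str:
    if isinstance(r, Success):
        return f"ok: {r.value}"
    return f"failed: {r.reason}"

print(describe(Success(42)))         # ok: 42
print(describe(Failure("timeout")))  # failed: timeout
```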

> From that statement, I'm not convinced you know Python nor Rust.

Thanks for the personal attack. I use both Rust and Python regularly at work, and write tests for both. Working on the Rust code base is pretty much always faster, because so many things are caught in advance. Yes, unit tests are still needed, but not for as many things. Learning to use the type system to your advantage, instead of treating it as an obstacle, can really speed things up.


In my case I write Python like this because the colleagues who are using my software are hardcore engineers but only intermediate or beginner programmers. Well-designed APIs with IDE support and safe patterns protect them from a lot of potential mistakes they didn’t know they could make. I won’t be able to ask them to learn Rust out of time constraints.


I like these ideas, and practice many of them regularly. Unfortunately one gets the feeling that they're swimming upstream a bit of the time.

For instance: data classes instead of tuples or dicts. Great, love it.

Now add immutability (frozen=True isn't enough, so you're probably going to need pyrsistent) and serialization (dataclasses_json works reasonably well for this), and precommit hooks with pyright or mypy... And you've got yourself a reasonable system for leaning on the type checker.
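For what it's worth, `frozen=True` plus `dataclasses.replace` does cover the shallow case with zero dependencies; it's nested updates where pyrsistent earns its keep. A stdlib-only sketch with a hypothetical `Config`:

```python
from dataclasses import dataclass, replace, FrozenInstanceError

@dataclass(frozen=True)
class Config:
    host: str
    port: int

cfg = Config("localhost", 8080)

try:
    cfg.port = 9090  # attribute assignment is blocked at runtime
except FrozenInstanceError as exc:
    print("blocked:", exc)

# "Mutation" becomes making an updated copy, Rust-style
cfg2 = replace(cfg, port=9090)
print(cfg2)  # Config(host='localhost', port=9090)
print(cfg)   # Config(host='localhost', port=8080) -- original untouched
```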

Except you've been at it all day and you still haven't gotten any "work" done, and you've got all of these weird constructions that aren't complex themselves but they'll make a newcomer to your project go "huh?"

Sometimes it's a necessary evil. My users write python, so I need to write python to ensure that their experience is nice and ergonomic. But if I try to write python like it's rust (which I usually do) I end up with quite a pile of things that add toolchain complexity without addressing any business problems.

Don't get me wrong, I still do it, but I've never managed to make it look like a good enough idea to get other people to join me, which is maybe the universe trying to tell me I should stop.


I agree that it's not easy to explain this to other people, and Python tooling in general is pretty terrible. OTOH, having types in the code and giving newcomers to the codebase the ability to actually see what types "flow" through functions, and having the ability to "Go to definition" is a big benefit for introducing old Python code to newcomers.


One thing I liked about the post was I can sell some of the suggestions to the non-engineer Python users I work with. Types in signatures and data classes are big wins overall.

While I miss working with OCaml, Python's pay-as-you-go style of typing meets many users where they're at.


So cool. I love rust and my day job is in Python.

But I have a question. I'm a junior dev(gimme some leeway here).

I don't really understand how important these design patterns are, because in the programs I write, I usually write the classes and call them at runtime myself. I think usually we write the servers and clients ourselves.

Let's take the different client types example. You are making an assumption that users can call close on a closed client. Is it so hard to just follow the sequence

  client = Client()
  client.connect()
  client.authenticate()
  client.send_message()
  client.close()

Aren't they overengineering? Perhaps I have not worked in a large code bases to understand the problems TypeState Pattern or more generally these design patterns solves.

I understand that these patterns are elegant and make future modifications or enhancements easier, but I have never seen enough tangible value in real life.


> I don't really understand how important these design patterns are because in the programs I write, I usually write the classes and call them in runtime myself.

One of the biggest learning moments in my work has been collaborating with my past self. You need to have stepped away from some piece of code for a while to get this. If the code is not well organized, you will think an alien wrote it.

It's only over time you realize that "it works because I know how to use it" is actually a problem. Over time you also figure out how to write the code in a way where you aren't surprised at your own decisions.


> Perhaps I have not worked in a large code bases to understand the problems

Basically, yes.

In a large codebase the steps in your example could well be separated by thousands of lines of code, with much branching, or perhaps complicated inheritance hierarchies.

In that case, leveraging type hints to catch logic errors can save you from a lot of hard-to-spot bugs


Would that all our codebases were that straightforward...

I worked in a team with low skill. One of our flagship apps had a lot of inherent complexity, and was coded by an over-promoted fool. He wrote spaghetti that was vile even at launch.

To my complete lack of surprise, after launch, the users wanted an enormous list of changes and new features. When you try to add features to code that's already spaghetti, the complexity compounds.

There's only one way to manage that, and it's to refactor. But to refactor you need good tests, and your team needs to accept the use of resources for refactoring.

This dungheap had objects with fucky interlocking responsibilities, e.g., scheduling was partly done by the ORM classes and partly by the class for customer output.

This app was also extremely time-based. A CRUD app doesn't really require you to think about how state changes over time; but in the dungheap, you couldn't "see" the state from the code, you also had to understand when in the sequence of operations a method was called. This is one of the hardest types of complexity to deal with.

The app was completely untyped. Some functions take enums and others take strings. There were state enums and also free text strings for state. You might see both "FAILED" and "FAILURE" for the same state concept. A huge amount of data was passed as nested dicts, and without the benefit of consistent kwarg names, so you would not know whether the variable you're looking at has an "error" or an "err_msg" key, or an "output" key containing a dict with an "error" key. To find out, you had to run the app for several minutes with a bunch of print statements, and the answer you got might vary depending on any of the 20 input flags.

I generated a call graph to try to grok the sequence statically but the graph was spaghetti and the sequence contained loops. A few code paths called methods once with one datatype and then again with another type.

Type hints massively reduced the cognitive burden. My violent impulses towards our dickhead "architect" reduced from daily occurrences to weekly.

Fwiw, TypedDict turned out to be a great fit for the use case.
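For anyone curious what that looks like, a minimal sketch (the field and state names here are invented, not from the actual app):

```python
from typing import TypedDict

class ErrorInfo(TypedDict):
    error: str  # one blessed key name, no more error/err_msg guessing

class JobResult(TypedDict):
    state: str  # ideally a Literal[...] to kill the FAILED/FAILURE drift
    output: ErrorInfo

result: JobResult = {"state": "FAILED", "output": {"error": "disk full"}}

# At runtime it's still a plain dict, but a type checker now knows the
# shape and flags typos like result["output"]["err_msg"]
print(result["output"]["error"])  # disk full
```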


I think it depends™ – when you know for sure there will just be one place to call this from, your approach is totally okay and wrapping it in separate types would be overkill.

The separate types will come in handy once you end up using that client object throughout the codebase and you are not sure anymore who is connected, who is authenticated, and so on. By pushing that onto the type system you can make fewer mistakes.


With types (independently of which language) you could encode the knowledge of the sequence you mention. For example, send_message could take an AuthenticatedClient type. This way, instead of having it all in your head as something implicit, it is explicitly described, and in many cases impossible to use wrong.
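A minimal sketch of that in Python, with hypothetical client classes; a type checker then rejects passing a plain Client where an AuthenticatedClient is required:

```python
from dataclasses import dataclass

@dataclass
class Client:
    host: str

    def authenticate(self, token: str) -> "AuthenticatedClient":
        # Real code would verify the token; here we just "upgrade" the type
        return AuthenticatedClient(self.host, token)

@dataclass
class AuthenticatedClient(Client):
    token: str

def send_message(client: AuthenticatedClient, msg: str) -> str:
    return f"to {client.host}: {msg}"

c = Client("example.com")
# send_message(c, "hi")  # type checker error: Client is not AuthenticatedClient
authed = c.authenticate("secret")
print(send_message(authed, "hi"))  # to example.com: hi
```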


I have limited development experience but I would say this is taking shared libraries into consideration, where you want your public API as comprehensible and failproof as possible. Now put yourself in the shoes of the library user in both scenarios, you will clearly see which code is more prone to errors. The code presented in the article looks like something I'd feel comfortable working with.

It is also more convenient for your own private libraries, but makes not so much of a difference for one-off scripts you maintain alone (unless you are also the type that forgets what you were even doing in that script a few months from now, in which case you could benefit like me). It's about lowering cognitive overhead and chance of development errors in the long run if everything is nicely abstracted.


Depends on what you do with the Client. If you have a complicated application and pass the Client around between different functions then maybe you would need to check every time whether the client is connected and authenticated before sending a message. If the function only accepts an authenticated client, then the type checker will complain. You could also notice the problem in a unit test, but it may be hard to represent all possible states.

In simpler situations where every time you just do the sequence you probably would want to combine some of these calls anyway.


This is a great question. Oftentimes it's the new engineers trying to make things complicated and the older ones pushing back. Most of the time, contortions to make a language different from how it was designed are a terrible idea. In this case, my claim is that this is a real problem worth solving. However, I've also worked on plenty of projects that used the simple style and it was fine. One differentiator is how big the program is and who all will be using it.


The idea is that you'll come back to this code years from now. You'll have your head full of other projects and code bases by then. You will have only a faint idea of what that old code of yours is doing and you won't remember how and why. Anything that can help you to understand your code is good.

The same reasoning applies to other developers working on your code next month. However, if your Python turns out to be very far from idiomatic Python, you're not helping them in the slightest. You're doing harm to the team and to your career, unless everybody agrees to make it the company standard and you have your back covered.


Python is still my favourite language, 25+ years after starting using it professionally. Yes, it has become vastly more complex, but its core principles of readability, simplicity, and explicitness continue to make it a versatile and powerful language for both beginners and seasoned developers.

And I personally welcome "recent" additions to the language (gradual typing, pattern-matching, asyncio) as they reflect Python's (and its community's) ability to adapt and evolve, remaining relevant in a rapidly changing technological landscape.


What is the smart money doing for type checking in Python? I've used mypy which seems to work well but is incredibly slow (3-4s to update linting after I change code). I've tried pylance type checking in VS Code, which seems to work well + fast but is less clear and comprehensive than mypy. I've also seen projects like pytype [1] and pyre [2] used by Google/Meta, but people say those tools don't really make sense to use unless you're an engineer for those companies.

Am just curious if mypy is really the best option right now?

[1] https://github.com/google/pytype [2] https://pyre-check.org/


I have extensive experience with MyPy and Pyright, even going so far as to try to fix a bug in MyPy (I couldn't do it; the code is too much of a legacy undocumented mess).

Pyright is much, much better. If you're starting a new project or work with people who understand the value of type hints and want to actually fix them (ha, yeah right), it's a no-brainer. It's also the default in VSCode, which is nice. Oh, and the guy who maintains it is a bug-fixing machine.

The only reason you should consider MyPy is if you're adding types to a big existing project or you're working with people that don't get it. MyPy has way more "eh whatever" options so it doesn't give you a barrage of errors when you run it for the first time.

But other than that you should use Pyright. No contest.


I also have extensive experience with mypy and pyright, also even going as far as trying to fix a mypy bug (also unsuccessfully). For pyright, the dev was so responsive that I only had to report bugs for them to be fixed (sometimes in a couple of hours).

It's been a year or two since I've touched Python, but back then, Pyright was way faster, was more feature-complete, and had a much more responsive dev team (of one). It was better along literally every metric, except that it didn't support custom extensions to the type system like mypy did. But that wasn't a huge issue, since there were very few extensions, and even the one developed by a mypy core dev for SQLAlchemy was hopelessly out of date or impossible to get working. So I didn't miss it much.

All this to say: pyright was much better.


I'm on the other end of the spectrum — I only write Python occasionally for smallish utility tools, just a step above shell scripts — so I'll give my input as someone relatively new to typed Python. I recently industrialized (to use the terminology from other comments here) a couple things that I'd originally done in quick-and-dirty Python 2 over a decade ago. My experience was that mypy had a handful of false positives, and pyright had none. Pyright also found one comparatively subtle mismatch that mypy didn't, although its error message in that case was incomprehensible to me.

(I also used pylint and pytest+coverage.py; interested whether there are better choices for next time.)


I just did some highly unscientific spelunking on the topic a couple hours ago, and my takeaway was more or less that a bunch of people on reddit said pyright was better.


You may already know this, but Pyright and Pylance are the same thing.

"Under the hood, Pylance is powered by Pyright," https://marketplace.visualstudio.com/items?itemName=ms-pytho...

I've been using Pylance with `"python.analysis.typeCheckingMode": "basic"` for a long time and have found it quite good. Most of the time, the problem isn't Pylance/Pyright, but poor or wrong type annotations in third-party libraries.


Maybe they produce the same results (I don't know), but Pylance using Pyright doesn't mean that Pylance is Pyright.

One important difference in this case is that while "Pylance leverages Microsoft's open-source static type checking tool, Pyright" [1], Pylance itself is not open source. In fact, the license [2] restricts you to "use [...] the software only with [...] Microsoft products and services", which means that you are not allowed to use it with a non-Microsoft open source fork of VS Code, for example.

The license terms also say that by accepting the license, you agree that "The software may collect information about you and your use of the software, and send that to Microsoft" and that "You may opt-out of many of these scenarios, but not all".

[1] https://github.com/microsoft/pylance-release

[2] https://marketplace.visualstudio.com/items/ms-python.vscode-...


Ruff [0] is the best linter around for performance, but I'm not sure how well it fills the static analysis role. It has a VS Code extension which updates the linting with no noticeable lag, but it isn't a full-fledged type checker. Their suggestion is to run ruff through the extension and then manually run mypy or whatever type checker on occasion (maybe as a pre-commit hook?).

[0] https://github.com/charliermarsh/ruff


Has anyone managed to get ruff to work when run as a pre-commit hook in a project whose deps (such as python and pre-commit) are declared in a flake.nix?

I love being able to pop between devices and have a uniform dev experience everywhere, but ruff is a thorn in my side. pre-commit can't be told to just use the ruff binary in /nix/store and fails to install it correctly, so things get hacky.


For your mypy performance question, make sure it's using incremental mode [1] so that it can skip checks on code that didn't change. Yes, it is probably among the slowest of type checkers, but it is also quite thorough.

[1] https://mypy.readthedocs.io/en/stable/command_line.html#incr...


It sounds like incremental mode is the default now? I've noticed that it runs much faster after the initial run, with no special configuration

But that link also mentions daemon mode [1], which supposedly "can be 10 or more times faster", so that could be something to try. Running as a persistent server with an in-memory cache is probably part of why LSP-based type checkers like Pyright can perform better than mypy.

[1] https://mypy.readthedocs.io/en/stable/mypy_daemon.html#mypy-...


I'm using pyright at the moment. It's ok. Feels better than mypy. If you're used to counting on the type system to guarantee your invariants, you'll be disappointed, but if you're just looking for fewer TypeError, it will help.


Have you tried ruff? It is a Python linter built with speed in mind.


I'm really happy with Pyright.


> Dataclasses instead of tuples or dictionaries

The point is good in general, but it's still perfectly possible to use tuples and get all the field name and typing benefits: just use typed named tuples [1].

Unfortunately this is buried in the typing module. I couldn't even find it in the table of contents, and I knew what I was specifically looking for.

https://docs.python.org/3/library/typing.html#typing.NamedTu...
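Roughly, using the Employee example from the linked docs:

```python
from typing import NamedTuple

class Employee(NamedTuple):
    name: str
    id: int

e = Employee(name="Guido", id=3)
e.name   # "Guido" — field access by name, with type checking
e[0]     # "Guido" — index access still works, since it really is a tuple
```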


Named tuples, typed or not, are a bad idea for most data structures. They might make some sense when dealing with coordinates (x, y, z), but why would you want to use it for an Employee class, as shown in the example? This is a tuple, so you can iterate over it, employee[0] == "Guido", and employee[1] == 3. This isn’t useful in any way, it’s confusing, and it’s making it harder to change the order of fields in this class.

Just use an actual class, not a class pretending to be a tuple sometimes.


I wasn't saying that you always ought to use a named tuple instead of a dataclass. Only that, if you do want a tuple, that option is still available.

Personally, I would use a tuple if I want something immutable. I realise you can do that with dataclasses by setting frozen=True but it feels a little over engineered to me.


I dipped my toe into NamedTuple. Where it falls down is that it requires a custom serialiser. Python's json library is a crock of shit once you've used .net.

TypedDict turned out to be the winner. The runtime type is dict but you get all the benefits of type annotations. I dropped it into a crappy codebase without touching the calling methods.
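A quick sketch of why that works (names illustrative): the runtime value really is a plain dict, so the stdlib json module serialises it with no custom encoder:

```python
import json
from typing import TypedDict

class Employee(TypedDict):
    name: str
    id: int

emp: Employee = {"name": "Guido", "id": 3}
json.dumps(emp)     # works directly — TypedDict adds no runtime wrapper
type(emp) is dict   # True
```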


Isn’t the whole point of named tuples to not access fields by index, but by field name instead, e.g. employee.name, employee.salary?

You can also use typing.NamedTuple as a base class for your classes to get the same functionality.


Named tuples have both field names and numeric indexes. If you iterate over a namedtuple, you just get the values. The field names are hidden in a _fields attribute, there is no way to iterate over (name, value) pairs.


> there is no way to iterate over (name, value) pairs

zip(x._fields, x)

I get your point that it's not just a built in method though.
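Concretely (Point is just an illustrative example):

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
p = Point(1, 2)

# zip the (underscore-prefixed but documented) _fields with the values
list(zip(p._fields, p))  # [('x', 1), ('y', 2)]
```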


> typed banned tuples

I think you mean typed named tuples


Oops, thanks! Fixed now


I didn't know about that, thanks. I kind of agree with a sibling comment that in most cases you don't want to expose an indexer if the fields are named, but it might be useful sometimes.


Most of the issues described there are solved for me by using pydantic: https://docs.pydantic.dev/latest/ (whose core has actually been recently rewritten in rust ^^).

To avoid switching parameters with the same type, I like to use a bare * in my method definitions instead of NewType, to force keyword arguments. Pydantic can also validate method parameters (see https://docs.pydantic.dev/latest/usage/validation_decorator/), but this happens at runtime and can add a performance overhead, so it may be worth applying only at the interfaces between modules. For static checking, the NewType method is probably better, but a simple * has already saved us from many mistakes.
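To illustrate the * trick (function and parameter names hypothetical): everything after a bare * is keyword-only, so two same-typed arguments can't be silently swapped at the call site:

```python
def transfer(*, source_id: int, target_id: int) -> None:
    """Everything after the bare * must be passed by keyword."""
    ...

transfer(source_id=1, target_id=2)   # OK
# transfer(1, 2)  # TypeError: transfer() takes 0 positional arguments
```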


See how much discipline is needed to "write Python as Rust"? This is the real problem: you can have great code in dynamic/liberal/forgiving languages, but it will be due to programmer's discipline.

Using a tool like Rust (or other strict/strongly-typed languages) forces some quality constraints on all code that compiles. This is, to me, a great benefit of these languages.


Yes, you can write Python code that runs with missing or wrong type hints. Not ideal, but you can add a static type checker (mypy) as a step in your CI pipeline and reject commits that fail this step. Not much discipline required.


You need even more discipline to write code without the type hints. So a move in the right direction.


Type hints are a great feature in Python to enhance readability and to catch programming errors when refactoring or learning a new library.

A lot of the features presented here are helpful for programming in the large, i.e., once the program exceeds mental capacity.

As a general goal I think that programmers should reduce complexity even of simple stretches of code so that, adding all its context, a larger chunk of code fits into their brain at once and errors become more visible. Part of that is unloading tasks onto the tools.

Unfortunately, it's still common practice to create write-only code, often justified by an "it works" (usually not true).


This reminds me of learning Haskell…

Having transitioned from Java to Python, because the type system in Java had very little bang for the buck, I was under the wrong impression that types sucked and dynamically typed languages were superior.

Boy was I wrong about that.

I of course don’t try to import all the structural or semantic haskellisms into other languages, but just recognizing the immense value you get from typing your inputs and outputs has changed the way I program.


Funny. I just started a side personal project and decided, since I didn't use types much at work to just try out a bunch of the newer python features. My code came out pretty much exactly like the blog.

But I've never coded in Rust. I could have written the exact same blog (excepting maybe the last bit) and called it "writing Python like it's Scala".


The `find_item` example uses List for an argument. To my thinking, this indicates that the function is intended to mutate the argument. I don't think that was the author's intention, though, so I would prefer to use Sequence in this situation (or possibly Iterable, if we only need to traverse the sequence once in order).
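A sketch of what that would look like (the Item dataclass and signature are assumed, loosely following the article's example):

```python
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class Item:
    name: str

# Sequence signals a read-only view: it has no .append(), so the type
# checker guarantees this function won't mutate the caller's list.
def find_item(items: Sequence[Item], name: str) -> Optional[Item]:
    for item in items:
        if item.name == name:
            return item
    return None
```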


Someone else also mentioned this on Reddit, good point.


I'm curious how much our early programming experiences fix (as in, make permanent) our mindset about types.

As a little kid, I dabbled a bit in BASIC.

But my more formative years, in high school and college, really centered on statically typed languages: Pascal, then C++.

In the 20+ years since then, statically typed languages have always seemed far saner to me than, e.g. Python.

I can think of various explanations for this:

(a) Because I got my start in statically typed languages, that became ingrained as my "natural" way to reason about programs. Which was self-reinforcing, as practice reinforced my ability to express program constraints using types.

(b) I'm naturally biased toward writing correct programs, rather than rapid prototyping. So I would have gravitated towards statically typed (or even proven) programs regardless of my early education.


This probably plays a role. But for me, I started with C#, and still, after I later used Python, I wasn't using types at all. Only after I got more experience in programming, and especially was exposed to Rust, did I start to write Python in a... different way :)


I had personal experience with statically-typed languages before I ventured into industry with Python. Years of working in large python code bases revealed the wisdom of once again returning to statically-typed languages.


> I have no idea what is going on from the signature itself. Is records a list, a dict or a database connection? Is check a boolean, or a function? What does this function return? What happens if it fails, does it raise an exception, or return None? [...]

This type of objection is only raised by people who have never seriously used Python before. It sounds convincing that this is a serious flaw in Python, but in practice it never comes up because when you know what domain you're working in it's always immediately obvious what types the arguments are.


> This type of objection is only raised by people who have never seriously used Python before

How serious is “serious”? Because I’ve worked on millions-of-lines python projects powering billion-dollar companies, and I 100% agree that maintaining untyped python code that somebody else wrote (or that I wrote myself, 6 months ago) is a nightmare for exactly these reasons...


I mean I can't even imagine what would lead anyone to write millions of lines of Python for a single project, so maybe there's a level of serious above what I do. Maybe you're doing it wrong?


How many engineers does your company have? When you have thousands it is easy to write millions of lines of code.


> it never comes up because when you know what domain you're working in it's always immediately obvious what types the arguments are.

Oh, this one is hilariously wrong. Is it InvoiceLine that we are getting here? Or maybe LineInvoice? Nope, those aren't the same. But the only argument is called «line». How shameful. Much sadness. Better add a print statement, push a new build, and come back in twenty minutes.


It sounds like the codebase sucks or you haven't taken enough time to understand it.


Suppose it sucks, and you are expected to contribute to it already. Welcome to the real world?


That's beside the point. Those types ("types" you see? hehe) go against your argument "it's obvious from the context", because now it becomes "it's obvious from the context if you are lucky enough that your codebase doesn't suck or you spent 'enough' time figuring it out"


I have used Python for 10+ years, and I'm able to forget what are the input arguments/return type of a function after 30 minutes of working on some other part of the code :)


The thing with type hints in Python is that they're not enforced unless you make and maintain checking them as part of your workflow. And that involves a bunch of steps, at least: make sure to configure mypy, make sure you run mypy after push (experienced Pythonistas will forget to run it before), make sure everyone's IDE has it integrated, and make sure to be super clear on which version of mypy you run (so the errors people see in their IDE are the exact same as on CI). And don't forget to update all this when a new version comes out.


It's not that many steps. You already have CI - adding mypy is one extra line, you should already have a way to keep track of versions in your project - keeping track of the mypy version isn't any additional overhead.

If your team uses VSCode, you can have a devcontainer for the project (https://code.visualstudio.com/docs/devcontainers/containers) to make sure that it's in everyone's IDE along with your other linters, formatters etc and you can also have pre-commit hooks.


> You already have CI

Don't worry about me. I mean all projects that do not have people and resources for this stuff. People just want to write Python and be done with it.

Without type hints they wrote Python and if they run it and tests pass then it works. Now they write Python code that can be wrong but they don't find out unless they also run an extra tool.

The fact that it requires CI now is a good illustration of the problem;)


I don't understand who the "just want to write Python and be done with it" demographic is. If you're a developer, it makes sense to do things well, if you aren't and you're writing small one-off scripts, you can do fine without type hints.

Anyone who can write and run a test can also call "mypy ." on their project.


Of course, but they forget.

So when you see type declarations in Python, you cannot trust them the way you can trust them in Rust or TypeScript. Outright wrong types that make zero sense cause no errors, no test failures, etc. Type validation is reduced to the level of linter, and no one really cares about the linter the same way as they care about working code.

This gets people new to static typing into bad habits and loses the main benefit of type system.
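A concrete illustration of how little the runtime cares (toy function, chosen for the example): this annotation is flatly wrong, yet the code runs without complaint unless a checker is pointed at it:

```python
def double(x: str) -> str:  # annotation says str in, str out...
    return x * 2

double(21)  # ...but 42 comes out at runtime; only mypy/pyright would object
```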


Don't all the exact same problems exist with e.g. what version of Python or what formatter/linter your team uses? I don't see how this is unique to mypy.

There is undoubtedly setup with all these tools, but the benefit massively outweighs the cost to my mind.


Version of Python is a runtime thing that can be checked.

Formatting is not relevant to whether the code is correct, but types are.

The problem, compared to static languages, is that the code doesn't need to compile: types are completely ignored at runtime, and there is no built-in way of checking them (like a python binary with a special flag), so everyone does things differently.


Having mypy in the pre-commit hook helps a lot! (But I agree that difference between mypy versions is a pain.)

