Python's Type Checking Renaissance (dafoster.net)
150 points by davidfstr 11 months ago | 180 comments



People who use dynamically-typed languages are finally realizing that having to simulate a type-checker mentally is a bad idea.

Also, many statically-typed languages do not take advantage of the flexibility a good type-system can provide. For example, C, C++, Java.


Contra: (some) people doing exploratory programming in the small, e.g. scientists, don't want to have to deal with the type checker.

Type systems like Haskell add a lot of value, but you have to really know the language to make good use of it. I mean, just IO handling requires planning ahead. Small dynamic languages allow one to keep simple things simple. There's value in that too.


When I'm wrangling data, it's not necessarily that I don't want to have to deal with the type checker. It's that I have yet to see a statically typed language whose type system is designed to help solve the kinds of problems I have when I'm wrangling data.

The distressing example that nags at me here is how the headline feature of Apache Spark 2.0 is that they Greenspunned a dynamic typing mechanism into it. And, in doing so, realized improvements in ergonomics, performance, and memory consumption. Understanding why and how is an instructive lesson.

The most promising typing system for arbitrary data wrangling that I've seen so far is the one that's built into a Python package called Dagster. It seems reminiscent, as best I can tell, of dependent typing. (Although I haven't actually spent any time with a dependently typed language, so that could be a misunderstanding on my part.) The actual checks are done at run time, of course, but I don't see any particular reason, aside perhaps from the complexity involved, why a similar mechanism couldn't be implemented as a static type check.


What are the sorts of problems you have when wrangling data that type systems don't help with? I'm not a data scientist, but I'm a type system guy, and I'd like to learn about ways that type systems are failing users. I took a very quick peek at Dagster and the Google results for "apache spark 2.0 dynamic typing" (are you talking about "Datasets"?), and nothing immediately jumped out as a clear elucidation of what you're saying.


In data science, types change all the time. E.g. when dealing with tabular data, the columns (which might be used as type parameters) change frequently. In Julia, for example, see DataFrames.jl vs. TypedTables.jl for some trade-offs.

In general, type annotations are useful if you know the business requirements beforehand. Data science is exploratory, where type annotations can be a form of premature abstraction. Languages with traits (e.g. Scala) might be a decent compromise.


It sounds like the structural typing in Typescript might help a lot here - you don’t need to formally define any types for a record, the language just infers and tracks fields/types for you. I don’t know what the current state of data science is for TS though.
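Python has since grown a similar structural mechanism in `typing.Protocol`; a minimal sketch, with invented names, of how a checker can accept any record-like object by shape rather than by declared inheritance:

```python
from typing import Protocol

class HasName(Protocol):
    name: str  # any object with a `name: str` attribute matches structurally

def greet(x: HasName) -> str:
    # No subclassing or registration needed; the match is purely structural.
    return f"hello, {x.name}"

class Record:
    def __init__(self, name: str) -> None:
        self.name = name

print(greet(Record("ada")))  # → hello, ada
```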


But now you are talking about type annotations and not static typing. You can have static typing without any type annotations.


Everything comes with trade offs. But I’d argue that one of the primary reasons folks avoid types is that it requires additional learning.

I’m speculating here, but as a software engineer who’s dealt with many different languages, I (personally) find types to increase my productivity, because they reduce errors at runtime. That's a huge benefit, and because I’ve spent a lot of time learning the target language’s type system, it doesn’t come at a cost to my productivity either.

I’ve read that scientists have had significant mistakes in their findings in papers due to bugs, which I know is anecdotal, but it seems like adopting typed languages would reduce those errors and increase confidence in studies.

I wonder whether, if they were able to recognize the downstream benefits of types, they would still shrug them off as irrelevant to their work.


I read about a guy who wanted to reproduce a neural net, and he worked at it for a year only to find it was missing a +1 in one place. Being a neural net, it still worked, but not as well. I don't think this kind of error would have been caught by a type system.


It depends. Off-by-one errors are common, but the context is important. Some languages, for example, prefer iterators to indexing, and if ‘a’ was an index that might have been avoided.

Languages with algebraic data types also allow you to express state in ways that can make it easier to identify state changes, possibly making the missing increment more obvious.

Some languages will warn when variables are assigned or modified in scopes where they are never used.

But, even with all this, yes bugs will still happen.


It's true that many papers have mistakes, but the reasons for that go further than types, the main one being the prevalence of uneducated spreadsheet warriors.

I definitely agree that if you're making a big custom model you should use a typed language. But indeed, that requires a lot of learning, so the investment has to be worth it. The problem is often that scientists don't program all day, so remembering language intricacies is hard. Dynamic languages that can have their whole syntax written on a postcard make things much easier from that pov.


Haskell syntax would probably fit on a postcard too. I think type systems are orthogonal to syntactic complexity.


Right, the problem is that the semantics of statically typed languages are far more complex (since the program text conveys more information).


It’s not that it requires additional learning. It requires additional time when writing code. A nontrivial amount of time that isn’t generally worthwhile for me.


I tend to find the opposite - writing Python with or without type annotations is both more time consuming and a more fraught endeavour than writing Go, TypeScript or even Rust.


I suspect that is the case if you have written a lot in typed languages. There may be a context-switching cost, but literally no issue dealing with types once you have mastered them.


Oh, I find that I'm much quicker to write code in Haskell and Elm because I spend much less time on little programming mistakes, and refactorings are much safer.


Most static type checkers prove (to the limits of the soundness of the type system) that the types of the values at run time will always be compatible with the operations performed on those values. For these checkers, any program that's not provably sound (to the limits of the type system soundness) will be rejected. In other words, most static type checkers answer "is this definitely correct?"

However, it's possible to have a more lenient static type checker that only looks for cases where you're guaranteed to get an illegal operation at runtime. They answer the question "is this definitely wrong?"

For instance, if we have:

  def f(x):
      g(x)  # requires x to have an a() method
      h(x)  # requires x to have a b() method

  def g(x):
      x.a()

  def h(x):
      x.b()

If we're doing whole-program analysis and there's no type that has both a() and b() methods, then f is guaranteed to call a non-existent method at one place or the other.

This sort of "this is clearly wrong" type checking is much less intrusive than the more common "I'm not positive this is correct" type checking.

If you're not doing whole-program analysis, then you may restrict the type search to the types imported into the transitive closure of the module and its dependencies. This makes type checking slightly more intrusive by sometimes forcing more module imports, but it's still much less intrusive than common type checkers.


> However, it's possible to have a more lenient static type checker that only looks for cases where you're guaranteed to get an illegal operation at runtime. They answer the question "is this definitely wrong?"

If you're thinking about Erlang's Dialyzer, almost everyone I've heard talking about it says that it's difficult to understand and doesn't provide as much confidence as a regular type system would.


right, but for languages that support monkey-patching, this is going to be hard. E.g.:

  setattr(type(x), sys.argv[1], x.a)

or

  setattr(type(x), 'b' if halts(sys.argv[1]) else 'c', x.a)


Yes, static analysis of highly dynamic languages like javascript or ruby is tremendously difficult. Sound results are nearly impossible because you need to havoc so frequently. But if you permit some error and ask for a little extra help from developers in cases outside normal style for a language you can still do a pretty good job.


Presumably, you'd have an escape hatch in the form of specially formatted comments to annotate each questionable call site. My understanding is that even strong proponents of monkey patching feel it should be used sparingly.


> Contra: (some) people doing exploratory programming in the small e.g. scientists don't want to have to deal with the type checker.

Which in at least some cases means ignoring bugs. Pretty much every system I've worked on that used a language that didn't do type checking (or was too forgiving about it) has had runtime bugs because of type incompatibility or type coercion issues.


> Which in at least some cases means ignoring bugs

Indeed, but simplicity is the name of the game for exploratory programming. It's difficult to explore using languages requiring a lot of ceremony.

But then, that's when the time and funding constraints come in and prevent rewriting in more type sound languages. Which is why we end up with buggy python (if speed not an issue) or buggy C (if speed is an issue).

There are alternate paths. For example, APL tries to be terse enough that you can have the whole thing on a single screen. That's another attempt at bug prevention.

For example, I do my data wrangling in J.


> languages requiring a lot of ceremony

Some statically-typed languages don't require a lot of ceremony.


Is it a bug if it never happens? Another way to say it, would you pay money to prevent a problem you don’t have?


> Is it a bug if it never happens? Another way to say it, would you pay money to prevent a problem you don’t have?

Let me rephrase that: Is it a bug if I don't know about it? Another way to say it, would you pay money to prevent a problem you're not (yet) aware of?

Saying the bugs "never happen" is basically saying "I always write bug-free code."

I feel that languages without type checking (or where it's sloppy) force developers to perform semi-automated testing (i.e. unit tests to catch type issues) when they could have systematic, fully-automated testing (for type issues, at least).


Right, but some people are doing data wrangling by themselves with python in a jupyter notebook. These aren’t production systems, and static type checking slows you down without adding much value.

That’s all I’m saying, the data analyst that has some edge case “bug” that never gets executed probably shouldn’t pay the upfront overhead cost of type checking.


Is having buggy, unused code lying around a good thing?


It’s not always bad, and dealing with it has a cost. The cost is not always worth it. It may be worth it most of the time, but not always.


Agreed. I am essentially an Excel Poweruser on steroids with the way I use Python.


Utilizing a type checker when you want is very different from being forced to use it. Statically typed languages create a lot of complexity from being entirely dependent on types being correct in a static sense rather than at runtime.

With type checkers for dynamic languages, a lot of the arguments in favor of static languages disappear.


> Statically typed languages create a lot of complexity from being entirely dependent on types being correct in a static sense rather than at runtime.

Python’s type annotations aren’t checked at runtime, though - they’re also entirely static. So you either have to:

- Write code that’s as pedantically correct as in a traditional statically-typed language (e.g. by using `mypy --strict`), in which case you’d be better off using a language that supports AOT compilation, or

- Be prepared for cases where the runtime type doesn’t match the annotation. This is unacceptable - I hate the idea of code that tells lies.


You're entirely forgetting the benefits.

1) Scientists, hobbyists etc. can be super productive with Python while disregarding types.

2) Software developers can write serious applications with type checking.

Statically typed languages exclude the group that benefits from 1).


A language that tries to satisfy both of these groups won’t be ideal for either of them.

Python should aim to be perfect for group 1. There are many other languages that will always be a better choice for group 2.


I disagree. You can spend 90% of your time doing e.g. data science, and then one day you need to integrate with a DB or make a cross-platform desktop GUI application, as an example. Should you learn C++ (it takes a decade to become a very mediocre C++-developer), or Java to write a desktop GUI application?

No, there isn't "always" a better choice for group 2.

- Most software isn't CPU bound, so that defies "always" in your argument

- You need libraries to get things done

What language would you always pick for group 2? Or would you pick a different one for each task, spending an order of magnitude more time learning them?


> Statically typed languages create a lot of complexity from being entirely dependent on types being correct in a static sense rather than at runtime.

Nonsense. It's very easy to use a runtime cast if and when you want to, even in Haskell. You're in control of exactly where typechecking doesn't happen.

In contrast, in a dynamic language with optional typechecking you have basically no guarantees. Even if 99% of your codebase is typechecked, you have no idea what errors might be lurking in the other 1% or how far they might propagate, since a type error can show up essentially arbitrarily far away from its actual cause. It's the worst of both worlds - you pay all the costs of a static type system but get hardly any of the benefits.


I think this is a pretty old-school mindset. For instance, with strict TypeScript the overhead once you get used to properly typing things is minimal - most things are inferred, and it actually empowers you to code faster as you have guard rails and great autocomplete; the issues are more likely due to poor/non-existent types.


>People who use dynamically-typed languages are finally realizing that having to simulate a type-checker mentally is a bad idea.

I'd say that the pool of python developers has been swamped by a tsunami of former java/c# developers who don't understand the first thing about what made python good and are demanding they get everything they were used to in their old languages in python.

It's a rather sad state of affairs, but it's been great for cashing out on a language I learned in my off time at highschool in the 00s, especially since I have my name as a contributor before I turned 18.


Or you could look at it as Python growing up and being used for long lived projects with tens and hundreds of developers of varying skill levels, where you need to do things such as... safely refactor large amounts of code quickly.


Maybe it’s “growing up”, but I agree that Python has definitely changed its focus. It has pivoted towards the handful of companies with “projects with tens and hundreds of developers“ at the expense of the millions(?) of people using it for much smaller projects.

As part of the latter group, I don’t consider this change in focus to be a good thing.


> It has pivoted towards the handful of companies with “projects with tens and hundreds of developers“

Handful of companies? Small and medium businesses, according to most definitions, can have up to 100 employees, and they generally make up to 90% of the businesses in most countries. If these small and medium businesses are writing Python apps, it's far from inconceivable that they will at some point have tens of people working on a project.

You make it sounds like you need to be a company with 1 million employees in order to have a big project.

And I don't see how these changes impact the "millions (?)" using Python for small projects. Python type checking is optional.


Indeed, I've moved to Guile as my small scripting language.

It has much better integration with the GNU userspace, and much like Python's early days, if you know C you can write extremely performant code. The fact that I can write a DSL for every problem is a benefit I didn't know I wanted until I no longer had to monkey-patch objects.


> I've moved to Guile as my small scripting language.

I have been using Lua as an alternative to Python. It's much closer to my ideal level of language complexity - and I think overall it's better designed than Python, although it's certainly not perfect. The implementation of the interpreter is significantly better as well.


Yes, I already said a tsunami of developers who do not know how to use python.


As opposed to the many Python gurus writing short scripts? :-)


Yes.

Python is glue.

If you build a stick bridge with just glue you sound like the type of person who ate most of it.


As a Clojurian I just run static analysis

It can be nice to use an optional type/spec system to help with that but not required


I'm a fan of static typing (but not necessarily overly strong typing) myself, but one big reason why I use Python next to statically typed languages is to write quick and dirty cross-platform-scripts where the sloppy duck typing comes in handy. Want to throw vastly different things into a single array (sometimes not even knowing what type those things are)? No problem, it just works.

This type of quick'n'dirty throw-away coding would be a lot less convenient if Python forced a strong static type system on me.

If Python is used for "real projects" with more than a few thousand lines of code, worked on by a team, static typing totally makes sense. But there are a lot of everyday tasks where this just gets in the way.


The annoying part about (traditional) type checking is having to do it all the time

But the optional Python annotations? I was skeptical, but I began to see the value in it, especially in interfaces/APIs for external consumption, or more complicated areas.

But yeah don't ask me to type hint a "private" function that's only a couple of lines long and used in specific cases


It's a trade off. You don't want to write all your types for quick scripts, data analysis and small web sites.

As a trainer, experience teaches me that it's also much easier for beginners.

Now, types are nice to have for bigger code bases, but that doesn't mean there is no place for a language with optional typing.


There is Hindley-Milner type inference. Don't need to type types in most cases. See, for example, OCaml.


Type inference does not mean untyped. Arguably, getting a type error when you didn't write a single type hint makes things even more confusing if you are not an experienced programmer.


That comes with its own tradeoffs: increased compile time and more effort needed to create error messages that are helpful to the user.


Ocaml is probably the fastest compiling language I've ever used.


> more effort needed to create error messages that are helpful to the user.

Well, sure, but that's a tradeoff for the developers of the language to consider. As a user, there are already plenty of statically typed languages to choose from with many person-hours put into just that.


I want to offload as much work as we possibly can to the compiler. My brain will always be millions of times slower.


Is it really that burdensome for novice programmers? Maybe it's just been too long for me to remember the pains...but I'd expect most novices to heavily use primitive types which shouldn't be that hard to grok.


> People who use dynamically-typed languages are finally realizing that having to simulate a type-checker mentally is a bad idea.

It cuts both ways. Some day type advocates will recognize why their type language is not their host language, i.e., there is such a thing as too many types.

If you believe types can express anything, and moreover that they are the best at expressing it, then why do you need the host language? Why not program using just types?

There is no silver bullet. Types won't make your code bug free.

What programming language do you know that uses types to express most tests?


I’ve found that enforcing the types users (including myself) can use with my code is a bad idea. Informal specifications are simpler and enable massive code reuse. Inheritance in OOP was meant to enable code reuse in static languages, but has largely failed. In OOP, library interoperability can only be achieved if all developers know a common set of subtypes that satisfies all cases, which can never be true. The granularity of trait systems is better, but it still suffers from the same issue.


What is so bad about Java/C++?


They can't even describe simple problems without using escape hatches (ignoring C++'s crazy templates).

Example: Everyone in your system has a role. There are 3 kind of roles: User, Moderator, Admin

They have different attributes. All of them have a name, but only moderators and admins have dedicated areas of responsibility. And only moderators have a non-empty set of allowed permissions (e.g. "edit posts" and/or "delete posts").

You can't really denote this in a way that the Java compiler understands it at the use site. E.g. an action is executed and you want to see if it is allowed. So you want to check the kind of role and whether the attributes/permissions match the action or not. The compiler doesn't support this well at all.


So the problem is that the type system does not give you the type of the caller of a method? Does e.g. Haskell support this?

(As an aside, I think one solution in Java would be to use the command pattern and pass an object (ModeratorCommand, etc.) to whatever code is processing your actions, so it can type check the command.)


It's more that you simply can't define your datastructure correctly.

> Does e.g. Haskell support this?

Sure, and it is certainly not the only language. E.g.:

    data Role = User String | Moderator String [Area] (NonEmpty Permission) | Admin String [Area]

I'm not even saying that this is the best way of modeling it - but in Java you unfortunately can't really define it.

> (As an aside, I think one solution in Java would be to use the command pattern and pass an object (ModeratorCommand, etc.) to whatever code is processing your actions, so it can type check the command.)

The problem stays the same though: how do you know what kind of command it is that you just received? You cannot safely "match" on the command in the same way you can in other languages, because the compiler doesn't know that there are only exactly 3 types of commands.


Okay, so it's union types plus pattern matching that are missing (among other things I'm sure). Thanks, that's helpful.

> You cannot safely "match" on the command in the same way you can in other languages, because the compiler doesn't know that there are only exactly 3 types of commands.

Perhaps the visitor pattern? Command processing receives an object and calls its command method, which knows what permissions it has.

I guess the fact that I have to keep saying "patterns" is a hint that some flexibility is missing. Still, I'd rather have Java/C++ static types than nothing at all.


Yes the visitor pattern is a valid encoding of algebraic datatypes. Very heavyweight (imagine if booleans were all encoded with the visitor pattern) but nonetheless valid.

https://blog.ploeh.dk/2018/06/25/visitor-as-a-sum-type/


No sum types, no HKT (which means no practical way to handle effects), no ability to handle records generically (i.e. no typeclass derivation). Just imagine something super basic like: take a JSON structure that's a "patch" to an existing domain object, merge it in place, and for each field type there's a given way of merging it and a possible kind of error that can occur; gather up all those errors and report them together. That's trivial in Python and impossible in Java/C++.


Better late than never.


Java is not a good type system at all - Haskell is.


That's what the OC said.


Ah my bad. I misread that.


They listed those three as examples of languages that don't take good advantage of having static typing.


I'm excited about Python's typing potential. I recently rewrote an API from TypeScript to Kotlin since I am fairly unhappy with the server-side TS ecosystem, but ran across https://github.com/tiangolo/fastapi when exploring options and really dig it - seems to be by _far_ the lowest-ceremony way to make an HTTP API with static types that integrate with parsing/validation (something TypeScript is still really bad at, unless you bring your own runtime typing libraries...).


More specifically, Pydantic. It's great!


FastAPI got me turned onto Pydantic, both are awesome!


I use pydantic all over the place now. FastAPI was what introduced it to me. It's really nice to be able to take advantage of the autocompletion when using pydantic models rather than a straight dictionary. Now, in a lot of instances I use pydantic models instead of dictionaries, as it feels more explicit.


Ah, thanks, I hadn't realized that's the underlying library! Runtime validation of types is absolutely the largest missing feature from TypeScript, IMO, so Pydantic is really impressive to me.


Yes, I have been using Pydantic with my Django projects, and it is great!


Have you taken a look at https://rocket.rs?


Rocket is nice but I had a much, much better experience with Warp[0]. I ran into a lot of roadblocks with Rocket, especially with stuff like JWT and working with databases. No such issues with Warp.

0: https://github.com/seanmonstar/warp


I'm curious about both, and Rust for APIs in general. While I'm excited by the expressiveness of the language (compared to e.g. Go) and how robust the type system is, right now it feels like a bit too much of a barrier, both in terms of learning the language and the smaller ecosystem (this article covers a lot of my concerns: https://macwright.com/2021/01/15/rust.html). Still keeping an eye on it, and I would gravitate towards it if I had a smaller service that needed particularly high performance.


I sure hope things continue to progress, but right now, having come from TypeScript, Python is quite a ways behind when it comes to static types. A few major differences:

- Library support for static types is not very good. This can be fixed, of course, but it's also very hard to fix in a concerted way. It'll just depend on the community getting on board.

- The syntax is limited. There isn't proper support for declaring generics: you have to declare a separate TypeVar, as a Python variable, somewhere else in scope, and it just... gets used to approximate a generic. It mostly works, but sometimes it doesn't, and it's very unintuitive and awkward. And then concepts like Callable, Union, TypedDict, and Optional don't have dedicated syntax for readability; they're generic types that you have to import and parameterize. Etc.

- Support isn't great for highly "dynamic" data. TypeScript gives you powerful features for reasoning about dynamic property-sets of objects (dicts), combining and separating them, duck-typing, doing really complex inference, etc. These features in Python are usually some combination of unreliable, third-party, syntactically awkward, and so on.

- Inconsistency between different type-checkers. You'd think the fact that Python has standardized type syntax would help with consistency, but what it actually means is that everyone gets to define their own semantics for the same syntax. Different checkers mostly orbit around the same semantics, but there are always gaps. So for example, MyPy does a pretty good job of being strict and smart, but it's really slow. So you'll end up using an IDE-optimized checker for development, like Pyright, but Pyright will allow some things that MyPy doesn't and not allow some things that MyPy does. So you can use your IDE to get most of the way there, but you always have to remember to run a "real" type-check before you commit, or you may break the build in CI.
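To make the TypeVar complaint concrete, this is roughly the dance required to declare a generic function (illustrative names):

```python
from typing import TypeVar

# The type parameter is declared as an ordinary module-level variable,
# away from the function that uses it; there is no inline syntax for it.
T = TypeVar("T")

def first(items: list[T]) -> T:
    # T ties the element type of `items` to the return type.
    return items[0]

print(first([1, 2, 3]))  # → 1
```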

I should point out the one big advantage that Python has here: unlike TypeScript you don't need a build step, because Python interpreters can parse (and throw away) the type annotations. That's pretty nice, especially for gradual adoption/casual typing of scripts.

All of the problems (except maybe the syntax) are solvable, and I genuinely hope they get solved. For now, if you stick to primitives and core or class-based data structures you'll have a great experience with Python types. If you do anything more complex, the results will be mixed. This is of course much better than nothing, but it could be a lot better still. If you're picking between typed Python and TypeScript for a new project, it's worth factoring in.


> And then concepts like Callable, Union, TypedDict, and Optional don't have dedicated syntax for readability; they're generic types that you have to import and parameterize.

Union is getting dedicated syntax in 3.10: https://www.python.org/dev/peps/pep-0604/

Optional could follow: https://www.python.org/dev/peps/pep-0645/


That’s great news. Hopefully PEP 505 gets revised as well, which would make optional not only more ergonomic to declare but also more ergonomic to use.


Good to know! That's exciting


I generally agree.

I'll add a couple more frustrating limitations to Python's typing:

1. You can define a function type somewhat clumsily (`Callable[[Arg1T, Arg2T], ReturnT]`), but if your callback uses keyword arguments (pervasive among Python programs), you're out of luck.

2. You can't define recursive types like JSON. E.g., `JSON = Union[str, int, None, bool, List["JSON"], Dict[str, "JSON"]]`.

3. Getting mypy to accept third party definitions sometimes works perfectly and other times it doesn't work at all. You get a link to some troubleshooting tips that have never actually worked for me.

Beyond that, it's just the general usability issues that ultimately derive from Python's election to shoehorn a lot of typing functionality into minimal syntax changes (as opposed to TypeScript which can make whichever syntax changes it likes because it isn't trying to be valid JavaScript).

I think the idea was that they didn't want to introduce build system complexity by way of a compiler, which is an easy choice to criticize in hindsight, but I might've made the same call. I haven't used TypeScript in anger so I can't say for certain, but the TypeScript grass certainly looks pretty green from the Python side. Moreover, on the Python side we aren't even absolved from build-time problems, since we still have to fight tooth and nail to get Mypy to accept third party type annotations.

Mypy is a valiant effort, but like everything in the Python ecosystem, it's a problem made difficult by an accumulated legacy of unfortunate decisions, and there just isn't enough investment to move it forward. I've since moved onto Go for everything I used to use Python for, and I honestly haven't looked back--Go solves all of my biggest Python pain points: performance, tooling [especially package management], and dealing with the low-quality code that you tend to get when your colleagues don't have a type system keeping them on the rails. And Go doesn't impose many pain points of its own (lack of generics, yes, but at a certain point in your career you realize that "good code" is neither "maximally abstract code" nor "maximally DRY code", and the actual valid use cases for generics are fewer and farther between). TypeScript piques my interest, but it seems a bit too complex and configurable, and in particular I'm not looking forward to figuring out how to wire together a JavaScript build system. Maybe I'll try Deno if I hear enough positive feedback about it, but for now it's hard to beat `go build`. Rust seems cool, but the borrow checker tax is too steep for my blood.


> 1. You can define a function type somewhat clumsily (`Callable[[Arg1T, Arg2T], ReturnT]`), but if your callback uses keyword arguments (pervasive among Python programs), you're out of luck.

I've personally never missed the ability to add keyword arguments to a Callable. If you want to anyway, I understand that at least mypy has syntax for this:

* https://mypy.readthedocs.io/en/stable/protocols.html#callbac...

* https://mypy.readthedocs.io/en/stable/additional_features.ht...
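For what it's worth, here's a minimal sketch of the callback-protocol approach those pages describe (the names here are illustrative, not taken from the docs): a `Protocol` with a typed `__call__` makes the keyword argument part of the callback's type, which plain `Callable` cannot express.

```python
from typing import Protocol

class Renderer(Protocol):
    # The keyword-only argument `prefix` is part of the callback's type,
    # which Callable[[int], str] could not express.
    def __call__(self, value: int, *, prefix: str = "") -> str: ...

def render_all(render: Renderer, values: list) -> list:
    return [render(v, prefix="#") for v in values]

def plain(value: int, *, prefix: str = "") -> str:
    return f"{prefix}{value}"

print(render_all(plain, [1, 2]))  # ['#1', '#2']
```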

> Beyond that, it's just the general usability issues that ultimately derive from Python's election to shoehorn a lot of typing functionality into minimal syntax changes

Many syntax niceties will be introduced in upcoming Python releases. From the article:

* Union types are shortened to X | Y (PEP 604)

* (?) Optional types shortened to X? (PEP 645)

* Type Hinting Generics In Standard Collections (PEP 585) - Can use list[T], dict[K, V], etc in place of List[T] and Dict[K, V].

With those changes, you won't generally need to import the `typing` module at all anymore.
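As a rough before/after sketch (using `from __future__ import annotations` so the new spellings also parse on older interpreters):

```python
from __future__ import annotations

# New spelling (PEP 604 unions, PEP 585 builtin generics):
def lookup(table: dict[str, int], key: str) -> int | None:
    return table.get(key)

# Old spelling, which needs imports from the typing module:
from typing import Dict, Optional

def lookup_old(table: Dict[str, int], key: str) -> Optional[int]:
    return table.get(key)

print(lookup({"a": 1}, "a"), lookup_old({"a": 1}, "b"))  # 1 None
```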

> there just isn't enough investment to move it forward.

Dropbox has dedicated engineers maintaining mypy. And of course a few volunteers such as myself. And Guido. :)


More annoying for our codebase is the inability to specify optional/default arguments with closures. We often use functions that return other functions and you can't persist the "optional state" without using a protocol. It would be great just to match the current state of vanilla defs with optional params.

For example:

  from typing import Callable, Optional

  def make_fun(a: int) -> Callable[[int, Optional[int]], str]:
      def fun(b: int, c: Optional[int] = None) -> str:
          if c:
              b += c
          return f"{a+b}"

      return fun

  fun = make_fun(1)
  fun(2)
Will give you `Too few arguments` for `fun(2)`

You can fix this with something like:

  class FunC(Protocol):
      def __call__(self, b: int, c: Optional[int] = None) -> str: ...

  def make_fun(a: int) -> FunC:  ...
But that seems like unnecessary overhead, because the non-nested def case can happily understand the optional parameter.
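Filled out, the Protocol workaround might look like this (a sketch based on the snippets above):

```python
from typing import Optional, Protocol

class FunC(Protocol):
    # The protocol's __call__ carries the optional parameter that
    # Callable cannot express.
    def __call__(self, b: int, c: Optional[int] = None) -> str: ...

def make_fun(a: int) -> FunC:
    def fun(b: int, c: Optional[int] = None) -> str:
        if c:
            b += c
        return f"{a + b}"
    return fun

fun = make_fun(1)
print(fun(2), fun(2, 3))  # 3 6
```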


> I've personally never missed the ability to add keyword arguments to a Callable. If you want to anyway, I understand that at least mypy has syntax for this

Neat, I didn't realize this. This is far from desirable, but nice to know it's possible.

> Many syntax niceties will be introduced in upcoming Python releases. From the article:

I mean, typing out "Union" and "Optional" or even having to import generic types from the typing module aren't really the syntactic pain points I was referring to. It's more like: "Where do I define the TypeVar for a generic method on a class? Do I define it inside of the class or at the top-level module? In either case, if I reference that variable in another method, does it imply that these two types are the same?" No doubt there are answers, but Python is the only language in which I have to navigate these kinds of questions. Similarly, having to create a protocol for kwarg callbacks seems really heavy. These are the kinds of things I would like to see improved.

> Dropbox has dedicated engineers maintaining mypy. And of course a few volunteers such as myself. And Guido. :)

Yes, I didn't mean to suggest there was no investment, only that the progress seems really slow (also, I didn't realize Guido was a volunteer? I thought he was employed by Dropbox to work on Python). How long have recursive types been in a holding pattern, for example? I don't mean to disrespect you or any of the other maintainers--no doubt you're doing great work.


I definitely prefer the TypeScript dev experience for larger/longer-term projects, all else being equal. You spend an hour or two configuring your build tooling and then don't really have to mess with it much after that. But I also never bother setting up TypeScript for JS one-offs or scripts (where I would bother to add Python type annotations in those cases), so make of that what you will.

Deno is currently trying to bridge that gap - putting TS directly in your interpreter - but right now it comes with some caveats around being a separate runtime with incompatible system APIs, unfortunately. We'll see if it takes off enough that that becomes less of an issue.

Aside:

> it's a problem made difficult by an accumulated legacy of unfortunate decisions and there just isn't enough investment to move it forward

This is funny to me because JS has an even bigger accumulated legacy of unfortunate decisions, it's just gotten an obscene amount of investment to move it forward despite all odds ;)


If your callback uses kwargs, you define a Protocol that describes __call__ with typed kwargs, and use that as the type. Yes, it’s clunky, but it’s possible to do.


Imagine ...

1. a -> b :: Function(a)[b]

   (a, k1=t1, k2=t2) -> b :: Function(a, k1=t1, k2=t2)[b]

2. JSON = RecTypeVar('JSON', lambda Self: Union[None, bool, int, float, str, List[Self], Dict[str, Self]])

huh!


> 2. JSON = RecTypeVar('JSON', lambda Self: Union[None, bool, int, float, str, List[Self], Dict[str, Self]])

You don't actually need this workaround—the problem is typechecker support, not syntax. You can use strings for forward definitions, like this:

  JSON = Union[str, int, None, bool, List["JSON"], Dict[str, "JSON"]]
What happens next depends on your typechecker. Mypy says this:

  error: Cannot resolve name "JSON" (possible cyclic definition)
Pytype says this:

  Recursive type annotations not supported yet
But Pyre accepts it and correctly uses it for typechecking.


In fairness I find (1) to be even less readable than the existing Callable syntax, but yeah, it would be cool if Python could express callables with kwargs and recursive types.


> Library support for static types is not very good.

Granted. I do think it's improving over time. For example the django-stubs project is a really nice addition to the regular Django distribution: https://pypi.org/project/django-stubs/


Hmm...the basic development cycle described is horribly broken:

   Write a bunch of new code.
   Repeat 4-6 times, in rapid succession:
     Run program to manually test.
     Find basic error (like a missing import).
     Fix basic error.
   Debug/fix deeper errors in the new code
With this, just about anything will help, including static typing.

The problem is the following line

    Run program to manually test.
You should avoid manual testing at (almost) all cost. Write an automated test instead. It is effort you are expending anyway, so why throw it away? And while you're there, write the test beforehand, to clarify the specification of the code you're trying to write.

With such a TDD cycle, those problems wouldn't have happened.

    Repeat
      Write a test
      Make it pass
      [Commit]
      Refactor
      Commit
    
Mind you, static typing can still help in this scenario, but not in getting the code to work and helping it remain functional. It helps as verified documentation that eases understanding when reading the code.
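To make the cycle concrete, here's a hypothetical example (the function and test names are made up): the check is written once as an assertion instead of being re-run manually after every change.

```python
def slugify(title: str) -> str:
    # The behavior under test: lowercase, collapse whitespace, join with '-'.
    return "-".join(title.lower().split())

def test_slugify():
    # Written before (or alongside) the implementation, and kept forever.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Extra   Spaces ") == "extra-spaces"

test_slugify()  # a runner like pytest would collect this automatically
```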


Yes, my first test is always running pyflakes, which finds the most egregious bugs quickly.


I'm glad that strong typing is getting popular and it's not just for weak minds anymore.

But (for new projects anyway) what's the point of using Python with types instead of using a real typed language like Java or C#?


A use case is library code. I've had good results using type annotations internally in a library for education and research. Static typing would be overkill for most of the users of the library but it's useful for developing the library itself.

Aside from that, an advantage is that the type system is more expressive than that of Java or C#. Most common Python idioms work just the same way with or without type annotations, so you can continue to write functions that take either int or str arguments (for better or worse) and the typechecker will understand your use of isinstance and make sure everything checks out. (But there are other fully statically typed languages that are also more expressive than Java or C#.)
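A small sketch of the isinstance point (illustrative only): the typechecker follows the branch, allowing str methods on one side and int arithmetic on the other.

```python
from typing import Union

def describe(x: Union[int, str]) -> str:
    if isinstance(x, str):
        # A typechecker narrows x to str here, so .upper() checks out.
        return x.upper()
    # ...and narrows x to int here, so arithmetic checks out.
    return str(x + 1)

print(describe("hi"), describe(41))  # HI 42
```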

You can also continue to do weird metaprogramming and monkeypatching, and though the typechecker is not always able to make sense of it you can often wrap it in a safe interface so you can still get assurances for the rest of the project.


Not sure if you know what you're talking about. Python has always been a strongly typed language.


Surprised to see you downvoted on HN for being right. In a thread specifically regarding python typing you would think getting the basics right is kind of important.

I don't see the hype here, just use a statically typed language if it's an important consideration for the project.


You’re probably one of those people that says “you mean the world wide web?” when someone says the internet.


No it hasn't. Python has things that it calls types, but they're not types: you can't determine the "type" of an expression from the "type" of its components.


What makes Java's typing more "real" than python's?

I find typed python more pleasant and expressive than similar Java, for a variety of reasons, among them, I find python's type system to be superior to Java's (and getting even better faster!)


They're probably suggesting performance differences


the point of using python? ML libraries! that's the 99% python-only use case.

Other than that, I actually don't know... a lot of other languages now have interpreted-dev envs...

(btw, I'd rather use Kotlin than java --- it's like Java++)


Even there, many of those libraries are written in C, Fortran and C++.

One can use any bindings to those libraries, including using modern versions of Fortran and C++ directly.


Not necessarily ML, more generally - numeric, as in NumPy/scipy/pandas, variety of optimisers, plotting and reporting tools and whatnot.


This is the only reply to my question that makes sense, thanks!

All of the other replies are subjective.


Those are compiled languages, while Python is an interpreted language. You're comparing apples and oranges.

Interpreted languages are quick to develop/quick to learn, and with the support of typing that same "quick prototype" can easily mature into a full application without needing to be re-written in a "real" language.


This isn't an interesting distinction for quick prototypes because the difference is basically

    python task.py
vs

    javac Task.java
    java -cp . Task
And if you're in an IDE (in fairness, they're not popular for quick prototypes), you're just clicking the play button either way.


Not really given that there are REPLs for Java, C#, OCaml, Haskell, F#, Scala.

There are also toolchain options for running the code interpreted, JIT compiled, AOT compiled, or a mix of the previous options.

Languages are orthogonal to the toolchains.


'Renaissance' implies an earlier flourishing; was there any attempt to add typing in older versions of Python that was dropped?

As a side note, for a related project linked to typing, like TypedDict, I found that Pydantic is pretty amazing.


It makes sense if you see it as a Renaissance for Python, driven by type checking. Or at least that's the most sense I can find.

For me this is true, I've come back to Python because of it for personal projects.

I'm still trying to cargo cult myself into thinking that Python async makes any sense however. Why not just go with a genserver abstraction and expose the primitives over this mess we have?


I think the way to parse it is as "a renaissance of type checking, in python," i.e. in the same sense that "The Renaissance" was "a renaissance of classical culture, in Italy."


> was there any attempt to add typing in older versions of python that were dropped ?

Not that I know of. I used "renaissance" to connote an increased positive focus, invigoration, and activity.


One could argue that the first version is Archaic, and Renaissance the second attempt


Given that Python type checkers are becoming all the rage, will we ever see a compiler that actually uses those types to increase performance?



Cython has been around forever.


For those who have never used type checking in Python, what's a good introduction?


For the actual type annotations themselves, the official docs are great: https://docs.python.org/3/library/typing.html

Know that the signatures vary with version, so select the correct one.

To check these annotations, you'll need a third-party type checker somewhere in your build process. I use pylance with VS Code, as it can detect errors as I type: https://marketplace.visualstudio.com/items?itemName=ms-pytho...


This doesn't seem like an ideal introduction. I don't think it actually covers type checking at all (besides the note at the top) since that's relegated to third parties.


To learn how to use it, I would recommend setting up your IDE (I recommend VS Code) to use mypy.

Then just open a small script you know well, let's say less than 200 lines, and check what "missing" type annotations mypy complains about.

For most cases, the IDE will be able to tell you what it recommends, and from then you can start reading the docs for specific types for a deeper dive.


Pylance is much faster and can index a large virtualenv without issue, IMO


read mypy docs, but use google's pytype...

pylance is much stricter, but it seems to have a lot of false-(+)s

from what I've used, the implementation varies wildly...


You may also find my introduction to Python type hints helpful, see https://www.augmentedmind.de/2020/10/11/static-python-type-h...


Sounds like you've never dealt with a type checking language in general. I recommend using a language that forces you to do type checking like haskell. Haskell is super hard so just use it until you see the light and understand why typing is better.

I would say get good enough with haskell to write a 2 human player tic-tac-toe game by following some online tutorials. Then you'll be able to pick up types in python like it was nothing.

Otherwise if you have used a type checked language... Python types are really straight forward. I never needed a tutorial. Whenever I needed a feature (such as type variables) I would look it up via google.

Anyway, I don't actually know if you've ever played with a type checked language but I'm assuming you haven't because python type checking is pretty easy to pick up without a tutorial based resource if you have had prior experience in other languages.


Why would you add type checking to Python! Using another language with static typing would be the solution. It’s not like the browser where you have to use Javascript so you add another layer to your toolchain.


Because you’re a python shop that wants some extra safety.

Because you’ve got an existing code base.

Because you want the benefits of a dynamic language with some of the benefits of type checking.

Because the editor experience is way better when you selectively sprinkle in some types.

Because you can choose how deep you go on types depending on the impact.

We have avoided countless production issues just by annotating when something is Optional or not.
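As an illustrative sketch (hypothetical names) of that Optional point: the annotation forces callers to handle the None case that would otherwise surface as a production TypeError.

```python
from typing import Optional

def find_user(user_id: int) -> Optional[str]:
    users = {1: "ada"}
    return users.get(user_id)  # dict.get returns None on a miss

def greet(name: str) -> str:
    return "hello " + name

# greet(find_user(2)) is flagged by a typechecker: Optional[str] is not str.
# The annotation forces the None check that prevents a runtime crash:
name = find_user(2)
print(greet(name) if name is not None else "no such user")
```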


Because type annotations are medium to high value documentation for your users.

Because stating the type of something important can expose design issues (e.g. "wait a minute, it isn't always a X! in case Y it can be a Z! Forbid or support?") before they become a serious problem.


So that people who want to use it, can?

You don't have to use it if you don't want to.


Python’s philosophy is supposed to be “one obvious way to do it”, not “you decide which way to do it, it’s optional”.


There is one obvious way to add type annotations. The annotations themselves are optional. That's just gradual typing[0], and it doesn't "break" the zen of python any more than having the choice of using a class over free functions in your program.

The meme going around that Python is adding so many features that it's becoming something "not python" is super weird to me, and it appears as though it's propagated since the walrus operator discussion and hasn't really gone away.

All popular languages add features and adapt as new tools, methodologies, and learnings pop up from other communities. This is a good thing (subjective, I guess), and in general I find Python to be somewhat restrained.

[0] https://en.wikipedia.org/wiki/Gradual_typing


> Python is adding so many features that it's becoming something "not python"

Here's an example I came across recently. It's from the official documentation for the `heapq` module, so I assume it's now considered the idiomatic way to write a wrapper for each item in a priority queue:

    from dataclasses import dataclass, field
    from typing import Any

    @dataclass(order=True)
    class PrioritizedItem:
        priority: int
        item: Any = field(compare=False)
That's completely different to the way such a class would have been written even in Python 3.4. It would have been a simple class with an explicit `__init__` and `__lt__`. Instead, the above dataclass sits on top of a mountain of complexity and everything about it is implicit.

It's in violation of both "Simple is better than complex" and "Explicit is better than implicit".
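For comparison, the explicit pre-dataclass version the comment describes might look like this (a sketch, not taken from the old docs):

```python
import heapq

class PrioritizedItem:
    def __init__(self, priority, item):
        self.priority = priority
        self.item = item

    def __lt__(self, other):
        # Only priority participates in ordering, mirroring
        # field(compare=False) on item in the dataclass version.
        return self.priority < other.priority

heap = []
heapq.heappush(heap, PrioritizedItem(2, "b"))
heapq.heappush(heap, PrioritizedItem(1, "a"))
print(heapq.heappop(heap).item)  # a
```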


That's not been the case for a very long time now.


Agreed (type hints are just one example), but I wish it were.


Because someone suggested it, and Python’s main design principle nowadays is “include as many language features as possible”.


When proponents of dynamic languages tell me "that's what the unit tests are for", I say good for you that you have unit tests.

However a unit test may simply inform you that you have something wrong with your program, whereas a type checker will (more often than not) show you exactly where the error is, and usually as you were typing it.

Reality is that unit tests and type checking both have their place even if there is overlap between the two. Type checkers can however remove the need for trivial and repetitive unit tests.


The type checker may as well be one big automated unit test.

For all the time a developer spends not writing out types, it then has to be made up in manually written unit tests.


I'd rather my IDE flag the type error before going to the effort of running a unit test. That, and I never have to ask myself what type a parameter is.


As is typical in these discussions, folks are confusing representation with kind.

"type" systems help with representation errors, but those tend to be easy to find without static typing.

One problem with static typing is that I'm forced to address all of the representation issues before I can do any testing. That "premature optimization" is hell on incremental development.

Another problem is that static typing tends to encourage languages which are designed around making type checking easy. That impacts program design.


I really like being able to add types in Python. But I wish it was more integrated into CPython, so I didn't have to call mypy on my file. I admit it's been a while so maybe it's changed, but I like how in Racket I can just add types, change my language to #lang typed/racket and then boom, we are type checking.


is there a high quality introduction to Python type checking for people who already know Python?

maybe something very opinionated about how you're supposed to use the features, sort of a "Hitchhiker's Guide to Type Checking".

if I Google around I find either documentation or fairly low quality blog posts.


I like this site for tutorials of just about the right length:

https://realpython.com/python-type-checking/


It’s going to be in the next edition of Fluent Python, but that says September release.

Docs and the relevant PEP are your best bet for now.


I honestly think the Pydantic docs are a high quality introduction to type checking in Python.


I recently started working through Python tutorials (videos, books, and blogs) on statistics and machine learning. I'm a front-end developer who has, over the last several years, come to adhere to immutable objects and functional programming in JavaScript and subsequently TypeScript. What disturbs me most about my first exposure to Python is all the bugs that arise in those tutorials from data mutation and side effects in class methods (looking at you, inplace=True in Pandas).

I really hope to start seeing some immutable architecture and structuring in Python more than type checking, as my first impression.


What’s the big deal about type checking?

I haven’t worked on large code bases, only run small ML analyses on tabular data. Can someone be concrete with the advantages and use cases?


When you're writing code in a dynamic language, you're still using a type-checker; it's just in your head instead of the computer. This is a fantastic waste of mental energy and working memory.


I'm not so sure about that. When I'm actually using variables in expressions or as arguments to functions, as opposed to when I'm declaring them, the code is essentially the same in a typed language and in an untyped language.

Realizing when I'm writing that the expression I'm about to type is wrong because the types do not match still depends on my doing type checking in my head.

It seems to me that the case where typed wins is when my in-head type checker fails and I write something that is invalid. The typed language will catch that failure at compile time. The untyped won't catch it until runtime (and maybe not even then).

For the typed language, an IDE might be able to catch a mistake immediately so I don't have to wait until compilation to find it, and that immediate feedback might prevent me from writing dozens more lines with the same mistake, but even then I was using my in-head type checker to write that first mistake for the IDE to catch.


You are only describing a small part.

Here's another example. You are using a library and call a function that expects a List[Foo] as argument. Now you check in the same library if there is a function that returns a List[Foo] or at least a Foo. So you find it and are good.

Without types, now you have to read the documentation which is much much slower than having your IDE help you out. And yeah, sometimes variable names and function names are sufficient, but in my experience they are very often not.

Also, what happens when you update your library to the next version and it changed what it expects and returns? Now you either have a typechecker that checks _all_ the code, even the one you haven't written... or you have to do it yourself. For each version. And each library.


It's in your head, and it works faster because there is less boilerplate to read. This is great until the project reaches a size where this becomes unmanageable.


The big advantages for me are autocomplete, documentation, and simple bug-finding (you passed an Optional[str] to something expecting a str; that’s almost surely a bug). Being able to hover my cursor over methods and get their arguments and doc strings is invaluable. But that’s only possible if the type of the caller is known.


This is too true; all too often the real type signature is just `str`. And with all the inconsistency we have in the ecosystem and even the standard library, it saves you so much esoteric crap.


Now that I'm working in a very large code base with many engineers (Stripe), I have a new understanding of where type-checking is actually important: documentation.

Yeah you can go in circles for an eternity about which is better and which is worse. I try to stay away from such arguments because they aren't productive.

I do now argue though that as an engineering team and code base gets bigger, you will need static typing to be able to continue to grow at a sustainable pace. Static typing provides a very strong safety net letting people make good, safe changes in code they may not be intimately familiar with.

If you have a small code base and/or small team, then static typing becomes much more of a personal preference.


Imagine changing core features of your codebase and then the compiler / type checker telling you all the places that you needed to fix it. Then when it's fixed, the code runs flawlessly. That's happened a LOT in my 10 years of C++ development.

I can leverage the compiler to do 90% of the refactoring work.

For example, say you want to remove a field from an object in C++. The compiler will tell you every place where the field is accessed and raise a compiler error. In python this won't manifest until the code is run in that particular spot. I can only imagine that in this type of language a well intentioned refactor could easily create 10 bugs that don't get triggered for a very long time.
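The same story plays out with mypy in Python (class and field names below are hypothetical): delete an attribute and the checker flags every access site before anything runs.

```python
class Account:
    def __init__(self, owner: str, balance: int) -> None:
        self.owner = owner
        self.balance = balance
        # Suppose a refactor just removed `self.legacy_id`.

def audit(acct: Account) -> str:
    # Uncommenting the next line makes mypy report:
    #   error: "Account" has no attribute "legacy_id"
    # return acct.legacy_id
    return acct.owner

print(audit(Account("ada", 10)))  # ada
```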


Type checking turns run-time errors into compile-time errors. This results in a huge improvement in code quality.


"I haven’t worked on large code bases, only run small ML analyses on tabular data."

While I endorse the other answers, I do want to highlight how the issues only appear at scale. I have no problem using a dynamically typed language for up to the low hundreds of lines. But that's about where I start to get nervous that I've picked the wrong language. (Or, putting it a different way, about day 3 of working on the same code base continuously.)

When the whole program fits on the screen, essentially, it's no big deal to not have types.

But as the program grows, the problems emerge.

I think I have a minority view on what the problem that emerge is, though. I think the first problem a dynamically-typed code base usually encounters is that there is some function that accepts an object of some kind, and you discover that it actually needs to accept a list of that object sometimes instead. In a statically-typed language, you change the type from "MyObject" to "MyObject[]" and immediately change all the call sites. Possibly you even discover the change reverberates back up the program design, and you push it up higher, with the compiler helping all the way.

With dynamic languages, you tend instead to do something like:

    def myOldFunction(obj):
        if not isinstance(obj, list):
            obj = [obj]
        # function continues
Well, that's if you're lucky. It seems to be more popular to instead decorate the entire function with if statements every time "obj" is used, but let's take this instead.

Now you've taken your first step down a dark road, where you now have a function that accepts an object, or maybe an array of those objects. Then you decide to treat None/nil/whatever as an empty list. Then you realize that sometimes the return value also needs to be a list of whatever it used to return, but you have to add a parameter to the call now to specify you want a list in the return value so you don't break all the old callers.

You inevitably head down a road where the function is filled to the brim with entangled concerns from a lot of other code. Then you start getting lots of these functions in a codebase together, and the codebase can never again be refactored because it'll break everything. (Or, rather, it can be, but only in very constrained ways.)

By contrast, I prefer even to prototype in static languages now, because when I make a mistake in the signature a function should have, in just a minute or two, I can fix it, and it's gone like the mistake was never there because the compiler made me fix it (and, fortunately, helped). It is much easier to maintain a discipline where every function isn't deeply entangled with another when you're not carrying along the entire history of the function's input and output parameters forever.

As you scale up, other problems emerge too, like the difficulty of using static analysis tools and the way documentation and unit tests have to carry a lot more water because the code is so much harder to get a grasp on... but whenever I'm starting a new code base, or even just a new module in an existing code base, it is always the above problem that is the first one I hit in a dynamic language and the first benefit of a static language I notice. A lot of the other scaling issues take months or even years to develop, but this is the one I notice in days, or in the worst cases even mere hours.

I sometimes wonder how much of the "prototype one to throw away" comes from people using dynamic languages; usually with not much extra effort, my statically-typed "prototype" comes out production quality by the time I'm done working with it this way. The end result may not much resemble what I first sketched out, but I got there with a clear set of easy steps, and usually I don't even take that large a step backwards, since I pair everything with unit tests and between the static types and tests I'm usually always moving forwards, even as I make rather substantial changes in the codebase that I wouldn't dream of doing in dynamic languages, because I know from experience that it's much harder to avoid breaking things and taking huge steps backwards even as I move some part incrementally forward.


It's verified documentation


I think the difference in the workflow is very much as described. Often you will have test suites that take tens of minutes at least, due to heavy coupling and many integration tests, while incremental type checkers run in less than a second and can be run on every save.


A coworker sent me a block of python code that wasn't working, asking if anything looked off. It took me a bit to grok what each line was doing, what values were getting stored in what variables, and what values were being returned from what functions. If everything was typed I would have been able to understand it quicker.


With pycharm you don't even have to run the code. Your type checker will highlight all errors real time.



I am blind, what does the image show?


two people, both solving a puzzle. The first person (static typing) has some pieces and looks at the second person (dynamic typing), who shouts "finished!" but the puzzle is completely wrong.


The funny thing is that when you look at the pieces on the static typing side there is no way to solve the problem at all.

For example, the piece with the giraffe's two legs should go in the bottom-most position, but it doesn't have a single straight edge, so it can't be a bottom-most piece.

A deeper message is....

Dynamic typing: Solve your problems but leave bugs because you moved too fast.

Static typing: Discover that there is no sound solution, then have to manually cut and glue pieces together to satisfy your type checker.


You do know that almost every language with a static type system has a hatch, right? Object, Any, whatever.


I feel like type checking for Python is a plot by engineers bored by a language that is too pragmatic and efficient for their taste.


I think it is a plot by engineers frustrated with python codebases in the 10s of thousands of lines of code.


blink A plot to do what exactly? Python is amazing, both in its traditional typeless form and in its typed forms. :D

Type checking mainly starts pulling its weight in large codebases, or otherwise in long-lived and large programs maintained by rotating groups of people over long periods.


well there are some warts...

1. no multiline lambda function? - breaks flow of writing fp code

2. default-arguments are 'shared' by default? - have to 'break' this sharing by assigning 'None'... - very unintuitive / I don't know any language that does this...

Anyway, not everything in python is 'pragmatic', and adding types is one of the first steps in going in the right direction.
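The shared-default wart in (2) looks like this: a single list is created when the `def` statement executes, then reused across calls, and the conventional fix is the `None` sentinel.

```python
def collect(item, bucket=[]):       # the [] is evaluated once, at def time
    bucket.append(item)
    return bucket

print(collect(1))  # [1]
print(collect(2))  # [1, 2] -- the default list is shared between calls!

def collect_fixed(item, bucket=None):
    if bucket is None:
        bucket = []                 # a fresh list on every call
    bucket.append(item)
    return bucket

print(collect_fixed(1), collect_fixed(2))  # [1] [2]
```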


No multiline lambda is a feature. It is convenient enough to use it like in

    sort(items, key=lambda x: x.size)
but really anything much bigger should have a name stamped on it dammit. I find JS largely unintelligible due to lambda overuse and nesting.


no it's not -- every other language other than JS/TS has multiline lambdas... Java 8+, Kotlin, Rust, Scala, C#, ...

seems you're confused between "limitation" and "feature"...

it's a limitation due to python's "no-braces/indentations-only" policy -- can't think of a way to mark where a lambda starts and ends clearly and cleanly with indentation-only...


Disallowing multiline lambdas was an intentional choice by the creators to prevent overly-functional code. For the same reason that reduce isn't in the global namespace anymore.

Python prefers different tools (comprehensions) to accomplish most of the same goals.
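For example, a chain of `map`/`filter` fed with lambdas usually collapses into a single comprehension:

```python
nums = [3, 1, 4, 1, 5, 9]

# functional style: lambdas fed to filter and map...
doubled_evens = list(map(lambda n: n * 2, filter(lambda n: n % 2 == 0, nums)))

# ...vs the comprehension Python pushes you toward
doubled_evens_comp = [n * 2 for n in nums if n % 2 == 0]

print(doubled_evens, doubled_evens_comp)  # [8] [8]
```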


I mean, yeah, maybe that's the reason, I don't know. Still, I see no reason why you'd split features and limitations into two distinct sets.

In Rust it's difficult to mess with memory: it's a limitation but also a feature. In Haskell you cannot mutate, a limitation if there ever was one, but also a feature.


woah another rustacean? hi there!

for rust, it counts as a 'feature' because it provides safety guarantee, and they're opt-in (can use unsafe)

as for haskell, that immutability also provides a guarantee of 'referential transparency', which users/libraries can use to their advantage.

But in python's case, you have to define the multi-line function *outside* the chaining, and 'forcing to name things' isn't always good (especially for constantly-changing code)

    def hideEmail(user): return {
        ...user,
        email: user.hideEmail ? '' : user.email,
    }

    users.map(hideEmail)

    // later:
    def hideEmailAndPhone(user): return {
        ...user,
        email: user.hideEmail ? '' : user.email,
        phoneNo: ...,  // same stuff
    }

    users.map(hideEmailAndPhone)

---

As for JS/TS, you can have 'named-callback-fn':

    users.map(function hidePrivateFields(user) { ... })

and it's not difficult to come up with 'named-lambda-fn' standard

    users.map(hidePrivateFields(user) => { ... })


    (lambda x:
      (x+x)
      /x)(n)
Will work, though in Python I usually just create an outside function and pass that in.

But yes, not everything in Python is nice for working with fp code. Using tuples, coming from Clojure, is painful. Things like raising `StopIteration` exceptions to end iteration are annoying. So is the mutability of lists outside a lexical scope unless you use deepcopy. Though functools and itertools make it more bearable.
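To that point: `next()` takes a default argument that sidesteps the `StopIteration` dance entirely, and functools/itertools cover a lot of the rest -- a small sketch:

```python
from functools import reduce
from itertools import islice

# next() with a default instead of catching StopIteration
print(next(iter([]), None))  # None, no exception raised

# islice/reduce cover common fp patterns without manual loops
first_three = list(islice(range(100), 3))            # [0, 1, 2]
total = reduce(lambda acc, n: acc + n, first_three)  # 3
```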

But I also don't think types is the answer. I think something like Racket's contracts or Clojure's spec is closer to what I'd prefer. Gradual typing is nice though.


Lambada, the forbidden dance:

    (
         lambda x:
         x
         +
         x
    )(5)


It's optional. If you're really better off without it, outperform the rest of us and show us the way.



