
I can see this, kinda. It would be interesting to experiment with how black-and-white this really is.

Historically, most languages have offered either compile-time (static) or run-time (dynamic) type checking. Left to pick between one or the other, and with experiences like the above, people make their binary choice.

More and more in my Python code, I add what type annotations I can. My feeling is that the ROI on annotation coverage is non-linear: I get a lot of mileage out of the types I can add easily. When it gets complicated (interesting unions or deeply nested containers) it gets harder and harder. Enough so that sometimes it influences my design decisions just to make the typing job easier.
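To sketch the asymmetry I mean (the names here are made-up examples, not from any real code base):

```python
from typing import Union

# Easy, high-ROI annotation: plain parameter and return types.
# This catches most call-site mistakes for almost no effort.
def word_count(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

# Harder: a recursive union over nested containers. Spelling this
# out gets awkward fast, which is the point where typing starts to
# push back on the design itself.
Payload = Union[str, int, list["Payload"], dict[str, "Payload"]]

def flatten(p: Payload) -> list[Union[str, int]]:
    if isinstance(p, (str, int)):
        return [p]
    if isinstance(p, list):
        return [leaf for item in p for leaf in flatten(item)]
    return [leaf for value in p.values() for leaf in flatten(value)]
```

The first function is the "easy 80%"; the `Payload` alias is the kind of deep containment where I start simplifying the design instead of the annotation.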

I'm left to wonder how this scales as the code base/team size grows. If the pain of 0% types is 100 in a large project, and 100% types cuts it to 10, what happens if we all do only the easier 80% of annotations? Is the pain reduced by 80% too? Or is my personal experience mirrored, and it's actually quite a bit better than an 80% reduction?




> either compile time (static) or run time (dynamic) type checking

But it is not that black and white, is it? Python is actually somewhat strict in that it checks (some) types at runtime. Other dynamically typed languages live entirely by the "if it quacks like a duck" playbook.
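Both behaviors are easy to see in a few lines (the classes are made-up for illustration):

```python
# Runtime type enforcement: Python refuses to mix str and int,
# even though nothing was declared anywhere.
try:
    "2" + 2
except TypeError as exc:
    print("runtime type check:", exc)

# Duck typing: no shared base class or declared interface needed;
# anything with a quack() method is accepted.
class Duck:
    def quack(self) -> str:
        return "quack"

class Person:
    def quack(self) -> str:
        return "I can quack too"

for thing in (Duck(), Person()):
    print(thing.quack())
```

So Python is dynamically but strongly typed, which already sits somewhere between the two poles.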

On the other hand, Haskell is completely statically typed. Still, you can write many programs without annotating any types at all, because the compiler is pretty good at inferring types from context.


> Enough so that sometimes it influences my design decisions just to make the typing job easier.

This is a very good thing. You definitely want the type system (and tests!) to guide your system design.



