
A dependently typed language can check at compile time whether you violate a ranged integer constraint, among many other things.

I'm curious, though, why you're not a fan of static typing? I feel like if you have a language with inference it's pretty good. I mean, you could imagine Python with type inference, and you could essentially write anything in it that you can today (barring stuff like heterogeneous data structures) without specifying the types manually.




(1) I like generic code which I can use as I see fit later. In Scheme, I can write:

  (lambda (x) (* x (+ 3 x)))

A half-decade later, someone can take that code, pass a pair of specially-crafted objects to it to introspect what it does, and take a symbolic derivative, pretty-print that derivative with LaTeX, combine it with a few other pieces, and compile it into native code (and yes, that does happen).

But a lot of my code is generic in a way where types get in the way.

(2) If I do have typing, the type should usually be specified manually, but it should have semantic meaning, not be based on inferring whatever types my programming language happens to have built in. What's important isn't that something is an integer, but that it's a count-of-apples. I shouldn't be able to compare a count-of-apples to a count-of-oranges without first converting them both to a count-of-fruit.
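
A rough sketch of what I mean, in Haskell-ish syntax for concreteness (the names here are made up, not from any library):

  -- Sketch only: distinct count types that can't be mixed by accident.
  newtype CountOfApples  = CountOfApples  Int deriving (Eq, Ord, Show)
  newtype CountOfOranges = CountOfOranges Int deriving (Eq, Ord, Show)
  newtype CountOfFruit   = CountOfFruit   Int deriving (Eq, Ord, Show)

  applesToFruit :: CountOfApples -> CountOfFruit
  applesToFruit (CountOfApples n) = CountOfFruit n

  orangesToFruit :: CountOfOranges -> CountOfFruit
  orangesToFruit (CountOfOranges n) = CountOfFruit n

  -- CountOfApples 3 == CountOfOranges 3                                   -- rejected at compile time
  -- applesToFruit (CountOfApples 3) == orangesToFruit (CountOfOranges 3)  -- fine, once both are fruit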

This is especially important for numerical code, and especially in education. A lot of Scratch tutorials have little kids add a velocity to a position, without first multiplying it by a time. That leads to deep-rooted misconceptions.

I don't mind type inference if it's designed to give me feedback. I've seen a few systems (aimed at students) that do things like take this JavaScript:

  let x = 5;
  let y = "hello";
  let z = x + y;

And flag that in the IDE. That's kinda nice. And obviously, many systems do optimization at runtime based on inferred types. That's not a problem either.

So there's a fuzzy layer in between. But the discussion was about TypeScript, not that fuzzy layer.


I don't really see how the first example is incompatible with static types. Most often, type inference infers the most general type possible. That means you can do exactly the same things with that lambda in a statically typed language as in a dynamic one.
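
For instance, a Haskell sketch (the Expr type below is made up, standing in for those specially crafted objects): the inferred type of that function is the most general one, so the same definition runs on plain numbers or on symbolic expressions.

  -- Sketch: one generic definition, reused numerically and symbolically.
  f :: Num a => a -> a   -- also exactly what GHC infers if you leave the signature off
  f x = x * (3 + x)

  -- A made-up symbolic expression type with a Num instance.
  data Expr = Lit Integer | Var String | Add Expr Expr | Mul Expr Expr
    deriving Show

  instance Num Expr where
    (+) = Add
    (*) = Mul
    fromInteger = Lit
    negate e = Mul (Lit (-1)) e
    abs    = error "not needed for this sketch"
    signum = error "not needed for this sketch"

  -- f (5 :: Integer)  ==> 40
  -- f (Var "x")       ==> Mul (Var "x") (Add (Lit 3) (Var "x"))
  --                       an expression tree you can differentiate or pretty-print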

The second example, with counts of apples and oranges, is a bad one. Those are unitless, and thus a plain natural number should be fine for them. Numerical code can get messy, but that's exactly where static typing is at its best: you can encode dimensions and units in the type system and get dimensional homogeneity checked for free.


I agree that well-designed type systems can be helpful, and in an ideal case, I'd have a mix. It's just that TypeScript is not that. The gap between the kind of type system you're describing and TypeScript is the Grand Canyon.

re: type inference

Inferential type systems fall into a fuzzy zone between statically and dynamically typed. For example, many JITs for dynamically typed languages do type inference, and generate compiled code optimized to the types actually being used. That's a very obviously good idea. And many linters look at inferred types as well to do compile-time checks. That's also an obviously good idea. But I wouldn't call a JavaScript JIT a statically-typed system (or Python code that's been through Pylint).

re: Apples versus oranges

They're not unitless. In the first case, the units are apples. In the second case, they're oranges. You shouldn't compare or add those types. If I do have a type system, that's something I should be able to specify.

Not to mention implicit conversions, if I want them (12 inches + 1 foot = 24 inches).
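
Something like this sketch is what I have in mind (Haskell-ish again, made-up names), with conversion handled once at construction:

  -- Sketch: one length type, normalized to inches when constructed.
  newtype Length = Inches Double deriving (Eq, Ord, Show)

  inches, feet :: Double -> Length
  inches = Inches
  feet n = Inches (12 * n)

  addLength :: Length -> Length -> Length
  addLength (Inches a) (Inches b) = Inches (a + b)

  -- addLength (inches 12) (feet 1)  ==> Inches 24.0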


Well, yeah, TypeScript's typing sucks, but that's not my point. Type inference falls squarely in the static typing camp. It has nothing to do with dynamic typing.

Take a look at the Haskell units library. That can do the things you illustrate.
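
The underlying idea is roughly this, sketched by hand with phantom types (this is not the units library's actual API, just the shape of the technique):

  -- Hand-rolled sketch, not the real units API: the dimension lives in the type.
  newtype Quantity dim = Quantity Double deriving (Eq, Ord, Show)

  data Metre
  data Second
  data MetrePerSecond

  add :: Quantity d -> Quantity d -> Quantity d
  add (Quantity a) (Quantity b) = Quantity (a + b)

  -- Velocity times time gives a distance. A real library derives this
  -- result dimension at the type level instead of hard-coding it here.
  travel :: Quantity MetrePerSecond -> Quantity Second -> Quantity Metre
  travel (Quantity v) (Quantity t) = Quantity (v * t)

  -- add position (travel velocity dt)   -- fine: both operands are Quantity Metre
  -- add position velocity               -- rejected: Metre /= MetrePerSecond

That also catches the velocity-plus-position mistake from the Scratch example upthread.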



