Hacker News

So what might make dynamic languages easier to write in? I enjoy dynamic languages more because:

* "double kilometerToMiles(double km) { return km / 1.6 }" here the types are of really low value, yet I have to type them in.

* a mutable list of generic and variable arity event handlers. Really hard to specify as a type (most languages cannot do it). Really easy to use.

* co- and contravariance of lists: the math works exactly opposite to intuition.

* when you change your mind on the types/arity beyond what your IDE can follow, it is a lot of low value work

* if your test suite has 100% coverage of arguments/return-value and field usage, you have type checked your program.
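For comparison, in a language with type inference the annotations in the first bullet mostly disappear. A minimal TypeScript sketch (the function name is illustrative):

```typescript
// Return type and the arithmetic are inferred; only the parameter
// needs an annotation, and even that can come from context.
const kilometersToMiles = (km: number) => km / 1.6;

console.log(kilometersToMiles(160)); // 100
```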

If I am going to write down types, they better be of value to me. Like types that are null or non-null, or tainted (from the user) or non-tainted. Or two numbers that are kilometers vs miles. Or only accessible under a certain lock or other preconditions. And be flow sensitive (kotlin style null checks), not make me write even more code.
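The kilometers-vs-miles case can be had cheaply even without a units library. A TypeScript sketch using a "branded type" encoding (the brand field is just a compile-time trick, erased at runtime):

```typescript
// "Branded" number types: identical at runtime, distinct to the checker.
type Kilometers = number & { readonly __unit: "km" };
type Miles = number & { readonly __unit: "mi" };

// Constructors hide the one cast needed to create a branded value.
const km = (n: number): Kilometers => n as Kilometers;
const mi = (n: number): Miles => n as Miles;

function toMiles(d: Kilometers): Miles {
  return mi(d / 1.6);
}

console.log(toMiles(km(160))); // 100
// toMiles(mi(100));  // compile error: Miles is not assignable to Kilometers
```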

But if you are a dynamic language, do be strongly typed (no "1" == 1). Also be dynamic in arity; that is (part of) a type. Don't be hand-wavy about scoping: lexical scoping is probably the only predictable way to do it. Fail fast, for example by catching references to non-existent field names. And allow this to work: "2 * Vector(10,10)".

And from that perspective there is still quite some room for improvement in all classes of languages, I suppose, and good that there is lot of movement and experiments today.




> double kilometerToMiles(double km) { return km / 1.6 }

But what if you could have `miles kilometersToMiles(kilometers km)`? Under the hood it's the same old doubles, but you would be able to catch errors such as the one that caused the Mars Climate Orbiter failure [1].

E.g. Haskell has units [2] and dimensional [3], which allow you to do exactly that.

[1] https://en.wikipedia.org/wiki/Mars_Climate_Orbiter#Cause_of_...

[2] https://github.com/goldfirere/units

[3] https://github.com/bjornbm/dimensional


> * "double kilometerToMiles(double km) { return km / 1.6 }" here the types are of really low value, yet I have to type them in.

You shouldn't have to, and you wouldn't in say Haskell.

> a mutable list of generic and variable arity event handlers. Really hard to specify as a type (most languages cannot do it). Really easy to use.

You want the handlers to have types corresponding to the events they handle, right? I.e. a labelled product/coproduct (similar to HList). They do exist.

> * co- and contravariance of lists: the math works exactly opposite to intuition.

I don't understand the claim here?

> * when you change your mind on the types/arity beyond what your IDE can follow, it is a lot of low value work

In a well-designed language most of your functions will be generic, and the parts you have to change should only be the parts that your change actually affects. Obv. this ideal is not fully achieved yet, but we keep getting better at it.

> * if your test suite has 100% coverage of arguments/return-value and field usage, you have type checked your program.

Not true. The fact that one example works for a given function does not prove the type is correct for all arguments. As a trivial example, you could define a function like "def f(i) = if (i == 1000) "blah" else i". Typechecking would catch that immediately, but to catch it through testing you'd have to test every possible integer as input.
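The same trap, sketched in TypeScript: the inferred union type surfaces the bug at compile time, where tests would need to hit i == 1000 to find it.

```typescript
// The analogue of  def f(i) = if (i == 1000) "blah" else i :
function f(i: number) {
  return i === 1000 ? "blah" : i; // inferred return type: string | number
}

// A caller that expects a number is forced to confront the string case:
function double(i: number): number {
  const r = f(i);
  // return r * 2;  // compile error: r might be a string
  return typeof r === "number" ? r * 2 : NaN;
}

console.log(double(3), double(1000)); // 6 NaN
```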


To most commenters: Indeed static typing has advantages as well; I just didn't list them. And some languages solve certain "complaints" I had.

I would suggest, though, that comparing Haskell to JavaScript is almost like comparing a train with an off-road bike. Which one you choose, and when, is a discussion at a whole different level.

As for the list of event handlers example: the point is that it is a trivial piece of code. Any jQuery plugin writer can do it. Yet the type math behind it is complex. Java cannot express it. Most languages cannot be generic in function arity, so they cannot express it. Spending time on it is not worth the trouble. And paying this penalty at the API level, where every user has to pay, is just a sin.

Some discussion from the dartlang on covariant lists: https://www.dartlang.org/articles/design-decisions/why-dart-...

The test suite idea was incorrect. Even 100% code coverage will not do. But for "normal" code, add a reasonable amount of coverage, and any production runtime type error will be really surprising. And even when it does happen, the fix is clear and quick.

A personal feeling on this topic is that it is all tradeoffs. But it makes me really happy to see gradual typing gaining traction. So you can start on your off-road bike, but end up on a well-oiled train, where that is worth the tradeoff. So you don't have to choose up front. Perhaps we can even get runtime guarantees on performance/object layout once the type checker is happy.


> The point is that it is a trivial piece of code. Any jQuery plugin writer can do it. Yet the type math behind it is complex. Java cannot express it. Most languages cannot be generic in function arity, so they cannot express it.

As the quote goes, dynamic typing is the belief that you can't explain to a computer why your code works, but you can keep track of it all in your head. If it's really hard to express in a given language then that's a problem with that language, but IME anything that's hard to express in statically-typed languages in general is a bad idea.

> And paying this penalty at the API level, where every user has to pay, is just a sin.

I'd say the opposite, because it's easy to cast away types when you don't need them, but it's virtually impossible to put type information back into a language that doesn't have it.

> The test suite idea was incorrect. Even 100% code coverage will not do. But for "normal" code, add a reasonable amount of coverage, and any production runtime type error will be really surprising.

IME: For any given level of "I want all runtime errors to be at least this surprising", that level will be easier to achieve via types than via tests.

> A personal feeling on this topic is that it is all tradeoffs. But it makes me really happy to see gradual typing gaining traction. So you can start on your off-road bike, but end up on a well-oiled train, where that is worth the tradeoff. So you don't have to choose up front.

IME it's ineffective - I spent years trying to get types working after the fact, but the trouble is that code that's 95% typechecked might as well be 0% typechecked. Every violation of typesafety needs to be visible right at the point where it happens if you're going to have any hope of correct code.

> Perhaps when we can even get runtime guarantees on perf/object layout when the type checker is happy.

That'd be nice. But it's going to take more advanced typing, not less.


I only saw this now. Good reply. All valid concerns. But:

> "I want all runtime errors to be at least this surprising", that level will be easier to achieve via types than via tests

For the type of the values, yes, but you still need tests for the content of the values. E.g. you can only index into arrays with integers, but can your code access the array out of bounds? By testing the content, that level is implicitly achieved.

> but the trouble is that code that's 95% typechecked might as well be 0% typechecked

That is absolutely not my experience. Types at module boundaries really help, even if that is only 5% of the code. On the other hand, getting to 100% type checked can be hard in current gradual systems, mostly because some code is hard to type even though it works fine and humans can reason about it fine. Though you might be right: it is probably better expressed in a more type-sound way. (Unless it is parametrization on function arity that does you in, grr.)

But to me, unless you are coding in Idris or such, types are like the derivative of your code. Yes, they show the general "slope": how it fits together. But they say nothing about the values that flow through it, or the correctness of the code. You still need tests, and the type checker is not much more than a spell checker. Its usefulness is not in dispute (though it is overestimated by some). It's just that in most of today's popular languages, you pay a price that is not trivial. The choice is a tradeoff.


> For the type of the values, yes, but you still need tests for the content of the values. E.g. you can only index into arrays with integers, but can your code access the array out of bounds? By testing the content, that level is implicitly achieved.

So you create a path-dependent type for valid indices into a given array (or, more likely, use a safe way of expressing iteration so that you don't need to see indices directly). Yes you can check that your indexing is valid with tests, but it's more efficient to do it with types.
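A cheap version of this, sketched in TypeScript rather than the full path-dependent encoding: iterate instead of indexing, and when you must index, the (real) `noUncheckedIndexedAccess` compiler flag forces a bounds check.

```typescript
const xs = [10, 20, 30];

// Iteration constructs never produce an out-of-bounds index:
let sum = 0;
for (const x of xs) sum += x;
console.log(sum); // 60

// Under "noUncheckedIndexedAccess", xs[i] has type number | undefined,
// so naked indexing must be handled before use:
const last = xs[xs.length - 1] ?? 0;
console.log(last); // 30
```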

> That is absolutely not my experience. Types at module boundaries really help, even if that is only 5% of the code.

Maybe if they're enforced at those boundaries - does that happen? I'm thinking mainly of gradual typecheckers that are erased at runtime, which IME are useless, because a type error can easily propagate a long way before it's actually picked up, and you see an "impossible" error where something has a different type at runtime to its declared type.
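The "impossible" error is easy to reproduce in any erased-types system; a TypeScript sketch, where one lying cast detonates far away from its source:

```typescript
// The cast typechecks, but it is a lie about the runtime value:
const user = JSON.parse('{"name": 42}') as { name: string };

function shout(u: { name: string }): string {
  // The declared type says this is safe; the failure surfaces here,
  // far from the cast that actually caused it.
  return u.name.toUpperCase();
}

let message = "";
try {
  shout(user);
} catch (e) {
  message = "impossible error: " + (e as Error).message;
}
console.log(message);
```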

> But to me, unless you are coding in Idris or such, types are like the derivative of your code. Yes, they show the general "slope": how it fits together. But they say nothing about the values that flow through it, or the correctness of the code.

Not my experience once you get used to working with the type system. Anything that should always be true in a given part of the code, you make into part of the types. Make the types detailed enough that a function with a given signature can only have one possible meaning, and then any implementation of that function that typechecks must be correct.
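A small illustration of "the signature pins down the meaning" is parametricity; a TypeScript sketch:

```typescript
// A fully generic signature leaves very little room for wrong
// implementations, because the body knows nothing about T: about the
// only total implementations are f(f(x)), f(x), or x itself.
function applyTwice<T>(f: (x: T) => T, x: T): T {
  return f(f(x));
}

console.log(applyTwice((n: number) => n + 1, 0)); // 2
```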


> * if your test suite has 100% coverage of arguments/return-value and field usage, you have type checked your program.

The most horrible argument in favor of dynamic typing, in my opinion. Test suites shouldn't be about checking if a return value is of the expected type; that's ridiculous. It amounts to implementing a type checker manually.

There are a lot of things statically typed languages can do to make types painless:

- (global) type inference (Crystal does that)

- co- and contravariance (not limited to dynamically typed languages)

- optional parameters

- ad-hoc type declarations, as in C# (e.g. new { x = 1, y = "a" } is a type)

- contracts

- pattern matching

- generics

- etc., etc.
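A few of these in one TypeScript sketch (standing in for the C#/Crystal examples; names are illustrative, and the discriminated-union switch plays the role of pattern matching):

```typescript
// Structural ("ad-hoc") types and local inference:
const point = { x: 1, y: "a" }; // inferred as { x: number; y: string }

// Optional parameters:
function greet(name: string, title?: string): string {
  return title ? `${title} ${name}` : name;
}

// Pattern matching via discriminated unions, with exhaustiveness checking:
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "square":
      return s.side ** 2;
  }
}

console.log(point.y, greet("Ada", "Dr."), area({ kind: "square", side: 3 }));
// a Dr. Ada 9
```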

I just dislike dynamically typed languages. Even interpreted languages should be statically typed, IMHO. Again, there are multiple ways to make statically typed languages painless when it comes to types.


> Test suites shouldn't be about checking if a return value is of the expected type

Indeed, and so they don't have to. When you have reasonable code coverage, it is really surprising to encounter type issues in production, and they are the easiest mistakes to fix. But the point is, runtime checks happen implicitly as you run the code, so no "implementing a type checker" is needed...

On the other hand, my comment about when you get the equivalent of being fully type checked was incorrect. Even 100% code coverage will not guarantee it for some code.

I like both kinds of languages, and use both. While you get some advantages in static languages, you also pay a price in language semantics and effort. It is good to point that out, so we can continue making progress, like Crystal or the latest C# (and the latest F#!). There used to be a time when the mood was: Java is good for all, stop confusing us with your new languages.


> * if your test suite has 100% coverage of arguments/return-value and field usage, you have type checked your program.

This is absolutely false. Others have pointed this out, but types are much more general than runtime checks in dynamically typed languages. For instance, you can check at compile time that your program doesn't have data races or deadlocks:

http://www-kb.is.s.u-tokyo.ac.jp/~koba/typical/

Or design a library in Haskell to ensure file handles and other scarce resources are promptly reclaimed:

http://okmij.org/ftp/Haskell/regions.html#light-weight

Neither of these are testable in dynamically typed languages. Static types are much more powerful than dynamic types, in general.


Writing down the types isn't a necessary property of static type systems. In Haskell, F# or (OCa)ML, you almost never have to actually mention any types.

I do agree that Java-style types aren't of much value. That catches few errors at significant cost.


Catching logical errors is only one out of several things that static typing gives you.

Incidentally, right now this is a few positions above this posting: https://news.ycombinator.com/item?id=12350063

Better IDE support, a kind of self-documentation and increased performance come to mind. I know there are counter-examples for each of these points, but the general trend cannot be denied.


The cost of non-Java-style types is compile time :)


> "double kilometerToMiles(double km) { return km / 1.6 }"

What about:

  newtype Kilometers = Kilometers Double
  newtype Miles = Miles Double deriving Show

  toMiles (Kilometers k) = Miles (k / 1.6)
Then you do:

  let x = Kilometers 5
  print (toMiles x)
And, of course, any calculation will only accept the correct units. With this naive implementation you'll have to add the conversions yourself to satisfy the compiler, but there are Haskell libraries that bring automatic conversions too.

Anyway:

> Like types that are null or non-null, or tainted (from the user) or non-tainted.

That's Haskell.

> Or two numbers that are kilometers vs miles.

That's my example up there.

> Or only accessible under a certain lock or other preconditions.

Yep, Haskell.

> And be flow sensitive (kotlin style null checks), not make me write even more code.

That's emulating what people do in Haskell.



