
> Few things change more frequently while evolving/hacking together software than the data structures and types.

I tend to agree; this is often the case. (Though sometimes we have solid data structures and types and are able to evolve things without changing them.)

Anyway, it follows that if we calcify and ossify data structures and types, we kill progress, not to mention fun.

> Changing a type in one place while forgetting to in N other places is an extremely common defect introduction method that is easily caught with static type systems.

This is blown out of reasonable proportion by static type advocates. In a dynamic system, a type is de facto just a few bits of a value in an object. So the above situation is just another value that is wrong.

Static programs also have run-time values that can be wrong. A static program that is supposed to calculate 42 might calculate 43, which has the correct type.
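
To make that concrete, here is a minimal Haskell sketch (hypothetical code, names invented): both definitions have the same type, so the type checker accepts either one, but only one of them computes the intended answer.

  -- Same type, different values; type checking accepts both.
  answer :: Int -> Int
  answer x = x + 2   -- intended: answer 40 == 42

  answer' :: Int -> Int
  answer' x = x + 3  -- wrong value, correct type: answer' 40 == 43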

To discover the cause, someone will have to sit down and think about the code.

Code which has never been exercised by a test case (coverage) should still be regarded as garbage, in spite of passing type checking. It is not known to be correct. Type-checked is not the same thing as correct.

The graph of "type-value" in typical code is rather trivial. Take any moderately complex algorithm, and ignore what its operations are doing; just consider their input and output types. The resulting graph/tree is a lot simpler than what is going on with the values. Type problems are trivial. That is why it's even possible to move them to compile time and have static languages. Think about it: something that runs for hours or days in order to produce a result (and is not even known to terminate) can be entirely type-checked in a millisecond.
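
For instance (a small hypothetical Haskell illustration): the Collatz step counter below may run for a very long time, and is not even known to terminate for every input, yet its type checks in an instant.

  -- Counts Collatz steps. Whether this terminates for all n > 0 is an
  -- open problem, but the type Integer -> Integer is verified instantly.
  collatz :: Integer -> Integer
  collatz 1 = 0
  collatz n
    | even n    = 1 + collatz (n `div` 2)
    | otherwise = 1 + collatz (3 * n + 1)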

We do not know a whole lot about code just because it has type checked. It may feel like you do, but that's just a feeling. The name of the function might be `sort`, but it could actually fail to sort items, in spite of type checking. Or it could sort them in descending order when the expected order is ascending: a comparison was flipped somewhere, and the flipped comparison still type checks because both of its operands have the same type.
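
For example (a hedged Haskell sketch, not from any real code base): both comparators below have the same type, so both versions type check, but only one sorts in the expected ascending order.

  import Data.List (sortBy)

  -- Both have type Ord a => [a] -> [a]; the flipped comparison is
  -- just as well typed as the intended one.
  sortAsc, sortDesc :: Ord a => [a] -> [a]
  sortAsc  = sortBy compare
  sortDesc = sortBy (flip compare)  -- the "reversed test"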

You still have to either prove your code if possible and feasible, else test it exhaustively if possible, or else test it imperfectly.




"...calcify and ossify data structures and types..."

What are you even talking about? This sounds like an assertion from someone who doesn't use statically-typed languages and is just guessing.

To change a data structure in a statically-typed language, you change the declaration then fix any compile errors. It is easy in most cases.
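
For instance (a hypothetical Haskell sketch, names invented for illustration): change a field in the declaration, and the compiler lists every site that has to be updated.

  data User = User { name :: String, age :: Int }

  birthday :: User -> User
  birthday u = u { age = age u + 1 }
  -- If `age` is later changed to `Maybe Int` in the declaration,
  -- this update stops compiling and the compiler points right here.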

In a dynamically-typed system, data structures actually get way more ossified, because when you change a structure you don't really know what might be broken or when you are really done making the code correct again... Therefore programmers avoid this.


I feel we're getting into strawman assertion territory here.

I've worked with statically typed and dynamically typed languages, and the fact is that there's some overhead involved in static typing. You pay that overhead upfront in order to hopefully prevent some bugs (and possibly get a performance boost) later on.

If you haven't tried both, you probably won't notice the mental overhead.

I've also found that this difference sometimes results in people approaching things differently, with static typing lending itself more to specifying things upfront, compared to jumping straight into the code with dynamic typing.

Sometimes this surely means things end up clearer with static typing, but I've also seen the reverse happen, more often than you'd think, because some kinds of code have a lot of messy little details. With a dynamic language like Python you can model those little local details (or perhaps rather avoid modeling them) by just stuffing things onto something you're already passing along, while the static typer has to think harder about how to fit them in without polluting the overall data structures.

When you're rewriting things a couple of times as your understanding of the problem evolves, it can certainly help to be able to not worry too much about those details.


Oh, I've tried both all right!

There is more mental overhead in dynamically-typed languages, actually... it's just less visible because it's implicit! It's the overhead of having to "keep all the type information in your head", which dynamic type proponents sometimes seem to be saying is a good thing.

It's not good, because it is a tax on everything you do! Whereas in a statically-typed language, sure, you pay the little extra overhead of putting the types into the program text, but this is quite freeing in the long term, because you can then, in most cases, drop the burden of having to think about what type something needs to be.

(It also serves as documentation / literate programming.)

My approach to programming tends to involve rewriting things several times, or heavily modifying them, and as someone who has been programming for 34 years, in a lot of different situations, I find that static typechecking is by far the superior framework when refactoring or rewriting code. It is not even close.


I'm firmly on the static typing side of the fence. But I think you're ignoring what others are saying about where dynamic typing shines: Where you don't know what you're doing to the point that you don't know what the proper types look like. You can try things faster in a dynamic language in that situation, because you can sweep the "don't know what the types look like" part under the rug, and experiment with algorithms, and let that experimentation show you what the types ought to be. In some situations, that can be faster/easier/better than trying, from first principles and analysis, to determine what the types must be.

I think the difference is that I (and, I suspect, you) spend little of our time in that mode, and much more in the mode where static types can save your behind. (Nothing like working on a million-line code base in its second decade to make you value static types.)


No, this is how I program all the time. Usually when I start typing I have only a rough idea of what I want to do.

But in practice this is not a problem. So what if I don't know exactly what type a particular thing will be in the end -- I know generally if it is a number, or an array/list, or a hash/index... that is all I need to know. I use one of those basic types. If I need to change it later I change it later, and the fact that I am in a statically-typed language is great for changes like this because it helps me make them with high confidence.

This is why I don't believe that anyone who makes this argument in favor of dynamic languages really has that much experience in static languages. The actual outcome in real life is the opposite of what is described.


Sometimes you know what the types look like but not in detail. For instance, you know you need a structure with several named properties, but you don't commit to a type for some of them. You can have a fairly deep idea about the object model: what kinds of classes there are and how they connect, yet be loose about some of the lower level details.
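
Concretely, that kind of partial commitment might look like this (a hypothetical sketch in Haskell, names invented): the record's shape is fixed, while the undecided parts are left as a parameter.

  -- The overall shape is committed; `details` stays open until the
  -- lower-level design settles (start with (), refine it later).
  data Order details = Order
    { orderId  :: Int
    , customer :: String
    , extra    :: details
    }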

You have types in the dynamic realm. It's just that you're not restricted to executing only consistently typed programs.


You still have to keep type information in your head when working with static programs. Compiling isn't all that you do with programs.

Not every expression in a program carries an explicitly visible type, not even in a language with declarations for all storage locations. Type inference seeks to minimize explicit annotation, because annotation is considered clutter.

However, the completely clutter-free static program looks much like a dynamic one! If I look at some snippet of OCaml or what have you, for all I know it could be dynamic.
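
For instance, this fully inferred snippet (a small hypothetical example in Haskell) carries no annotations at all; on the page it reads much like dynamic code, yet every type in it is statically checked.

  -- No signatures anywhere; the compiler infers [String] -> [String].
  process items = map clean (filter keep items)
    where
      keep s  = not (null s)
      clean s = unwords (words s)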

The difference is that it's constrained in invisible ways. (If it is already correct in its current form) it cannot be changed in certain ways and still be accepted for execution.

That doesn't mean a thing to me when I'm just looking at it trying to understand it. I have to figure out what the types are, and connect all the pieces in my head.


In a statically-typed language with reasonable tools you can right-click on an expression and see what type it is. This whole thing you're describing is a non-issue.


> To change a data structure in a statically-typed language, you change the declaration then fix any compile errors. It is easy in most cases.

Right. This can become an overhead in a rapidly evolving design, because every small type change has implications across the codebase. With dynamic typing, you only need to fix the code paths you are currently interested in, and development is very focused in a way: you don't have to shuffle all parts of the code through your mental caches. This way you can iterate over a few revisions to the types without having to 'fix everything' each time. Finally, when you're happy, you go ahead and fix the other dependent code. Admittedly, dynamic languages may not help you find the code that needs fixing; that problem is somewhat mitigated by tests.


> you change the declaration then fix any compile errors.

In all 3123 places in the code base; good luck.

This sounds like ... an assertion from someone who doesn't do actual software engineering.

Back at you!

> you don't really know what might be broken

Because you don't have a regression test suite, a test plan, and four developers to one QA person.

> Therefore programmers avoid this.

Programmers bravely avoid nothing. Historically, programmers have gotten themselves into every imaginable mess with every type of language.


> Anyway, it follows that if we calcify and ossify data structures and types, we kill progress, not to mention fun.

I don't think it's fun to ship bugs to customers.

I think it's much more fun to have a pervasive, mandatory proof engine that shows that I don't create these bugs in my software. (since it's mandatory, I can also help my teammates use my code correctly even though they don't understand how it works!)

I make better progress when I'm not creating preventable bugs. My customers have more fun too.

> This is blown out of reasonable proportion by static type advocates. In a dynamic system, a type is de facto just a few bits of a value in an object. So the above situation is just another value that is wrong.

It's wrong in a way that's totally preventable ahead of time. I'd rather take the proof engine and never suffer this class of bug. It certainly doesn't catch every bug, but I'll take what I can get.


> shows that I don't create these bugs in my software.

Unfortunately, it doesn't show you that you don't create bugs other than these easy ones in the code.

> It's wrong in a way that's totally preventable ahead of time.

That would be a more powerful argument if all else could be held equal, which it rarely is.

> I'll take what I can get.

Local greediness is not always rationally founded.

Anyway, dynamic typing doesn't preclude the proof engine. Dynamic programs can be analyzed to predict situations of inconsistent use of type, so that you can be informed. However, that doesn't prevent them from being executed, with the type representations still there at run time to resolve the situation. That is to say, programs for which the proof engine positively identifies one or more type errors can be run anyway, as can those for which it says "undecided" (not proven free of type errors, but no errors confirmed).

Dynamic typing doesn't mean "I don't want my program analyzed prior to it being run; do not inform me of any impending issue of type!".


> We do not know a whole lot about code just because it has type checked. It may feel like you do, but that's just a feeling.

You can know a whole lot because it type-checked. What exactly you know depends on how you leveraged the type system.

For example, if you define a red-black tree as it is defined here: https://github.com/yairchu/red-black-tree/blob/master/RedBla...

You know the code will never generate trees that break the red/black invariants. The tree is guaranteed to be balanced by the type-checking process.
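
The linked definition is in Haskell; a condensed sketch of the idea (simplified here, not copied from that file) looks like this:

  {-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

  data Color = Red | Black
  data Nat = Z | S Nat

  -- Each subtree's root color and black-height live in its type: a red
  -- node may only have black children, and both children must share one
  -- black-height, so an ill-formed tree cannot even be constructed.
  data Tree :: Color -> Nat -> * -> * where
    Leaf :: Tree 'Black 'Z a
    RedN :: Tree 'Black n a -> a -> Tree 'Black n a -> Tree 'Red n a
    BlkN :: Tree cl n a -> a -> Tree cr n a -> Tree 'Black ('S n) a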

> The name of the function might be `sort`, but it could actually fail to sort items, in spite of type checking

Unless you use a dependently-typed language.

Consider this `sort` definition:

  sort : (in:List n a) -> (out:SortedList n a, IsPermutation in out)

"List n a" is a list of length "n" of values of type "a". "SortedList n a" is a sorted list of length "n" of values of type "a". "IsPermutation a b" is a proof that a is a permutation of b (returned from sort).

Here's how you can define a type like SortedList:

  type SortedList n a =
    exists min. SortedList' min n a
  data SortedList' min n a where
    Empty : SortedList' min 0 a
    Cons : (x:a) -> LessOrEq x y -> SortedList' y n a -> SortedList' x (1+n) a

SortedList thus embeds a proof alongside each element that it is less than or equal to the lower bound of the remaining list (i.e. the next element).

Type-checking this guarantees that `sort` does the right thing, modulo resource-use/performance (or termination, depending on your type checker).

Resource use (bounding) can also be type-checked, but requires leveraging the type system even more.

These guarantees aren't free -- you have to write the types for them, and you have to prove them by instantiating values of these types. But you can opt in or out of each of these guarantees -- leverage the type system as much as you want.

With dynamic typing, you just don't have this tool at your disposal at all. No flexibility to guarantee anything. You're forced to opt out completely.


That would be useful if it could be applied to normal red-black trees and sorting. As in, respectively: the efficient, destructive algorithm which inserts into a tree, rendering it temporarily unbalanced and non-red-black, and then fixes it up; and an in-place quicksort on an array.


Why is only the destructive version useful?


Please clarify something for me: does that "proof" run at run time, or at compile time? If at compile time, how does it do so without running into something like the halting problem?


It does run at compile time, and it doesn't run afoul of Rice's theorem (a useful generalization of the theorem about the halting problem) because it's not implementing a general decision procedure that can decide for _all_ programs whether the property holds.

It's possible to solve halting-class problems for restricted subsets of programs, as well as to enlarge those subsets greatly by forcing the user to help. For example, I can trivially "solve" the halting problem for the subset of programs that contain no loops or recursion, by simply refusing to make a decision on programs that do. I can also solve it for the subset of all programs that do halt by simply running the program and reporting when it halts that it does, in fact, halt.

More usefully, the type systems in Peaker's examples rely on user-provided annotations to help the compiler "see" the truth of the user's claims (or equivalently, the compilers restrict the class of programs they operate on to those programs which contain sufficient user-provided annotations). The compiler doesn't have to prove everything itself - it only has to check that the proof is valid according to the rules of the relatively simple logic of the proof system.

The history of type systems has largely been a story of improving these logics and inference systems to reduce the amount of help needed from the user and enlarge the class of things that can be proven about programs. In this context, Rice's theorem amounts to saying that, no matter how complicated you make your type system, there will _always_ be programs that either (a) are valid but not accepted, or (b) are accepted but not valid.


So is it fair to say that you are proving to the compiler that your function will return a sorted list (rather than you handing an arbitrary function to the compiler, and the compiler decides whether it returns a sorted list)?


Yes, and in general it's pretty clear as you're doing it that that's what's happening. The ML family and Haskell type systems occupy a pretty nice place on the spectrum of expressiveness in that there's not much you can add to them without losing the ability to infer unique types. As long as your types are on the same level of complexity as a Haskell or ML type, you can typically get away with minimal type annotations - a bit more than ML or Haskell would've required, but not a lot more. When you introduce more complex quantifiers such as the "exists" in these examples, in practice you have to start thinking about "what the compiler knows" and how to help it know the things you want it to.

There are some systems such as Coq that use heuristics (and let you write your own new heuristics) to try to automatically find proofs, and interactive sessions in which you guide the system to the proof by suggesting tactics that move hypotheses toward goals or vice versa. This is one of the major factors that, in common speech, distinguishes between "typed languages" and "proof assistants" which otherwise are basically the same kind of thing. Even in proof assistants, though, you still need to be quite intentional about what lemmas need to be proven in order to move toward your ultimate goal, and often even in how you phrase your goals in order to improve the chances that the heuristics will find a proof.

These are the kinds of things I mean by "helping" the system in my previous comment. They are very powerful tools if assurance of correctness is a high priority, but definitely not a free lunch.

If you're ever interested in going down the rabbit hole, there's a pretty steep learning curve (though if you're comfortable with ML or Haskell you're already halfway up it) but an extensive and rewarding landscape of thought at the top. Unfortunately it's more of an academic or self-enrichment discipline than an industrial one these days, but if you look hard enough there are a few jobs where you can make use of these tools.


How the type-checking proof avoids the halting problem is by lumping "ill-formed" and "undecided" together into one category: reject the program. Programs which cannot be shown to be free of type inconsistencies are rejected alongside those which are confirmed to have one.

(Actually, before we even get to that, the type system has to be powerful enough to invoke the problem in the first place, i.e. be Turing-complete: capable of expressing partial recursive functions.)


> We do not know a whole lot about code just because it has type checked. It may feel like you do, but that's just a feeling.

See also: The Safyness of Static Typing[1]

[1] http://blog.metaobject.com/2014/06/the-safyness-of-static-ty...



