
Guaranteed correctness has a cost in the form of extra specification needed to get things done. For example, many type errors are essentially noise: they do not catch real bugs, and they create needless busy-work for the programmer to correct them.

Example:

    function stringify(FooObj object): return object.toString()

Now suppose we do some refactoring somewhere, and we want to call stringify not with a FooObj but with a BarObj. Since we have defined stringify as taking a FooObj, the compiler whines until we update the definition. However, depending on your philosophy, this error is nothing more than noise, because stringify would work fine on a BarObj.

Pragmatists may wish to forgo such safety in favor of greater flexibility:

    function stringify(object): return object.toString()

This is the essence of liberal programming: an emphasis on flexibility and minimal specification, at the cost of reduced safety guarantees.




That example is a straw man. First, all reasonably strong type systems support polymorphism well enough to handle that case trivially. Second, type inference is a thing, so a lot of code in strongly-typed languages even looks the same as your second code example.

I'm not saying you can't come up with a fair example to support your argument. I will say, though, that it's going to be a lot harder to do so, and that such examples only rarely come up in practice. The argument for flexibility at the cost of guarantees of correctness used to be a good one, but it has weakened significantly with time, and before long will cease to be valid at all.


The cited example makes two decisions that are not articulated explicitly, each of which has benefits and costs. They are:

1) The function executes correctly on the class of things for which (.toString thing) is a sensible run-time invocation.

2) The function delays examination of the correct class of its argument from compile time to run time.

1 is a benefit, in that in a dynamic system something which does not have a sensible .toString invocation at compile time can gain one at run time and then participate in the function.

2 is a cost, in that you have to delay classification to the instant at which you try to invoke .toString.

This tradeoff is elemental, and there will be systems you can build by embracing it that will never be possible in strongly typed languages. You'll be able to build isomorphs of them in those languages, but doing so will involve expressing significantly more ideas to get there.


Terms like "straw man" imply that I'm trying to debate, but I'm not. You asked how correctness could be contrary to pragmatism, and I tried to provide an example.


Whether or not it's a debate has nothing to do with whether or not your example is valid. I was just pointing out that it isn't.


Out of curiosity, how would you define my example stringify function in Haskell with "no cost" safety around the argument type?


The typeclass that provides "toString" in Haskell is "Show". It defines a few functions, of which the important one is "show", which takes a value of the original type and returns a string. A function equivalent to the java-ish example is:

    stringify s = show s
No type signature is necessary, because the compiler infers the correct, most general type:

   stringify :: (Show s) => s -> String
That is: for any s in the type class Show, a function that turns an s into a String. Note that dispatch is done statically.

Modern type systems eliminate most of the cost of type safety through good generics and type inference. Mainstream statically typed languages are just 20-30 years behind the state of the art.

(Note that the definition can actually be reduced to: stringify = show.)
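
For illustration, a GHCi session using this definition might look something like the following (a hypothetical session; exact prompts and output formatting may differ):

    > stringify 42
    "42"
    > stringify (Just 'x')
    "Just 'x'"
    > stringify [1,2,3]
    "[1,2,3]"
The same one-line definition works unchanged for each argument type; the only requirement is that the type has a Show instance.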


> Guaranteed correctness has a cost in the form of extra specification needed to get things done.

This is simply untrue. Take an example in Haskell:

    > show 23
    "23"
    > show (4,5)
    "(4,5)"
    > show (Just 2.34, Nothing, [2..5], 'c')
    "(Just 2.34,Nothing,[2,3,4,5],'c')"
This works for all types which are members of the class Show. We can even derive instances of Show for our own data types automatically:

    > data Foo = Foo Int Float deriving (Show)
    > show (Foo 3 4)
    "Foo 3 4.0"
But what happens if we omit the instance of Show from our definition?

    > data Bar = Bar Char Bool
    > show (Bar 'a' True)

    <interactive>:12:1:
        No instance for (Show Bar) arising from a use of `show'
        Possible fix: add an instance declaration for (Show Bar)
        In the expression: show (Bar 'a' True)
        In an equation for `it': it = show (Bar 'a' True)
We get all the benefits of safety guarantees with minimal extra work. This form of automatic derivation works for many type classes in the standard libraries, and in the cases where it doesn't we simply define the few methods specified in the class's definition, as sketched below. None of this is any more than what you'd need to do in a dynamic language, but you get all the extra benefits of compile-time safety.
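
For instance, a hand-written instance for the Bar type above might look like the following minimal sketch (Show only requires show, or showsPrec, to be defined; the rest have defaults):

    -- Manual Show instance for Bar, since we omitted "deriving (Show)"
    instance Show Bar where
        show (Bar c b) = "Bar " ++ show c ++ " " ++ show b
With that in scope, show (Bar 'a' True) evaluates to "Bar 'a' True", just as the derived instance would have produced.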



