I write Ruby and JavaScript all day, and while I make plenty of errors, I find that they are rarely related to types. What are you working on where you regularly run into these? I hear this argument often enough that I assume there is something to it, but I struggle to understand it.



The bugs you make may not seem related to types because your definition of what a "type" is reflects the very weak guarantees given by C, Java et al.

Have you ever had a bug due to something being null/nil when you didn't expect it? How about a string or list being unexpectedly empty? Perhaps you've discovered an XSS or SQL-injection vulnerability? What about an exception that you didn't anticipate, or really know what to do with?

In a more robust type system, these could all be type errors caught at compile time rather than run time. A concrete example of the null/nil thing: in Scala, null is not idiomatic (although you can technically still use it due to Java interop, which is understandable but kind of sucks). To indicate that a computation may fail, you use the Option type. This means that the caller of a flaky method HAS to deal with it, enforced by the compiler.
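A minimal sketch of what that looks like in Scala (the names here are invented for illustration):

    // A lookup that may fail returns Option[User] instead of a possibly-null User.
    case class User(name: String, email: String)

    def findUser(id: Int): Option[User] =
      if (id == 42) Some(User("Ada", "ada@example.com")) else None

    // The caller can't just call .name on the result; the compiler makes it handle both cases.
    val greeting = findUser(7) match {
      case Some(user) => s"Hello, ${user.name}"
      case None       => "No such user"
    }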

My "come to Jesus" moment with the Option type was when writing Java and using a Spring API that was supposed to return a HashMap of data. I had a simple for-loop iterating over the result of that method call, and everything seemed fine. Upon running it, however, I got a null-pointer exception; if there was no sensible mapped data to return, the method returned null rather than an empty map (which is hideously stupid, but that's another conversation). This information was unavailable from the type signature, and it wasn't mentioned in the documentation. The only way I had of knowing this would happen was either inspecting the source code, or running it; for a supposedly "statically-typed" language, that is pretty poor compile-time safety.

This particular example of a stronger null type would be doable in the "weaker" languages, but it isn't done, for several reasons - culture and convenience being the two most prominent in my opinion. In this sense, "convenience" means having an interface that does not introduce significant boilerplate; any monadic type essentially requires lean lambdas to be at all palatable. "Culture" refers to users of the language being willing to tolerate a more invasive type system, which admittedly does add mental overhead.
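To make the "convenience" point concrete, this is roughly what the lean-lambda style looks like with Option in Scala (a sketch with made-up names):

    case class Account(email: Option[String])

    def findAccount(id: Int): Option[Account] = None  // stand-in for a real lookup

    // With terse lambdas, threading the "might be missing" cases through is a one-liner;
    // without them, the same logic becomes a pile of nested null checks, and people skip it.
    val display = findAccount(7).flatMap(_.email).getOrElse("no email on file")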


> "Have you ever had a bug due to something being null/nil when you didn't expect it? How about a string or list being unexpectedly empty?"

I understand that's not exactly the point, but I find that LINQ (a sequence monad), the built-in Nullable<>, and a custom Maybe monad make life easier in C# in that respect.


That's certainly true. Although I'm not an expert on the really advanced type-system features, this is an example of convenience without culture, IMO. C# having painless lambdas allows this particular example to exist, but it's not required (or idiomatic, in some code bases) - one can still use plain ol' null. That said, I'd much rather use C# than Java for exactly this reason.


I don't want to write a big wall of text, but a lot of situations come up where bugs could be prevented (if you so choose) by encoding the relevant invariants in the type system. Here are some examples:

- The Maybe/Option type: you explicitly declare that a value may be missing, so you cannot call methods/functions on it willy-nilly; the compiler will force you to handle both cases. Say bye-bye to "'NoneType' object has no attribute 'foo'" errors.

- Different types for objects that have the same representation: in a language like Python, text, SQL queries, HTTP parameters, etc. are all represented as strings. In a statically-typed language, you can give each of them its own type and prevent them from being mixed with one another (a small sketch of this follows the list). See [1] for a nice description of such a system. See also how different units of measurement can be kept separate instead of using doubles for everything.

- Prevent unexpected data injections. With Python's json module, anything that is a valid JSON representation can be decoded. This is pretty convenient, but it means you must be very careful about what you decode. With Haskell's Aeson, you parse a JSON string into a predefined type, and if there are missing/extra fields, you get an error.

- When I was doing my machine-learning class homework, I very often struggled with matrix multiplication errors. An important part of that was that vectors and Nx1 matrices were treated differently. I feel that if I could have encoded matrix sizes in the types, I'd have had an easier time and fewer errors (a rough sketch of that idea also follows the list).
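A small Scala sketch of the "same representation, different types" idea mentioned above (the names are invented and the escaping is deliberately naive; a real system, like the one described in [1], would do much more):

    // Wrapping plain strings in distinct types means the compiler rejects mix-ups.
    final case class UserInput(value: String)
    final case class SqlFragment(value: String)

    // Only an explicit escaping step can turn user input into SQL.
    def escapeForSql(input: UserInput): SqlFragment =
      SqlFragment(input.value.replace("'", "''"))

    def runQuery(query: SqlFragment): Unit =
      println(s"executing: ${query.value}")

    val name = UserInput("O'Brien")
    runQuery(SqlFragment("SELECT * FROM users WHERE name = '" + escapeForSql(name).value + "'"))
    // runQuery(name)  // does not compile: found UserInput, required SqlFragment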

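And a rough sketch of the dimension idea using Scala's literal types (names invented; this only makes the compiler check that the two operands carry the same length tag, it does not verify that the runtime data actually has N elements):

    // N is a compile-time length tag carried in the type.
    final case class Vec[N <: Int](data: Vector[Double])

    // Both arguments must have the same tag, so mixing a Vec[3] with a Vec[4] won't compile.
    def dot[N <: Int](a: Vec[N], b: Vec[N]): Double =
      a.data.zip(b.data).map { case (x, y) => x * y }.sum

    val a = Vec[3](Vector(1.0, 2.0, 3.0))
    val b = Vec[3](Vector(4.0, 5.0, 6.0))
    println(dot(a, b))  // 32.0
    // dot(a, Vec[4](Vector(1.0, 2.0, 3.0, 4.0)))  // rejected at compile time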
These are simple examples, but whenever I code in Python, I inevitably make mistakes that I know would've been caught by the compiler if I had been coding in OCaml or Haskell.

[1] http://blog.moertel.com/posts/2006-10-18-a-type-based-soluti...


Also, if you use the F# flavour of OCaml, you can use units of measure, which would catch even more errors at compile time (if those matrices had something to do with physics, for example).



