> OCaml's H-M type system doesn't try to guess whether you meant all references to x to be type A or type B, it just tells you that they aren't internally consistent.
No, it doesn't tell you that. It tells you that they are internally inconsistent WITH RESPECT TO OCAML's SEMANTICS. They are not internally inconsistent in any absolute sense, as evidenced by the fact that I can write (+ x y) in Lisp and it will produce a sensible result whether x and y are integers, floats, rationals, complex numbers, or any combination of the above. This is an existence proof that compilers can figure these things out. Hence, any compiler that doesn't figure these things out and places this burden back on the programmer is deficient. OCAML's design requires its compiler to be deficient in this way, which makes OCAML IMHO badly broken. OCAML, in essence, requires the programmer to engage in premature optimization of arithmetic primitives. The result is, unsurprisingly, a lot of unnecessary work when certain kinds of changes need to be made. Maybe you can "get used to" this kind of pain, but to me that seems to be missing the point rather badly.
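For example, a CL listener handles all of these with the same (+), no declarations or conversions required (results are what any conforming implementation, e.g. SBCL, should print):
CL-USER> (+ 1 2.5)
==> 3.5
CL-USER> (+ 1 1/3)
==> 4/3
CL-USER> (+ 1/2 0.25)
==> 0.75
CL-USER> (+ 1.5 #C(0 2))
==> #C(1.5 2.0)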
Yes, "...aren't internally consistent within OCaml's type system.".
What does your Lisp implementation (SBCL?) do with (+ x y) when x is a float and y is a bignum? Is there an implicit conversion at run-time or compile-time? (I don't have SBCL at the moment; it's not available for either of the platforms I develop on. Sigh.) If so, that's a fundamentally different result from OCaml, which deliberately avoids implicit conversions under all circumstances. I'm not saying that's an inherently better choice than Lisp's (it can be annoying), but this thread was initially about making Lisp faster, and that's part of how you do it.
Adding optional type inference to e.g. Scheme, whether by H-M or by something more flexible when mixed with non-statically-typed code (which seems like the real trick, but I'm still learning how type systems are implemented), would be a great compromise. I believe MzScheme has had some research along these lines (MrSpidey: http://download.plt-scheme.org/doc/103p1/html/mrspidey/node2...), but it's not available for R5RS yet.
I like OCaml's strict type system more for how it helps with testing/debugging than for the major optimizations it brings, really, but the speed is nice.
Also, OCaml is not an acronym (but I'm guessing that was from autocomplete in Emacs).
debugger invoked on a SIMPLE-TYPE-ERROR in thread #<THREAD "initial thread" {A7BD411}>:
Too large to be represented as a SINGLE-FLOAT:
129381723987498237492138461293238947239847293848923
This really shouldn't be an error -- the internal representation of the number shouldn't leak out like this.
The good news is that you rarely want to add a double and a bigint -- it usually doesn't make mathematical sense. But the compiler / runtime should try harder to do what you asked.
(In the case of something obviously wrong like (+ 42 "foobar"), I agree that it would be nice for the error to be caught at compile time. But I digress.)
Actually, that works even if you don't change read-default-float-format. I think + should coerce single-floats to double-floats when appropriate, though.
CL-USER> (+ 1203981239.123d0 129381723987498237492138461293238947239847293848923)
==> 1.2938172398749823d50
CL-USER> (+ 1203981239.123f0 129381723987498237492138461293238947239847293848923)
debugger invoked on a SIMPLE-TYPE-ERROR ...
(note ...f0 vs ...d0)
Admittedly, this is not a problem I'd expect to encounter in real life, so I'm not losing much sleep over it. CL is weird and I accept that ;)
Of course. Sooner or later you bump up against the fact that real machines are finite. That's not the issue. The issue is this: you ask two compilers to add 1 and 2.3. Compiler A says the answer is 3.3. Compiler B says it's an error. Which compiler would you rather use?
It's ultimately a matter of taste. You can push compiler cleverness too far. For example, 1+"2.3" probably should be an error (and if you don't think so, what should 1+"two point three" return?) But the rules for mixing integers, floats, rationals and complex numbers are pretty well worked out, and IMHO any language that forces the programmer to reinvent them is b0rken.
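To put numbers on both halves of that, here are the two cases at a CL listener -- (+ 1 2.3) just works, while (+ 1 "2.3") is (rightly) a run-time type error (sketch; the exact error text depends on the implementation):
CL-USER> (+ 1 2.3)
==> 3.3
CL-USER> (+ 1 "2.3")
debugger invoked on a TYPE-ERROR ...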
I agree that it makes mixing floats and doubles a little more work, but I'll trade that for the ability to have it automatically infer and verify whether what I'm doing makes sense when I have e.g. multiple layers of higher-order functions operating on a cyclical directed graph containing vertices indexed by a polymorphic type, and the edges are ...
Interactive testing via bottom-up design helps, but getting automatic feedback from the type system about whether the code works as a whole, and about which parts (if any) have been broken as I design and change it, is tremendously useful. Having to write the code in a way that is decidable for the type system is a tradeoff (and the type system itself could be improved), but it's a tradeoff I'm willing to make on some projects.
> The issue is this: you ask two compilers to add 1 and 2.3. Compiler A says the answer is 3.3. Compiler B says it's an error.
In this case, Compiler B is fine with (float 1) +. 2.3 or 1 + (truncate 2.3); it just won't convert implicitly, because that goes against the core design of the language. This sort of thing is annoying at the very small level, but its utility with complicated data structures more than makes up for it.
> For example, 1+"2.3" probably should be an error (and if you don't think so, what should 1+"two point three" return?) [...]
Good example, by the way. Also, I think we're in agreement most of the way - I find Haskell's typeclasses a better solution to this specific problem (the operations +, -, etc. operate on any type with Number-like properties, and if it's not a combination that makes sense within the type system, ..., it's a compile-time error), though I prefer OCaml to Haskell overall.