This is not even close to reality. In the end they are integers. You will have to add/combine values with different qualities, which means at some point you need to unwrap all these layers of abstraction. The only difference it makes is that you can no longer see what happens in this sea of useless wrapping and unwrapping. You constantly juggle more invariants than you can make newtypes for. Instead of simply writing "x & FOO_MASK" I will not follow some vague idea that if you wrap it in another layer of ad-hoc concepts, the complexity will just magically go away and it will all just magically be "correct".
In one sentence: you can move everything to the type level if you insist, but at some point you have to DO it. The complexity does not go away; all you can hope for is to implement a given artifact only a single time. (And plain procedures are pretty good at that for >90% of what I do.) Most things are done exactly once in a code base, so moving them to the type level is just a crazy bad idea.
>> a language that is somehow "lesser" than Rust
> So you're mad that I think Go is inferior to Rust? Would you like me to expand on why I think that?
I'm not mad at all. In fact I've never done Go either; I do most of my work in C and Python and sh (and some coding-competition-style things in C++11 if I need a quick container). I'm perfectly happy with the expressiveness they provide.
Actually I did read your blog post, and regarding the missing generics I even agree somewhat. But what I think you're missing is that it's not the biggest deal under the sun. I actually like to throw a few void pointers around in C, and it has considerable advantages, for example being highly modular (it does not enforce any dependency-hell crap, it compiles quickly...).
I don't care about NULL-pointer safety. It's one of the least frequent problems I encounter.
I don't care about formalized error handling. In many simple applications you simply die right away, and for many more complex and long-lived ones, explicit error handling is the right choice: it leads to a consistent code base and does not force me to second-guess which conditions are "errors" and which are not. Frankly, error handling in Haskell is a huge mess.
I don't care much about operator overloading either. It wouldn't have been helpful in more than a handful of situations for me so far. Instead of implementing extra syntactic sugar, it's no big deal to simply call the relevant function three or five times. Again, that approach has advantages as well: for example, I can easily search for the locations where these functions are called. Also I can be sure that '+' really means an add instruction on the CPU.
And as I said, I sometimes use std::map<K, V> et al. as well, but it's often only an interim solution: until I realize I don't need it at all because a V* would do; or I need an intrinsically linked map instead (which can't really be made as a template class, the only alternative being allocating the V's separately and making a std::map<K, V*>); or I absolutely die suffering the compile times due to the added dependencies; or I can't bear the allocation overhead.
> Unless you have good reasons for disagreeing with them, which I do. You're not obligated to agree with everyone on everything. I think you're wrong about languages with poor formal systems being acceptable for correctness-oriented programming, so I'm not going to "admit your viewpoint" just because it's polite or something.
That's exactly the point: nobody pulled the correctness-first card. No, you can't disagree that people are productive, that stuff EXISTS and works to a large extent, and that advanced type systems are niche. I used to play CS:S for example and can't recall having any problems. I'm a heavy Linux user and the kernel is seriously rock solid, with the exception of a few reverse-engineered device drivers.
Not saying that there are not many buggy projects as well. Most of those are financially terribly undersupported.
Also not saying people are not looking for improvements to their methodologies, but you need to acknowledge the methods with which people actually get their work done, and that correctness-first approaches have largely not worked out so far. I'm not aware of any serious kernels or performant 3D games written in Haskell. In fact, one of the highest-profile Haskell applications, GHC, does have quite a few bugs, and that's because the type system simply can't encode all THAT many invariants, especially if you need to get something done in the end. (Besides that, it's god-awfully slow sometimes, although I'm in no position to judge whether the implementation language is the reason, or whether the type-system extensions are just not practical from a performance standpoint.)