Fascination with type systems does not seem to be all that useful in practice. Go has a minimal type system, and is able to do much of Google's internal server side work.

Most of the problems that cause non-trivial bugs come from invariant violations. At point A, there's some assumption, and way over there at point B, that assumption is violated. That's an invariant violation.

Type systems prevent some invariant violations. Because that works, there are ongoing attempts to extend type systems to prevent still more invariant violations. That creates another layer of confusing abstraction. Some invariants are not well represented as types, and trying makes for a bad fit. What you're really trying to do is to substitute manual specification of attributes for global analysis.

The Rust borrow checker is an invariant enforcer. It explicitly does automatic global analysis, and reports explicitly that what's going on at point B is inconsistent with what point A needs. This is real progress in programming language design, and is Rust's main contribution.

That's the direction to go. Other things might be dealt with by global analysis; deadlock detection is a good example. If P is locked before Q on one path, P must be locked before Q on all paths. There must be no path that leads to P being locked twice. That sort of thing. Rust has a related problem with borrows of reference-counted items, which are checked at run time and work a lot like locks. Those potentially have a double-borrow problem related to program flow. I've heard that someone is working on that for Rust.
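
A minimal Rust sketch of that double-borrow problem (my own example, not anyone's production code): RefCell moves the borrow check to run time, and a second overlapping mutable borrow fails in the same way that taking the same lock twice would.

    use std::cell::RefCell;

    fn main() {
        let counter = RefCell::new(0);

        // First mutable borrow: fine.
        let mut a = counter.borrow_mut();
        *a += 1;

        // Second mutable borrow while `a` is still alive: panics at
        // run time with a BorrowMutError, the double-lock analogue.
        let _b = counter.borrow_mut();
    }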




> Fascination with type systems does not seem to be all that useful in practice.

> ...

> The Rust borrow checker is an invariant enforcer. [...] This is real progress in programming language design, and is Rust's main contribution.

I'm so confused by your stance here. You essentially say "type systems are not useful" and then "oh but this most recent advance in type systems — that one is useful." Do you find type systems useful or not?

There are a lot of properties we can analyze statically, and practically all of them essentially amount to extensions of type systems. Any of them increases our ability to rule out undesirable programs from ever beginning execution. Some of them have unintuitive syntax, but many of them are no more syntactically burdensome than most other type systems. This is especially true if you consider how far we've come with type inference, so we no longer have to write code with the verbosity of Java just to get some meager guarantees. It's still a very active area of research, but we're clearly making progress in useful ways (which you even highlight), so I don't really know what point it is you've set out to make.


I think the key is the following:

> It explicitly does automatic global analysis

They appear to think that the borrow checker isn't achieved with type theory, but with some other technique ("global analysis").

Although, to be fair, my understanding of making a practical affine type checker is that things get kind of wonky if you do it purely logically. So in practice you do some data flow analysis, which is, I believe, what Rust is doing. This also explains why MIR was such a big deal for certain issues with borrow checking. They ended up with a format that was easier to run a data flow analysis on, and that allowed the borrow checker to handle things like non-lexical lifetimes, etc.

[I've only read about such things. So I might have mis-remembered some of the details. However, this is my take on why someone might not call rust's advances purely type theoretic (even if they can be handwaved as type theory at a high level).]


Also, the borrow checker doesn't do global analysis. Not doing so is an important design decision in Rust, and is part of the reason why we don't infer type signatures, including lifetimes generally. You want to keep things easy to compute.

That said, you're right that the current borrow checker ("non-lexical lifetimes") is built on a control-flow analysis, absolutely. But it still operates only within bodies, not globally.
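
A tiny sketch of what that locality buys (my own example): the signature below already promises that the result borrows only from `x`, so callers can be checked against signatures alone, with no cross-function analysis.

    // The signature alone says the result borrows from `x`, not from `_y`.
    fn first<'a>(x: &'a str, _y: &str) -> &'a str {
        x
    }

    fn main() {
        let s = String::from("hello");
        let r;
        {
            let t = String::from("world");
            r = first(&s, &t); // fine: `r` borrows only from `s`
        }
        println!("{r}"); // `t` is gone, but the signature proves we never kept it
    }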


>They appear to think that the borrow checker isn't achieved with type theory, but with some other technique ("global analysis").

No, I think they mean that the borrow checker is not about the programmer stuffing the program with type-theory annotations for this purpose; rather, the borrow checker handles the analysis automatically across the program.

Of course it doesn't handle it perfectly, so there's some "fighting the borrow checker", and lifetime annotations are sometimes needed - but it's not like designing the whole of your program around type theory, in the same way that a GC language mostly frees you from thinking about freeing memory.


> I'm so confused by your stance here. You essentially say "type systems are not useful" and then "oh but this most recent advance in type systems — that one is useful." Do you find type systems useful or not?

Not the original author, but it seems like they're saying that type systems are non-specific invariant enforcers, and so have costs without necessarily having benefits, whereas a user-specifiable invariant enforcer is more likely to deliver the benefits.


More like: type systems are a special kind of invariant enforcer, and so they are less useful than generalized invariant enforcers.


It is probably pretty presumptuous to assume, but I think that a lot of programmers who have only ever been exposed to C/C++/C#, Java and Python have basically no concept of what a good type system can do for them.

Two examples from the top of my head:

1. Encoding matrix sizes into the data and function types, so that you can safely have a function `mat[a,c] mat_mult(mat[a,b] x, mat[b,c] y)` or even `mat[w-2,h-2] convolve(mat[w,h] input, mat[3,3] kernel)` and have the compiler check that you never use a matrix of the wrong size (see the sketch at the end of this comment).

2. Actually checking the correctness of your implementation.

There is a very nice online demo of Liquid Haskell [1], where they define the properties of ordered lists (each element has to be greater than or equal to the one before it, line 119). Then they define a function that takes an (unordered) list and spits out an ordered one by applying a simple quicksort.

Now, if you break the algorithm (e.g. flip the < in line 193) and run Check, the compiler will tell you that you messed up your sorting algorithm.

Pretty neat.

edit: I just realized that LiquidHaskell is almost 10 years old. Sad to see that basically nothing made it into "production".

[1] http://goto.ucsd.edu:8090/index.html#?demo=Order.hs
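
Here is the sketch mentioned under example 1: a rough rendering in stable Rust using const generics (identifiers are mine; dimension arithmetic like the `w-2` in `convolve` would additionally need the unstable `generic_const_exprs` feature).

    // Multiplying an a-by-b matrix by a b-by-c matrix yields an a-by-c matrix.
    fn mat_mult<const A: usize, const B: usize, const C: usize>(
        x: &[[f64; B]; A],
        y: &[[f64; C]; B],
    ) -> [[f64; C]; A] {
        let mut out = [[0.0; C]; A];
        for i in 0..A {
            for j in 0..C {
                for k in 0..B {
                    out[i][j] += x[i][k] * y[k][j];
                }
            }
        }
        out
    }

    fn main() {
        let m = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]; // 2x3
        let n = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]; // 3x2
        let _p: [[f64; 2]; 2] = mat_mult(&m, &n); // dimensions line up
        // mat_mult(&n, &n) would be rejected at compile time: 2 != 3.
    }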


Regarding example one: it works for signal processing or graphics, but compile-time dimensions are unusable in machine learning or numerical computing, because they add too much friction to serialization/deserialization, and some operations that reduce dimension or rank depend on runtime data (for example, when some dimensions are 1).


Sure, but this approach generalizes quite well. Especially in ML you have a lot of matrices, many of them of known size (e.g. convolution kernels).

Also, while it looks like the matrix sizes have to be known at compile time, this is not the case. You are still free to use the same matrix types with dynamic sizes (or even mix them, useful for said convolutions).

In Haskell that is achieved by using a "KnownNat": basically, you elevate an integer from the value level into the type level at run time.


Type systems are useful, but not nearly as useful as many people believe they are.


Try working in a big system without them =P I think they are invaluable


I can see this kinda. It would be interesting to experiment with how black and white this is.

Historically, the choice has been between compile-time (static) and run-time (dynamic) type checking. And left to pick between one or the other, informed by experiences like the above, people make their binary choice.

More and more in my Python code, I add what type annotations I can. My feeling is that the ROI on annotation coverage is nonlinear. I get a lot of mileage out of the types I can add easily. When it gets complicated (interesting unions or deep containment) it gets harder and harder. Enough so that sometimes it influences my design decisions just to make the typing job easier.

I’m left to wonder how this scales as the code base/team size scales. If the pain of 0 types is 100 in a large project, and 100% types cuts it to 10, what happens if we all do the easier 80% of annotations? Is the pain 80% less too? Or is my personal experience mirrored, and it actually is quite a bit better than just an 80% reduction?


> either compile time (static) or run time (dynamic) type checking

But it is not that black and white, is it? Python is actually fairly strict in that it checks (some) types at runtime. Other dynamically typed languages live completely by the "if it quacks like a duck" playbook.

On the other hand, Haskell is completely statically typed. Still you can write many programs without annotating any types at all, as the compiler is pretty good at inferring types from their context.


> Enough so that sometimes it influences my design decisions just to make the typing job easier.

This is a very good thing. You definitely want the type system (and tests!) to guide your system design.


They are especially useful when refactoring, or as a documentation tool.

However, without a type checker, choosing your names wisely will get you a long way.

I think anyone advocating type systems should spend a year working in a dynamic language, to get out of their echo chamber and form a more objective opinion.


I was a mild advocate of static type systems. Then I spent about two years working on dynamic languages, mostly Python, on two jobs with otherwise very different code bases. I am now fully converted: after that experience, I strongly, strongly, strongly advocate static typing, so much that I will leave any job that requires me to use languages that lack it. I don't know if I was just unlucky, but I found that experience to be dreadful. Aside from the million small friction points, I remember things like needing to allocate a full week for a refactor that would have taken less than one hour (and that's a pessimistic estimate) with the standard tools of a static language's IDE.

I really don't see the advantages of dynamic languages. Supposedly you develop faster, but in my experience that's absolutely not true.


For me it all depends on the size of the project. For personal projects below a certain size dynamic typing is 50x faster than static. But that benefit drops off sharply once the project gets over a certain size or once there are more than a couple of people involved.

That's why I liked JavaScript -> TypeScript. I can bang out a prototype of some small piece in JavaScript and then add the types once I'm sure it worked. I save tons of time getting to MVP / proof of concept first and then filling in the constraints later.


I find that usually happens after a couple thousand lines, so beyond one-off scripts the advantage isn't there much.

And some of it is just bad language syntax or culture making it obtuse, not the staticness of the language. It also has an increased learning curve.


Yeah, you have to work so hard to write maintainable code in "dynamic" languages. It's also super hard to debug them. It's easier to write the first thing that runs, but that's not enough.


I'm working in Python. I agree with you. Refactoring is much harder, especially with large code bases.


What you appear to be saying is that people who like type systems must be ignorant because with experience you suspect they would think differently.

This seems to me to be an extremely uncharitable point of view.

But let's roll with it.

I advocate type systems.

I've also worked in several non-trivial projects in Lua. Several non-trivial projects in Python. Several trivial projects in Common Lisp. Several trivial projects in Erlang.

Additionally, I've worked on a non-trivial project in Ruby for several months, and one non-trivial project in Node for a year - both of these in a professional, eight-hours-a-day capacity.

I still advocate type systems. More so after working in dynamic languages.


Common Lisp has a type system, if I understood your meaning right.


"Type system" is being used here to mean "static type checking" while CLisp has some facilities for this they are implementation-specified and not commonly used. (Even its dynamic typing checking is not commonly used.)


Common lisp is dynamically typed.

There are type hints that you can give the compiler, but I believe those are only useful for generating faster assembly. Also you can get compile-time warnings from macros, but that's much closer in nature to getting a parse error (something pretty much every language I'm aware of does, static or dynamically typed).

I wouldn't be surprised to learn that there exists a Common Lisp typing extension that someone made with macros (after all, Racket has something like that, IIRC); however, if it exists, I didn't use it.



Common Lisp also has a system for OO coding, but Common Lisp is not an OO language.


I’m not sure this is true. John Brant and Don Roberts did some of the original refactoring work in Smalltalk under Ralph Johnson at UIUC.

I remember sitting with them at OOPSLA and discussing just this at length. Java was in full momentum ascendancy. Eclipse was the new will-solve-everyones-problems IDE. There was a LOT of interest in doing for typed Java in Eclipse what John and Don had both done in Smalltalk during their advanced degrees. They too thought typing would make it easier.

And what they shared with me is that it actually made it harder. Type systems are complicated to model. It gets combinatorially complex to model transformations across all the edge cases.

It may be argued that this was/is more a statement about the Java type system in particular, which is not generally extolled anyway.

But the basic take away was that refactoring becomes easier and easier as the language syntax and model gets simpler. A more complicated language like Ruby, without types, was also harder.

Or put another way… refactoring engines work with AST models. The more complicated your AST, the harder refactoring gets. I think type systems generally add, rather than remove, complexity from language models.

But perhaps there is a way to make it so. Just sharing some thoughts of yore from some guys that spent a lot of time studying refactoring.


That's exactly what I did, because I felt that I was just too sure that typed languages were superior to untyped languages without actually having proper experience.

I learnt Clojure, to the point that it's now my favourite language and the one I use for many things. Nevertheless, I still think that typed languages have two advantages that make them a more practical option for most situations:

- It's easier to evolve your code. This includes refactoring but also normal program evolution through aggregation.

- It allows you to express abstractions in pure code, rather than in documents or comments.

The first advantage requires little explanation. The second, I think, is more difficult to appreciate without enough experience with type systems and an interface-oriented programming style.

How important these two things are for a given person is difficult to say. I have come to the conclusion that it relates a lot to how your brain works. Unfortunately, mine can only track 3 or 4 things at the same time, so I need to abstract away everything other than the little piece of code I'm working on and get the compiler to worry about how things glue together.

I need the comfort of getting the compiler to help me change my hats and be able not to think about anything other than the concrete piece of code I'm working on and the interfaces of the other parts it uses. When I don't have this possibility, I miss it even more than the comfort of programming using immutable data structures that Clojure spoilt me with. I think I need to seriously try Scala 3 at some point, since it seems to combine immutable data structures by default with an advanced type system (although the lenses libraries I've seen look like an abomination in comparison with Clojure's update-in and company, not to mention the absolute elegance and triviality of the implementation of the latter: https://github.com/clojure/clojure/blob/clojure-1.10.1/src/c...).

So I would second your recommendation and encourage people trapped in dynamic language echo chambers to try for a year something like TypeScript or Kotlin, to appreciate other ways of doing things in languages with practical type systems. Perhaps some of them will discover that it suits them better, or it will help them better understand why they prefer the dynamic style.


The average user of Haskell, Rust, or ML today has certainly spent sufficient time in the Python/JS/Tcl/what have you mines.

The opposite is much less likely to be true; at best they may have done a project or two in Java or started migrating to TypeScript.

(Please also keep in mind: The average ML user probably reads, or intentionally avoids, HN. The average JS user doesn’t know it exists.)


I advocate for type systems specifically because of how much I've worked with dynamic languages in big corps.


I spent about a decade pretty much solely in dynamic languages (ruby and javascript mostly) before switching to pretty much entirely working in a fairly lame statically typed language (java). Languages with static type systems are enormously, glaringly, better. Really where the benefit comes from is static analysis; it's just that languages with static type systems have better static analysis capabilities. This may not strictly be the case in theory but it definitely is in practice.


Oh, I'll never program in a dynamically typed language again. I'm sold on that. What I'm speaking to is the notion that types are the best model for solving most/all problems in software engineering.


Do people commonly think types are a solution to most or all problems? Other than correctness I am not sure what software engineering problems a type system actually solves, and the rest of the debate is about the expressiveness of the type system (or lack thereof, which forces suboptimal engineering practices in some languages).


Static typing is just another form of static analysis. However, it's static analysis enforced by the language rather than a third party tool. That allows me to be confident in my dependencies too if I see them putting the type system to work.

Protobuf is moving us that way with microservices too. Since it's a strongly typed message format, it's harder to make mistakes in the interface between two services.

I also like that languages can have complete local static analysis. Sure, the business requirements might be large and spread across many areas, but I will break them down into smaller chunks and encode invariants into the type system, so that if the small chunk compiles, I am confident it does exactly what I expect, and I don't need to remember exactly where it fits in the larger picture.
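
A tiny Rust sketch of that style (names made up): the invariant is checked once at the boundary, and every function that receives the value gets to rely on it without re-checking.

    // A vector that is guaranteed non-empty, by construction.
    struct NonEmpty(Vec<i32>);

    impl NonEmpty {
        // The only way to obtain a NonEmpty is through this check.
        fn new(v: Vec<i32>) -> Option<NonEmpty> {
            if v.is_empty() { None } else { Some(NonEmpty(v)) }
        }

        // Safe by construction: the invariant was established in `new`.
        fn first(&self) -> i32 {
            self.0[0]
        }
    }

    fn main() {
        if let Some(xs) = NonEmpty::new(vec![3, 1, 2]) {
            println!("{}", xs.first()); // no emptiness check needed here
        }
    }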


> Do people commonly think types are a solution to most or all problems?

There's certainly a subclass of programmers who believe this, yes.


Rust's borrow checker isn't a type system. It's a static analyzer that tries to figure out when it is safe to free your allocation.


Why do you think that’s not a type system?

Literally all type systems could be described as “a static analyzer” that tries to assign and validate properties over the code it’s analyzing.

All compilers also rely on the results of that static analysis to direct codegen.

Rust’s type system implements substructural typing, and the borrow checker is an integral element of that type system.
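
The affine ("use at most once") part is visible in ordinary move semantics; a minimal sketch:

    fn consume(s: String) {
        // Takes ownership; `s` is dropped when this function returns.
        println!("{s}");
    }

    fn main() {
        let s = String::from("hi");
        consume(s);    // the value is used once...
        // consume(s); // ...and cannot be used again: "value moved" error,
                       // which is exactly the affine typing rule.
    }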


Based on your definition, any static analyzer is a type system, because type information and the usage of those types is basically all that's available for static analysis.

Types define what operations can be performed. The borrow checker looks at allocations and determines when they can be freed. It really has nothing to do with types. An ideal implementation of the borrow checker could be totally type unaware and work with dynamically typed languages.


> It really has nothing to do with types.

I was under the impression that it's substantially an implementation of an affine type system (https://en.m.wikipedia.org/wiki/Substructural_type_system).

Probably you could build such a thing without thinking about types per se, but I don't think the designers of Rust did that, and I am not sure that makes it "not a type system" anyway.

> any static analyzer is a type system

"Any static analysis checking correctness is either a linter or a type checker" isn't a claim I would make, but I am not sure offhand how I would argue against it.


I'm not deeply familiar with Rust's borrow checker, but my understanding is that Swift will in the near future support move-only types. Are those not somewhat similar to what Rust has?

See here for more re Swift: https://github.com/apple/swift/blob/main/docs/OwnershipManif...


> Rust's borrow checker isn't a type system

No, it's the thing that enforces a type system.

The borrow/move semantics it enforces are a (part of a) type system.


> Fascination with type systems does not seem to be all that useful in practice.

And yet type theory is an excellent way to express all kinds of invariants. The richer the type system, the more you can express. If you get to dependent types you essentially have all of mathematics at your disposal. This is the basis of some of the most advanced proof automation available.

What is super cool is that proofs are programs. You can write your programs and use the same language to prove theorems about them.

This is still fairly advanced stuff for most programming activities but the languages have been steadily getting better and the automation faster and I think the worlds will eventually collide in some area of industrial programming. We're already seeing it happen in security, privacy, and networking.

I don't think type systems suffer from complexity. They merely reveal the inherent complexity. You can use languages that hide it from you but you pay a price: errors only become obvious at run time when the program fails. For small programs that's easy enough to tolerate but for large ones? Ones where certain properties cannot fail? Not so much in my experience.

update: clarified wording of "proofs are programs"
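
A minimal sketch of that idea in Lean 4 (assuming the built-in `omega` tactic is available): the program and a theorem about it live in the same language.

    -- The program:
    def double (n : Nat) : Nat := n + n

    -- A proof about the program, written right next to it:
    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      unfold double
      omega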


> The richer the type system, the more you can express

Why is this interesting? You pay an extremely heavy price in terms of language complexity. In practise, you almost never have the invariants at all, or correct, when you begin programming, and your programs evolve very rapidly. Since with dependent types you lose type inference, you now have to evolve two programs rather than one. Moreover proofs are non-compositional: you make a tiny change somewhere and you might have to change all proofs. In addition we don't have full dependent types for any advanced programming language features; we have them only for pure functions that terminate.

> the same language to prove theorems about them

That sounds like a disadvantage. Ultimately verification is about comparing two 'implementations' against each other (in a very general sense of implementations where logical specs and tests also count as implementations). And the more similar the two implementations, the more likely you are to make the same mistake in both. After all, your specification is just as likely to be buggy as your implementation code.

> type systems suffer from complexity.

This is clearly false for just about any reasonable notion of complexity. For a start, pretty much as soon as you go beyond let-polymorphism in terms of typing system expressivity, you lose type inference. Even type-checking can easily become undecidable when the ambient typing system is too expressive.

There is no free lunch.


>your programs evolve very rapidly. Since with dependent types you lose type inference, you now have to evolve two programs rather than one.

Yes, just like you have to evolve your specification/documentation. Similarly, in the exploratory phase you'll stick to very 'rough' typing and next to no proofs and as the program gets clearer and solidifies, you can continuously refine your types (with the amount of refinement depending on how much certainty one wants). A mature program with a rigid spec is going to be harder to change, but that's just how it is anyway.

>Moreover proofs are non-compositional: you make a tiny change somewhere and you might have to change all proofs.

People on HN keep repeating this, but it's trivial because it's actually just a statement about the nature of properties. Proven invariants provide an interface to be used by the rest of the program. If the changes you make don't break your invariants but only the proofs, then you just have local adjustments to make. If your change cascades through the entire program, then it's because all the invariants get invalidated and your 'tiny' change actually changes a good deal about the semantics (and thus the spec) of the program.

The exact same thing applies to unit tests, but you don't see people constantly bemoan the non-compositional nature of unit tests even though a 'tiny' change could break all your unit tests.

>After all, your specification is just as likely to be buggy as your implementation code.

Not only are specifications declarative, they're generally magnitudes smaller. If you're as confident in your spec as in your code, something is very, very wrong with your spec. Well, or your code is your spec, but in that case you get what you pay for.


> just like you have to evolve your specification/documentation.

That is correct, and also one of the core reasons why in the vast majority of cases either no specification/documentation exists, or will only cover a small part of the actual specification. For example I would bet money that not a single function in the C, C++, Java and Python standard libraries is fully specified, in the sense of nailing down the program up to observational equivalence. (Aside: I can still count the programmers who would be able to sketch the notion of observational equivalence used in e.g. C++.)

> If your change cascades through the entire program, then it's because all the invariants get invalidated and your 'tiny' change actually changes a good deal about the semantics (and thus the spec) of the program.

This is not borne out in practise.

A lot of code refactoring I've done was trivial (e.g. changing the order of arguments), but ripples through the program and proof structure. HoTT was invented in order to automate some of those trivialities. Java exception specifications are an example: you call a different library function somewhere and need to change all exception specs up to the main function, rippling through millions of LoCs. That's why exception specs were abandoned. Another example is termination proofs (note that a full specification must involve termination proofs, which is why the existing expressive type theories don't give you unrestricted recursion, and also the reason why program logics are typically only for partial correctness). In my experience, termination bugs are rare, and it would be insanely counterproductive if I had to refactor all proofs globally just because I've made a slight change to some printf somewhere in a large code base.

> unit tests, but you don't see people constantly bemoan

The reason is that no programming language forces you to do unit tests. In contrast, expressive type-theories constantly force you to prove a lot of trivialities.

> declarative, they're generally magnitudes smaller.

I don't know what you mean by declarative (other than: leaving out some detail). But they cannot be smaller in general: if every program P had a full specification S that was shorter (here a full specification means specifying P up to a chosen notion of observational equivalence) then you've an impossibly strong compressor, which you can use to prove that every string can be compressed even more. Contradiction.

What you see in practise is that you only specify some properties of the program you are working on. For example sorting routines in C, C++, Java etc. I have never seen a specification that says what happens when the sorting predicate is not nicely behaved (e.g. returns a < b on the first call but a > b on the second). It's fine to omit details, but that limits the correctness you get from your spec (and also limits program extraction). Moreover, if you only work with a partial specification, you can ask the question: what level of partiality in my specification gives me the best software engineering results. My personal and anecdotal experience (which includes dependently typed languages) has consistently been that the full automation given by let-polymorphism is the sweet spot for non-trivial programs (let's say > 10k LoC).
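
For instance (my own illustration in Rust), nothing in the usual signature stops a caller from passing an inconsistent comparator, and the spec is silent about what you get back:

    fn main() {
        let mut v = vec![3, 1, 2];
        let mut calls = 0;
        // An ill-behaved predicate: its answer depends on how often it
        // has been called, not just on its arguments.
        v.sort_by(|a, b| {
            calls += 1;
            if calls % 2 == 0 { a.cmp(b) } else { b.cmp(a) }
        });
        // The result is unspecified; recent Rust versions may even panic here.
        println!("{v:?}");
    }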


>the core reasons why in the vast majority of cases either no specification/documentation exists

I feel that is much too pessimistic.

>will only cover a small part of the actual specification.

If the same applies to proofs: so be it. Don't let perfect be the enemy of good!

>For example I would bet money that not a single function in the C, C++, Java and Python standard libraries is fully specified, in the sense of nailing down the program up to observational equivalence.

I'd imagine so as well, but I think that's more indicative of how even a (superficially) simple language like C is not all that amenable to specification.

> A lot of code refactoring I've done was trivial (e.g. changing the order of arguments), but ripples through the program and proof structure.

This is not my experience. If you use sufficient proof automation, something like this should have next to no impact on proof structure. Univalence is useful, but a lot of refactoring is not just switching to isomorphic representations, so I'm convinced that larger scale proof automation is way more essential than HoTT.

> Java exception specifications

I'm not convinced that this is fundamental to specifying exceptions rather than just Java having particularly poor ergonomics for it. I've never met a person that actually liked implicit exceptions and if you ask people who dislike exceptions, that's often one key part of it.

> In contrast, expressive type-theories constantly force you to prove a lot of trivialities.

For all large dependently typed languages that actually bill themselves as programming languages (Agda, Idris, Lean) there is some sort of 'unsafe' feature that allows you to turn the termination checker off - at your own peril of course. But you only pay for what you use, if you don't need the additional correctness, you don't need to put in any more work, just like with unit tests.

(There are also ways to safely defer termination proofs, but to be fair the experience isn't the best currently.)

>I don't know what you mean by declarative (other than: leaving out some detail).

Specs tell you the what but not (necessarily) the how. Take the spec for all sorting algorithms (giving observational equivalence):

1. The output list is a permutation of the input

2. The output is sorted in regards to some predicate.

That's a couple lines at most (or a one-liner if you don't count the building blocks like the definition of a permutation), which is a good amount shorter and easier to verify than e.g. a full heapsort implementation.
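
To make that concrete, the whole spec fits in a small oracle (a rough Rust rendering, names mine; the permutation check is done the lazy way, by comparing canonical forms):

    // Does `output` satisfy the two-line sorting spec for `input`?
    fn satisfies_sort_spec(input: &[i32], output: &[i32]) -> bool {
        // 1. The output is a permutation of the input.
        let mut a = input.to_vec();
        let mut b = output.to_vec();
        a.sort();
        b.sort();
        let is_permutation = a == b;

        // 2. The output is sorted with respect to <=.
        let is_sorted = output.windows(2).all(|w| w[0] <= w[1]);

        is_permutation && is_sorted
    }

Compare that to the size of a real heapsort.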

>But they cannot be smaller in general: if every program P had a full specification S that was shorter [...] then you've an impossibly strong compressor

The compression is stripping away the 'how', that's why you can't 'write a spec for a spec' and compress further.

>What you see in practise is that you only specify some properties of the program you are working on.

Sure, writing a fully specified program of non-trivial size is currently really only possible if you're willing to ~waste~ invest a decade or more.

>Moreover, if you only work with a partial specification, you can ask the question: what level of partiality in my specification gives me the best software engineering results.

Why would you assume that there is a single level of partiality that gives the best results? I agree that HM style types are a great fit for 'general' programming because it has such low impedance, however I also believe that most programs have some areas where they would benefit from more specification. (I think people have a clear desire for this and that it's at least partially responsible for some of the crazy type hackery as seen in Haskell or Scala, which could often be greatly simplified with a richer type system.)

Having a richer type system doesn't mean that you always have to fully use it. It's perfectly possible to just use a HM style/System F fragment. Using dependent types just for refinement is already one of the dominant styles for verification. If dependent types ever become mainstream in any way, I imagine it will also be largely in that style.


> Take the spec for all sorting algorithms (giving observational equivalence):

It's not that simple. You also have to specify the effect set of the algorithm, meaning, assuming we do an in-place sort: every memory cell other than the input array is unchanged after termination. (If you return a fresh array, you have to specify something related.) You also have to specify what happens for general sorting predicates; for example, if the sorting predicate prints out the elements it compares, then this reveals (much of) the algorithm.

> The compression is stripping away the 'how'

The sorting example shows that this largely doesn't work in practise. In my experience for non-trivial programs you tend to have to carry around invariants whose complexity matches the size of the program you are verifying.

> I'm convinced that larger scale proof automation is way more essential than HoTT.

I strongly agree, but currently this proof automation is largely a hope, rather than a reality.

> crazy type hackery as seen in Haskell or Scala

Haskell and Scala have (local) type-inference. That makes those complex types (somewhat) digestible.

> dependent types just for refinement

If / when this really works well, so that you don't pay a price when you are not using complex types, then this would be very nice. I don't think PL technology is there yet. (I'm currently using a refinement typing approach in production code.)


>It's not that simple. You also have to specify the effect set of the algorithm, meaning, assuming we do an in-place sort

I implicitly assumed that state would be encapsulated, since after all dependent types are fundamentally incompatible with observable state, but you're right that this is part of the spec. However I have severe doubts about the usability of any specification language that takes more than a one-liner to specify this.

>You also have to specify what happens for general sorting predicates

If you want to admit more general sorting predicates, sure, but normally you'd just allow pure functions.

>The sorting example shows that this largely doesn't work in practise. In my experience for non-trivial programs you tend to have to carry around invariants whose complexity matches the size of the program you are verifying.

If you include lemmas derived from the spec then I'd agree, but then the spec you have to check would still be a lot smaller. If not, then the only specs where I've experienced anything close to this are ones that are just the operational semantics of existing programs i.e. cases where you want to analyze a specific program. Otherwise I'm frankly uncertain as to what even the point of giving a spec would be.

>Haskell and Scala have (local) type-inference. That makes those complex types (somewhat) digestible.

Might make it more tractable to use, but I find my issue is that it's less direct and often obfuscates the meaning.


> doubts about the usability of any specification language that ...

You can extrude hidden state in ML-like languages, which gives rise to all manner of subtle behaviour in combination with higher-order functions. As Pitts, Odersky, Stark and others have shown in the early 1990s, even just the ability to generate fresh names (in ML-like languages, this corresponds to the type Ref(Unit)), in conjunction with higher-order functions gives you essentially all the specification and reasoning problems of state.

> normally you'd just allow pure functions.

This is way too restrictive for non-trivial programs. The restriction works for sorting. And in reality even there you'd run into problems, for example if you take into account that a predicate might throw an exception (which, in practise, you cannot avoid - think overflow or taking the head of an empty list).

> the only specs where I've experienced anything close to ...

I think that's because you have not attempted to prove substantial theorems about substantial code. The seL4 verification is interesting in this context: its specification (a purely functional specification of the OS behaviour) was, IIRC, about 1/3 of the size of the OS's C code. Which is not much of a compression.


> Why is this interesting? You pay an extremely heavy price in terms of language complexity.

As you say, there is no free lunch. Not having a useful type system introduces its own complexity. It depends on what abstractions you find most useful.

The present limitations of dependently typed languages will not be limitations tomorrow. Evolution in the field of proof engineering is providing new frameworks for making proofs more compositional and being able to extract programs from the proofs. It's not amazing and super useful today but it's a lot better than it was even five years ago and I suspect will continue to improve.

> After all, your specification is just as likely to be buggy as your implementation code

If you can't think of the right theorems or specifications I doubt you will write a correct program.

One is a lot easier to reason about than the other.

> Even type-checking can easily become undecidable when the ambient typing system is too expressive.

I'm not sure I follow. I understand how type inference can become undecidable but how does a sound type theory end up this way? The judgement and inference rules for CoC are rather small [0].

> There is no free lunch.

I don't disagree. I still enjoy programming in C. And I might even choose it for a project. And if it so happened that I had certain requirements like the system had to be real-time and could not deadlock then... I might be making a trade off to write my proofs and specifications in another language than my implementation.

We're not at a place yet where we can extract a full program from a specification and not in a place where we can write dependently-typed programs with deterministic run times either.

I would like to have my cake and eat it too but that's where we are.

[0] https://en.wikipedia.org/wiki/Calculus_of_constructions


> Not having a useful type system introduces its own complexity.

I agree, I am not promoting dynamically typed languages. My intuition about this is more that there is a sweet spot between automation and expressivity that gives you the best software engineering experience in most practical programming tasks. Milner's let-polymorphism is closer to that sweet spot than full-on dependent types a la the Calculus of Constructions.

> extract programs from the proofs.

In practise you don't have specifications in > 95% of programming tasks. What, for example, is the full specification of a climate simulation, or of TikTok? One could argue that the shipped product is the (first and only) specification. Every program I've ever been part of constructing started from an informal, intuitive, vague idea what the software should do. This includes safety-critical software. To quote from a famous paper [1]: "We have observed that the errors we find are divided roughly evenly between errors in the test data generators, errors in the specification, and errors in the program."

> If you can't think of the right theorems or specifications I doubt you will write a correct program.

I strongly disagree. I have written many programs that are, as far as I can see, correct, but I am not sure I fully know why. Here is an example: the Euclidean algorithm for computing the GCD. Why does it terminate? I worked out my informal understanding of why at some point, but that was a lot later than my implementation.

More importantly: in practise, you do not have a specification to start off with. The specification emerges during programming!

> how does a sound type theory end up this

I'm not sure what you are referring to. Type inference and type checking can be undecidable. For example System F, a la Curry.

> extract a full program from a specification

Let me quip: proofs are programs! In other words: if you want to be able to extract a full program from proofs, the specification / proof must already contain all the information the eventual program should have. So any bug that you can make in programming, you must also be able to make in proving. Different syntax and abstraction levels clearly correspond to different programmer error probabilities. But fundamentally you cannot get away from the fact that specifications can and often do contain bugs.

[1] K. Claessen, J. Hughes, QuickCheck: A Lightweight Tool for Random Testing of Haskell Programs.


Highly expressive type systems can lead people to as much design hell as does deep OOP. I have seen and experienced this in at least a couple of projects.

The only difference is: instead of brittle hierarchies, we get ossified compositions (depending on how much nominal vs structural typing happens).

We, of course, agree that we are quite some distance from having advanced type systems brought to day-to-day industrial programming.


> You can write your programs and use the same language to prove theorems about them.

Didn't Kurt Gödel and Alan Turing do some work on proving statements within a system?


AFAIK languages like Idris, Agda, and Coq are not Turing-complete (specifically, they disallow general recursion) for just this reason.


If my understanding is correct, Agda and Coq disallow general recursion, so all programs terminate by construction, but Idris relaxes this in a desire to be more pragmatic, at the cost of being less clean mathematically (functions have to be modeled as partial in general, to account for the fact that they might not terminate).


General recursion, yes; however, a fixed-point operator (with its associated judgements) is often added to allow interesting programs/proofs.

Hmm, that fixed point operator, rings a bell, can't quite put my finger on it . . . :-)


This advanced types stuff sounds really useful, but it needs to be made very easy for the mainstream Java or C# programmer to use.

A success story in this regard is the async keyword. Very quickly you can get used to it and it feels like any other imperative programming.

In C#, if I could add assertions and have the compiler check at compile time that an assertion will not be violated, that would be great. I know they do this for null checking.


> The richer the type system, the more you can express. If you get to

Ah yes. And then you end up writing entire programs in types. So the next logical step would be to start writing unit and integration tests for these types, and then invent types for those types to more easily check them...

> you essentially have all of mathematics at your disposal.

Most of the stuff we do has nothing to do with mathematics.


One of the major selling points of a robust (not "strong", necessarily, but at least... not weak?) type system is that an entire class of unit tests is no longer necessary (e.g., those that validate/exercise handling of cases where data of an invalid type is passed as a parameter to a function). Integration tests are necessary independently of the implementation language--they don't test the correctness of units of code, but of communication between APIs.


> that an entire class of unit tests is no longer necessary

That's not what I wrote about. I've seen people implement entire DSLs in types. So now you have to test that.


> Go has a minimal type system, and is able to do much of Google's internal server side work.

And yet Go is adding generics in 1.8. And I'm sure its type system in another 5 years will be much more expressive than 1.8's. The community has long been saying that the minimal type system isn't enough.


Nit: they're adding Generics in 1.18 (not 1.8). Regarding "another 5 years": I'm not so sure. Go is very conservative about language changes. The type system didn't change at all from version 1.0 through version 1.17 (a period of nearly a decade).


Some changes to nil accesses were made in 1.3. Tags were ignored in casts since 1.8. Overlapping methods were allowed in 1.14. New array pointer casts were added in 1.17. (Arguably, also type aliases in 1.9.)

None of these are as significant as generics, but things do change.


Yes, you're right. Saying that it didn't change "at all" was perhaps an overstatement. But those are all very subtle changes. I've coded Go daily for 3-4 years and never been affected by them, except the tags one -- I think I used that once (post 1.8). Not sure what you're referring to about the nil changes in 1.3 -- I didn't see anything about nil in the 1.3 release notes: https://go.dev/doc/go1.3


Apparently what I was thinking of was in 1.2; nil pointers used to give semi-meaningful (but useless) results when an address was taken of a subfield, like the usual C implementation of `offsetof`. https://docs.google.com/document/d/14DgGJKGQeBTNJDXo3YxnlSwv...

It was before my time so I don't know what the practical impact was, just inherited some code that needed updating a few years ago...

I have used tagless casting a half-dozen times to deal with JSON vagaries, and I use overlapping methods almost constantly today. Slice conversions to array pointers are too new to say generally, but I had one use case pretty much immediately.


> Go has a minimal type system, and is able to do much of Google's internal server side work.

Isn't it still mostly Java and C++? That's what I hear all the time here.

Also, I'm not sure what point you're trying to make. You start by saying that fascination with type systems is not useful in practice, and end with an example where it is useful (Rust). While Go can stick with a GC to avoid most of the issues that Rust is trying to solve, it still has to ship a defer mechanism (no linear/affine types/RAII) and a data race detector.


It's been a while since I worked there, but the trend at Google at the time was that the amount of code written in each popular language was rapidly growing, and the number of popular languages was also slowly growing. (Despite a lot of resistance to introducing new languages.)

I'm out of touch, but I would expect that there is a lot more Go code by now, and also that it still hasn't caught up with C++ or Java.


This is pretty accurate I would say. Although, Go is more popular outside of Google than inside it and the original dream of "Replace all Python and most C++ and Java with Go" is laughably dead. Google used to require SRE teams to explain why they weren't using Go for new projects and they abandoned that because it was ridiculous and no one took it seriously.

There are quite a few internal projects written in Go, but it feels like any time one of them gets big enough to matter, someone ends up reimplementing the important parts in C++ for performance reasons.


> Isn't it stil mostly Java and C++? That's what I hear all the time here.

Go's type system is much weaker and less expressive than either Java's or C++'s. C++ in particular has parametric polymorphism, type constructors, and a limited form of dependent types (non-type template parameters). Go has none of those.


Question from genuine interest: how is Go's type system weaker than Java's? Is it just the (until recently) missing generics?

I know that casting to interface{} is the equivalent of casting to Object, but Go 1.18 should hopefully avoid most of that.

Just wondering what else you're thinking of, asking as someone who knows Java well and Go okay.


Everything is mutable. Builtin operators like assignment and equality are hardcoded and may behave badly for user-defined types. It doesn’t have subtypes; you can slice to an embedded value but can’t get from there back to the containing struct or its methods. It doesn’t have generic covariance and contravariance; you can’t decide whether a []Animal or a []Bulldog is acceptable in place of a []Dog.


Go has definitely gained more mindshare more quickly outside Google than inside, in my experience.


> Most of the problems that cause non-trivial bugs come from invariant violations. At point A, there's some assumption, and way over there at point B, that assumption is violated. That's an invariant violation.

Which is exactly what a type error is!

> The Rust borrow checker is an invariant enforcer. It explicitly does automatic global analysis, and reports explicitly that what's going on at point B is inconsistent with what point A needs. This is real progress in programming language design, and is Rust's main contribution.

> That's the direction to go.

The borrow checker is an ad-hoc informally specified implementation of half of an affine type system. Having to switch programming languages every time you want to add a new invariant is a poor paradigm. What we need is a generic framework that allows you to express invariants that are relevant to your program - but again, that's exactly what a type system is.

Rust has done a great thing in showing that this is possible, but linear Haskell or Idris - where borrow checking is not an ad-hoc language feature that works by gnarly compiler internals, but just a normal part of the everyday type system that you can use and customize like any other library feature - are the approach that represents a viable future for software engineering.


Rust has affine types, and uses them extensively: types that own memory, as opposed to borrowing it, are generally affine.

In principle you could implement a form of borrow check with linear types (I don’t think affine is good enough), but the ergonomics would be horrible.


What would that look like, and why horrible?


Rust's contribution affects less than 1% of programmers. Most code written today does not require manual memory management or even explicit multithreading.

I think TypeScript, with gradual and structural typing, and similar efforts like mypy or Sorbet, are making a real difference.

Type systems provide multiple benefits: performance, self-documentation, better tooling, and a more explicit data model.


> Most code written today does not require manual memory management

Rust has automatic memory management.

> or even explicit multithreading.

You don't need explicit multithreading to run into data races. Languages that allow any kind of unchecked mutable state sharing and allow any form of concurrency (explicit or hidden) are prone to that problem.

Even single-threaded programs with aliasing of mutable variables are hard to reason about and Rust improves on that considerably by not allowing accidental aliasing.
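
A minimal sketch of that last point, no threads involved:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0]; // shared borrow of `v`
        // v.push(4);      // rejected at compile time: cannot borrow `v`
        //                 // as mutable while `first` is still alive
        println!("{first}");
    }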


It's not a fascination; it's just easier and better to have good static analysis when programming. That doesn't have to be a type system, but I think there is a lot of reason to think that a type system is the lowest-hanging fruit for useful static analyses.


I think this sums up the pragmatics well. Bryan Cantrill discusses in one of his talks what they did at Sun to ensure they were writing safe C. This was a substantial amount of tooling they had to build up. Type systems bring you this tooling in a well-founded, logical way. And as you say, it's a good place to start, even if it's just to know how the puzzle pieces of your code fit together.


Yes exactly. I'm kind of a broken record on this, but the key thing is static analysis. It's just that with statically typed languages, the type system specification and its implementation give you a giant head start on doing those analyses. You can build other kinds of static analyses for languages without static types, but it's just harder and you're way more on your own; you don't benefit from all the work put into the compiler for the language.


Type systems also allow people to understand your code; this is very important.


> and is able to do much of Google's internal server side work.

You mean, one of the companies with the largest number of developers in the world, paying some of the highest average salaries for them, is able to use the language?

That means absolutely nothing.


Go has been used by way more successful startups than Rust, Haskell and others of their kind combined.

At least so far. Rust might change that in the future.



