Anybody correct me if I'm wrong -- in Pascal these subranges can specify the size of the variable in memory and check compile-time assignment of a literal value, but don't (can't) do much else.
There's the concept of dependent types in e.g. Idris, which lets us correctly do this sort of range check throughout the program (not just on literal assignment), but it comes with strict requirements on what the compiler can allow, such as no unbounded recursion, because checking dependent types is roughly equivalent to running the program.
Nim has Natural and Positive (signed int) types that are checked at compile time and at runtime. You can also define arbitrary range types over any ordinal type, for example enums:
type
  Month = enum
    Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec
  SpringMonth = range[Mar..May]  # only Mar, Apr, May are valid values

var m: SpringMonth = Mar
It will raise an under/overflow exception if the value falls out of range.
I’ve seen “month” implemented in TypeScript as a big sum type ‘1 | 2 | 3 | … | 12’ :) Ranges could probably just be syntactic sugar on top of that, but I suppose it causes big inefficiencies for the compiler.
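For concreteness, a minimal sketch of that encoding (the isMonth guard is just a hypothetical helper, not anything from a library):

type Month = 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12;

// Literal assignments are checked at compile time:
const ok: Month = 3;
// const bad: Month = 13; // error: 13 is not assignable to type Month

// Arbitrary numbers need a runtime check to narrow them to Month:
function isMonth(n: number): n is Month {
  return Number.isInteger(n) && n >= 1 && n <= 12;
}

With 12 members this is cheap, but a union standing in for something like 0..65535 is presumably where the compiler cost mentioned above would show up.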
In Pascal these are called range types, e.g.
var
  month: 1..12;
would define an integer where the type system would ensure that it is always between 1 and 12.
Apart from Ada, this seems to be an alien concept to all other languages. The concept of a "range type" also seems to have other meanings.
What is the Pascal "range type" properly called in type theory, and what is its relationship to refinement types?