> As far as I can see, the fast types are a failure, and the least types are unused, possibly because no-one can understand exactly what the latter mean.
I personally quite like that we have the fast/least types, because they allow you to write portable code that would otherwise require the uintN_t types, which are only defined if the platform supports that exact width.
Let's say you want to implement a cryptographic algorithm (these often require 32/64-bit modular arithmetic) that works on any conforming C compiler. uint32_t isn't guaranteed to be defined, but uint_least32_t is.
You can now use uint_least32_t instead of uint32_t and mask out the upper bits that may exist when necessary. You'd do this with e.g. UINT32_C(0xFFFFFFFF), which ~~coincidentally~~ by design also has the type uint_least32_t.
This would result in "perfect" code gen on any platform that has a 32-bit unsigned integer type (given a reasonable optimizing compiler), since in that case uint_least32_t is the same type as uint32_t, and otherwise it will still work.
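As a minimal sketch of the idea (not from the original comment; rotl32 and MASK32 are hypothetical names chosen for illustration), here is a 32-bit left-rotate written against uint_least32_t, with explicit masking so it stays correct even if the type is wider than 32 bits:

```c
#include <stdint.h>

#define MASK32 UINT32_C(0xFFFFFFFF)

/* Rotate left by a fixed amount n with 0 < n < 32, as used in many ARX
 * ciphers. The masks discard any bits above bit 31 that exist when
 * uint_least32_t is wider than 32 bits, and 1u * x forces the left shift
 * to happen in an unsigned type even if uint_least32_t promotes to int. */
static uint_least32_t rotl32(uint_least32_t x, unsigned n)
{
    x &= MASK32;
    return ((1u * x << n) & MASK32) | (x >> (32u - n));
}
```

On a platform where uint_least32_t is exactly 32 bits wide, both masks are no-ops after constant folding, and a reasonable optimizer typically reduces this to a single rotate instruction.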
The fast types could be used for something similar, if they were properly defined. But C has a history of compilers implementing features not as intended, I'm looking at you, rand(): "The C89 Committee decided that an implementation should be allowed to provide a rand function which generates the best random sequence possible in that implementation, and therefore mandated no standard algorithm" (C99 Rationale).
Edit: Note that you also need to make sure that the variables aren't promoted to signed integers; e.g. x << 3 should be 1u * x << 3.
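For example (a sketch, assuming a hypothetical platform where uint_least32_t has lower rank than int, so it is promoted to signed int in expressions):

```c
#include <stdint.h>

void shift_example(uint_least32_t x)
{
    uint_least32_t a = x << 3;      /* if uint_least32_t has lower rank than int,
                                       x is promoted to signed int and the shift
                                       can overflow into the sign bit */
    uint_least32_t b = 1u * x << 3; /* 1u * x has an unsigned type, so the shift
                                       is always well defined */
    (void)a; (void)b;
}
```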
I found types like uint_least32_t to be much less useful after learning that the standard C/C++ types already have these guarantees. Namely: char is at least 8 bits wide, short is at least 16 bits wide, int is at least 16 bits wide, long is at least 32 bits wide, and long long is at least 64 bits wide.
Implementing cryptographic algorithms using uint_least32_t can be dangerous. Suppose that short is 32 bits wide and int is 48 bits wide. Your uint_least32_t might be mapped to unsigned short. When you do something like (x << 31), x gets promoted to signed int, and then you might be shifting a 1 into the sign bit, which is undefined behavior. So I would say that a better idea is to use unsigned long, which is guaranteed not to promote to a higher-rank type.
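A sketch of the hazard and of the unsigned long alternative, assuming the hypothetical implementation above (32-bit short, 48-bit int); the function names are made up for illustration:

```c
#include <stdint.h>

uint_least32_t bad_shift(uint_least32_t x)
{
    /* x is promoted to signed int before the shift. For large enough x,
     * shifting left by 31 overflows the 48-bit signed int: undefined behavior. */
    return (x << 31) & UINT32_C(0xFFFFFFFF);
}

unsigned long safe_shift(unsigned long x)
{
    /* unsigned long has at least the rank of int, so it is never promoted;
     * the shift happens in an unsigned type and is well defined. */
    return (x << 31) & 0xFFFFFFFFUL;
}
```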
Oh, you are right, this can be problematic, although I don't like your solution either, since unsigned long is going to be larger than 32 bits on 50% of platforms (not really, but you know the drill: Linux: 64-bit, Windows: 32-bit).