being a more readable syntax. The problem is that if you later have:
int[7] arr2;
It would make sense that arr and arr2 are different types (as the thing on the left is different) while they are the same type and you should be able (hopefully!) to use them as arguments to the same function.
But that's because they wanted it to be like that.
To me, this code is very unclear:
int a, *b, c, d;
because everywhere else, multiple declarations in one statement all have the exact same type, but for some reason pointers get special treatment so that you can declare variables of type X and variables that are pointers to type X in the same statement, which just seems odd to me.
I'd be much happier if they were all strictly different and that
int* a, b, c, d;
did, in fact, declare all four variables to be pointer to int.
> [...] everywhere else, multiple declarations in one statement all have the exact same type, but for some reason pointers get special treatment so that you can declare variables of type X and variables that are pointers to type X in the same statement, which just seems odd to me.
I mean, int[5] and int[7] would be different types, and lots of bugs happen from passing something besides the appropriate array type to a function.
That being said, most languages which disambiguate between int[5] and int[7] provide some kind of polymorphism (and usually store it as a struct of size + data, to enable that).
For example: you can define a `first` function that goes from t[N] -> t pretty easily, and it would operate on both int[5] and int[7] (returning an int).
Right, in a language with dependent types there is a type-level difference between int[5] and int[7], but c is not such a language, therefore using a syntax that encourages the mistaken notion that there is a type-level difference between int[5] and int[7] would be misleading.
i don't know rust, but i read a bit about fixed size arrays in it and the fact that 32 is the largest fixed size array makes me suspicious that it works like a pair. you can do this without depending on a value, because the length is encoded in the type.
like, you can have a type (a, b), and b can, of course, be of type (a, b), b again having type (a, b), and so on. then you always carry around the length encoded in the type, and it can be checked like above.
ghc has a limit on tuple sizes, and haskell makes no type distinction between [a] based on the number of elements in it, and it isn't dependently typed.
> i read a bit about fixed size arrays in it and the fact
> that 32 is the largest fixed size array
This is exceptionally mistaken; where did you read that? Arrays in Rust top out at the maximum value of a platform-sized pointer, i.e. 2^32 - 1 or 2^64 - 1 depending on platform.
"Arrays of sizes from 0 to 32 (inclusive) implement the following traits if the element type allows it:
Clone (only if T: Copy)
Debug
IntoIterator (implemented for &[T; N] and &mut [T; N])
PartialEq, PartialOrd, Eq, Ord
Hash
AsRef, AsMut
Borrow, BorrowMut
Default
This limitation on the size N exists because Rust does not yet support code that is generic over the size of an array type. [Foo; 3] and [Bar; 3] are instances of same generic type [T; 3], but [Foo; 3] and [Foo; 5] are entirely different types. As a stopgap, trait implementations are statically generated up to size 32."
i sort of leapt to the conclusion that these traits couldn't be implemented generically because the type system requires them to be implemented for each size N, and that they provide the first 32 as a nicety. between the similarity to the situation with tuples in haskell (http://stackoverflow.com/questions/2978389/haskell-tuple-siz...) and the fact that rust doesn't have dependent types, so a type couldn't depend on a value, i just kind of guessed at a possible reason.
Ah I see, yes the explanation there is correct. The types exist up to ginormous sizes, but the standard library only implements certain convenience traits for certain sizes (though using newtypes, you can implement those traits yourself for any array size you want). The specific feature we're lacking is type-level numerals, which is a step towards, but nowhere close to, a dependent type system, AIUI.
Saying int[5] and int[7] are the same type because the compiler doesn't enforce the difference is like saying JavaScript is untyped because there is no compiler to enforce types at all.
Regardless of what the spec or the compiler says, you the programmer absolutely need to treat them separately. Especially in a language like C, where arrays aren't even self-describing at runtime.
no it isn't. javascript has types; they exist only at runtime, but they're there. it's patently false to say it's untyped. but type differences codify certain classes of differences, and it really isn't that crazy or unusual to say that the length of a vector, array, list, whatever you want to call it, isn't a difference of type. to say that there's a type difference even if the type checker disagrees just means you and the type checker are using different type systems.
> It would make sense that arr and arr2 are different types (as the thing on the left is different) while they are the same type and you should be able (hopefully!) to use them as arguments to the same function.
arr and arr2 are indeed distinct types. They are of type int[5] and int[7]. You can see this by checking that they have different sizes, or that `printf("%s\n", std::is_same<decltype(arr), decltype(arr2)>::value ? "true" : "false");` will print `false` on your screen.
Both of them decay into a pointer to int; that is why people commonly confuse them with pointers. But they are not pointer types; they are distinct types in their own right.
Unfortunately either clang or GCC (I can't remember) decided the obvious behavior was wrong and changed it so that _Generic behaved as-if array expressions decayed to pointers. The C11 specification for _Generic was insufficiently precise, and for various reasons both vendors and (IIRC) the C committee are going to go with the least common denominator approach (just treat them like pointers) for consistency.
So newer versions of clang and GCC print out all false.
But another way of showing that arrays are real types works on all versions of clang and GCC, and should work on any other conformant C compiler. Although I would think that the simple sizeof proof should suffice to show that arrays are real types, notwithstanding that their evaluation rules are peculiar.
Alas, the disaster with _Generic and array expressions only proves that the situation is less than ideal. Although part of the problem is that _Generic was a novel language feature that didn't fit neatly into the historical translation phases. IMO C++ gets a lot of things wrong about C semantics, but apparently they got decltype right (presuming the behavior is a product of a clearer specification, and that behavior is consistent across implementations).
To be fair, although inelegant the compromise behavior for _Generic makes some sense. The principal use for _Generic is to implement crude function overloading. Because arrays always decay to pointers when passed to functions, it's convenient that _Generic would capture array expressions as pointers. OTOH, it makes some useful behaviors impossible. And the convenient behavior could have been had by manually coercing arrays to pointers using a trick like:
Well, had C been really strongly-typed then the hypothetical 'int[5]' and 'int[7]' would have been different types, and 'int[]' would be the size-agnostic type for an array.
Then you could easily get creature comforts such as:
On the other hand I agree that `int* a;` makes more sense than the more commonly used `int *a;`, although it's messed up anyway, as `int* a, b;` will result in a surprise (only a is a pointer).