The correct rule is "follow the C grammar". An easier to remember and also correct rule is "start at the identifier being declared; work outwards from that point, reading right until you hit a closing parenthesis, then left until you hit the corresponding open parenthesis, then resume reading right..." (this is sometimes called the "right-left rule").
The "spiral rule" dances around the truth without actually being precise enough to be useful.
const char *foo
foo -> const char *
*foo -> const char
int (*const bar)[restrict]
bar -> int (*const)[restrict]
*bar -> int [restrict]
(*bar)[0] -> int
f -> int (*(*)(int))(void)
(*f) -> int (*(int))(void)
(*f)(0) -> int (*)(void)
*(*f)(0) -> int (void)
(*(*f)(0))() -> int
bar -> const pointer to mutable array of ints
*bar -> mutable array of ints
int *const p;
const int *p
int const *p
const is a type qualifier and is read at the position where it appears, not just "to the right".
To follow a declaration you make use of the fact that postfix operators have a higher precedence than unary ones, and that unary operators are right-associative whereas postfix operators are left-associative (necessarily so, since both have to "bind" with their operand).
If there are parentheses present, they split this process. We go through the postfixes, and then the unaries within the parens. Then we do the same outside those parens (perhaps inside the next level of parens):
The result is in fact a spiral, just from going root → postfix → unary → out → postfix → unary → out. We just don't have to focus on the spiral aspect of it.
Instead, when you write declarations, do it from right-to-left, e.g.:
char const* argv;
It doesn't help with reading, unfortunately.
The other part is `int* x, y`, where `y` is a plain `int`, not a pointer: the `*` binds to the declarator `x`, not to the type.
Also your argument about which modifies which is strongly anglocentric: there are plenty of people whose native language puts modifiers after the things they modify.
let string: [&u8; 10];
std::array<std::byte*, 10> str;
std::array is useful for avoiding array-to-pointer decay, getting value semantics, and actually keeping the array length in the type of a function parameter.
var str: array[10, ptr byte]
Edit: and while I'm here, Nim has other sensible syntax for this low level stuff...
var b: byte = 10
str = addr b
On the other hand, IMHO the whole "make declarations read left-to-right" idea is misguided --- plenty of other constructs exist in programming languages which simply can't be read left-to-right, but are nested according to precedence. I mean, you might as well make 3+4*3 evaluate to 21 if you want to try making everything consistently left-to-right, but I don't really see anyone complaining about not being able to understand operator precedence...
The point here is that type declarations become regular to read, and those tend to be the tricky ones. Expressions tend not to be so difficult, and are more commonly factored out if they become complex. For various reasons, type declarations are not so practically factorable.
Declaring `v *T` means we can write `*v` as an expression, so the use of the token `*` is synchronized across both uses, but I must vocalize the `*` in my head differently:
`*T` vocalizes as "pointer to something of type T"
`*v` vocalizes as "that pointed to by variable v"
`&v` vocalizes as "pointer to variable v"
For example, D also uses a similar type syntax, so in D if you declare:
x // is legal
x // is legal
(PS. Golang has the right idea, since it's developed by people who also worked on C)...
The alternative would be for the type syntax to mirror the expression syntax used to construct values of the type. Functional languages tend to do this, particularly ones which prefer pattern matching over destructors.
Have you ever had to write a C parser?
If you don't have the set of type names available, the grammar becomes ambiguous: `foo * bar;` is a pointer declaration if `foo` names a type, and a multiplication expression otherwise.
Cdecl (and c++decl) is a program for encoding and decoding C (or C++) type declarations.
Even with typedefs, that declaration means “when you call baz with a bing and a pointer (named bratz) to a function of type boff(biff), then you get back a pointer to a function of type foo(buff).”
It’s an extremely concise notation for expressing type information without (much) special type syntax, and I think it’s quite elegant in that way.
foo (*baz(bing, boff (*bratz)(biff)))(buff);
With typedefs for function pointer types:
typedef boff (*bratz_t)(biff);
typedef foo (*baz_ret_t)(buff);
baz_ret_t baz(bing, bratz_t);
Or with typedefs for the function types themselves:
typedef boff bratz_t(biff);
typedef foo baz_ret_t(buff);
baz_ret_t *baz(bing, bratz_t *);
It's been 20 years(!). Why is this incorrect advice still up at the c-faq?
Yeah, I know, I'm not good enough, I didn't study enough, I'm not enlightened enough. But why make things so overly complex in the first place?
"complex" is subjective. It reminds me of stupid "rules" like "don't use the ternary operator", "every function must be less than 20 lines" (I am not exaggerating --- this was on a Java project, however); and you could easily extend that to "every statement must have a maximum of one operator", "you must not use parentheses", "you must not use more than one level of indirection", etc. Where do you stop? To borrow a saying from UI, "if you write code that even an idiot can understand, only idiots will want to work on it." I don't think we should be forcing programmers to dumb-down code at all.
That said, I'm not advocating for overly complex solutions, and will definitely prefer a simpler solution, but you should know and use the language fully to your benefit.
If the complexity can be avoided, why not avoid it? Removing complexity is not the same as dumbing down code. It will improve readability and maintainability.
This mindset is definitely applicable to declarations as well as other code constructs.
And people wonder why there are so many broken C programs out there...
To me, that makes about as much sense as when Ricky Bobby in Talladega Nights says "If you ain't first, you're last."