
Language Design: Use 'ident: Type' not 'Type ident' - pcr910303
https://soc.me/languages/type-annotations
======
TazeTSchnitzel
A more practical reason as a language designer is that C and C++-style type-
before-name syntax is a nightmare to lex and parse, as you can't tell whether

    
    
      A * B;
    

is a multiplication or a variable declaration, or whether

    
    
      A<B, C> D;
    

is two comparisons joined by a comma operator or a templated variable
declaration, without first knowing the names of all declared types.

This means in practice that you have to declare types before they are used in
a file, which means forward-declarations if they are defined later, that you
can't separate lexing and parsing because the parser has to provide constant
feedback to the lexer, and that misspelling a type name can lead to a _syntax_
error! C++, not content with merely inheriting C's problems, throws in the
“most vexing parse” as a bonus.
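
The feedback loop described above is what C implementations call the "lexer hack". Here is a toy Python sketch of it (not a real C parser; the function and set names are invented for illustration), showing how the meaning of `A * B;` flips depending on the type table:

```python
# Toy sketch of why `A * B;` is ambiguous in C: classification needs
# the set of declared type names fed back from the parser.
def classify(stmt: str, declared_types: set) -> str:
    """Classify `X * Y;` as a declaration or an expression."""
    left, _, right = stmt.rstrip(";").partition("*")
    left, right = left.strip(), right.strip()
    if left in declared_types:
        return f"declaration: {right} is a pointer to {left}"
    return f"expression: multiply {left} by {right}"

types = {"int", "char"}
print(classify("A * B;", types))   # expression: multiply A by B
types.add("A")                     # as if `typedef struct ... A;` was seen
print(classify("A * B;", types))   # declaration: B is a pointer to A
```

Misspell `A` in the typedef and the second call silently degrades back to a multiplication: the "misspelling a type name changes the parse" failure mode in miniature.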

~~~
mariodiana
I was just thinking about pointer syntax today, and if you ask me, there are a
lot of problems that could be avoided if language designers took a page out of
Guido van Rossum's book and extended his idea of forced spacing.

Take the first example you've given, and let's just talk about variable
declaration:

    
    
        int* a;
        int * a;
        int *a;
    

That's all the same. That's wrong. Obviously, you can write a lexer and parser
that doesn't give a crap, but the human mind does. If you think it doesn't,
that's only because you've internalized the various cases.

It should be this:

    
    
        int* a;
    

The type we're talking about is a pointer to a variable of type int: in other
words, "a" is an int-pointer. If you were to create a macro, you'd do
something like this:

    
    
        #define int_ptr int*
    

Do you see what I'm getting at? The star in this case is a suffix, equivalent
(in our minds) to "_ptr". Conceptually, it doesn't belong anywhere else than
attached to the type. It's a compound type, conceptually.

Now, take the star being used in a different context:

    
    
        int b = 10;
        int* a = &b;
        printf("%d\n", *a);
    

There, though we see the same character, it's a completely different thing.
It's a dereference operator. Conceptually, it belongs attached to the pointer
variable it is dereferencing; and since the star is already used in one
context as a suffix, here it should be used as a prefix.

This is no good:

    
    
        printf("%d\n", * a);
    

It doesn't matter that "this compiles." That's not what this is about.

Many C programmers (and programmers in other languages, even Python) are used
to writing things like this:

    
    
        int c = x*y;
    

That's wrong. Sure, the lexer and parser don't care. But that makes the
language worse, for the human operator. "But it saves space!" Spare me.

The thing with this one example, using the star, is that what we have is the
equivalent of a homonym. We have one sign that is actually three different
words. Mandating spacing removes the ambiguity you're complaining about.

C is what it is, but if we imagine someone were going to write it today, they
should incorporate the above and mandate spacing. For the sake of the humans.
"ident type" is not the only solution.
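
A minimal sketch of what mandating that spacing could look like as a lint. The rules and the type list are invented purely to illustrate the three roles of the star (type suffix, dereference prefix, spaced multiplication), not a complete C grammar:

```python
import re

# Hypothetical spacing lint for the three uses of '*' discussed above.
TYPE_NAMES = r"(?:int|char|float|double)"  # toy list, not all of C

def check_star_spacing(line: str) -> list:
    issues = []
    if re.search(r"\w\*\w", line):
        # x*y : multiplication must be spaced on both sides
        issues.append("multiplication: write 'x * y'")
    if re.search(r"\b" + TYPE_NAMES + r"\s+\*", line):
        # 'int * a' / 'int *a' : the star is part of the type
        issues.append("declaration: attach the star to the type, 'int* a'")
    if re.search(r",\s*\*\s+\w", line):
        # '..., * a' : a dereference star must touch its variable
        issues.append("dereference: attach the star to the variable, '*a'")
    return issues

print(check_star_spacing("int c = x*y;"))        # flags the multiplication
print(check_star_spacing("int * a;"))            # flags the declaration
print(check_star_spacing('printf("%d", * a);'))  # flags the dereference
print(check_star_spacing("int* a = &b;"))        # clean: []
```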

~~~
earthboundkid

        A pointer to an int "should" be &int, not int*. That we use *, the 
        dereference operator, to indicate pointers is wrong. * in a type means it's an 
        address, but * in a value means it's not an address. That's nuts! Make it 
        consistent and use & both places. If you need to have a distinction between 
        refs and pointers, it should be that pointers are nullable refs: &int? or some 
        such.
    
        Edit: How to escape asterisks in HN?

~~~
saagarjha
I’ve had luck with not putting something directly after the asterisk.

~~~
earthboundkid
Apparently HN does not have an escape character which seems like a real
oversight. :-/

------
Izkata
Strong disagreement here. "type ident" flows with the data during assignment,
doesn't confuse the infix operators, and doesn't misuse ":" from a human-
language standpoint.

For example:

    
    
      val x: String = "hello"
    

The type interrupts the flow of data from "hello" to x, so one thing that pops
into mind is that this is typecasting the value to a string before storing it.
Nope.

Another possibility I instinctively see this as is doing a comparison and
assigning the result (either true or false in this case) to x. Nope.

And human-language wise, colon is "description: explanation" (or more
generally: general to specific), which actually fits this syntax better:

    
    
      val String: x = "hello"
    

...and at that point, just remove the extraneous stuff:

    
    
      String x = "hello"

~~~
lliamander
I agree, though I could see the merit for a standalone declaration:

    
    
      val x: String
      x = "hello"
    

The type at this point is almost like a comment.

For declaration and assignment though, I agree that reading "ident: Type" is
harder for me.

Perhaps an interesting idea would be to have the type at the end of the
_expression_. Like so:

    
    
      val x = "hello": String
    

Essentially, you're making a type assertion on an expression. Since it's an
assignment expression (the value of which would be the assigned variable) then
it also type checks the variable.

~~~
djur
Most statically typed languages don't even need the type assertion in a case
like this, though. A literal has a definite type (hopefully), so the type of x
can be inferred.

    
    
      val x = "hello"
    

Standalone declarations are the most important problem to solve here.
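
The point about literals can be made concrete with a toy inference function (the names and type vocabulary are invented; real compilers do this over a full AST, not strings):

```python
# Toy sketch: a literal's spelling alone determines its type, so a
# declaration like `val x = "hello"` needs no annotation.
def infer_literal_type(literal: str) -> str:
    if literal.startswith('"') and literal.endswith('"'):
        return "String"
    if literal in ("true", "false"):
        return "Boolean"
    try:
        int(literal)
        return "Int"
    except ValueError:
        pass
    try:
        float(literal)
        return "Double"
    except ValueError:
        return "Unknown"

print(infer_literal_type('"hello"'))  # String
print(infer_literal_type("42"))       # Int
print(infer_literal_type("3.14"))     # Double
```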

~~~
lliamander
Yeah, type inference is preferred (and pretty common). I'm just saying that,
if I had to (or wanted to) set the type, the end of the expression would be my
preferred place.

------
nitrobeast
The post presents it as a matter of fact that name-before-type is easier to
read. I'm not so sure. In math, or in languages where type info is optional, we
often write “x = 5”. When type info is required, it is natural to evolve to
“int x = 5”. Readers would naturally focus on the latter part. When we write
“x: int = 5”, the type info is in the middle. We cannot skip it even when we
just want to focus on the name and value.

~~~
andrewla
Many languages allow you to elide the type, which is another nice thing about
the type following the identifier.

In Scala, in particular, types are not the assigned type like in C (where they
also serve as the storage specification) -- they are assertions that the
compiler will check are compatible with the code.

So `val x: Int = "hello"` is no good and the compiler can cut it short right
there; this is especially useful as call-site documentation.

~~~
throwanem
Conversely, a lot of languages will infer type from first assignment, so in
e.g. TypeScript "let x = 5", x is inferentially typed as 'number' and the type
checker will throw if the implicit constraint is later violated. This reduces
the need for explicit type annotations, clearing up a lot of the visual and
cognitive noise.

------
Pfhreak
Interestingly, I find ident: Type significantly more difficult to read. Having
the type information helps me contextualize what I'm about to read -- it
narrows the mental search space I need to explore when parsing the name.

For example, knowing something is a float, double, int, or string can make an
ident named "releaseTime" mean different things.

I also find that whitespace is more consistent when using Type ident: you get
rivers where the spaces all line up, so all the type declarations AND ident
declarations align. Whereas with ident: Type, I find it much more difficult
because of the variable length of identifiers. (Yes, one could fix this by
using tabs, but if idents vary in length by more than one tab stop, it becomes
difficult to read horizontally.)

------
kevmo314
This feels a little nitpicky/idealistic; I don't think the post does a good
job of conveying why it's more beneficial.

> This means that the vertical offset of names stays consistent, regardless of
> whether a type annotation is present (and how long it is) or not.

Why is this necessarily desirable? Strong typing systems have very expressive
types, to the point where if something is typed correctly, most of the time my
property names are just an alternative casing of the type. Types can be just
as expressive or even more expressive than variable names.

> The i: Int syntax naturally leads to a method syntax where the inputs
> (parameters) are defined before the output (result type), which in turn
> leads to more consistency with lambda syntax (whose inputs are also defined
> before its output).

Maybe this is nice in theory? But `Int` really isn't an output here, and the
value being assigned isn't either. Rather this seems more like `f(i, Int,
value) -> assignment`. It seems just as arguable that `f(Int, i, value) ->
assignment` is appropriate.

It seems like some of these are rooted in a "pure mathematical" approach which
I can surely appreciate, but ultimately lambda calculus is as much a language
as any other programming language, saying "lambda syntax does it this way"
doesn't convince me very much.

~~~
echelon
I've been using Rust a lot recently, which puts names before types and inputs
before outputs, and I will absolutely attest to how much mental work is saved
by ordering things this way. Skimming or reading Rust comes twice as easy as
reading Java, and I do a lot of both. Sure it's an anecdotal report, but I
have a real sense here that I feel compelled to report.

As other posters have stated, this order makes parsing easier. But I also
suggest this benefit extends to your own brain's parsing ability as well. The
old order is indirect and suboptimal and makes you think harder.

~~~
dntbnmpls
> and I will absolutely attest to how much mental work is saved by ordering
> things this way.

As you yourself noted, personal anecdotes are really not an argument. Someone
could say they find Java easier to skim than Rust and we'd be nowhere. Like
arguing which end of a boiled egg to crack first.

> As other posters have stated, this order makes parsing easier.

Programming languages don't exist to make themselves easier to parse. They exist
to make it easier for programmers to program. Otherwise, we wouldn't have such
things like syntactic sugar. Hell we would just write in machine code and do
away with assembly and higher level programming language. And parsing is a
simple and superficial one time step. Being a tad bit more difficult is not a
convincing argument.

> But I also suggest this benefit extends to your own brain's parsing ability
> as well.

Based on what evidence?

This is the problem with tech evangelism. It has the same problems as
religions, lots of claims, no evidence.

~~~
kelnos
> _As you yourself noted, personal anecdotes are really not an argument._

Then what is? If you're looking for a randomized sampling of programmers with
sufficient sample size, you're not going to find it here.

> _Programming languages don't exist to make themselves easier to parse._

No, but a fine example is that of C++: the difficulty in parsing means that if
you make a typo, the error message you get might be bizarre and confusing. A
compiler for a language that's easier to parse will have a much better idea of
the programmer's intent and can provide a much better error message. I find it
astounding how often rustc can figure out exactly what I wanted to do and
suggest it as a note after the error message.

I would think that more-useful error messages pass your test of "make it
easier for programmers to program".

While we're talking about making it easier to program, "name: Type" makes it
possible to avoid typing out "Type" at all and let the compiler infer it
(no, this isn't good and readable to do in all situations, but often it's
fine). If you have "Type name" style and try to add the ability to infer
types, you end up with Java's "var" abomination.

Regardless, I'm in agreement: I find "name: Type = blah" much easier to read.
I read it as "name is a Type that is equal to blah". This also is an
improvement in parameter lists, when they're lined up vertically:

    
    
        def foo(bar: String,
                baz: Int,
                quux: Foo)
    

I find that _much_ easier to mentally parse to determine parameter order than

    
    
        void foo(String bar,
                 int baz,
                 Foo quux)
    

Worse, imagine that all three parameters were of the same type, requiring a
scan to the right to read the names. The important information to me at a
glance is the name of the parameter, not its type.

As someone who cut his teeth on C and later Java, much later learning Scala
and Rust, I immediately liked the style of the latter two much better. Lately
I've been doing a lot of Java and get constantly annoyed at the "backwards"
order.

> _This is the problem with tech evangelism. It has the same problems as
> religions, lots of claims, no evidence._

I suppose you could argue that what I've written above is just personal
preference, but I see it as a bit stronger than that.

~~~
dntbnmpls
> Then what is? If you're looking for a randomized sampling of programmers
> with sufficient sample size, you're not going to find it here.

Evidence. Maybe a study showing programmers have a natural preference? Or
scientific evidence? Anything more convincing than "Rust evangelist"
anecdotes.

> No, but a fine example is that of C++: the difficulty in parsing means that
> if you make a typo, the error message you get might be bizarre and
> confusing.

Difficulty parsing? If it didn't parse and found an error, then it means it
didn't have any difficulty parsing. That has more to do with the complexity of
the language itself than parsing. Parsing is a very simple matter. Or maybe
the compiler for one language is better? Also, I thought we were comparing
Rust to Java?

> I would think that more-useful error messages pass your test of "make it
> easier for programmers to program".

It does, but once again all you've done is provide anecdotes without any
examples or evidence.

> Regardless, I'm in agreement: I find "name: Type = blah" much easier to
> read.

I don't. The most important part of "name: Type = blah" is the Type. So it's
nice to have it first. But then again, there are people who love dynamic
programming languages. So once again personal preferences and personal
anecdotes aren't convincing arguments.

> As someone who cut his teeth on C and later Java, much later learning Scala
> and Rust

Yeah, I too fanboy over new languages I learn. But then I get over it and move
on with my life. My guess is you just wrote toy programs in scala and rust and
nothing substantive.

> I immediately liked the style of the latter two much better. Lately I've
> been doing a lot of Java and get constantly annoyed at the "backwards"
> order.

So then use Rust? Why are you using Java?

> I suppose you could argue that what I've written above is just personal
> preference, but I see it as a bit stronger than that.

I don't have to argue it. All you've provided is personal preference. "I find
'name: Type = blah' much easier to read" is personal preference. It's no
more a convincing argument of anything than your preferring chocolate over
vanilla shows that chocolate is better than vanilla.

------
jyounker
I think the author misses the single biggest advantage of `identifier: Type`.

The moment `Type identifier` syntax encounters higher order functions and
types, you end up with messes of parenthesis. Figuring out what a type means
then involves bouncing back and forth across the type definition.

With `identifier: Type` complex higher order types still parse linearly left
to right.

It's enough of a UI issue that people will end up avoiding higher order
functions in `Type identifier` languages simply because they're a mess to
express.
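
Python's annotations happen to be `identifier: Type`, so the contrast can be sketched there. In C, "a function taking an int and returning a pointer to a function from int to int" is spelled roughly `int (*(*f)(int))(int);`, read inside-out; with postfix annotations the same shape reads left to right:

```python
from typing import Callable

# adder: (int) -> ((int) -> int), read linearly left to right,
# versus C's inside-out `int (*(*f)(int))(int);`.
def adder(n: int) -> Callable[[int], int]:
    def add(m: int) -> int:
        return n + m
    return add

add3: Callable[[int], int] = adder(3)
print(add3(4))  # 7
```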

~~~
Too
Yup, especially with structural typing as in TypeScript: when you don't have
aliases for all your type constraints, having `identifier: {complex: mess,
of: {nested: stuff}}` is easier than the other way around.

------
millimeterman
Language Design: This stuff doesn't matter that much. Focus on more important
things.

Syntax isn't unimportant, but don't waste energy on trivial matters like
these. Just pick something and people will get used to it. Focus on the
semantics of your language - that's what really matters.

~~~
mamcx
>Syntax isn't unimportant

#104#101#108#108#111,[Space]world![Space][Space][Tab][Space][Space][Tab][Space][Space][Space][LF]
[Tab][LF][Space][Space]
[Space][Space][Space][Tab][Tab][Space][Space][Tab][Space][Tab][LF]
[Tab][LF][Space][Space]
[Space][Space][Space][Tab][Tab][Space][Tab][Tab][Space][Space][LF]
[Tab][LF][Space][Space]
[Space][Space][Space][Tab][Tab][Space][Tab][Tab][Space][Space][LF]
[Tab][LF][Space][Space]
[Space][Space][Space][Tab][Tab][Space][Tab][Tab][Tab][Tab][LF]
[Tab][LF][Space][Space]
[Space][Space][Space][Tab][Space][Tab][Tab][Space][Space][LF]
[Tab][LF][Space][Space]
[Space][Space][Space][Tab][Space][Space][Space][Space][Space][LF]
[Tab][LF][Space][Space]
[Space][Space][Space][Tab][Tab][Tab][Space][Tab][Tab][Tab][LF]
[Tab][LF][Space][Space]
[Space][Space][Space][Tab][Tab][Space][Tab][Tab][Tab][Tab][LF]
[Tab][LF][Space][Space]
[Space][Space][Space][Tab][Tab][Tab][Space][Space][Tab][Space][LF]
[Tab][LF][Space][Space]
[Space][Space][Space][Tab][Tab][Space][Tab][Tab][Space][Space][LF]
[Tab][LF][Space][Space]
[Space][Space][Space][Tab][Tab][Space][Space][Tab][Space][Space][LF]
[Tab][LF][Space][Space] [LF][LF][LF]

yep, not important at all.

~~~
millimeterman
I didn't say "syntax doesn't matter, pick any ridiculous thing you want".
That's what I mean by "not unimportant", though I admit it's not exactly clear
that's what I meant. My point is that within the space of reasonable,
comprehensible syntaxes, there are no demonstrable differences worth arguing
about.

~~~
mamcx
>there are no demonstrable differences worth arguing about.

That is a big claim. It's very easy to believe (I believed it before, when I
knew just 3 or 4 programming languages; now it's more than 12).

But it's clearly false, and it's easy to prove:

    
    
       async/await
       go chan
       fn sort<T>(of:list<T>...)
       try/catch
       match
    

All of the above are just small things that have a HUGE impact on how we
develop programs. Also, in the matter of "small" stuff that could look
insignificant:

    
    
       [1, 2, 3] + 1 = [2, 3, 4]
    

This one is a huge deal in certain niches. Also, another "small" and
insignificant thing:

    
    
        SELECT ... FROM source
        source SELECT ...
    

All of these are just small things, not at all obvious at the time. Remember
how, in the days of GOTO, the idea of more specialized control flow was
unthinkable in the minds of many.

Syntax MATTERS MOST, because it is OUR interface. The space of improvement is
not super-big, true, but its impact is huge.

Also, when done correctly, it makes the semantics fit like a glove.

Another obvious example: do concurrency without syntax help (just using
threads). Or do performant, safe, concurrency-friendly, zero-GC systems
programming without what Rust and other languages have bridged.

~~~
millimeterman
I also know many languages (which is hardly some grand accomplishment) and
it’s my firm opinion that syntax MATTERS LEAST. You spend some time getting
used to it and it never really bothers you again. Semantics matter most -
syntax is just an interface to the important stuff.

The difference between Python, C++, Haskell, Common Lisp, Prolog, and SQL
isn’t syntax. If it was, everyone would pick their favorite syntax and use it
all the time. What matters is how well the semantics (and their potential
performance implications) match your problem. The syntax just needs to be a
decent enough interface to the semantics. Frankly, it seems to me like most of
your “counterexamples” are about language semantics, not syntax.

Here’s the thing. Would I like every language to have a consistent,
beautifully designed syntax backed by UX research and testing? Absolutely. But
language designers have bigger fish to fry. There’s little value in wasting
energy talking about syntax once it reaches a basic state of acceptability.

I do amend my statement - you’re right that it’s a big, unsubstantiated claim.
There are no _demonstrated_ differences. I haven’t seen an ounce of evidence
that it makes a difference beyond familiarity. Furthermore, even if it did,
that wouldn’t make it top priority. It would just make arguments about it
sensible.

~~~
mamcx
> The difference between Python, C++, Haskell, Common Lisp, Prolog, and SQL
> isn’t syntax

Ok, let's try: Do SQL without the SQL syntax.

P.S.: I don't think we are that much in disagreement ("The syntax just needs
to be a decent enough interface to the semantics"); it's that the claim
"syntax doesn't matter" makes it look like syntax is just an irrelevant aspect
of the language. How relevant can be argued, but after years in this trade: go
to the C++ community (for example), tell them to change their syntax to Lisp
syntax, and see how well that succeeds.

Syntax is 100% tied to paradigms, idioms, and such. It is intrinsic to the
language we use.

~~~
ogoffart
> Ok, let's try: Do SQL without the SQL syntax.
    
    
        Select(`my_table`, [`column_A`, `column_B`])
             .Filter(`column_C` > 53 && `column_D` == $varA)
             .Sort_by(`column_C`)
    
    

There you go: you have the exact semantics of a traditional SQL query (a 1:1
mapping) and only the syntax is different. Now, one may argue that the syntax
is "ugly", less familiar, that the ` are hard to type, or whatever, but this
is just taste. One simply gets used to it. The expressiveness and semantics
are the same as in SQL.
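
For what it's worth, a chain like the one above runs almost verbatim as a toy query builder; every name here is invented, and it only pretty-prints the SQL rather than executing anything:

```python
# Toy query builder with the same semantics as the SQL it prints.
class Select:
    def __init__(self, table, columns):
        self.table, self.columns = table, columns
        self.cond, self.order = None, None

    def filter(self, cond):
        self.cond = cond
        return self  # chainable, like the example above

    def sort_by(self, column):
        self.order = column
        return self

    def to_sql(self):
        sql = f"SELECT {', '.join(self.columns)} FROM {self.table}"
        if self.cond:
            sql += f" WHERE {self.cond}"
        if self.order:
            sql += f" ORDER BY {self.order}"
        return sql

q = Select("my_table", ["column_A", "column_B"]) \
        .filter("column_C > 53") \
        .sort_by("column_C")
print(q.to_sql())
# SELECT column_A, column_B FROM my_table WHERE column_C > 53 ORDER BY column_C
```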

> Syntax is 100% tied to paradigms, idioms, and such. Is intrinsic to the
> language we use.

I think we don't have the same definition of syntax, then. The way I
understand it, syntax is just the way to represent these idioms and paradigms
visually. What the parent is saying is that these paradigms and idioms are
what is important; the exact way they are written, not as much (as long as it
is within reason).

~~~
mamcx
I was going to talk about the SQL stuff, but I think it would be wasted as
long as we stay blind to the fact that syntax IS semantics.

However this:

> but the exact way they are written, not as much (as long as it is within
> reason)

Then what is "within reason"? Is it more logical to only have GOTO than IF?
Is it better to have ELSEIF or to nest IFs? What happens if my language says
that null is the same as Option.None? What if generics use [] and not <>?

Does whitespace matter, yes or no?

Allow Unicode?

CamelCase, snake_case, or what? What if all constants are lowercase, types
mixed-case, and the rest UPPERCASE?

For some, APL syntax makes more sense than ALGOL's.

Talking about _why_ is the point of this kind of discussion.

It is VERY easy to shrug off this kind of stuff. VERY. I WAS in that camp
before. But now I am trying to build my own language (a relational one), and
DAMN, it starts to become much clearer why syntax matters, even "the exact way
they are written", because switch _this_ to _that_ and suddenly my language is
ANOTHER paradigm (or worse, will be CONFUSED as one).

Naming is one of the hard things in computer science.

---

I understand why it is easy to dismiss this as irrelevant. Sometimes I don't
see why some people are so upset about typography and font selection, or why
my brother, a professional photographer, complains about framing in a
photograph. But go and SEE what the DESIGNERS of languages say about this
stuff and you will note that, for them, even this apparently less significant
thing matters. You can even get a prize in the field for showing the
importance of syntax
([http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pd...](http://www.eecg.toronto.edu/~jzhu/csc326/readings/iverson.pdf))!

If that means that most will not see it, GREAT! That is the mark of good
design.

------
thom
You'd have to show me some data that one is easier to work with than the
other, because fundamentally types _are_ names, and I find them just as
expressive as variable names (many of which are just named after types, lets
be honest). Even if I didn't, I don't think my brain struggles to read things
in either order (or indeed in languages where types are rarely mentioned).

~~~
adamnemecek
It's easier to parse if nothing else.

~~~
Ididntdothis
That seems to be the main problem. Otherwise it’s just something to get used
to in my view.

~~~
adamnemecek
I think it also makes more human sense. The parameter name should in some
sense be telling you more than just the type. Like "size: Size" is kind of
repetitive.

~~~
Ididntdothis
I don't know. Seems programming languages are cryptic no matter what.

~~~
adamnemecek
They are not cryptic. They are trying really hard to come up with good
syntaxes and semantics actually. I think that modern programming languages
tend to have very clean syntaxes.

------
andrewla
Another very nice thing about this is that it is much much easier to parse,
because only one kind of thing can go in each position of the phrase.
Simplicity in parsing is something that I think is underrated in language
design; the harder it is for a computer to parse, the harder it is for a human
to parse, and parsing code is 90% of the programmer's work (the other parts
being 9% debugging and 1% authoring new code).

~~~
Pfhreak
I replied elsewhere that I found the opposite to be true. So I suspect that
different people will find different styles to be easier/harder to parse.

> the harder it is for a computer to parse, the harder it is for a human to
> parse,

I don't think this is true -- assembly (or bytecode) is very easy for the
computer to parse, but much, much harder for humans to parse. English is much
easier for humans to parse, but pretty difficult for computers to parse.

------
qppo
I disagree. The syntax design should flow from the design of the language
itself and whether or not you use prefix or postfix notation for type
annotations depends heavily on what makes sense within the semantics of the
type system.

Design a language before you design a syntax.

~~~
throwanem
Granted that the pathological case of the error you warn against is Perl, and
that should be enough of a cautionary tale for anyone. But a language is a
user interface for programmers, too. Some affordance is merited, especially in
a case like this where prefix vs. postfix may affect ease of parsing, but
seems most unlikely to influence how the type system actually behaves.

~~~
qppo
You're right, I just don't care for the author's notes on language design
because they're all on syntax design, which is an impossible task to do in
morsels without knowing anything about the rest of the language or how it is
supposed to work.

I do prefer postfix because I think it flows very nicely "this is-a thing
assigned-to that" is nicer than "thing called this assigned-to that."

In terms of the impact on the language, optional postfix annotation makes it a
bit trickier if you want to make the identifier optional, and in languages
that support it you tend to see special syntax to deal with that case (which
breaks the author's fetish for self-consistency).

Personally I think ordering of the trio of "alias" "thing" "value" should be
consistent across the language, which extends far past variable assignment,
and any one of the trio can be left out.

~~~
disconcision
What are examples of language semantics which are better served by pre/postfix
type annotations? Also, what exactly do you mean by making the identifier
optional?

------
marmada
A benefit of ident: Type is that it allows you to express complex anonymous
types.

Example from Typescript:

    
        const foo: 'A' | 'B' | 'C' = 'A'
    

Which states that foo must belong to the given union type. How would this look
in a Type ident language?

    
        const 'A' | 'B' | 'C' foo
    

That doesn't seem right. There's no clear barrier between the type and the
identifier name.

Here's another contrived example:

    
        const foo: () => Promise<void> = async (x) => console.log(x)
    

Here foo is of type "() => Promise<void>". How would this look in a Type ident
language?

    
        const () => Promise<void> foo = async (x) => console.log(x)
    

To me, this is unclear because it is hard to tell where the type ends and the
actual function begins.

Last example.

    
        const foo: { [string]: number} = {"hello": 3}
    

I believe says that foo is an object with string keys and number values.

What does this look like in a Type ident language?

    
        const { [string]: number} foo = {"hello": 3}
    

I think all of the Type ident examples are more confusing because it's hard
to tell where the type ends and the name begins (this is most clear in the
first example). This probably makes syntax highlighting worse, parsing more
complicated, and is tougher on the user. With ident: Type, it is very clear
that the type starts after the ":" and ends before the "=" sign.

------
palerdot
Language Design Notes on Rust [0] from the same blog looks interesting too ...

[0] - [https://soc.me/languages/notes-on-
rust.html](https://soc.me/languages/notes-on-rust.html)

~~~
echelon
This is an interesting list.

Some of the things have been addressed (`extern crate`).

Many of the issues I disagree with: `Buf` is strictly better than `Buffer`
(less typing, like `fn`). I have no issue with mixing
`CamelCase::snake_methods`, and actually find it to be quite beautiful. The
good parts of being Pythonic.

I would like to see the alternatives to turbofish. What exactly is the author
suggesting? And what's wrong with `println!` and `format!` ? It isn't
articulated.

`[]` misuse is bad, semicolons aren't consistent, `PathBuf` is inconsistently
named, etc. Agree. `io::Result`, ...

Maybe there will be some cleanup in a future language edition.

------
pansa2
> This means that the vertical offset of names stays consistent

This is also an argument for using keywords of the same length for introducing
a variable and a constant. If that’s desirable, it rules out the obvious
choices `var` and `const`.

Possibilities include `var` and `val`, which may be too similar-looking, and
`var` and `let` - but are people used to (from JavaScript) `let` being
mutable? Any other options?

~~~
Someone
“Let” is mutable in Basic, too, but the part of the population that is used to
that is shrinking.

As to short, equal length options for ‘let’ and ‘val’: one could consider
using punctuation. Forth uses colons instead of ‘fun’, and I think, in a
concise language, one could get used to using, say, ‘!’ for immutable and ‘~’
for mutable. Unfortunately, they aren’t easy to type. An alternative could be
to always assume immutability and only use ~ in the rare cases where one needs
to mutate.

So, a simple

    
    
      foo = 3
    

or, if one wants to simplify parsing:

    
    
      = foo 3
    

introduces a new immutable variable, and

    
    
      ~ foo = 3
    

or

    
    
      ~ foo 3
    

a mutable one. If we allow leaving out spaces:

    
    
      ~foo 3
    

that starts to look like using sigils to indicate mutable state. I think that
might be a good option in a mostly immutable language.

I think I would use Forth's colon instead of '='. That would make '='
available for equality testing, allowing us to get rid of '=='.

------
Ono-Sendai
One downside of the 'ident: Type' approach is the extra colon character.

The major downside of the 'Type ident' approach, is that if 'Type' is
optional, then the parser can't be sure if it's parsing the 'Type' or the
'ident' when encountering the first token. In practice this isn't too hard to
solve, however: it can be handled with some backtracking.
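
That backtracking can be sketched in a few lines; the grammar here (`[Type] ident = value`, one token per word) is invented purely to show the ambiguity on the first token:

```python
# Toy parser for `[Type] ident = value`: on the first token we can't
# know whether it is the type or the identifier, so we look ahead and
# back up if the guess was wrong.
def parse_decl(tokens):
    maybe_type = tokens[0]
    if len(tokens) > 1 and tokens[1] != "=":
        # A second non-'=' token follows, so the first was a type.
        return (maybe_type, tokens[1], tokens[3])
    # Backtrack: the first token was the identifier after all.
    return (None, tokens[0], tokens[2])

print(parse_decl(["int", "x", "=", "5"]))  # ('int', 'x', '5')
print(parse_decl(["x", "=", "5"]))         # (None, 'x', '5')
```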

In my language, Winter, I have chosen the 'Type ident' approach, mostly due
to its similarity with C, C++ and Java. I do sometimes wonder if I made the right
choice however. Maybe it could be an option? :)
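The ambiguity with an optional 'Type' can be sketched with a toy recursive-descent parser (a hypothetical illustration in TypeScript, not Winter's actual grammar):

```typescript
// Toy grammar: decl := [Type] ident ";" — the type is optional.
// On the first identifier token the parser can't yet know whether it
// is the type or the name; it speculates and backtracks if needed.
type Decl = { typeName: string | null; ident: string };

function parseDecl(tokens: string[]): Decl {
  // Speculate: "Type ident ;" needs two identifiers before the ";".
  if (tokens.length === 3 && tokens[2] === ";") {
    return { typeName: tokens[0], ident: tokens[1] };
  }
  // Backtrack: reinterpret the sole identifier as the name.
  if (tokens.length === 2 && tokens[1] === ";") {
    return { typeName: null, ident: tokens[0] };
  }
  throw new Error("parse error");
}

// parseDecl(["int", "x", ";"]) → { typeName: "int", ident: "x" }
// parseDecl(["x", ";"])        → { typeName: null,  ident: "x" }
```

With 'ident: Type' the colon removes the speculation: the first token is always the name.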

------
tlbsofware
I’m surprised this didn’t touch on IDE autocompletion of variable names. In
Java you would have something like `LocationBuilder locationBuilder`, which
lets users just tab-complete the variable name to quickly have access to a
variable. The argument in this article was about names being prioritized, and
I think removing auto-completion of variable names would force the developer
to be slightly more descriptive than a camel-cased copy of the class name.

~~~
mr_tristan
In Kotlin, IntelliJ has no problem with this. As you type a new value: `val
id`, and you have `IdentName` defined in scope, the value `identName` is
suggested automatically.

Not all IDEs are the same, though, and I'm not sure how sophisticated this
feature was to implement.

------
geofft
> _The ident: Type syntax let’s developers focus on the name by placing it
> ahead of its type annotation._

If this were true, we'd have to conclude that speakers of name-then-honorific
languages like Japanese ("Graham-san") are better at remembering and focusing
on people's names than speakers of honorific-then-name languages like English
("Mr. Graham.")

But there's no evidence of that, is there?

------
melolife
The most important result of this design is that the syntax unambiguously
determines whether you are referencing the type or value axis, and enables you
to split them accordingly. Having worked with Scala and then been forced to
return to a C-style language, I'd say this is probably one of Scala's most
overlooked features.
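TypeScript, another name-then-type language, shows the same split concretely (a minimal sketch, separate from the Scala feature the comment describes): the same name can live on both axes, and position alone decides which one is meant.

```typescript
// Type axis and value axis are separate namespaces; the position of a
// name (after a ':' vs. inside an expression) decides which is read.
interface Point { x: number; y: number }            // type axis
const Point = (x: number, y: number): Point =>      // value axis
  ({ x, y });

const origin: Point = Point(0, 0);  // after ':' → type; call → value
```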

------
js8
One additional reason why it is beneficial is that you can then naturally
extend typing to any expression, not just identifiers. This can help type
inference (and also serve as documentation), which is (IMHO) a must in a
modern programming language.
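One concrete form of this idea (an illustration, not necessarily what js8 had in mind) is TypeScript's `satisfies` operator, which checks an arbitrary expression against a type while letting inference keep the narrower inferred type:

```typescript
// Postfix type syntax extends from declarations to expressions:
// `satisfies` (TypeScript 4.9+) checks the expression against RGB
// without widening the inferred literal tuple type.
type RGB = [number, number, number];

const red = [255, 0, 0] satisfies RGB;  // checked at compile time
// const bad = [255, 0] satisfies RGB;  // would be a compile error
```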

------
malwarebytess
Seems a solution in search of a problem. Worse, I think it creates visual
clutter.

------
IshKebab
Yeah I'm pretty sure the real reason for this is that it is way easier to
parse types if they are after the name. Especially complex ones like
functions.
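A trailing-type signature reads left-to-right even when nested, which is a large part of why it parses more simply; a small TypeScript sketch (the C line in the comment is for comparison only):

```typescript
// With the type after the name, even a higher-order signature reads
// left-to-right: "apply takes a function from number to number and a
// number, and returns a number".
const apply: (f: (x: number) => number, x: number) => number =
  (f, x) => f(x);

// The C equivalent buries the name inside the declarator:
//   int apply(int (*f)(int), int x);
```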

------
dirtydroog
> 1. Names are more important than types

I can't really agree with this at all. With type aliasing the new typename can
render the variable name pretty much redundant.

------
MayorMonty
The first point is the most appealing to my brain at least. Type inference is
a really useful feature (when paired with a nice IDE) and having a single
standardized prefix to declare variables regardless of what type it is can
help the mental model. This is especially true with more complex, non-obvious
types, where you may not know exactly what type you have without the hint from
your environment.

------
scriptproof
If consistency is so important, why do we have: function, func, fun, fn, def,
etc... depending on the author? For clarity, use "function", for simplicity use
"fn", other forms are just fancy.

~~~
tharax
If consistency is so important, why do we have: function, func, fun, fn, def,
etc... depending on the author? For clarity, use "function", for simplicity use
"func", other forms are just fancy.

"This is the standard we should all adopt!"
[https://xkcd.com/927/](https://xkcd.com/927/)

------
gherkinnn
Rob Pike talks about this in more depth on the Go blog [0]

0 - [https://blog.golang.org/declaration-syntax](https://blog.golang.org/declaration-syntax)

------
strictnein
Sorry, but this article starts off with an excellent example of why this is
horrible:

    
    
       val x: String = "hello"
       String x = "hello"
    

The first line reads: "value x is of type String and contains hello"

The second line reads: "String x contains hello"

val and : are fluff and add nothing. Arguments about it being tougher to parse
would have some merit if this wasn't all figured out almost 50 years ago.

------
SamReidHughes
Even better: Use ‘ident Type’.

------
TOGoS
> The ident: Type syntax let’s developers focus on the name by placing it
> ahead of its type annotation.

I agree with the sentiment, but that apostrophe is bugging me.

