
The same is generally true for things written in Go, Java, C#, etc. Strong typing and memory safety eliminate huge classes of common bugs.

Do you mean static typing?

The distinction in the vernacular between statically typed and strongly typed languages is so narrow at this point that pointing it out is a bit pedantic.

I would say Rust is both a strongly typed language and a statically typed language. The static type checking happens at compile time, and in general the types in use are strict and remain strongly typed at runtime.

But, even Rust allows you to cast types from one to another and use dynamic types determined at runtime.
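A minimal sketch of both escape hatches: `as` performs conversions only when explicitly requested, and trait objects defer the concrete type to runtime.

```rust
fn main() {
    // Explicit numeric cast: Rust requires `as`; it never converts silently.
    let x: u64 = 300;
    assert_eq!(x as u8, 44); // truncates modulo 256, but only because we asked

    // Trait objects: the concrete type behind `dyn` is determined at runtime.
    let values: Vec<Box<dyn std::fmt::Display>> = vec![Box::new(1), Box::new("two")];
    for v in &values {
        println!("{v}");
    }
}
```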

Yes, most people would say that static typing is the primary advantage you get from the compiler in Rust.

It wasn't intended as a drive-by pedantic swipe - I was genuinely curious whether OP meant strong or static. Conversations about type systems and application correctness are exactly the place where precise definitions are welcome, though I understand the two are often conflated. The distinction can be relevant: if we're discussing static typing specifically, for example, then something like TypeScript becomes useful.

There's a great writeup by one of the C# people (I want to say it was Erik Meijer, but I'm having a hard time finding it atm) about the distinctions we're discussing here, their relevance to correctness, and the impact on ergonomics. My takeaway from it was that occasionally you will encounter problems that are easier to solve with some freedom and that's why strong/static languages like C#/Rust include pragmatic escape hatches like the dynamic object and the Any trait (respectively).
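As a rough illustration of the Rust side of that escape hatch (my own sketch, not from the writeup), the `Any` trait lets a function accept any `'static` type and recover the concrete one at runtime, loosely mirroring what C#'s `dynamic` permits:

```rust
use std::any::Any;

// Accepts any 'static type and inspects the concrete type at runtime.
fn describe(value: &dyn Any) -> &'static str {
    if value.is::<i64>() {
        "an i64"
    } else if value.is::<String>() {
        "a String"
    } else {
        "something else"
    }
}

fn main() {
    assert_eq!(describe(&42i64), "an i64");
    assert_eq!(describe(&"hi".to_string()), "a String");
    assert_eq!(describe(&3.14f64), "something else");
}
```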

I think that's fair.

If you do find that writeup, I'd be interested in reading it.

I think I meant static and strongly typed.

> The distinction in the vernacular between statically typed and strongly typed languages is so narrow at this point that pointing it out is a bit pedantic.

I'm not sure why you think this has changed today, or what you mean. It appears to me that many programmers don't realize that the two are orthogonal, so I find it an important, not pedantic, distinction (it just happens that languages generally improve on both fronts over time, hence asking for one also gives you the other, but that's correlation, not causation). I'll make an attempt at describing it here, please tell me if I'm missing something.

Strong typing means that types describe data in a way that prevents the data from accidentally being treated as something other than what it represents. E.g.

(a) take bytes representing data of one type and interpret them as data of another type (weak: C; strong: most other languages)

(b) take a string and interpret it as a number without explicitly requesting it (weak: shells, Perl, JavaScript; strong: Python, Ruby)

(c) structs / objects / other kinds of buckets (strong: C if the type wasn't cast; weak: using arrays or hash maps without a separate enforced type on them, the norm in many scripting languages, although usually strengthened via accessor methods that are automatically dispatched via some kind of type tag; also, duck typing is weaker than explicit interfaces)

(d) describe data not just as bare strings or numbers, but wrap (or tag / typedef etc.) those in a type that describes what it represents (this depends on the programmer, not the language)

(e) a request of an element that is not part of an array / list / map etc. is treated as an error (similar to or same as a type error (e.g. length can be treated as being part of the type)) instead of returning wrong data (unrelated memory, or a null value which can be conflated with valid value)
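To make (b) and (e) concrete in one strongly typed language (using Rust here purely as an illustration):

```rust
fn main() {
    // (b): a string never silently becomes a number; parsing is explicit.
    let n: i32 = "5".parse().unwrap();
    assert_eq!(n + 1, 6);

    // (e): requesting a missing element is an error, not wrong data.
    let xs = vec![10, 20, 30];
    assert_eq!(xs.get(5), None); // checked access; xs[5] would panic instead
}
```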

These type (or data) checks can happen at runtime ("dynamically") or at compile time ("statically"). The better a static type system is, the more of these checks can be done at compile time.

For security and correctness, having strong typing is enough in principle: enforcing type checks at runtime just means getting a failure (denial of service), and systems should be designed not to become insecure or incorrect when such failures happen (fail closed), which of course might be done incorrectly [1]. Testing can make the potential for such failures obvious early (especially randomized tests in the style of quickcheck).

Static typing makes the potential for such failures obvious at compile time. It's thus a feature that ensures freedom from denial of service even in the absence of exhaustive testing. It can also be a productivity feature (static inspection/changes via IDE), and it can enable more extensive use of typing, as there is no cost at run time.

[1] note that since static type systems usually still allow out-of-memory failures at run time, there's usually no way around designing systems to fail closed anyway.

It's not that I disagree with any of your points. I think they're all valid. I do think most people use the term strongly typed when they mean statically -and- strongly typed.

I personally don't fret about it unless we get into specific details about these notions. I do especially like your (d), which many people overlook when designing programs. An example would be using a String as the Id in a DB, but not wrapping the String in a stronger type to represent the Id, thus forgoing the advantage of static type checking by the compiler. So there are definitely areas where this conversation can lead to a better appreciation of the advantages of different languages.

For example, in Rust declaring a type as `struct Id(String);` adds no memory overhead to the Id compared to a plain String. Not all languages can say that, so we could also get into a fun conversation about the overhead associated with the type system itself. All fun topics.
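A quick sketch of that newtype pattern (the `Id` wrapper and `find_user` are hypothetical names for illustration):

```rust
use std::mem::size_of;

// Hypothetical newtype wrapping a String-backed database id.
struct Id(String);

fn find_user(id: &Id) -> bool {
    // The compiler now prevents passing a bare String here by accident.
    !id.0.is_empty()
}

fn main() {
    // The wrapper adds no memory overhead: same size as a plain String.
    assert_eq!(size_of::<Id>(), size_of::<String>());

    let id = Id("user-42".to_string());
    assert!(find_user(&id));
}
```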

Thanks for your reply. I fully agree.
