
Talking about “types” - jashkenas
http://www.cl.cam.ac.uk/~srk31/blog/2014/10/07
======
tikhonj
This whole article is a reflection of a belief that I really don't like but is
all too common these days. It's the idea that nothing is better than anything
else, everything is a trade-off, and strong positions are inherently wrong.

I don't buy it.

It's a tacit assumption that, somehow, everything is on the Pareto frontier.
Profoundly implausible. In the background, it's basically a hybrid appeal to
moderation (all extreme positions are wrong) and popularity (if so many people
are doing something, it can't be wrong). Never mind the fact that popularity
is almost meaningless, driven by fashion more than anything, and that what's
"extreme" is arbitrary and largely driven by popularity itself...

This is how we get "worse is better"; this is how we get standards that are
compromises ideal for nobody but still widely adopted because they're standard
or because that seems like the thing to do or because that's what everyone
else is doing or because "engineering".

The worst part is that this all goes unsaid, assumed correct by default. Just
the fact that we're having the discussion the article wants, and not this
meta-discussion, implicitly elevates its premise!

Some things are better than others. A strong debate between strong positions
is healthier than an artificial social force reverting everyone back to the mean.

Now, all this is not to say the usual message about types is perfect. It could
definitely be improved in many different ways. But not by muzzling, weakening
or qualifying: not by most of the argument's suggestions.

Especially ironic are the rhetorical techniques the author uses to argue
against rhetoric. The one I personally find the most grating—beyond the
ambient implication that everything is equal that I've just written about—is
establishing cheap credibility by aligning himself with the camp he's arguing
against. "I like X, but..." Or, in this case, "I use OCaml, but..."

Then again, my pointing this out is probably ironic in the same way. Or is it?

~~~
skybrian
It seems more unrealistic to assume everyone agrees on the same cost-benefit
function. We are different people and value different things. So in any
attempt at a proof involving what should be valued more, you have to state
your assumptions and they're generally debatable.

~~~
cantankerous
There's also the matter of refusing to agree to the same cost-benefit function
because it commits you to a standard that you can't back away from. I mean,
look at the global warming debate, for example.

------
vilhelm_s
Certainly very sensible points. But I take some issue with the bits towards
the end:

> Fetishising Curry-Howard

Hey, I happen to find Curry-Howard sexy, you have no right to shame me!

> To specify and check liveness or reachability, we need to represent programs
> as some kind of flow graph, where flows are very likely to span multiple
> functions and multiple modules.

But no, this is exactly what basing systems on Curry-Howard (with terminating
functions) buys you. If I give an HTTP response handler in Coq the type

    
    
        HttpRequest -> HttpResponse
    

I _know_ it is live, because the function is total. And similarly, dead code
paths can be gotten rid of by eliminating a proof of False.
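Here's a minimal Haskell sketch of what that looks like (the request/response
types are placeholders I've made up; Haskell spells the empty proposition Void):

        import Data.Void (Void, absurd)
    
        data HttpRequest  = HttpRequest String
        data HttpResponse = HttpResponse String
    
        -- A branch that demands a value of the empty type can never be
        -- reached, so `absurd` discharges it: the dead path needs no code.
        handle :: Either Void HttpRequest -> HttpResponse
        handle (Left impossible)          = absurd impossible
        handle (Right (HttpRequest body)) = HttpResponse body
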

So this lets you state liveness and unreachability. Of course, the _proofs_ of
those statements are very different from program-analysis-style systems. In a
whole-program analysis, the analyser goes over the entire program, and at the
end it says "well, I could not find anything calling this function, so I guess
it's dead". In a Curry-Howard-style system, you need to look at the function
and figure out when it should be called and what preconditions need to be true
for it to work properly. But this is exactly why people like type systems: you
can think about programs modularly rather than all at once.

~~~
skybrian
I suppose, but outside certain mathematical systems that are designed that
way, assuming that all functions are total is unrealistic. The languages we
use every day are Turing-complete, and infinite loops can happen.

~~~
vilhelm_s
You don't have to assume everything is total; you just need to track in the
type system which things are and which aren't. E.g. you could also give it the
type

    
    
        HttpRequest -> IO HttpResponse
    

where IO is some type allowing nontermination (e.g. defined coinductively
along the lines of
[http://www.cs.swan.ac.uk/~csetzer/slides/goeteborg2009AgdaIn...](http://www.cs.swan.ac.uk/~csetzer/slides/goeteborg2009AgdaIntensiveMeeting.pdf)).

Having a type for total functions gives you the _option_ to state and prove
totality properties, even if you maybe don't want to go through the effort of
doing that for every function you write.

~~~
chriswarbo
You don't need to go all the way to IO; you can make every function total
using Partial:

    
    
        data Partial a = Now a | Later (Partial a)
    

It's straightforward to do things in Partial, since it's a Functor,
Applicative and Monad:

    
    
        instance Functor Partial where
            fmap f (Now x) = Now (f x)
            fmap f (Later x) = Later (fmap f x)
    
        instance Applicative Partial where
            pure x = Now x
            (Now f) <*> (Now x) = Now (f x)
            (Later f) <*> x = Later (f <*> x)
            f <*> (Later x) = Later (f <*> x)
    
        instance Monad Partial where
            return = pure
            (Now x)   >>= f = f x
            (Later x) >>= f = Later (x >>= f)
    

Partial is so trivial that everything can be made co-recursive by wrapping all
function bodies in "Later". All of the "Later" wrappers will get nested
together by the Applicative/Monad; everything will "just work". The one
missing piece is the following function, which isn't total:

    
    
        runPartial :: Partial a -> a
        runPartial (Now x) = x
        runPartial (Later x) = runPartial x
    

We could name it `unsafeForcePartial` and treat it with care, like we
currently do with `unsafePerformIO`.
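
Alternatively, a bounded runner stays total by giving up after a fixed number
of steps, roughly along these lines (just a sketch):

        -- Run a Partial computation for at most n steps; total, because the
        -- step budget strictly decreases at every Later.
        runFor :: Int -> Partial a -> Maybe a
        runFor _ (Now x)           = Just x
        runFor n (Later x) | n > 0 = runFor (n - 1) x
        runFor _ _                 = Nothing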

~~~
throwaway283719
Can you explain how this makes every function total? For example, take the
`head` function which is not total

    
    
        head :: [a] -> a
        head (x:xs) = x
        head []     = undefined
    

How can you make this total using `Partial`? I can get as far as

    
    
        headTotal :: [a] -> Partial a
        headTotal (x:xs) = Now x
        headTotal []     = Later (...)
    

but what goes in (...)?

~~~
tome
I guess you need a special definition

    
    
        diverge = Later diverge
        headTotal [] = diverge

~~~
chriswarbo
Yep, although it doesn't actually diverge; that's the point ;)

The classic "loop" function, which is completely polymorphic, does diverge:

    
    
        loop :: a
        loop = loop
    

"loop" will never return a value, since to perform one 'step', it must perform
an 'infinite' amount of computation.

The function you've written doesn't diverge, since it will immediately return
a value "Later x" after one step. As long as we're using laziness, it doesn't
matter that "x" itself is 'infinite':

    
    
        loop' :: Partial a
        loop' = Later loop' 
    

Note that we can do an equivalent thing in strict languages, by eta-expanding
our definitions into thunks:

    
    
        data Partial' a = Now a | Later (() -> Partial' a)
    
        loop'' :: () -> Partial' a
        loop'' _ = Later loop''
    

We can use "loop :: a" in place of "undefined :: a", since they have the same
type. We can't use "loop' :: Partial a" in the same way, since its type is
different.

"loop' :: Partial a" is a bit like a 'phantom type', where the "a" in its type
signature never actually occurs in the value. In fact, we can define a type
which doesn't contain "Now", and is thus _guaranteed_ to never terminate:

    
    
        data Inf a = Inf (Inf a)
    

Of course, the "a" in "Inf a" is a true phantom: it doesn't affect the values
at all. We can get rid of it to obtain:

    
    
        data Loop = Loop Loop
    

We can use this type to "drive" main loops for servers, operating systems,
games, etc. in a total language:

    
    
        server :: Loop -> IO Loop
        server (Loop x) = do req <- readRequest
                             sendResponse (process req)
                             Loop <$> server x
    

"Loop" is equivalent to "Stream ()", where:

    
    
        data Stream a = Cons a (Stream a)
    

Of course, "Stream a" is to "[a]" what "Inf a" is to "Partial a": it's just
missing the base-case ("[]" and "Now a", respectively).

Interestingly, we can think of "Partial a" as being like "[a]", except that it
stores elements in "[]" rather than ":". Alternatively, we can think of
"Partial a" as being like "Nat" (the Peano numbers), which uses an element of
"a" in place of "Zero".

------
colanderman
This article says many things that need to be said, but I take issue with
this:

> _No sane person can argue that we want to bifurcate a programming language
> into distinct base-level and type-level fragments._

Well, I must be insane then. I much prefer writing type expressions in a
language free from, say, mutation and iteration.

Restricting languages aids not only the computers which analyze programs, but
also the humans who create programs. It's much easier to remember, say, "I'm
not allowed to mutate anything; there's no syntax for it", than, "I can use
mutation, but if I use it in such-and-such a way, then the compiler won't be able
to statically determine the type, unless I have version 4.2.3 or later".

~~~
imanaccount247
I think most of the article is as bad as the part you are criticizing. The
part you refer to is just particularly bad in its presentation. Making
unsupported claims while declaring oneself correct and anyone who disagrees
insane is neither helpful nor productive.

~~~
ubernostrum
There is precedent that these types of splits don't make things as safe as
advocates believe. Amusingly, the precedent is part of the foundational bits
of modern theoretical CS :)

~~~
colanderman
Care to explain? Keep in mind I'm not talking about safety; I'm talking about
usability.

~~~
ubernostrum
The short version is to look at the early twentieth-century work on formal
systems, languages and metalanguages, etc., and then notice how those
particular sandcastles got kicked over by the incompleteness theorems.

By the way, personally I tend to look at type systems in terms of the
tradeoffs they make. It certainly seems to me like the diminishing-returns
point -- where writing verifiable code within a "more expressive" type system
becomes more work than writing unverifiable code in a "less expressive" system
and dealing with any bugs after the fact -- has already been passed by some of
the popular FP languages.

~~~
colanderman
I think you misunderstand me. I'm not talking about computation languages. I'm
talking about _type languages_ (as was the OP). As in, the grammatical
construction in which types are expressed. For example, OCaml has a separate
type language (the module system) from its computation language. Languages
like Python and Elixir do not (you can define classes as part of a
computation).

The OP is advocating that the type language be one and the same as the language
in which computation is performed. I'm suggesting that this leads to usability
issues, as it's not clear from the syntax what type expressions are and are
not usable by the compiler to check/optimize/etc. the value-level computation.

This is entirely orthogonal to the point you seem to be making about
expressive vs. inexpressive type systems.

------
StefanKarpinski
Bravo. This is the best thing I've read anywhere about the general issue of
types, both static and dynamic. It is even-handed, acknowledging the benefits
and limitations of both static and dynamic languages, and incredibly well-
informed. I now have something to send to the next person who threatens to
send me a copy of PFPL.

~~~
tome
But unfortunately it doesn't say anything _concrete_ about what the benefits
and limitations of static and dynamic languages are.

~~~
yogthos
That's because we don't actually have any concrete evidence one way or the
other. This is the biggest problem with the whole static/dynamic debate.

Until there are studies that clearly demonstrate that projects written in
statically typed languages have statistically fewer defects, are developed
faster, or are easier to maintain, it's all anecdotal evidence.

There are tons of large real-world projects developed in both static and
dynamic languages, and there's very little indication that projects developed in
dynamic languages have significantly more defects or that they're more
difficult to maintain.

In fact, some of the largest and most robust applications are written in
Erlang. See the paper from Joe Armstrong as an example
([https://www.sics.se/~joe/thesis/armstrong_thesis_2003.pdf](https://www.sics.se/~joe/thesis/armstrong_thesis_2003.pdf)).

~~~
mpweiher
> That's because we don't actually have any concrete evidence one way or the
> other.

We do have some evidence that if there are differences in _safety_, they
appear to be less than overwhelming, see
[http://blog.metaobject.com/2014/06/the-safyness-of-static-
ty...](http://blog.metaobject.com/2014/06/the-safyness-of-static-typing.html)

One can debate how viable the evidence is, but at least it is some evidence.

However, there does appear to be some evidence that static typing helps with
program comprehension/navigation, for example:
[http://dl.acm.org/citation.cfm?id=2568299](http://dl.acm.org/citation.cfm?id=2568299)

Again, not overwhelming, but some evidence.

~~~
yogthos
That's exactly my point: the best anybody has managed to say is that there may be
some slight difference. Even then it's difficult to draw any sweeping
conclusions since there's a lot of variation between languages in each family
as well.

------
emotionalcode
> Now that I've ranted my heart out, here's a coda that attempts to be
> constructive. As I mentioned, part of the problem is that we equivocate when
> talking about “types”. Instead of saying “type” or “type system”, I
> encourage anyone to check whether anything from the following alternative
> vocabulary will communicate the intended meaning. Deciding which term is
> appropriate is a good mental exercise for figuring out what meaning is
> actually intended.

> * data abstraction

> * data type

> * predicate, proposition

> * proof, proof system

> * interface specification [language]

> * [compile-time] checker

> * specification, verification

> * invariant (the thing that “types” are usually specifying!)

I'd add precondition, postcondition, and state. I also think the space in
between matters: map, transformer, relation, command. Formalization is
tricky, from my understanding.

------
mej10
"Rich Hickey's transducers talk gave a nice example of how patterns of
polymorphism which humans find comprehensible can easily become extremely hard
to capture precisely in a logic."

I haven't watched his talk, but Rich Hickey also commented on /r/Haskell with
the types. It doesn't seem like it was that difficult to capture. There are
also other comments that approach the types from different angles and achieve
some success. Maybe this isn't a good example for this thesis?

[http://www.reddit.com/r/haskell/comments/2cv6l4/clojures_tra...](http://www.reddit.com/r/haskell/comments/2cv6l4/clojures_transducers_are_perverse_lenses/cjjyay7)

~~~
swannodette
I don't think anything in that Reddit thread was conclusive, nor did they
capture all of their properties (early termination, cleanup step).

~~~
tel
Those, I believe, are fairly easy to capture in the `Moore b r -> Moore a r`
formulation. It also captures linearity.

That said, I think the more general formulation is something like Fold or
Traversal in Lens which operates polymorphically over the whole thing with a
type that is a refinement of

    
    
        Profunctor p => p a b -> p s t
    

where such a type generalizes all of the concrete Haskell formulations.
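
As a rough sketch of the `Moore b r -> Moore a r` style (this Moore is the
usual one-step machine, not any particular library's type), a mapping
transducer looks like:

        data Moore a b = Moore b (a -> Moore a b)
    
        -- "map f" as a machine transformer: feed each input through f
        -- before handing it to the underlying machine.
        mapping :: (a -> b) -> Moore b r -> Moore a r
        mapping f (Moore r step) = Moore r (mapping f . step . f)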

------
chrismorgan
In my defence, on the question of header typing, I did mention that these were
not things that couldn’t be done in dynamically typed languages, but rather
that they were more dangerous in such languages, that it was the assurance of
correctness that languages like Rust get you. A strongly typed header scheme
is absolutely possible in Python, but the risks are considerably higher.

I certainly agree that there were a few points that I didn’t substantiate or
provide sufficient evidence for—I was deliberately intending to be broad and
wanting people to think about these things, rather than going down rabbit
holes. I’m afraid that half an hour really isn’t long to talk about such
things in detail.

------
porker
I posted this 4 hours before jashkenas
([https://news.ycombinator.com/item?id=8420560](https://news.ycombinator.com/item?id=8420560))
- why the duplicate?

~~~
jashkenas
Because the /newest page is fortune's fickle wheel, where worthy essays like
this one are lost, more frequently than not, in the flood of spammy chaff.

I thought it was worthwhile, so I submitted it again. We got lucky this time
;)

------
mercurial
I take issue with the article regarding "Pretending that syntactic
overhead is the issue". Or rather, I would say it's an incomplete way of
looking at it. Non-statically typed languages (I'm thinking reasonably full-
featured languages like Perl, Python or Ruby, not Bash) are, IMHO, go-to
languages for certain tasks because they allow you to get results quickly.
How?

* Less typing (there is freedom in being able to cobble together a reasonably complex program in a single file of a few hundred lines in an editor, vs. a number of files and thousands of lines in an IDE).

* Advanced features! It is only now that closures, first-class functions, etc. are starting to trickle down to mainstream, statically typed languages. And they do make for shorter programs (see, e.g., the post on expired links on HN).

Of course, this is not to say that non-statically typed languages don't suffer
from a number of drawbacks, but dismissing conciseness as something trivial
does not, I think, accurately reflect the reasons why people use this kind of
language.

------
cbd1984
For reference, here's the difference: Strong typing ensures all results are
well-defined, as the result of a specification, and it uses type information
to do that. Weak typing throws away type information, so it can no longer
uphold such specifications.

Note that auto-conversion and error on mismatch are both evidence of strong
typing in action, because both behaviors rely on type information in order to
occur.

------
ericssmith
Sinner here.

But I could only identify with four of the seven. Mostly because I couldn't
make sense of his rhetoric for the other three.

> I imagine that my passions are still rather thinly veiled in places

You got that right. But I don't deny you your right to be as passionate about
it as you like. It's a debate after all.

------
kazinator
> _If somebody stays away from typed languages, ..._

These people who stay away from typed languages, who are they and what are
they using?

Shell? Awk? Assembler?

When is a language untyped: is it when you can misuse an operand by applying
the wrong operation, with no diagnosis and strange, non-portable consequences?
Or is when the character string "1.3" can be used as an arithmetic operand,
automatically becoming a floating-point approximation of "1.3"? (If that is
"untyped", how is it that "1.3" is identified as a character string, and how
is the context identified as requiring an arithmetic operand?)

Or is it untyped when wrong-type operands do something unpredictable, but
dumb?

Bash:

    
    
        $ string=abcd
        $ echo $(( string + 3 ))
        3
        $ number=4
        $ echo $(( number + 3 ))
        7
    

That _must_ be untyped. No?

~~~
freyrs3
> When is a language untyped

There's the formal definition of type and there's the colloquial definition of
type. A type in the context of the article means a formal type: it is a
syntactic classifier that is part of the static semantics of the language. The
properties you describe (string/number exceptions) are part of the dynamics
of the language, which is a separate concept. Dynamically typed languages
quite often use boxed values with attached runtime tags, which are
colloquially referred to as "types" even though they have nothing to do with
static types.

If you read the article in the context of "type" meaning "static type" it will
make more sense.

~~~
kazinator
Does that mean I can search and replace every occurrence of "type" in the
article with "static type"?

Are you sure dynamic types have nothing to do with static types? What about
compilers for dynamic languages which reason about type, like that if the
program calls (symbol-name x), it is inferred that x must hold a string, so
that a subsequent (cos x) is inconsistent? Is such a compiler reasoning
about something which has nothing to do with static types?

~~~
chriswarbo
> What about compilers for dynamic languages which reason about type, like
> that if the program calls (symbol-name x), it is inferred that x must hold a
> string, so that a subsequent (cos x) is inconsistent?

That's irrelevant; if the program containing (symbol-name x) and (cos x) works
in an interpreter but not in the compiler then the compiler is broken. Note
that throwing-exceptions/triggering-errors/etc. is "working", since it's a
valid form of control flow which the program could intercept and carry on.
Aborting immediately (like a segfault) isn't "working". Triggering undefined
behaviour also isn't "working", but since most of these languages lack any
real _defined_ behaviour, it's assumed that "whatever the official interpreter
does" is correct.

Usually, these compilers completely acknowledge that their behaviour is
incorrect WRT the interpreter. They embrace this and define a new, static
language which just-so-happens to share a lot of syntax and semantics with
that dynamic language. For example, HipHop for PHP could not compile PHP; it
could only compile a static language which looks like PHP. Likewise for
ShedSkin and Python. PyPy even gives its static language a name: RPython.

Hence, these compilers don't show that runtime tags and static types are the
same/similar. Instead, they show that sometimes programmers without static
types will use tags to do some things that static types are often used for.

~~~
kazinator
> _if the program containing (symbol-name x) and (cos x) works in an
> interpreter but not in the compiler then the compiler is broken._

The compiler isn't broken.

The interpreter throws an exception, which meets your own definition of
"works".

The compiler gives an opinion "x has an inconsistent type". In this manner,
the code also "works" in the compiler.

You can still run the program if you like; then that opinion becomes an
exception if an input case exercises the code, proving the compiler's opinion
right.

Compilers for dynamic languages do not preserve all interpreted behaviors.
This is usually explicitly rejected as a goal: users must accept certain
conventions if they want code to behave the same in all situations.

For instance, some Common Lisp implementations re-expand the same macros
during evaluation, which is friendly toward developers when they are modifying
macros. But in compilation, macros are expanded once.

Usually these compilation-interpretation discrepancies are obscure; we are not
talking about obscure features here. It's a basic tenet of type checking in a
compiler that it will flag things that will throw type-related exceptions at
run time. You cannot say that type checking compilers for dynamic languages
are simply not allowed because type mismatches are well-defined behavior;
that's simply outlandish.

~~~
chriswarbo
> The compiler gives an opinion "x has an inconsistent type".

> You can still run the program if you like; then that opinion becomes an
> exception if an input case exercises the code, proving the compiler's
> opinion right.

In this example, the compiler _cannot_ give "x" the static type of "string"
and _also_ allow the program to handle errors when "x" is used in places where
strings aren't allowed. If the static types don't match, the result is
undefined; that's what static typing _means_.

If the compiler causes an exception to be thrown, then either:

- That's just an implementation-specific quirk, which just-so-happens to be
the way this compiler behaves when it hits this undefined case; it's not part
of any spec/documentation and useful only to hackers trying to exploit the
system.

- We accept that the compiler didn't assign "x" the static type of "string"
after all; it actually assigned it "string + exception + ..." (which may be
the "any" type of the dynamic language, or some sub-set of it)

~~~
kazinator
The situation isn't one of static typing, but of static type checking. Check
(parent (parent (parent yourpost))) where it says

> _What about compilers for dynamic languages ..._

A compiler for a dynamic language that checks types does not bring about
static typing. The language in fact remains dynamic.

What the checking means is that the compiler can give an opinion based on a
static view of the program, and we can run that program regardless.

The compiler can say: yes, all expressions in the program can be assigned a
type; no, some expressions have a conflicting type, or couldn't be assigned a
type.

Static typing indeed means that we use the result of the static analysis to
remove all traces of type from the program, and only run it when its type
information is complete and free of conflicts.

The dynamic language optimizer can in fact take advantage of its findings to
eliminate run-time type checks where it is safe to do so.

~~~
chriswarbo
> What the checking means is that the compiler can give an opinion based on a
> static view of the program, and we can run that program regardless.

> The compiler can say: yes, all expressions in the program can be assigned a
> type; no, some expressions have a conflicting type, or couldn't be assigned
> a type.

What is a "conflicting type"? In the dynamic language, there is only one
static type (any = int + string + float + (any -> any) + array(any) + ...), so
there can't be any conflicts.
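Written out as a Haskell-ish sketch (constructor names invented for the
illustration), that single static type is roughly one big sum:

        data Any = AnInt Int
                 | AString String
                 | AFloat Double
                 | AFun (Any -> Any)
                 | AnArray [Any]
                 -- ... one constructor per runtime tag
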

It's fine to have a compiler try to specialise the types of variables beyond
that, eg. narrowing-down the type of "x" in the example to "string + float",
based on how it's used (or just leave it as "any").

A "conflict" would imply that a variable is used in ways that don't match the
inferred type; _but the inference is based on how it's used_.

------
jashkenas
This is a bit of a nit to pick, but once again, I'm mystified as to why the HN
front page mods changed the title of this submission.

I had it as: "The Seven Deadly Sins of Talking About Types", which also
happens to be the title of the blog post itself, give or take a definite
article.

What gives?

~~~
GeneralMayhem
My guess would be it's an application of:

>If the original title begins with a number or number + gratuitous adjective,
we'd appreciate it if you'd crop it. E.g. translate "10 Ways To Do X" to "How
To Do X," and "14 Amazing Ys" to "Ys." Exception: when the number is
meaningful, e.g. "The 5 Platonic Solids."

[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)

~~~
jashkenas

        > Exception: when the number is meaningful
    

As it is in this case:
[http://en.wikipedia.org/wiki/Seven_deadly_sins](http://en.wikipedia.org/wiki/Seven_deadly_sins)

If a (some might say, literary) allusion has no place on Hacker News, just
because it happens to have a number in it, then the small-minded literalists
have really taken over this joint.

~~~
forthefuture
You don't need to know the seven deadly sins to know anything about types.
They could have had 5 or 8 sins having to do with types and the content
wouldn't have changed. That's pretty much the definition of unnecessary.

~~~
tome
It should have been "The sins of talking about types" then.

------
egonschiele
Oh good, a discussion on typed vs untyped. I expect this will be a solid
exchange of ideas, with each side trying to learn something from the other
side.

~~~
untothebreach
Did you read the article? It is actually quite refreshing in its tone:

    
    
        Patronising those outside the faith
    
        If somebody stays away from typed languages, it doesn't mean they're stupid,
        or lack education, or are afraid of maths. There are
        entirely valid practical reasons for staying away.
        Please drop the condescending tone.

~~~
egonschiele
Regarding that particular section, I agree with this comment:
[http://www.reddit.com/r/haskell/comments/2ijwwz/seven_deadly...](http://www.reddit.com/r/haskell/comments/2ijwwz/seven_deadly_sins_of_talking_about_types_cross/cl2u82b)

And overall I agree with this:
[http://www.reddit.com/r/programming/comments/2ijt11/seven_de...](http://www.reddit.com/r/programming/comments/2ijt11/seven_deadly_sins_of_talking_about_types/cl2slxn)

> People who I enjoy discussing issues with bow out of the discussion the same
> way they do when people bring up emacs vs. vim. They know they aren't going
> to hear anything new, so why bother?

That's how I felt after reading this: "why did I bother?"

------
z3t4
Don't all modern programming languages have loose typing? It allows more
abstraction, like for example in JavaScript, where an object can be anything!

~~~
steveklabnik
'Loose' typing isn't a term normally used when talking about type systems.
Most people work with a two-axes "strong" vs "weak" and "dynamic" vs "static"
system. But very, very few languages are 'weakly' typed, to the point of it
almost being a silly way to talk about them. Ruby, for example, is Dynamic and
Strong.

Then there's the 'types' vs 'tags' issue...

~~~
jholman
> _Most people work with a two-axes "strong" vs "weak" and "dynamic" vs
> "static" system._

I dispute this.

First of all, I think that _most_ programmers use a one-axis system of
strong/weak, where "strong" means approximately "the safety that I like", and
"weak" means approximately "lacking the safety that I like", a system in which
they have no words for "additional safety that I do not value". I'm not aware
of any useful definition of these words, nor any specific definition widely
agreed upon by any meaningful non-partisan class of speakers (e.g. "working
programmers" or "published academic type theorists"). Can you correct me?

As such, I think that saying "very, very few languages are 'weakly' typed" is
unlikely to be understood by your audience in the sense that you intended it
(whatever sense that was).

I agree that the dynamic/static axis seems to be in wide use, and I'm unaware
of massive divergences in usage. Although note that many people spell the word
"static" using the letters ['s', 't', 'r', 'o', 'n', 'g'].

~~~
imanaccount247
> nor any specific definition widely agreed upon by any meaningful non-partisan
class of speakers (e.g. "working programmers" or "published academic type
theorists")

How are "academic type theorists" partisan exactly? That sounds a lot like
saying "I haven't seen a definition of medicine that is widely agreed upon by
both biologists and homeopaths". Of course not, but those are not equal
opposite sides. One side is correct, and the other is not.

~~~
jholman
"Working programmers" and "published academic type theorists" are examples of
meaningful-to-me hopefully-non-partisan classes of speakers. I can't manage to
parse what I wrote any other way; I don't know what you read.

In this context, advocates of a particular programming language, or particular
paradigm, are what I had in mind by "partisan", in that I believe them to be
likely to have relatively-agreed-upon definitions of "strong" (within their
language community), and I also believe those definitions to be amusingly
self-congratulatory.

Also, btw, while I think that homeopathy as a "medical discipline" is
contemptible, it's pretty stupid to say "biologists are correct about the
meaning of this word, and homeopaths are not". The way that words work is that
they mean what people use them to mean. This, in fact, is my point about the
phrases "strong typing" and "weak typing"; in my experience and my literature
surveys, these phrases are NOT used consistently, and thus i.m.o. if you wish
for your audience to hear the thing you intend to mean, you're better off
avoiding those phrases.

~~~
imanaccount247
>I can't manage to parse what I wrote any other way, I don't know what you
read.

I misread it as examples of partisan groups, with the implication that they
were opposite sides of a coin. My bad.

