
Dunning-Kruger and other memes (2015) - ikeboy
http://danluu.com/dunning-kruger/
======
jpatokal
"The less someone knows about a subject, the more they think they know." I
think that's an exaggeration and understood by most people to be one.

What _is_, however, absolutely clear from the original study is that
incompetent people have a highly inflated assessment of their own abilities:
in all four experiments (!), people whose actual ability was around the 10th
percentile rated themselves at the 50th to 70th. This is perfectly in line
with e.g. Urban Dictionary's definition[1]:

"A phenomenon where people with little knowledge or skill think they know more
or have more skill than they do."

[http://www.urbandictionary.com/define.php?term=Dunning-Kruger%20effect](http://www.urbandictionary.com/define.php?term=Dunning-Kruger%20effect)

[1] Chosen intentionally because this is a "popular" source, not an academic
one.

~~~
inanutshellus
One thing about Dunning-Kruger that I've never heard mentioned is self-esteem
affecting one's answer. As in, if you explicitly mark a low score for yourself
you're both divulging to others _and admitting to yourself_ that you think
you're a failure in that category. So imagine you're presented with one of
these forms... what incentive or deterrent is there to keep your ego's
self-preservation instincts from kicking in and marking you down as "average"
(50%+) for anything and everything?

Basically, I can't imagine how you'd account for the societal matter of
bravado and I posit that it has an influence on the outcome of DK experiments.
E.g. if you tried to subvert bravado by offering some kind of reward at the
end ("Hey, if you successfully guess your percentile for a given skill we'll
give you a cookie"), that would of course just incentivize you to put "0%"
and intentionally bomb the test.

~~~
sklogic
Do not forget that this very "self esteem" thingy is almost exclusively
American. Other nations do not even think in such terms.

~~~
inanutshellus
I can't even process what you're saying. What culture doesn't acknowledge self
respect, and one's place in society relative to others?

Certainly there are small communities _everywhere_ (even in America -- after
all, they were prolific in America's infancy) that sought to create
hyper-communal, idyllic towns... but... those never scale because, well,
people are selfish. But still. I just can't process what you're saying. Maybe
I'm blinded by offense.

~~~
sklogic
> What culture doesn't acknowledge self respect, and one's place in society
> relative to others?

What culture (besides North Americans) would so blindly equate self respect
with self esteem? The others understand better that you can respect yourself
even without the overblown, unrealistic views on your own abilities and
virtues.

~~~
squeaky-clean
> The others understand better that you can respect yourself even without the
> overblown, unrealistic views on your own abilities and virtues

I don't think self esteem means what you think it means.

------
mevile
> The pop-sci version of Dunning-Kruger is that, the less someone knows about
> a subject, the more they think they know.

The author's take on Dunning-Kruger is a strawman. I haven't seen that
version be a "meme". I also dislike this single-word rejection of how people
talk about DK, calling it a "meme"; it's like name-calling or something.
Unwarranted and arrogant dismissal. It's one thing to be wrong; it's another
to be wrong and then also haughty about it. I feel like most people I've seen
bring DK up understand what its implications are. My favorite thing I've read
about it is that the less competent you are at something, the less able you
are to gauge competence in that something.

> In two of the four cases, there’s an obvious positive correlation between
> perceived skill and actual skill, which is the opposite of the pop-sci
> conception of Dunning-Kruger. A plausible explanation of why perceived skill
> is compressed, especially at the low end, is that few people want to rate
> themselves as below average or as the absolute best.

In one sentence he's declaring someone else's opinion on it pop-sci, then
offers his own similarly silly take on it. Oh wait, he showed some charts. I
did like seeing that people with more skill saw themselves as better than
people with less skill, but conjecture on what people want to think of
themselves? That's pop-sci.

~~~
coldtea
> _Author's take on Dunning-Kruger is a strawman. I haven't seen that version
> be a "meme"._

The latter doesn't make it a strawman. Perhaps you just haven't read the same
posts/comments the author has read -- or as many.

Here:

1) Top voted definition from the urbandictionary: "A phenomenon where people
with little knowledge or skill think they know more or have more skill than
they do."

2) Article on OpenCulture website, titled "John Cleese on How “Stupid People
Have No Idea How Stupid They Are” (a.k.a. the Dunning-Kruger Effect)".

3) Article in the online outlet unbiased.co.uk: "This behavioural concept (the
discovery of which won the Ig Nobel Prize in 2000) describes the tendency of
people who know very little to believe they know a lot. "

4) LinkedIn post: "Charles Darwin once said that ignorance tends to beget more
confidence than knowledge. In a nutshell, it explains the Dunning-Kruger
effect, which is a cognitive bias where incompetent individuals tend to
overestimate their skill, cannot perceive the magnitude of their own
inadequacy and will admit to their lack of ability only when they are trained
in that particular skill."

5) Another popular blog: "The Dunning-Kruger Effect: Are the Stupid Too Stupid
to Realize They’re Stupid?"

6) Another definition: "The Dunning-Kruger effect represents a cognitive bias
under the influence of which relatively unskilled individuals suffer from
illusory superiority, mistakenly assessing their ability to be much higher
than it really is."

I could go on for ages...

> _I feel like most people that I've seen bring DK up understand what the
> implications of it are._

You'd be surprised.

I actually don't get your comment; it's like you feel defensive and are
trying to defend a psychological finding from being called a "meme".

First, it's not like the DK effect is a person and will feel bad.

Second, the internet is rampant with people who casually mention the DK effect
in comments and posts and articles while not understanding fully what it is
about.

> _In one sentence he's declaring someone else's opinion on it as pop-sci,
> then offers his own similarly silly take on it. Oh wait he showed some
> charts._

Some charts based on the original research, and more faithful to its
conclusions and reporting than the pop-sci adaptation the author argues
against. So?

~~~
penguinduck
It _is_ a strawman. Your quotes don't say what the author wrote: "the less
someone knows about a subject, the more they think they know". And again
later: "In two of the four cases, there’s an obvious positive correlation
between perceived skill and actual skill, which is the opposite of the pop-sci
conception of Dunning-Kruger." He is specifically talking about an expectation
of an _inverted_ relationship - a negative correlation - and his whole
argument is based on that.

By his definition, the "meme" would mean people believe that those who are
among the worst in the world at something would estimate their ability the
highest, and those who are among the best would estimate their ability the
lowest (on average).

Do you really think people believe this? That olympic gold medalist swimmers
would put themselves in the first percentile of fastest swimmers, while people
who've never entered water would put themselves in the 99th (well, not
necessarily 1st and 99th - maybe it's 3rd and 97th, but the medalists would
choose the lowest number, and the non-swimmers the highest)? I don't think I
know a single person who believes this.

I have huge respect for Dan Luu but he screwed up here and created a classic
strawman.

------
ternaryoperator
This article touches on one of the highest-payback practices I've developed
over the last few years: going to the original sources. I am constantly
rewarded by this, typically finding that the downstream analysis
misunderstood some aspect, latched on to only a fraction of the whole story,
or willfully misrepresented it by speculating on absent data or by inserting
a plausible narrative for items that fit a private agenda.

~~~
nabla9
It's very scary.

Almost every time I check something in the news or public opinion, it's
either wrong, misinterpreted or too simplified. Often it's just people
talking from their ass. It's amazing how we can work as a society.

~~~
kbart
_" Almost every time I check some thing in the news or public opinion, it's
either wrong, misinterpreted or too simplified"_

I'd attribute that to the decline of real (read professional) journalism, not
an actual decline of society. People have been talking from their asses
probably for as long as they can speak, but Internet allowed them to bypass
usual bullshit meters.

~~~
ArkyBeagle
It may be worth reading "The Chief", a bio of William Randolph Hearst then.
Follow that with outright political slander in newspapers in the 19th Century.

Even Walter Cronkite might oughta not have said what he did w.r.t. Vietnam.
More time to pull out would have saved lives, and many, many people have felt
very bad about that. That one is complicated.

No doubt that the sheer quantity of "news" has dragged more muck off the
bottom of the barrel, but the good old days weren't so good.

------
danbruc
The static typing example seems weird to me. I did not read the entire linked
summary but only the first five and last three papers discussed there and they
mostly hint at at least some positive effect for static typing but the author
of the summary essentially just dismisses the results for various reasons. I
am not saying that all the judgments in the summary are necessarily wrong but
overall that summary seems a pretty strange basis for saying that static
typing is worth nothing. And the author of the submission is also the author
of the summary.

~~~
sklogic
Because the author has an agenda, he's a dynamic typing and unit testing
proponent. Do not expect anything even distantly resembling any kind of an
objective science from someone who clearly is not interested in facts.

~~~
dang
This is pretty much a personal attack and those are not allowed here.

Also, what you're saying doesn't match my recollection of luu's writings on
this topic, which is that he's mildly in favor of static typing but changed
his mind somewhat after looking at the dismal state of the evidence.

------
tikhonj
The type system question is different from the psychological examples: the
problem is not people misinterpreting evidence, but that reliable empirical
evidence simply does not exist. Papers on the matter are sparse, completely
uneven and full of methodological issues.

Personally, I'd argue that "statically typed" vs "dynamically typed" does not
even make sense as a single question. There's more difference between Haskell
and Java than between Java and Python, and an experiment comparing two
identical languages with and without static typing won't tell us much beyond
_those two languages_. (I recall seeing at least one paper that did this; it's
probably worth reading, but not for making broader conclusions.)

Moreover, there simply isn't a compelling way to measure most of the things
that programmers actually care about like expressiveness, productivity or
safety. Existing measures (like counting bugs, lines of code over time,
experiments on small tasks often performed by students... etc) are limited,
full of confounding variables and quite indirect in measuring what people
actually care about. I've looked through various studies and experiments in
software engineering and while _some_ are compelling, many are little more
than dressed-up anecdotes or "experience reports".

It's especially hard to study these things in the _contexts_ that matter. What
we care about is experienced programmers who've used specific technologies for
years applying them in teams, at scale. What's easy to experiment on is people
who've just learned something using it on tiny tasks. Observational studies
that aim at the broader industry context are interesting but hard to
generalize because of confounding variables and difficulty of measurement.

In the absence of this sort of evidence, people have to make decisions
_somehow_, and it's not surprising that they overstate their confidence. We
see this in pretty much everything else that doesn't have a strong empirical
basis, like questions around organizing workplaces, teams and processes. Just
look at opinions people have about open offices, specific agile processes or
interview procedures!

Another side to the question is that languages inevitably have a strong
aesthetic component, and talking about aesthetics is difficult. But you're
certainly not going to convince anyone on aesthetic matters with an experiment
or observational study, any more than you can expect to accomplish anything
like that in the art world!

------
dahart
Something I didn't realize before is that, meme or not, Dunning-Kruger tested
perception vs skill on _basic_ tasks, things where, if someone asked me, I
might easily misjudge my own ability, since they're things I'd feel I should
know how to do.

Ability to recognize humor isn't what I'd even call a skilled subject matter,
and it's not something we learn in school or normally get exposed to graded
metrics or comparisons against other people.

These aren't highly skilled subjects like Geophysics or Law or Electrical
Engineering or Art History. I'd be willing to bet it's a lot easier to both
self-identify lack of ability and admit lack of ability in a subject the more
skilled it is.

------
stepvhen
I like to think SMBC[1] presents a more accurate graph of confidence vs
knowledge, but I don't know enough to really speak about it.

[1]: [http://www.smbc-comics.com/?id=2475](http://www.smbc-comics.com/?id=2475)

~~~
duaneb
Quantifying the subtleties of knowledge as experience increases is difficult.
For instance, one might understand the details but not compositional
complexities. Or vice versa. But comparing the two situations is difficult and
inextricably contextual.

------
dahart
John Oliver did an awesome bit recently on scientific studies and how popular
conceptions of them, especially media portrayals, completely distort the
results.

[https://www.youtube.com/watch?v=0Rnq1NpHdmw](https://www.youtube.com/watch?v=0Rnq1NpHdmw)

------
ywecur
I'm curious as to why nobody here has commented on OP's claims about "Hedonic
Adaptation". I've been told by various sources that this is the way the brain
works; even in my recent biology class the teacher would say that "dopamine
sensitivity" was to blame.

It seems like a really big deal to me if he's right, and could really change
your outlook on life.

------
musesum
D-K is one of my favorite patterns. This is the first time I've seen these
charts. Some questions about methodology:

The x-axis shows quartile, not score results. If the range was between 80 and
90%, then all participants were accurate in assessing their ability as "above
average". [EDIT] I doubt that's the case, but would rather see scores.

How was the self declared expertise in "humor" judged? That seems pretty
subjective. Maybe the subject is hilarious to his or her friends.

Did the subject know what the examiner's definition of "logical reasoning"
was? Was that street logic or discrete structures? What if the subject was
able to glance at the test questions first, and only then answer the question
as it pertains to the test? How would the results change?

Grammar is idiomatic. In some places "over yonder" is contextually concise.
Other grammatical forms may never occur. How is self-assessment over tacit
expertise judged? Maybe another glance at the test?

Maybe Dunning-Kruger shows that there is a disconnect in how examiner and
subject interpret a question? Maybe it is a matter of saving face in saying
that you're above average? Maybe, because the subjects are college students,
they actually are above average? Or maybe these are above-average
participants who aren't quite sure of the question, so they say that they're
above average?

------
mwfunk
The idea that there's an inverse relationship between how much someone thinks
they know about a subject and how much they actually know is pretty timeless.
When people refer to Dunning-Kruger I take it as shorthand for that
phenomenon rather than a reference to results from a specific study done in
1999.

I may be misremembering, but when I first saw references to it on Slashdot,
etc., it was from people reacting in amusement that someone was able to
quantify and measure what seemed like such a commonly experienced aspect of
human behavior. If someone had done an academic study on the increased
likelihood of friends having scheduling and availability issues around
weekends in which one friend was moving to a new house but was too cheap to
get movers despite having plenty of money to do so, it would've gotten a
similar response. :)

Since then, it's just been convenient having a name ("Dunning-Kruger", that
is) for a concept that was widely understood but didn't have a shorthand for
referring to it. I'm not surprised that the study itself wasn't definitive
and airtight.

------
irrational
One thing I never see in the income/happiness studies is - Is this just for a
single person, or is it for a family? And if for a family, then what size is
that family? I can see being happy earning 75k/year and being single, but not
so much if I have eight other family members to support with that same salary.
Is there some sort of "number of people being supported on this income"
adjustment to the income/happiness studies?

~~~
cjlars
At least one study uses household income [1]. The effect isn't adjusted for
family size in any way I can see. Do note that the linked study differentiates
between 'enjoyment of life' which they estimate starts to plateau around $75k
/ household and 'life satisfaction' which keeps going up with earnings [2].
That difference may explain much of the supposed controversy as outlined in
the original post.

[1]
[http://www.pnas.org/content/107/38/16489.full#T2](http://www.pnas.org/content/107/38/16489.full#T2)

[2] [http://blogs.wsj.com/wealth/2010/07/02/money-can-buy-
satisfa...](http://blogs.wsj.com/wealth/2010/07/02/money-can-buy-satisfaction-
if-not-happiness/)

------
MPSimmons
This article had more assumptions in it than examples of assumptions it was
complaining about.

------
cowpig
> Apparently, there’s a dollar value which not only makes you happy, it makes
> you as happy as it is possible for humans to be.

> If people rebound from both bad events and good, how is it that making more
> money causes people to be happier?

I saw graphs that proved happiness causes money. What did you see?

disclaimer: I am trying to be snide on the internet. What I mean to say is
that I was confused by the use of the word "cause".

~~~
dgacmu
I see graphs that proved that being born into a financially well-off family
with access to good education might have something to do with factors involved
in the graph.

(Appreciated your humor. Am attempting a mildly humorous speculation about a
plausible cause for both factors. It's probably less humorous than I think it
is, but as someone lacking skill in humor, I overestimate my own hilarity.)

------
mwexler
Here's a link to the original paper.

[http://psych.colorado.edu/~vanboven/teaching/p7536_heurbias/...](http://psych.colorado.edu/~vanboven/teaching/p7536_heurbias/p7536_readings/kruger_dunning.pdf)

------
tommynicholas
Dan Ariely and his team have done some great work on the "happiness" meme, and
they generally support the popular notion that there are massively diminishing
returns to accruing wealth. Yes, (as this post shows) happiness does continue
to increase as you accrue wealth, but there are other things that you can do -
including giving money AWAY - whose returns on happiness and satisfaction do
not diminish. The point is, if you take a long view on life and what to focus
on, getting to a certain level of financial stability should take a high
priority, but becoming incredibly wealthy should not.

------
coverband
Is this a clever ruse to test whether we'll read the cited sources? ;^)

------
thomasahle
All the income/happiness data seems to stop shortly after $64k. Hardly
evidence that there is no plateau.

~~~
costein
If you click on the link "is robust across every country studied, too", there
is data up to and above 500k for the US.

------
sklogic
Was ok up until type systems. Please stop citing this pathetic "empirical
study" already, it's totally unscientific.

~~~
lolc
If that one is not, are there any scientific studies then?

Strongly typed languages require me to do more work upfront, to satisfy
their type checker. They must necessarily reject programs that would work
correctly. In this process a lot of mistakes are eliminated, and this gives me
more confidence that the result will work. I like that way of working. But
does it produce more robust code? Is it more productive? It feels like it, but
that doesn't mean it's true.

~~~
pklausler
"Strongly typed" is not the same thing as "statically typed". Most dynamically
typed languages are strongly typed, too. The distinction between static and
dynamic type systems comes from whether type errors are caught at compilation
time or run time.

Which basically settles the question for me as a programmer, anyway.
Eliminating the possibility of a class of run time failures -- how can that
not be a good thing?
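
A minimal Python sketch of the distinction (the function and names here are
purely illustrative): Python is strongly typed, so mixing incompatible types
is an error, but because it is dynamically typed the error only surfaces when
the offending line actually runs, whereas a static checker such as mypy would
reject the same call before execution.

```python
# Python is strongly but dynamically typed: incompatible operations
# raise TypeError, but only when the offending line actually runs.
def add_totals(count: int, other: int) -> int:
    return count + other

# A static checker (e.g. mypy) would flag this call before the
# program runs; the interpreter only complains at run time.
try:
    add_totals(2, "two")
    caught = None
except TypeError as exc:
    caught = type(exc).__name__

print(caught)  # TypeError
```

This doesn't by itself settle whether eliminating that class of run-time
failures is worth the up-front cost, which is the question the thread goes on
to debate.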

~~~
lolc
I meant statically typed, thanks.

The question to me is not whether type checkers are useful tools, but at what
point they become a hindrance. If I may rephrase your question: The programs
rejected by the type checker, how can they not be bad programs?

------
59nadir
I love that the people who come into the comments to argue about the benefits
of static typing seem to have totally missed the point: the post argues that
you need _evidence_, not just beliefs.

~~~
sklogic
Empirical evidence is nearly impossible in this area.

On the other hand, we have a solid _theory_ , not some "beliefs". If you want
to dismiss the entire PL theory, you have to try really hard to justify such a
stupid move first. The problem is, most of the dynamic proponents know next to
nothing about the PL theory anyway.

~~~
59nadir
> Empirical evidence is nearly impossible in this area.

Then stop arguing as if you have evidence. It's really that simple.

Present something that is not derived from opinion and speculation and make
this an argument that is not subjective.

For what it's worth, I prefer static, strong type systems and I was recently
dreaming out loud about strong, static typing in erlang with a colleague. I
don't confuse my opinion and speculation in what's good and not with fact,
though, which is the big difference.

> If you want to dismiss the entire PL theory, you have to try really hard to
> justify such a stupid move first.

It's a _fact_ that there exists no definite proof of the objective superiority
of static, strong typing. I don't need to "dismiss the entire PL theory" (what
a silly thing to even say; not all PL theory is concerned with types).

You've come exactly 0.0% closer to showing any kind of evidence, empirical or
not, and have only speculated more (on the value of static, strong type
systems and of the skill and knowledge of people who disagree with you).

~~~
sklogic
> Please, do present the solid theory that is not simply derived from
> speculation and opinion.

What the theory shows is enough to claim superiority:

1) Dynamic typing is a subset of static typing. This alone is enough.

2) Static typing provides _more_ semantic options at both compile and run
time, meaning that you can do more diverse things. Also quite a strong claim
for superiority.

~~~
59nadir
> 1) Dynamic typing is a subset of static typing. This alone is enough.

This is like saying that more syntax is better. No, cutting away from
something can make it better. This argument is not at all enough to claim
superiority.

(C can be considered a subset of C++. Which is better?)

> 2) Static typing provides more semantic options in both compile and run
> time, meaning that you can do more diverse things. Also quite a strong claim
> for superiority.

"More diverse things" is ill defined. Which are they and why are they a net
win? This is not at all a strong claim for anything, except "There is more".

~~~
sklogic
> No, cutting away from something can make it better

What?!?

You can build a dynamic type system on top of a static one. The opposite is
impossible. What else is there to even talk about?

> "More diverse things" is ill defined.

It is very well defined. Static (i.e., compile-time) metadata allows
_constraints_ to be inferred at compile time. Dynamic metadata is useless for
deriving constraints. A very obvious consequence of this observation is that
there will always be far more boilerplate with dynamic typing than with
static.

~~~
59nadir
> You can build a dynamic type system on top of a static one. The opposite is
> impossible. What else is there to even talk about?

We are talking about the value of different kinds of type systems and using
them. Being able to build a dynamic one on top of a static one says very
little about whether or not dynamic or static typing is better for actual
usage. On top of this, lots of languages have added gradual typing, so this
idea that you cannot take a language that is not statically typed and add a
type system seems misguided.
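
One concrete example of this (a sketch; whether optional annotations fully
count as "gradual typing" is itself debated): Python added optional type
annotations on top of a dynamically typed core. An external checker such as
mypy enforces them statically, while the interpreter ignores them entirely at
run time.

```python
# Gradual typing via optional annotations: statically checkable,
# but not enforced by the interpreter at run time.
def double(n: int) -> int:
    return n * 2

# mypy would reject this call as a type error; at run time the
# annotation is ignored and str * int means string repetition.
result = double("ab")
print(result)  # abab
```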

> A very obvious consequence of this observation is that there will always be
> far more boilerplate with dynamic typing than with static.

I hope you realize that this is not at all what reality looks like.

~~~
sklogic
> We are talking about the value of different kinds of type systems and using
> them.

Exactly. And you're apparently suggesting that there may not be a single case
where you may want static constraints. Kinda very strong position, needs very
strong proofs indeed.

> gradual typing

Gradual typing IS a static typing, period.

> you cannot take a language that is not statically typed and add a type
> system seems misguided.

What?!?

You cannot build a gradual typing system _on top_ of a dynamic one.

> this is not at all what reality looks like.

I can only conclude that you do not know much about the reality if you think
so.

~~~
59nadir
> Exactly. And you're apparently suggesting that there may not be a single
> case where you may want static constraints.

No, I have consistently asked for objective proof that static typing is a net
win over dynamic typing, something you have yet to even address. I don't know
if you're intentionally misrepresenting my argument or if you're simply
misunderstanding it, but I think you should re-read this whole thread.

As I've said previously, I prefer static strong typing, but I'm also in touch
with reality and to present my opinion and speculation as some kind of fact
isn't something I'm interested in.

> I can only conclude that you do not know much about the reality if you think
> so.

If we're jumping to conclusions I'd like to conclude that you think all PL
theory is type theory and that you're ignorant of every other bit of it (and
also that you're the type of person to think your every opinion is fact. I
think both of these have been on display in this thread, so I actually think
that's a stronger conclusion than the one you've drawn).

~~~
sklogic
Sorry, I cannot reply further down the thread, so I'll put my answer here:

> This is not necessarily true: Static typing quite often requires you to
> satisfy the type system

We're talking about static typing in general, not some particular
implementation of it.

Any static type system with an "anything" type (think of the System.Object in
.NET, for example) allows a transparent fallback to dynamic at any time.

So, claiming that "there is a cost" is an outright lie.

> I haven't stated that dynamic typing is better, but I have stated that
> people claiming one or the other need to have proof.

You know, there is a little funny thingy called "logic". And one of the most
common tricks in logic is a proof by contradiction. When you're asking for a
proof that static typing is superior, the simplest way is to start with "let's
assume dynamic typing is superior". This is exactly what I did. Unfortunately,
you could not follow.

> If your programs are as airtight as the "proof" you've given here, I'm not
> sure I ever want to use them.

It's understandable that a person who do not know much about type systems in
particular and PL theory in general also apparently does not know much about
proofs and logic in general. After all, type theory and proof theory are
indistinguishable.

~~~
59nadir
> You know, there is a little funny thingy called "logic". And one of the most
> common tricks in logic is a proof by contradiction. When you're asking for a
> proof that static typing is superior, the simplest way is to start with
> "let's assume dynamic typing is superior". This is exactly what I did.
> Unfortunately, you could not follow.

Condescending, but not to be confused with correct. I'll try as well:

Given your obviously limited knowledge and familiarity with English I can
understand that you seem to have issues understanding my basic argument, but
I'll restate it for you:

If you are trying to claim something as superior, you need to provide actual
reasons for it, not just speculation.

I hope you followed that.

> It's understandable that a person who do not know much about type systems in
> particular and PL theory in general also apparently does not know much about
> proofs and logic in general. After all, type theory and proof theory are
> indistinguishable.

It's actually not understandable that someone who claims to have a lot of
knowledge of type systems and type theory, as well as logic, would provide
"proof" that in no way proves what was asked for. It's also surprising that
someone who claims to be so well versed in PLT essentially says it's all type
theory.

It's understandable if a person with reading comprehension issues would have
problems reading this post, so if you have any questions regarding it (or the
previous posts), feel free to ask.

~~~
sklogic
It is very childish and stupid to respond to a proof with a shit like "no,
this is not a proof".

> The idea that static type systems are better to use (in general) because you
> can make dynamic type systems on top of them is simply not something you can
> just say and then have taken as fact.

Oh, did not realise you're _so_ incompetent (although I should have guessed
after your epic fail with the gradual typing). Do I have to prove that 2+2=4
too?

Once again: dynamic typing is a subset of static typing and therefore it is
_less powerful_. Period. You cannot do anything with this fact.

Also, funny that you did not respond to my accusation that you believe that
type systems are only for "validity checking". Which suggests that I was
right.

~~~
dang
We bent over backwards not to ban you, gave you lots of warnings and cut you
tons more slack than we usually do. You're aware of how unacceptable it is to
post comments like this to HN, and still you did it repeatedly in this thread,
turning a large section of it into a toxic waste dump.

Obviously, your account is now banned. If you don't want it to be banned, you
can email us at hn@ycombinator.com, but please don't do that until you're
sincerely committed to never spoiling HN like this again.

~~~
cbd1984
This kind of moderatorial bickering does not belong on HN.

~~~
cbd1984
And now you're disagreeing by downvoting.

------
dkarapetyan
But type systems do help. You don't have to go far to notice the shortcomings
of any large enough project written in Python, Ruby, JavaScript, etc.,
whereas a project of equivalent scale written in C#, TypeScript, Java, Dart,
etc. is much easier to maintain and debug. So given enough discipline and
enough good programmers I agree that there isn't much difference, but in
practice this is not the case, and having the compiler double-check your work
helps a lot.

