
What else are we getting wrong about programming languages? - theaeolist
http://danghica.blogspot.com/2016/09/what-else-are-we-getting-wrong.html
======
nradov
One big mistake we keep making is drawing a false distinction between static
versus dynamic typing and treating it as all or nothing. Type systems should
be _scalable_ , in that programmers should be able to specify as much or as
little type information as they want. For example, a variable could be
declared with any of the following types:

1. A value.
2. A number.
3. A floating point number.
4. A floating point number between 0.0 and 100.0.
5. A floating point number between 0.0 and 100.0, optimized for runtime performance instead of precision.

Etc. That way you could start with something quick and dirty for a prototype
and then gradually lock it down through progressive iterations. Ada and
Microsoft Visual Basic incorporate some elements of this concept but didn't
take it far enough.
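
A rough Python sketch of the later steps of such a scale, using `typing.Annotated` to attach a range constraint and a small checker to enforce it; the `Range` marker and `check_call` helper are hypothetical illustrations, not an existing library:

```python
from typing import Annotated, get_args, get_origin, get_type_hints

class Range:
    """Hypothetical refinement marker: the interval a value must stay in."""
    def __init__(self, lo: float, hi: float):
        self.lo, self.hi = lo, hi

# Step 4 of the scale: "a floating point number between 0.0 and 100.0".
Percentage = Annotated[float, Range(0.0, 100.0)]

def check_call(func, **kwargs):
    """Validate keyword arguments against any Range metadata in the hints."""
    hints = get_type_hints(func, include_extras=True)
    for name, value in kwargs.items():
        hint = hints.get(name)
        if hint is None or get_origin(hint) is not Annotated:
            continue  # unannotated parameters stay fully dynamic
        for meta in get_args(hint)[1:]:
            if isinstance(meta, Range) and not (meta.lo <= value <= meta.hi):
                raise ValueError(f"{name}={value} outside [{meta.lo}, {meta.hi}]")

def set_volume(level: Percentage) -> None:
    pass

check_call(set_volume, level=55.0)    # within range, passes
```

A static analysis tool could read the same metadata ahead of time; the point is that each step of the scale only adds information and never invalidates code written at the previous step.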

~~~
FLUX-YOU
Every step of the scale will have to work with every other step of the scale.
Does every step handle and allocate memory the same way? What about pointers?
C/C++ interop? Does each one interact with the operating system the same way?
How does a thread from one scale interact with objects from another scale?
That's a lot to ask for something that's just going to be a prototyping
feature (or a mode that is never used in production). How do you even think in
such a mode, and is that easily teachable to other people from either
background?

~~~
nradov
The simplest approach would be to take a dynamically typed language, like
Python or something, and then add a full range of scalable type annotations
that could be checked by the compiler (or a separate static analysis tool).
The runtime system wouldn't have to change much, except perhaps to make the
type annotations available through a reflection API.
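
Python's function annotations already work roughly this way: the default runtime ignores them, but they stay visible to tools through reflection. A minimal sketch:

```python
import inspect

def add(a: int, b: int) -> int:
    return a + b

# The annotations don't change runtime behavior, but a checker (or any
# other tool) can read them back through the reflection API:
sig = inspect.signature(add)
assert sig.parameters["a"].annotation is int
assert add.__annotations__["return"] is int
```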

C/C++ interop and OS interactions are separate issues mostly orthogonal to
type systems.

Teaching would probably start without using the type annotations for
simplicity. Let the students get something working first with basic dynamic
code. Then teach the type annotations and reasons for using them in more
advanced courses.

~~~
raiph
The Perl 6 student begins:

    
    
        sub add(\number1, \number2) { number1 + number2 }
    

If you call `add` with simple numbers it works:

    
    
        say add(1, 2) # 3
    

Other types of number too:

    
    
        say add(0.1, 0.2) == 0.3 # True
    

It works even for newbie definitions of "number":

    
    
        say add('0.1', '0.2') == '0.3' # True
    

A sufficiently determined newbie can of course break `add`:

    
    
        say add('foo', 'bar') == 'waldo'
    

The program stops with a suitable error message ("Cannot convert string to
number ...").

----

The student moves on to a more advanced course...

----

Type annotations are introduced:

    
    
        sub add(Numeric \number1,
                Numeric \number2) { number1 + number2 }
    

Everything still works as expected:

    
    
        say add(1, 2) # 3
    

But we get better error checking:

    
    
        say add('foo', 'bar') == 'waldo'
    

This generates a _compile-time_ type-checking error ("Calling add(Str, Str)
will never work with declared signature (Numeric \number1, Numeric \number2)").

So, reason 1 for using types: get type-checking that catches errors at
compile-time.

A second reason: you can use native types for closer-to-the-metal speed
(compact representations, no automatic overflow checking, etc.):

    
    
        sub add(num64 \number1,
                num64 \number2) { number1 + number2 }
    

`num64` corresponds to C's `double` representation. This sub will likely
outperform the versions of `add` that do not have natively typed parameters.

A third reason for explicit types is to take advantage of C interop:

    
    
        use NativeCall;
        sub add(int32, int32) returns int32 is native("calculator") { * }
    

If you've got a local C library (called "libcalculator.so" or somesuch) then
this:

    
    
        say add(1,2)
    

will call the function with the symbol name `add` in that library.

And so on.

------
cbanek
I feel like what we are getting wrong is throwing the baby out with the
bathwater... repeatedly.

Engineers love to engineer things. Why do people invent new languages?
Sometimes it's a company trying to do something new, or something hard in a
different language. Other times, it's someone who enjoys languages making a
new language, thinking they've finally solved all the "hard problems," where
"hard problems" is defined as problems they actually care about.

If you're not making a programming language, you're probably making a DSL or
something on top of another programming language, so now you have the fun and
quirks of two languages.

Instead, what if we had just a few programming languages, and made them more
expressive? We'd avoid problems with language incompatibilities, libraries,
and other general impedance mismatches.

I chalk this up to a few factors: 1) Your syntactic sugar is probably not as
sweet as you think it is. Many times, people claim something is easier to read
or work with when it is merely easier for them, and different for others. This
causes confusion for people who aren't used to that syntax. Please stop trying
to make boring code look interesting. Boring code is the easiest to work with
and fix.

2) Languages are hard, and working on a language is a lot harder than most
people think. Once your "brilliant idea for a new language feature" actually
gets implemented, the more it does, the more likely it is to screw something
up.

To quote Scotty: "The more they overthink the plumbing, the easier it is to
stop up the drain."

3) Because of #1 and #2, language development for major languages like Python
and C++ is slow. People hate slow, so they'd rather build something special
for themselves, or try some new language, because it's more fantastical. If
this is a project for fun, go for it. But if this is for a business, people
constantly think some new tool is the silver bullet. What actually happens is
you end up hunting bugs up and down your stack, lacking robust programming
tools, and keeping up with breaking changes rather than using that new
margarita machine.

~~~
adrusi
_Instead, what if we had just a few programming languages, and made them more
expressive?_

Because one of the most valuable things a language can do is remove
unnecessary components to make a coherent small set of abstractions. That's
incompatible with designing a language to support everyone's favorite feature.

~~~
cbanek
I agree with the spirit of what you say, but there are a lot of finer points
in here.

There's what the language can do (syntax wise), what the standard library can
do, and what 3rd party libraries can do.

I agree less syntax is better. The syntax is probably the hardest part, since
it is the base.

The standard library is where the meat is, though, and when it is lacking,
problems quickly arise. Example: C++ before the STL (yeah, dating myself
here).

If you have good syntax and good standard libraries, then you should be able
to make good 3rd party libraries.

"Language features" I typically lump into the bundle of syntatic sugar. The
only time this doesn't come into effect is when you simply can't accomplish
the job any other way. Overall, I agree, we should have as few as possible
"language features."

------
lambdasquirrel
Sometimes ideas come before their time. How many times has parametric
polymorphism been reinvented? Java tried to pretend it wasn't important, and
then Go did the same.

Python isn't really statically typed, which allows it to avoid those kinds of
mistakes.

Having been a Haskell programmer, I would argue that the type system is
beautiful and powerful, but still not powerful enough that it's a game
changer. And it may never be powerful enough.

People forget that the reason we program is to make stuff for other people.
Python gets used because it's accessible to the people most connected to the
real-world problems. The primary reason, in my mind, is that there is too much
risk involved in transitioning from messy languages to cleaner ones.

Much of that isn't just in the language, but e.g. how easy it is to debug and
deploy. So I look forward to things like stack traces for GHC, and for stack
to continue maturing. I look forward to more people being comfortable with the
language.

That all having been said, it's just way too easy to run into the limits of
non-dependently-typed programming in Haskell. So here's another thought: what
if strong type inference is a dead end?

~~~
dnautics
Have you seen Julia's type system? It is strong, and parametric over types,
bit values, and tuples thereof. So, for example, Array{Int32, 2} is a
2-dimensional array of Int32.

Types are tied into multiple dispatch, so a type's properties are defined by
the functions created for it. Moreover, JITting is type-dependent: when you
call a function with a combination of argument types it hasn't seen before, it
compiles a specialization for that signature.
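
For comparison in Python terms, `functools.singledispatch` gives a single-argument flavor of this type-directed dispatch (only an analogy: Julia's multiple dispatch considers every argument, and specializes compiled code rather than merely selecting a method):

```python
from functools import singledispatch

@singledispatch
def describe(x):
    return "some value"          # fallback, like a ::Any method

@describe.register
def _(x: int):
    return "an integer"

@describe.register
def _(x: list):
    return "a list"

describe(3)      # dispatches on the runtime type of the argument
```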

~~~
Xcelerate
I love Julia's type system. I just wish the whole thing was static (I've never
had a case where the Any type worked better than explicitly specifying a
type).

~~~
dnautics
It works fine when you are lazy... I didn't feel like figuring out a type for
a dictionary that was stashing parameters, so I just made it Dict{Symbol, Any}.

------
lisper
My nomination for what we get wrong over and over again: working under the
assumption that there is One True Syntax. There isn't. Different syntaxes are
suitable for different needs.

A single language _can_ support multiple syntaxes _if_ the AST is exposed as a
first-class construct. This makes it easy to write new syntactic front-ends.
Unfortunately, the only language to date that supports this idea is Lisp,
which means that this incredibly powerful idea is conflated in most people's
minds with lots of irritating silly parentheses. (One of the reasons for this
conflation is that once you start adopting this mindset the parens become a
lot less silly and irritating, but that's another story.)

I'm currently working on cryptography code, where algebraic syntax is very
convenient. Here's an excerpt from some code I'm currently working on:

    
    
        (define-method (point-double (ec elliptic-curve a b c q r) x y)
          (bb
           lambda modp(q, (3*x*x + 2*a*x + b)/(2*y))
           x3 modp(q, lambda*lambda - a - 2*x)
           y3 modp(q, lambda*(x-x3)-y)
           (values x3 y3)))
        
        (define-method (point-add (ec elliptic-curve a b c q r) x1 y1 x2 y2)
          (if (and (= x1 x2) (= y1 y2))
            (point-double ec x1 y1)
            (bb
             lambda modp(q, (y1-y2)/(x1-x2))
             x3 modp(q, lambda*lambda-a-x1-x2)
             y3 modp(q, lambda*(x1-x3)-y1)
             (values x3 y3))))
    

Notice that it looks like Lisp (and it is Lisp) but there's infix code
seamlessly embedded. Moreover, the MODP construct automatically converts
everything to modular arithmetic operations, so, for example, x/y doesn't
actually divide but instead expands into (* x (modular-inverse y p)). But when
I code, all I have to do is copy the mathematical formulas more or less
verbatim and my compiler takes care of expanding everything out into
calculations that use Montgomery reduction or whatever optimizations I want to
provide. This approach also has the advantage that I can test the correctness
of my protocols completely independently of the correctness of my
optimizations.
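
The modular-division trick described above, replacing `x/y` with multiplication by a modular inverse, can be sketched in Python; this mirrors the slope formula from `point-double`, not the actual macro expansion:

```python
def modp_div(x, y, p):
    # x/y mod p rewritten as x * y^(-1) mod p.
    # pow(y, -1, p) computes the modular inverse (Python 3.8+).
    return (x * pow(y, -1, p)) % p

def double_slope(x, y, a, b, p):
    # The slope (3*x*x + 2*a*x + b) / (2*y) from point-double, mod p.
    return modp_div(3 * x * x + 2 * a * x + b, 2 * y, p)
```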

~~~
err4nt
Hey thanks for taking the time to write this out. I'm new to programming and
learning JavaScript, and what you said about exposing the AST and writing new
syntaxes for the same language is really intriguing.

I try to learn about the history of computer science and to understand where
languages have come from, but so far I haven't dabbled with any of the Lisp
languages at all. As a JS learner curious about Lisp, what's the easiest way
for me to get started with some experiments?

~~~
lisper
I started working on a book a while back targeted specifically towards people
who already know a little about programming:

[https://github.com/rongarret/BWFP](https://github.com/rongarret/BWFP)

It's nowhere near complete but I'm working on chapter 3 now.

------
glangdale
This is a very good article. I do like Yaron Minsky's line dismissing studies
based on programming students, rather evocatively: "there is no pile of
sophomores high enough to prove anything".

That being said, the idea that a programming language is a user interface and
might be primarily evaluated like one, rather than a cult object to be
venerated, is intriguing. If half the effort that has been put into ever-more-
esoteric type theory had been put into serious long-term studies (perhaps with
non-sophomore populations?) of usability, we might know rather more and might
be able to design better languages. My suspicion is that very few researchers
would be willing to follow UI-driven PL research wherever it leads; at best it
might skewer some sacred concepts, at worst it might lead to a bunch of
inconclusive statistical hints.

Far better to stick a bone through your beard and declare your chosen flavor
of { functional programming, type theory, OO, logic programming, ... } the
winner.

~~~
pmontra
The usability of programming languages is important. Look at how Elixir became
popular by adding Ruby-like syntactic sugar on top of Erlang.

I can't find the source, but Matz said that when he was in doubt about the
easiest-to-understand way to design some Ruby syntax, he asked his little
daughter to choose. When I look at some other languages I'm pretty sure nobody
performed any kind of usability tests on them. It's a pity, because some
languages stay stuck within the same community of users that used those
languages' parents and grandparents, and can't break out into the mainstream.

------
steveklabnik
Regarding the little call-out to Rust here: while it's true that we have a
strong type system, we also aren't doing very much _innovative_ work on type
systems. And in many regards, we're not even that advanced by the standards of
Haskell or Idris.

The "production-oriented" bit is important as well; we _need_ those features
to achieve our goals, the type system features aren't just there because we
like type systems. And we've resisted doing so; the most classic example being
higher kinded types. People have been asking for years, and they would be
useful, but there's a lot of stuff that would also be useful, and we're
focusing on them first. We're very wary of adding type system features just
because we can.

~~~
theaeolist
Do you have any evidence that your way of doing systems programming is more
effective, for programmer productivity, than the alternatives?

~~~
steveklabnik
Depends on your standard of evidence; it's not like we have a peer-reviewed
study or something. Then again, that's a pretty high bar, almost no language
does.

Like most languages, we take users' experiences into account. We find that
we're more productive, and so do many of our users. And where people don't,
we're working on improving it.

------
chenglou
Is there a programming language/research area dedicated to the transitioning
of one library API to the next?

For example, I depend on Foo v1 and would like to upgrade to Foo v2.
Currently, mainstream languages either use a type system to indicate where the
upgrade was unsuccessful (which can't catch everything, and still requires the
user to patch things), or let the user muddle along, knowing things might
break subtly but not enough to be "bothersome" (e.g. the node ecosystem).

So one Foo upgrade costs O(n) in refactoring, where n = the number of
dependents of Foo. Given enough patience the Foo author could make an adaptor
between v1 and v2 to reduce it to an O(1) upgrade (inside Foo itself), but
this is asking so much that nobody sensibly does it, except for big projects
which provide great migration warnings, codemods, bridges, etc.

This, combined with, say, plaintext structural sharing/diffing between library
versions (so that code size doesn't blow up if you have 5 transitive
dependencies on Foo), might solve two important problems of
dependency/versioning hell imo, and keep software as fresh as realistically
possible.

I understand the general version of this is uncomputable, but that's my
question: is there research on constraining the expressivity of a language
just enough to make version upgrades of a library mostly automatable? Here's
one idea: first-class support for fetching the code of a transition between v1
and v2, half-automated by the language and tools, half community-maintained
and uploaded to a dedicated place. Alternatively, push as much as possible
into data instead of functions, to make writing transition code (which would
simply be massaging data from one form to another) much easier, like GraphQL
deprecation strategies ([https://facebook.github.io/graphql/#sec-Object-Field-deprecation](https://facebook.github.io/graphql/#sec-Object-Field-deprecation)).

Real-life example: two React.js components written under two different React
versions naturally don't interoperate. You can't include one inside another.
Right now you "can", but this is thanks to sheer human effort from React.js to
maintain backward compatibility, so components using React 15 work with
components using 14, etc. Same for Java, Windows, and others. The language
doesn't help you with that at all. Semi/fully automated version
transition/code generation would be a killer language feature.
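
The O(n)-to-O(1) adaptor idea can be sketched in plain Python; the library "foo", its v2 `get` method, and the v1 `fetch` name are all hypothetical:

```python
# Suppose Foo v2 renamed fetch(url) to get(url, timeout).
class FooV2:
    def get(self, url, timeout=30):
        return f"GET {url} (timeout={timeout})"

class FooV1Adapter:
    """Presents the old v1 surface on top of the v2 implementation.

    Dependents keep calling fetch(); only this adapter, shipped once
    inside Foo itself, knows about the rename: an O(1) cost instead of
    O(n) edits across every dependent."""

    def __init__(self, v2):
        self._v2 = v2

    def fetch(self, url):
        return self._v2.get(url)

client = FooV1Adapter(FooV2())
client.fetch("http://example.com")   # old call sites keep working
```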

------
dancek
> A couple of years ago I had breakfast with Guido van Rossum -- he almost
> left the table when I told him I was an academic PL researcher -- and I was
> quite impressed with the way he dismissed all mathematical aspects of the
> study of programming languages but was almost exclusively focussed on "soft"
> aspects of usability such as concrete syntax.

I know next to nothing about academic PL research, but this tells me enough. I
know that Haskell is an academic language, and it's a good language. But then
Python is a good language too, and it's apparently almost anti-academic!

Anyway, the author seems to have good ideas about where programming language
research should be directed. Hopefully someone can one day find a great
combination of rigor and usability in a language. Until then I'm left
wondering which of the 5-10 good languages that I know I should use for a new
project (or whether I should learn X this time).

~~~
charlesism
There are some wonderful talks by Guido on YouTube. Before Python, he was
involved with the ABC language. In one of his talks (I wish I could remember
which), he discusses the fact that they conducted a lot of research with users
for ABC: the sort of thing where you ask a naive user what they think a
command named "join" might do, etc. As a result, Guido has a better idea than
many of how real-world users expect languages to behave.

------
freddref
Each new language seems to provide diminishing returns over the previous ones
if you consider only the language constructs. It appears that everything
surrounding the language has more of an impact: how easy it is to get started,
how big the community is, how good the package management system is, the
openness of the language ecosystem.

Maybe what we need is more automation and support around the language instead
of new languages. Building automated compilation, testing, linting
infrastructure could be a way to go.

------
Iv
Over time, I have seen one intriguing idea that most people reject immediately
because it shocks the sane mind, but that I think is worth a bit more
consideration:

Moving away from a pure text format. If you think about it, a lot of projects
already do that: Android and Visual Studio projects often include GUI
descriptions. Sure, they are XML, technically a text format, but not designed
for direct editing by a human.

That's what I realized when, despite a lot of ideological reservations, I
started to like Visual C#: a programming language is linked to the way it is
used, to its IDE.

I wish language designers would embrace that fact, and not assume that the IDE
would be a generic text editor. I think that is holding back a lot of possible
designs.

------
good_gnu
If anybody wants an example of how programming languages can be designed with
an emphasis on empirical evidence over mathematical proof, take a look at
[https://quorumlanguage.com/](https://quorumlanguage.com/)

------
programminggeek
Here is one thing we get very wrong. Programming is a communication tool to
describe communication systems. If you were to design a tool to design/build
communication systems, it wouldn't look like what we program with.

------
mannykannot
Perhaps the dogma that needs to be reexamined is the notion that programming
languages are the key to improving productivity. Perhaps the key might be
found in teaching programming differently, or in better or different tools.

------
wmwragg
I think one of the main things we get wrong is that pretty much any language
can be used successfully to write a small to medium sized program; it's when a
large program is written that things start to fall apart. Some language
features aid large-program development (e.g. type systems) but can get in the
way, or even seem pointless, when writing small programs.

------
yoodenvranx
> We should worry, for instance, when Python, a language that breaks all
> academic language design tenets, is one of the most popular languages in the
> world.

Well, Python has a) an extensive runtime library (i.e. batteries included) and
b) you type some text in a file and then you can run your program without
dealing with any build system. I'd argue that both of these things might be
more important for the average programmer than some perfect but esoteric type
system.

I kinda love the C++ language, but I hate dealing with all the hassles
surrounding it. Why is it so complicated to include a random C++ library from
the internet in my project? Why do I have to redo my build system when I
switch platforms? I usually use Python for 99% of my pet projects because it
is just so easy to get started. I have some hopes that Rust improves the
plumbing situation for low-level programming languages.

~~~
cabalamat
> Python has a) an extensive runtime library (e.g. batteries included)

Which is also decently documented. The importance of good documentation for
libraries (and other tools) cannot be over-emphasized.

Also, just as important as the standard library are third-party libraries.
The advantage of using Python (or other popular languages) is that there is
probably already a library for what you want.

------
Razengan
> What are we getting very wrong about programming languages?

Perhaps it's the fact that languages have always had to accommodate the
hardware, rather than the other way around.

Why don't we have CPUs and memory controllers with built-in support for type
safety and garbage collection etc. on the silicon yet?

~~~
pmontra
I guess it has to do with this quote from Coders at Work, from the interview
with Fran Allen:

> Allen: By 1960, we had a long list of amazing languages: Lisp, APL, Fortran,
> COBOL, Algol 60. These are higher-level than C. We have seriously regressed,
> since C developed. C has destroyed our ability to advance the state of the
> art in automatic optimization, automatic parallelization, automatic mapping
> of a high-level language to the machine. This is one of the reasons
> compilers are . . . basically not taught much anymore in the colleges and
> universities.

> Seibel: Surely there are still courses on building a compiler?

> Allen: Not in lots of schools. It's shocking. There are still conferences
> going on, and people doing good algorithms, good work, but the payoff for
> that is, in my opinion, quite minimal. Because languages like C totally
> overspecify the solution of problems. Those kinds of languages are what is
> destroying computer science as a study.

> Seibel: But most newer languages these days are higher-level than C. Things
> like Java and C# and Python and Ruby.

> Allen: But they still overspecify.

My guess is that it is left to the programmer because that was the way C did
it (no type safety, and malloc) and the CPUs were developed to work with C.
We're still stuck with that.

------
JBiserkov
> Is any current language 100 times better than FORTRAN for programmer
> productivity, just as FORTRAN was compared to machine-code programming?
> Probably not.

I beg to differ: a well-designed Lisp, with the power of homoiconicity
(code-as-data, real macros), immutability by default, and a sane model for
concurrency. Running on the most optimized virtual machines (JVM, .NET CLR,
JavaScript), with the ability to use all of their existing libraries.

I'm talking about Clojure.

[http://clojure.org](http://clojure.org)

[http://clojurescript.org](http://clojurescript.org)

~~~
nradov
How productive is a Clojure programmer when the requirement is to write a
high-performance application which performs vector operations on large
matrices?

~~~
didibus
You can be pretty productive.

Have a look at core.matrix
([https://github.com/mikera/core.matrix](https://github.com/mikera/core.matrix))
and vectorz-clj
([https://github.com/mikera/vectorz-clj](https://github.com/mikera/vectorz-clj))
libs. They give you almost-native speeds.

If you need even more performance, check out neanderthal
([https://github.com/uncomplicate/neanderthal](https://github.com/uncomplicate/neanderthal)),
which has a GPU back-end.

------
prmph
I'm looking forward to an extremely-typed language where, in addition to the
usual types, more specific types can be defined with regular expressions,
subranges, even function-based rules, etc.

Such extra-strong typing would lead to much more robust programs and reduce
the complexity of testing.

~~~
raiph
Perl 6 supports arbitrary run-time predicates as "subset" types:

    
    
        subset Foo where /foo/;
        subset Bar where 0 .. 99;
        subset Baz where * > rand;
    
        my Foo \foovar = 'Some food';
        my Bar \barvar = 42;
        my Baz \bazvar = 0.5; # Fails typecheck half the time

------
js8
"But I know of no scientific studies actually backing up the (sometimes
implicit) claim that this types-first methodology leads to better languages,
from the point of view programmer productivity."

So here's my problem, maybe I am using Haskell wrong, and somebody can give a
good answer (FYI - I am big fan of Haskell, but I am still much more efficient
in Python).

Let's say I write a library. I annotate all the functions with types, so I
give it a nice API, and then what happens? I realize I should have used a
different type for this parameter in this library; for instance, a type class
instead of a normal type. But boy, it's everywhere. Or it's a third-party
library and I can't really change the type signatures. I just wish sometimes
that I could "forget" the types temporarily, borrow the existing logic, and
then reintroduce different types, type check, done.

To be fair, Haskell has a potential solution for this type of problem: type
inference. But the thing is, once you write the (expected) type signatures
into your program (which is sometimes needed to actually help the type
inference, and is also useful as documentation), they are always there, and
sometimes they obstruct.

So I guess I wish there was a tool with which I could manage type information
in the program in a lot more dynamic way than I currently do. That would,
IMHO, close the gap to the dynamic languages.

What about making every type into a type class? Would that do the trick? But
how would the concrete type be specified, then? Would the compiler pick it?

(Maybe it's obvious, but let me point out the connection of this question to
lambda calculus. I can have two functions in typed lambda calculus which are
different because they have different type signatures, but whose actual
semantics are identical (provided they get arguments of the correct type); in
other words, if we convert both into untyped lambda calculus, we get identical
functions. So the question is: why can't we somehow reuse the code from the
first function? Why do we have to define the same code twice with different
type signatures?)

~~~
tome
What's wrong with just wrapping and unwrapping values as different types, at
the few points you need to do so?

~~~
js8
Not only does it become ugly, it also doesn't help you in all situations.

Let's say you have a function in a library that has signature Int32 -> Int32.
Then you realize you need to do the same thing, but for Int64. You can't reuse
it by wrapping.

The above can seem silly, but problems like that are very real for Haskell
strings.

I mean, from a type-purity perspective, this behavior is OK. But from a
practical perspective (like Python's), it's a bummer. Why shouldn't you be
able to reuse existing code just with different type signatures?

~~~
tome
> Let's say you have a function in a library that has signature Int32 ->
> Int32. Then you realize you need to do the same thing, but for Int64. You
> can't reuse it by wrapping.

For that we have typeclasses.
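
In Python terms, the typeclass move amounts to writing the function once against a constraint rather than a concrete type; a rough sketch with `typing.Protocol` (the `Addable` protocol is an illustration, not Haskell's actual `Num` class):

```python
from typing import Protocol, TypeVar

class Addable(Protocol):
    def __add__(self, other): ...

T = TypeVar("T", bound=Addable)

def twice(x: T) -> T:
    # Written once against "supports +", then reused for 32-bit ints,
    # 64-bit ints, strings... much as a Num constraint allows in Haskell.
    return x + x
```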

------
m1sta_
Consistency, intuitiveness, error rates, task performance: even if we just
heard these four referenced more often, things might improve.

------
ClayFerguson
The lack of type safety in JavaScript is a massive disaster. TypeScript
changed my life as a software developer by bringing types to JavaScript.

~~~
aikah
> The lack of type safety in JavaScript is a massive disaster. TypeScript
> changed my life as a software developer by bringing types to JavaScript.

JavaScript was built for an environment that must do everything it can not to
fail, just like HTML. A JavaScript engine will do everything to save a bad
script from yielding an error. JavaScript wasn't built with code correctness
in mind. From that perspective, the language is a success. I personally don't
like languages that allow the developer to add a string to an integer without
yielding a type error.

TypeScript can help, but it is still JavaScript underneath. And it doesn't
correct the libraries you use that are written in JavaScript.

------
cndkxjsnf
Do you want to invest your time worrying about types or actually programming?
My issue with building these type-theoretic monstrosities is that now you
suddenly have to tackle two problems instead of just one.

Just like the underlying machine code is hidden in modern languages, types
_should_ be totally abstracted away when programming in a high level language.

~~~
tines
In my opinion, you've got the order wrong: types _are_ the abstraction, and
untyped-ness is the concrete implementation (just look at almost all assembly
languages).

> Do you want to invest your time worrying about types or actually
> programming?

Another order reversal: type systems allow you to focus on programming,
instead of hunting around for bugs because somebody 50 levels up the call
stack passed you a value that doesn't support the interface it needs to
support.

This is an endless debate of course; I just wanted to throw a typed-PL fan's
opinion into the mix.

~~~
cndkxjsnf
> Another order reversal: type systems allow you to focus on programming,
> instead of hunting around for bugs

Except that is false. Bug prevalence is empirically orthogonal to the type
system.

See Xmonad

~~~
dancek
Are you saying that Xmonad is buggy? I never realized that, but I'm quite new
to it. I'd think I've seen other window managers crash with a similar amount
of use, though.

Or are you saying that Xmonad is not buggy? But then I don't understand how it
relates to empirical orthogonality at all.

~~~
taeric
Window managers crash often? I can't think of the last time I saw one do so.

