Clojure/Conj 2017 – Opening Keynote by Rich Hickey [video] (youtube.com)



No one in this thread is yet talking about the content of this talk, which I think is interesting. Rich is doubling down on dynamic typing. As the world is gradually moving to more statically-typed languages (Rust, Swift, Go, Elm, Purescript, Typescript are all popular tools that come to mind), Clojure remains in the embrace of dynamic typing, and this talk is an interesting look at the perspective of why that is, and adds perhaps a little bit to the debate between static and dynamic typing.

I admire Rich a lot, but I also admire John Carmack, and it is apropos that yesterday a comment was shared on another post here [0] about Carmack's opinions on static typing, and about how much the industry's attitude toward static typing has shifted in just a few years; according to his talk, in 2013 most of the industry was still not convinced of the benefits of static typing.

[0] https://news.ycombinator.com/item?id=15460604


I think it's a mistake to directly compare the experiences (and conclusions) of these two exceptional programmers.

Rich Hickey explains at the start of his talk where he is coming from and what the context of his development work is; he calls it "Situated Software". His background and the context for his development is building information systems in organisations where the rules are messy and no doubt change often. One of the things he calls out explicitly in his talk is the messiness of dealing with the "Two for Tuesday" rule in one of the systems he worked on. He also spoke about how different working in that domain was from doing compiler development.

John Carmack's experience and domain expertise is building high performance game engines. I'd argue this is closer to compiler writing in the sense that it is a much more rigid problem than say having to serve the needs of an organisation whose requirements may change often and in seemingly arbitrary ways, and usually on a tight deadline.

I am not trying to elevate one domain over the other. They are different kinds of problems, and have different kinds of limitations and constraints applied to them. As such I don't think it's any surprise that the priorities for each of these developers are different.

Choosing between more dynamic or more static approaches without considering the context in which you are working is a mistake.


This is an outstanding point. When watching this talk, I got the distinct impression that Rich has simply worked on a very different class of problems than the ones I have worked on. As we all are wont to do, we extrapolate our experience to the whole universe. But that's not always the right thing to do.

I would say that for the most part, a statically typed language can do the same things a dynamic language can do, but not the other way around. Another commenter mentions the Any type, but I don't think that's it. It's maps. Rich wants to be able to combine maps with a union operation, take subsets of keys, etc. And that is exactly what maps afford. I'd rather work in a language like Haskell where I can use strong types when I need them and drop down to untyped maps when I need them than a language like Clojure where all I ever have is the maps.
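
For what it's worth, the map operations mentioned here (union, taking a subset of keys) are available on Haskell's ordinary Data.Map as well; a minimal sketch, assuming the containers package:

    import qualified Data.Map.Strict as M
    import qualified Data.Set as S

    -- Left-biased union plus a key subset, roughly merge / select-keys in Clojure.
    example :: M.Map String Int
    example =
      let a = M.fromList [("x", 1), ("y", 2)]
          b = M.fromList [("y", 20), ("z", 3)]
      in M.restrictKeys (M.union a b) (S.fromList ["x", "y"])
      -- evaluates to fromList [("x",1),("y",2)]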

Another thing static types do for you is that they make things in your application more discoverable. Instead of tracing through to figure out which data is available and operations something supports, the compiler tells you. I'd rather work in a language where my compiler can help me this way than in one where it can't.


"a statically typed language can do the same things a dynamic language can do, but not the other way around"

It's interesting you'd say that, as I'd consider the opposite to be true. There are functions that are trivial to write in a dynamically typed language, but are hard to statically type. For instance, the `assoc-in` function in Clojure.

"Another thing static types do for you is that they make things in your application more discoverable. Instead of tracing through to figure out which data is available and operations something supports, the compiler tells you."

You don't necessarily need static typing to give functions some form of type signature or specification.


Well it's strictly true, in the trivial sense; although the ergonomics might be unpleasant. Many static languages provide a Dynamic type, into which you can stuff any value. The type supports a mechanism for querying the inner type of the Dynamic value, and extracting it. So yes, you can implement dynamically-typed data structures and logic in a statically-typed language. But to use this approach in a widespread way, it might take a lot of scaffolding to make the experience pleasant. As Greenspun warned us, you might just end up implementing a crappy Lisp. :)

https://en.wikipedia.org/wiki/Greenspun's_tenth_rule
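
For concreteness, this is roughly what that escape hatch looks like in Haskell via Data.Dynamic (a minimal sketch, not a claim that the ergonomics are pleasant):

    import Data.Dynamic (Dynamic, fromDynamic, toDyn)

    -- A heterogeneous list with the static types erased to Dynamic.
    bag :: [Dynamic]
    bag = [toDyn (42 :: Int), toDyn "hello", toDyn True]

    -- fromDynamic queries the inner type: Nothing if the requested type
    -- doesn't match the stored one, Just the value otherwise.
    firstInt :: [Dynamic] -> Maybe Int
    firstInt ds = case [i | Just i <- map fromDynamic ds] of
      (i : _) -> Just i
      []      -> Nothing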


I also find the opposite to be true, as you do.

The human mind and human problems operate more like a dynamic language. To put it in a simple, silly example: when I tell you to "cut something with scissors", the scissors aren't made specifically to cut paper; there are many things that can get cut by scissors. Or when you take a pot, put it over the stove and boil water, the pot is not specifically designed to boil water; the pot can heat anything. So you can say that many operations (verbs) in real life do not get performed in a "static typing" way, but in a "dynamic typing" or at least "duck typing" way.


Both of those examples are trivially modeled with typeclasses (Java-style interfaces will cut the mustard here too).

For your paper example I would have “Cutter” and “Cuttable” type classes to describe things that cut and things that can be cut. In languages like Haskell this only involves a couple extra lines of ceremony over writing a “cut” method on each type.

The pot example is even cleaner with types and shows their power. I’d have a Pot typeclass that can heat things, a Liquid class for all liquids that water implements, and finally a free boil function that takes a pot and a liquid. Exactly as expressive, but now with compile-time checking.
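
A minimal Haskell sketch of that design, just to make it concrete (all of the class and type names here are hypothetical, not from any library):

    class Cuttable a where
      cut :: a -> [a]                -- cutting something yields pieces

    class Cutter c where
      cutWith :: Cuttable a => c -> a -> [a]

    class Liquid l where
      boilingPoint :: l -> Double

    class Pot p where
      heat :: p -> Double -> Double  -- one step of heating: old temp -> new temp

    data Scissors = Scissors
    data Paper    = Paper
    data StockPot = StockPot
    data Water    = Water

    instance Cuttable Paper where cut p = [p, p]
    instance Cutter Scissors where cutWith _ = cut
    instance Liquid Water where boilingPoint _ = 100
    instance Pot StockPot where heat _ t = t + 10

    -- The free function from the comment: any Pot can boil any Liquid.
    boil :: (Pot p, Liquid l) => p -> l -> Double
    boil pot liquid = until (>= boilingPoint liquid) (heat pot) 20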


All this is covered in Rich's talk. Your Cutter and Cuttable are different from my Knife and Cuttablito. When your program needs to talk to the outside world, it all becomes messy.


Here is one variant of an assocIn type in TypeScript:

  declare function assocIn<T, K extends keyof T>(t: T[], x: [number, K], val: T[K]):T[]
You can try it out: https://goo.gl/3JrmX3


It generates a type error when I write:

    assocIn({ a: 1, b: 2 }, ['b'], 4)


> There are functions that are trivial to write in a dynamically typed language, but are hard to statically type. For instance, the `assoc-in` function in Clojure.

This is not a hard function to write in Haskell. You make it operate on a list of maps, which is what the Clojure one is doing too.

> You don't necessarily need static typing to give functions some form of type signature or specification.

But you do have to have static typing to make sure those signatures are automatically verified to be coherent.

It seems like you're thinking about my statement from a Turing-complete computable point of view. But I meant it from the point of view of what the compiler can do for you. It's not what things you can compute, but what things will be automatically checked.


"This is not a hard function to write in Haskell. You make it operate on a list of maps, which is what the Clojure one is doing too."

It's harder than you think. Just try it. I believe it's possible to achieve in Haskell, but it requires some rather esoteric type system extensions.

"But you do have to have static typing to make sure those signatures are automatically verified to be coherent."

If you want a guarantee, then yes, but you can get a good approximation of correctness with runtime specs coupled with test generation.


> it requires some rather esoteric type system extensions

No, it's just a simple [Map Text Text]. Or if you want a little more safety, then [Map Text Value].

> you can get a good approximation of correctness with runtime specs coupled with test generation.

But this requires more code. More code is error-prone and takes time to write and maintain.


"No, it's just a simple [Map Text Text]. Or if you want a little more safety, then [Map Text Value]."

I think you're confusing `assoc-in` with `assoc`. The latter is trivial to type; the former is not.

The problem with typing `assoc-in`, is that the type of the vector of keys is derived from the type of the nested map in a way that's hard to generalize.

For example:

    (assoc-in m [a] b)
Has (approximately) the type signature:

    Map a b -> a -> b -> Map a b
But:

    (assoc-in m [a b] c)
Has the type signature:

    Map a (Map b c) -> (a, b) -> c -> Map a (Map b c)
And so forth.

"But this requires more code. More code is error-prone and takes time to write and maintain."

No it doesn't; you can generate and run the tests from the function's spec automatically. If I run `(clojure.spec.test.alpha/check)` in Clojure, it will generate and run tests for all functions that have specs.


> The latter is trivial to type; the former is not.

That's why I'm not trying to make it typed. Instead of

    Map a (Map b c)
I'm dropping down to something like

    Map Int (Map Text a)
    Map Int (Map Text Text)
    [Map Text Text]
    Vector (Map Text Text)
...which is more like what's really going on in Clojure.

Come to think about it, assoc-in is a great example of why I want static types. It's very hard to figure out what that function does from reading the documentation. There's no type signature to help me, and the description doesn't give me much more to go on. Reading the examples leaves me wondering whether I actually understand what it's doing or if I'm missing some corner case.

> it will generate and run tests for all functions that have specs

But you have to write the specs. And that's the code I'm talking about. Fundamentally the generator can't just know what behavior you want and what you don't want. You have to tell it.


"That's why I'm not trying to make it typed."

Are you proposing that a distinct `assoc-in` function should be written for every combination of types you'd need in your program?

Doesn't that rather prove my point that some functions that are trivial to write in a dynamically typed language are hard to write in a statically typed language?

"Come to think about it, assoc-in is a great example of why I want static types. It's very hard to figure out what that function does from reading the documentation."

I've always thought the function was trivial. I mean, it's just four lines of code:

    (defn assoc-in [m [k & ks] v]
      (if ks
        (assoc m k (assoc-in (get m k) ks v))
        (assoc m k v)))
I have a much harder time relating a complex type signature in Haskell back to what the function actually does, but perhaps that's something that just requires more practice.

"But you have to write the specs."

Sure, but you have to write the type signatures, too. Type inference certainly cuts down on a lot of work, but it doesn't entirely eliminate the need for explicit typing.

I also tend to prefer adding explicit type signatures, even for functions that could be inferred. It makes type errors easier to catch.


> Are you proposing that a distinct `assoc-in` function should be written for every combination of types you'd need in your program?

No. I'm proposing something like what tel described elsewhere in the thread. If that's not dynamic enough for you, then you can go with something like JSON's Value type. In both cases lenses give you very convenient and composable access and manipulation.

> I've always thought the function was trivial. I mean, it's just four lines of code:

But you have to read the code. The type signature / code boundary is very useful for allowing you to chunk things and abstract over implementation details. This particular case may not be much code to read, but that is often not true in the general case.

> Sure, but you have to write the type signatures, too. Type inference certainly cuts down on a lot of work, but it doesn't entirely eliminate the need for explicit typing.

> I also tend to prefer adding explicit type signatures, even for functions that could be inferred. It makes type errors easier to catch.

That's exactly my point. Type signatures can be inferred, specs cannot. Choosing to add them is irrelevant. If you want to add them, that can be done automatically.


"No. I'm proposing something like what tel described elsewhere in the thread."

tel's solutions are "use lenses" or "assume all keys are strings", both of which miss the point.

Yes, you can work around the limitations of a static type system, either by finding another solution (lenses) or by making functions more specific (assume all keys are strings), but that doesn't mean the limitations disappear. It just means you're working around them.

In a statically typed language the solutions to a problem are constrained by the type system. The question is not whether statically typed languages are more constrained than dynamically typed languages, but whether the constraints that static typing introduces are offset by the guarantees they purchase.

"That's exactly my point. Type signatures can be inferred, specs cannot."

The compiler can infer some type signatures, and even if you're not explicitly typing your named functions (which is generally cited as good practice), you still get some checking "for free".

But specs can also test more things than types; they occupy a space somewhere in between static types and a generative testing solution like QuickCheck. Depending on the function, a spec might be more or less verbose than the equivalent checks in a language like Haskell.


> Yes, you can work around the limitations of a static type system, either by finding another solution (lenses) or by making functions more specific (assume all keys are strings), but that doesn't mean the limitations disappear. It just means you're working around them.

I'm less concerned about the theoretical limitations of types that you seem to be talking about and more interested in looking at concrete real-world applications where you think you need dynamic types and seeing exactly what we can do to address them with a type system. I know there exist things that are difficult or maybe impossible to define types for (the Y combinator for instance, or possibly this assoc-in). My argument is that you don't need this power. I don't care whether difficult to type things exist, I care about whether real-world problems require them. I have yet to encounter one that I felt requires the same amount of dynamic typing power that you seem to be arguing for.

See this thread for some concrete examples:

https://www.reddit.com/r/haskell/comments/792nl4/clojure_vs_...


You can use lenses; it's something like

    over (ix "foo" . ix "bar") :: (a -> a) -> (Map String (Map String a) -> Map String (Map String a))
Neatly, this system generalizes to typed realizations immediately, too.

    over (foo . bar)
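
A runnable version of the same idea, as a rough sketch (assuming the lens and containers packages; ix for in-place updates, at/non when the path may need to be created):

    import           Control.Lens (at, ix, non, over, set)
    import qualified Data.Map.Strict as M

    nested :: M.Map String (M.Map String Int)
    nested = M.fromList [("foo", M.fromList [("bar", 1)])]

    -- Modify an existing nested value; ix is a no-op when either key is missing.
    bumped :: M.Map String (M.Map String Int)
    bumped = over (ix "foo" . ix "bar") (+ 1) nested

    -- Insert along a possibly missing path, closer to Clojure's assoc-in:
    -- `non M.empty` supplies an empty inner map when "baz" is absent.
    inserted :: M.Map String (M.Map String Int)
    inserted = set (at "baz" . non M.empty . at "qux") (Just 7) nested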


Sure, lenses are a solution. My point isn't that an alternative solution to this is impossible (or even difficult) in a statically typed language, but that static typing makes some solutions infeasible. There are some very trivial functions that Clojure has that are nevertheless impractical to write a robust type signature for.


There's also the way where you encode `Map String Value` as a member of `Value`. This is closer to what Clojure does and does allow assoc-in to be written directly. I was just playing along with the other comments' encoding of maps.

    data Value = 
      ...
      | VMap (Map String Value)
      | VNull
      ...

    assocIn :: [String] -> (Value -> Value) -> (Value -> Value)
    assocIn (name:names) f (VMap m) = _
In general, the more type information you leave around the more you have to do to ensure that it remains meaningful. Erase enough of it and you can arrive back at Clojure.
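
For what it's worth, the hole in the sketch can be filled along these lines (one possible completion; the choice here is that non-map values get overwritten with a fresh nested map, roughly what Clojure does for nil):

    import           Data.Map (Map)
    import qualified Data.Map as Map

    -- A universal value type in the spirit of the sketch above.
    data Value
      = VString String
      | VInt Int
      | VMap (Map String Value)
      | VNull
      deriving (Eq, Show)

    -- Walk the path, conjuring empty maps as needed, so every input value
    -- gets some semantics even if only map-shaped ones are "interesting".
    assocIn :: [String] -> (Value -> Value) -> (Value -> Value)
    assocIn []           f v        = f v
    assocIn (name:names) f (VMap m) =
      VMap (Map.alter (Just . assocIn names f . maybe VNull id) name m)
    assocIn path         f _        = assocIn path f (VMap Map.empty)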


You're not solving the same problem. You're fixing the keys of the nested maps to a single type, but the difficulty is when the keys are all different types.

In other words, you want to type a general form of:

    Map a (Map b (Map c ... z)) -> (a, b, c...) -> z -> Map a (Map b (Map c ... z))
And even that is more limited than Clojure's `assoc-in`, as the type signature still assumes each nested map is homogeneous.


When the keys are all different types, then you've got

    data Value =
      ...
      | VMap (Map Value Value)
      ...

    assocIn :: [Value] -> (Value -> Value) -> (Value -> Value)
I was again assuming types more informative than Clojure. Also again, lenses actually solve the higher information type situation that you're pointing at in your comment.


Yes, you can write the function using entirely dynamic types:

    Map Dynamic Dynamic -> [Dynamic] -> Dynamic -> Map Dynamic Dynamic
But would you actually write a function like that? Of course not. Not only is it a pain to unwrap nested maps of dynamic types, but you lose almost all the advantages of a static type system.

So what's the alternative? As you point out, a more idiomatic way is to use lenses, as they solve a similar problem but can be typed.

However, in Clojure we can use assoc-in or lenses. Dynamically typed languages are not constrained by what is feasible to statically type. In Haskell, it is hard to the point of being infeasible to robustly type assoc-in, so it's not idiomatic to use it. But in Clojure we're not restricted by the limitations of a static type system; we can use functions like assoc-in.

This is the trade-off of static typing. A static type system limits what solutions are feasible and idiomatic; even functions that are simple to express and reason about can be extremely hard to accurately type.


I wouldn’t because once I’ve got a type system the tradeoff never works out. But the same thing can be expressed.

I don’t see the thrust of your argument. You can write both styles of program in both languages, but you can only have type guarantees in a language with a type system.

So, you’ve shown that assoc-in can be written so long as type guarantees are thrown away. That seems like a reasonable reason to pick lenses either way.


"You can write both styles of program in both languages"

No you can't, not in any way that's practical.

It's possible to write Haskell by wrapping every value in a Dynamic type, but possible does not mean feasible.

If you don't believe there's a distinction, then try writing a functionally equivalent `assoc-in` in Haskell. I don't mean sketch out a design in pseudo-code; I mean produce a fully working function that operates seamlessly with all possible types of Data.Map.

"So, you’ve shown that assoc-in can be written so long as type guarantees are thrown away. That seems like a reasonable reason to pick lenses either way."

Well, that's the question: is it?

Lenses can be typed, but `assoc-in` is conceptually simpler. Haskell advocates using only solutions that can be typed; Clojure advocates using the simplest solution.

Static typing is a constraint, because it limits the solutions that are feasible and idiomatic. In return it provides a compile-time guarantee. The question is whether the guarantee is worth the solutions that are (for all practical purposes) removed.

Immutability is another type of constraint. As is automatic memory management, variable scoping... programming languages are designed around constraints. Pretending that you can introduce a constraint and not narrow the solution space is wishful thinking.


It’s feasible and easy to write assoc-in atop of Dynamic... it’s just not done because nobody programming in Haskell thinks it’s valuable. We could choose to support much more opt-out dynamic behavior no sweat, but nobody wants to.

The idea of assoc-in working for every type of Map also doesn’t make sense. Map only exists statically. Types as Clojure discusses them exist at runtime, are statically Dynamic, and it’s trivial to write assoc-in that works over all of them in any language.

Your argument hinges on the idea that it’s impractical to write using Dynamic in Haskell. I do not think this is true. There is perhaps some value in greater library support, but it is easy to do this. It is simply not considered valuable.


"It’s feasible and easy to write assoc-in atop of Dynamic..."

Then why don't you do it and post up the code? If it's as easy as writing the same function in a dynamically typed language it shouldn't take more than a minute or two.

"The idea of assoc-in working for every type of Map also doesn’t make sense. Map only exists statically."

Maybe you're misunderstanding me? I mean it should work for every `Data.Map k a`, where `Ord k`, just like all the other Data.Map functions.

"Your argument hinges on the idea that it’s impractical to write using Dynamic in Haskell. I do not think this is true."

Then try it?


> I mean it should work for every `Data.Map k a`, where `Ord k`, just like all the other Data.Map functions.

This is where we've got a difference of opinion. My statement is that assoc-in doesn't do anything close to working over "every `Data.Map k a`". Data.Map is a compile-time idea classifying names in the language, and the Data.Map functions are designed to respect that both dynamically (manipulate some memory properly) and statically (create updated contexts with new and different information).

On the other hand, assoc-in works only dynamically/at runtime over all kinds of (Clojure and Java) data. In particular, it has meaningful semantics for every possible input value although it only has "interesting" semantics for certain runtime shapes.

That's easy to write in Haskell.

https://gist.github.com/tel/bb84fcc3f1b7488b53349156284b509a


Your code doesn't compile. I can fix the obvious typos and missing imports and language pragma, but the type errors that remain are beyond my knowledge to correct.


I fixed the type errors in what I presented before. Little changed: I had to fix the typos, bad imports, and missing pragmas, and then I had to instantiate Val as Hashable and Eq, but since all things inside of Val are Hashable and Eq it was straightforward.

https://gist.github.com/tel/bb84fcc3f1b7488b53349156284b509a


Can you give an example of use? It's not clear to me how this interacts with standard Haskell data structures. If I have an existing map, do I need to walk the tree and encase everything in a `Val`, and vice versa when I want to pull values out?

Apologies, quick sketch in an airport between flights. May fix this later when I’ve actually got an internet connection.


I mainly program in Clojure. The experience is just too good to be swayed by the bad points.

That said, I wrote a moderately complex program recently where having dynamic types was a hindrance. I had to do mental bookkeeping in order to make sure I was constructing correct structures. I frequently didn't, and the bugs were hard to track down.

However, Clojure Spec came along, and I applied it to the problem. The difficulties became surmountable.

It's a case where, if I had been fluent in Haskell, I probably would have used it. But I'm not, and this project was for work, so I didn't have the luxury of time to figure it out.

Spec alleviates the mental bookkeeping of structures by adding checkers and clarifying/codifying intent.


How much Spec do you sprinkle in your code? I'm often using a keyword to query a map. Should I always spec/assert the return of this simple and common task? Because if you type the wrong keyword (a typo, or just because you confuse different keywords in your mind), then it doesn't matter how much spec you have applied to the top-level data structure, the function arguments or other things; you still end up with a nil or just an incorrect value easily floating through your data transformations.

The only way I see to have a lot of confidence is to add Spec to every new binding that is created, confirming that all named values are exactly what you think they are. Perhaps that is how most Spec users are working? That would mean asserting every item in a "let" for example.


I would suggest that, rather than using spec everywhere, you can use it as a barrier to contagion. That is, only checking that data conforms to your spec when it goes through the major interfaces between parts of your program.

In this approach, you're not relying on spec to verify the correctness of everything. Rather, you use it to reduce surprise bugs where a little change here makes something weird and unexpected happen all the way over there.


Agree, this is my approach.

Also, I do checks on the data structures, less on how they are used.

So if a function expects a certain structure I’ll spec that. But I don’t do checks on getter/setter-type operations.


I'd add that I think the correctness problems with a (good) dynamic language really only start to appear as the program gets larger.

Dividing your program up into large chunks with rigorously defined and checked interfaces (e.g. using spec) does a lot to mitigate this. Meanwhile, you still get the benefits of dynamism (concise code tightly focused on the problem domain) within each module.


My take: "a statically typed language can GUARANTEE the same things a dynamic language can GUARANTEE, but not the other way around. A dynamically typed language can PERMIT the same things a static language can PERMIT, but not the other way round."


Sorry to go off-topic, but this was a very nice sentence: "We extrapolate our experience to the whole universe."


Nicely sieved out! I too found this sentiment very essential.


Dynamism is certainly available to statically typed languages, but I believe the problem comes down to how difficult it is to write dynamic code in statically typed languages, and to use the common building blocks that help you solve real problems.

Writing dynamic code in static languages is like running through mud, just like writing static checking in dynamic languages is.

I think both paradigms are important and valuable for different applications, but I have never seen a language that does both in any truly useful way.


I think the gradual typing TypeScript provides is pretty useful, with relatively low friction.


    > I'd argue this is closer to compiler writing in the
    > sense that it is a much more rigid problem than say 
    > having to serve the needs of an organisation whose 
    > requirements may change often and in seemingly
    > arbitrary ways, and usually on a tight deadline.
Ex-game developer. If only this were so.

When making a new game, the requirements are constantly in flux and the driving force changing them is incredibly elusive: "fun". This depends a lot on the experience level of the designers and how original the game is trying to be, but it's very common for features and systems to completely change as designers try things out and discover what does and doesn't feel good.

And, of course, game development is always under very heavy time pressure because the margins are narrow.

Carmack's situation is a little different because he's been making similar games for much of his career, and much of the stuff he's building (rendering and low-level infrastructure) is relatively decoupled from the gameplay and game features that are more volatile.

Game developers are under a really tight crunch. They need to write software that's nimble and flexible so they can iterate on the game design quickly and figure out what's fun. At the same time, the code needs to be very efficient, even during early phases of development (it's hard to tell if a gameplay idea is fun or not when the game is running at 3 FPS) and certainly by the time it ships.


Current game developer.

This is why you tend to use a low level / static / concrete language to write the (relatively constant, well-defined) game engine — the graphics pipeline, model loading, animation playback, sound, etc — and you use a high level scripting language like Lua to write the (constantly changing, unique) game logic.

To tie this back to the OP, this is one of the reasons I'm fond of Clojure (even though I don't get much of an opportunity to use it). It's written to be concise and expressive (as described in Rich's talk, when compared to C++) but also relatively high perf (by leveraging the native capabilities of the host on which it sits). It has the potential to be a great language for both the high-perf core of a game and the high-level logic.

Sadly, it doesn't yet target a host without a GC (and probably never will), so you're going to have a devil of a time using it in a way that guarantees pause-free execution.


Choosing between more dynamic or more static approaches without considering the context in which you are working is a mistake.

It's amazing how well the parallels and metaphors work between the study of weapons and historical martial arts and programming. (Perhaps not so amazing, since both have a certain nerdy component.) As Matt Easton of scholagladiatoria says, "Context, context, context!" It's erroneous to say that sword X is the best/ultimate sword! It all depends on context. Are you up against armored opponents? What kind of armor? Is this on a battlefield? Is it a duel? Is it on horseback? Do you need to wear one at court?

Design is all about context and response to the forces in that context.


Static vs Dynamic typing... Why does one have to win?

I don't think it's about being convinced of benefits of one or another. We need to be convinced about the drawbacks of each approach. Then we will understand that there is no silver bullet and each kind of language has strengths and weaknesses.

I love statically typed languages. I love that they show me silly errors as soon as I type them in my IDE. I love that I can get autocomplete and automatic refactoring tools. I love that I can easily jump from one function to another by just clicking on their names as if they were hyperlinks.

I also love dynamic typing. I love how few lines of code you need to do some complex tasks. I love how easy it is to manipulate complex structured data coming in JSON, YAML or XML forms. I love how easy it is to do metaprogramming with them, how easy it is to create your own DSL and make tasks like DB access a piece of cake. I love how you can just REPL into your server running in production, change things, test them and leave.

I like that we have both kinds of languages. I choose static typing for some projects and dynamic for others. I hope no paradigm wins. Let us use the best tool for the job.


For me static typing is one less thing to think about and I can concentrate on the problem.


Ironically, those who prefer dynamically typed languages would make the exact same argument.


Yea as someone who really learned coding in a dynamic language, static typing drives me bonkers and makes me think about things I'd rather ignore. I know static typing has many advantages, especially when coupled with an IDE, but I don't want that. I only really need a REPL to test out snippets and a text editor to assemble them into a coherent program before feeding it to the compiler. Well, that and Stack Overflow. Some of my ideas might change if we're talking about a large project though as I mostly code for task automation, data analysis, and helper scripts.


Well, just because static typing isn't present in your language, the benefits of the concept are still something one has to consider in a non-static language. In my experience, people coming from non-statically-typed languages to a non-static language really have a problem with the effects of what static typing brings, like encapsulation. On the contrary, when I write in non-typed languages I really benefit from the thought patterns static typing creates, helping me to produce better code.

Not having static typing is just a free card to do what I want instead of relying on explicit (language) structure. With that said, you can still produce good code in non-statically-typed languages; it's just damn easier not to, since the language doesn't help you with it.


I think it's important to note Carmack also tried Racket after that, and even wrote a server that was in production for a while. As I recall from his Twitter feed he really likes it, especially for beginners (he got his son to make games with it) and especially for getting things done as a beginner to the language. Later he discovered Typed Racket and appreciated it; I remember something about how, with proper types at least at the interface level, he found some design flaws to fix.

Just like some people think dynamic types are limited to the subset of static types with an Any type for everything, and think JavaScript is a great example of their utility and power instead of e.g. Common Lisp, a lot of people think static types are limited to what you get in C++98 instead of e.g. Haskell. The type flame threads would be a lot more interesting if people had tried using more than the most popular representatives of each side before forming opinions...


I love Clojure and dynamic typing and actively use it in one of my commercial projects.

However, my observation as an instructor is that static typing fixes a lot of issues in day-to-day coding of average (boring) projects, especially in projects where the average caliber of the programmers is low. It is not that it prevents bugs per se, but makes the creation process much smoother for people via instant feedback that the code won't work/compile at all.


I believe it also fixes a lot of issues in day to day programming of non-boring projects where the average caliber of programmers is high.

The "me and my team are too smart and our project is too interesting to benefit from static analysis!" meme is silly.


It's less about being smart and more about having habits that fill in the gaps that a lack of static analysis leaves. This can happen through TDD or through REPL driven development. Each has their own strengths and weaknesses.


REPLs are super nice, TDD is super useful, but neither of them either preclude or obviate the advantages of static analysis. Although in practice, none of the popular statically typed languages has a standard REPL (except maybe TypeScript?) which is really a shame.

But I agree with you that it has nothing to do with smart vs. less smart.


On the other side of the coin, static typing doesn't catch a lot of things that TDD & REPL development can, although nothing prevents you from using tests with static typing.


No static typing advocates suggest that you shouldn't write unit tests - they just suggest that you can write dramatically fewer tedious but important tests that boil down to verifying type safety.


His argument that static typing introduces coupling and reduces modularity is convincing. His argument that Java's Spring framework is little more than necessary dynamism forcing its way back into a statically typed language was eye-opening to me. Certainly an interesting talk.


> His argument that static typing introduces coupling and reduces modularity is convincing.

It's absolutely bogus. In Java, C#, C and C++, I can pull in dependencies by scanning a plugin directory for JAR or DLL files. The only coupling is the interface, which in a properly factored system will be a single module containing only the related set of interfaces.

Claiming this exposes the author as never having worked on any actual medium or large system in any commonly used static language. (Maybe the more academic ecosystems like Haskell or ML require implicitly pulling in dependencies via textual imports, but this is absolutely not the only way to do it.)


Clojure is a reasonably sized system written in Java.


By the way, he doesn't get to static types until 1:06:06 of a 1:15 talk - there's a lot of other stuff.


>As the world is gradually moving to more statically-typed languages

Depends on how you see it: Python has grown a lot in the last two years, and JavaScript (pure) is still growing.

And don't discount Julia; I can foresee a good future for it.


Exactly. Static typing isn't some new technology -- languages like Pascal (which was very popular in the 1970s and 1980s) had it. Scripting languages (which tend to be dynamically typed) took off in the 1990s as a reaction to the inflexibility of statically typed languages. If static typing was really the silver bullet its current defenders think it is, why do they think scripting languages took off?


Because the static languages of the time, C++, Java, etc., had serious deficiencies that made that style of static typing a hindrance. Type inference, typeclass-style generics, and gradual typing have been huge wins for the ergonomics of static typing. Additionally, IDEs have improved immensely since the 90s, and static typing makes the experience much better.

Finally, I’d argue that the size of apps has increased a lot. Dynamic typing is great for small apps that you need to get out the door. Static typing shows its advantages in the long tail. The more people you have working on a codebase, the more static typing pays off. TypeScript, for instance, was created specifically to manage large JavaScript code bases.


Really big apps have existed since the early 70s. I'd argue that the average application size gets lower every year due to the increasing number of specialized libs.

And C++ has supported generics long before the boom in popularity for dynamic scripting languages...


I think it's interesting that Rich seems to miss out on Erlang (and Elixir) entirely, even when he gets to talking about Situated Programs and Runtime Tangibility, not even to speak of Concurrency.


I do 100% Clojure work, but am learning Elixir (and may someday use that at work as well). In any case, there's a lot to like on both sides, but there's also a dichotomy: communicate by sharing state (Clojure - atoms, channels) vs. share state by communicating (Erlang/Elixir - everything is a one-way message).

To be honest, what I really like about Elixir so far is the developer affordances: excellent REPL, super-fast startup time, and building on 30-ish years of OTP.


OTP in this case means "on time performance?" I love Clojure and your comments about Elixir intrigue me.



Elixir seems very similar to Clojure to me (the former being influenced by Clojure).


Absolutely (I moved over to Elixir from Clojure), and hence it seems like a strange blind spot.

When Rich talked about Systems programming and Runtimes, I expected Erlang/Elixir to come up as obvious examples along with Smalltalk and Common Lisp.


I'd love to read a writeup on your experience doing this.

I love Clojure, and really don't like the Elixir syntax (though I used to dislike the Clojure syntax, so...). Have you found Elixir's oddities (like calling lambdas in a different way than you call functions, lambda.(:bar) vs func(:bar)) to be annoying?


Chiming in here, having only dabbled with Elixir for a couple of months. My gripes:

- Calling lambdas with a dot is a real wart. I find it very easy to forget and it looks ugly.

- I also don't like the Rubyesque end keyword you have to provide for functions. Gets verbose. Wish Elixir were whitespace-sensitive like Python, Haskell, Elm et al.

- Pattern matching in functions is an idiom that is overused. It easily gets verbose since you have to type out the full function signature.

- A proliferation of data structures dictated by performance issues (lists vs. tuples, maps vs. keyword lists). Ugly map literal: %{}. (Personal issue I guess, I tend to find $%& etc. ugly in a programming language. It's swearing at me!)

Apart from these gripes I like Elixir a lot. A clean and pretty simple language. Protocols are used in a nice way to achieve polymorphism (similar to Clojure, I think). Good as a first FP language. Gives you access to an interesting runtime.


Okay, but at least with static typing one can have dynamic typing simply by using an all-encompassing type.

With only dynamic typing, one cannot have static typing.


I invite you to watch this talk.

Dynamic typing is about a lot more than having an "Any" type. It's about avoiding the extra code and cognitive overhead associated with the imposed structure of adding static types to a project, as well as the verbosity that static typing typically adds (including in type-inferred languages).

You can't just turn off the static typing in a static-typed language; you can turn off the static type for a specific type, yes, but that doesn't give you the experience that dynamic languages offer.


(I haven't watched the talk because nothing you said makes me expect hearing something new.)

WRT verbosity and cognitive overload: I honestly challenge this statement. In a modern statically typed language, you have to add a few type annotations here and there but in exchange you can skip most runtime checks (and sometimes nullity checks). You don't have to remember the type of a variable because the compiler can tell you.

I admit dynamically typed languages are nice for interactive exploration in the REPL but that's about it.

Too bad Typed Clojure didn't become the mainstream Clojure.


> I admit dynamically typed languages are nice for interactive exploration in the REPL but that's about it.

For Clojure programmers, "interactive exploration in the REPL" is the primary way we write programs. So, if one admits that dynamic typing is ideal for this, then making Clojure dynamic was the right choice.


No, because you usually have to maintain programs, add features later on, rewrite/refactor parts of the program. Do you write only one-off scripts?


I must emphasize that any time we "maintain programs, add features later on, rewrite/refactor parts of the program", we are doing so with a REPL. We don't just use the REPL for one-off scripts.


As someone who recently moved to Clojure, REPL based programming is a completely different experience. It isn't like using python's REPL.


The cognitive overhead is really not about type annotations. It's about program structure. If you watch the video, I think there are some good points made.


> imposed structure of adding static types to a project

I'm sure you didn't mean it this way but, for me, I think I reach for my pet dynamic or static typed language depending on whether or not I'm expecting to think about the structure of the program up front. Typing is not something I've thought of as an addition.

If I'm doing something small or I'm expecting to iterate around some ideas/play with some data, then a dynamic language lets me try things out sooner.

If I'm expecting to spend a fair amount of time on the project or I need to think hard about how I'm going to achieve stuff then thinking about the types up front is valuable. I'm likely to go for a statically typed language.

I also have a bias towards statically typed if the project is going to get large. This is because I've tended to have more serious problems with large dynamically typed codebases. But I appreciate this is a bias; there are many factors that affect maintainability.


I'd add one thing. The way we structure "logic" in subparts of systems is often all or nothing, while quite often we could check only the parameters we actually need.

F a b c d can be valid if called as F a b c d e f g, because F gets what it needs (a b c d); the rest can be ignored. It's like an implicit subclass relationship: a subclass B of A can have more features, but as long as it is an A, something depending on an A can enjoy it.

Of course this can cause issues (stack use in function application with unnecessary information, although a non-naive interpreter/compiler could prune this).

In the end we spend a lot of time circling things around for safety, when we could have something more relaxed and thus more resilient.


Yes. So how about a workflow where you write dynamically typed code, and then use a tool to convert that code (with user input) to statically typed code.

You could do this incrementally. So once you have "frozen" your code into statically typed form, you could add dynamically typed code to it, and then "freeze" that. Or you could "thaw" the whole codebase, and add dynamically typed code, etc.

> You can't just turn off the static typing in a static-typed language

I suppose one could design a language where that is possible (although you wouldn't use the full power of static typing in that case).


I think the thing is that you write code in dynamic languages that you would never think to write in typed languages because it's just too against the culture/grain. Two common situations in my job of throwing data around: dataframes and pivoting, where the type of a thing (its columns) is never going to be easily statically provable because operations modify it dynamically; and reactive/dataflow-style programming. Look at the Python MDF library: you annotate functions and it dynamically infers a dataflow graph from your code. You can do dataflow in typed languages, but normally nowhere near as elegantly.

These two things are core to what I do and drive a train through a type system.

I think that the best possible solution would be to have some way to embed proofs and individual type systems in different parts of your program: a dynamic language with pluggable proofs and types, not a single-type-system language that you sometimes go around or switch off.


How much experience do you have with clojure.spec? I see it as a way to add dependent-type-like functionality to parts of Clojure, so my impression is that it addresses your final need very well. But if you know more about it than me and disagree, I'd like to know.


> Yes. So how about a workflow where you write dynamically typed code, and then use a tool to convert that code (with user input) to statically typed code.

This describes how I do a lot of my work. I prototype quickly in Python or Lua, and I have some in-house tools that let me migrate highly constrained subsets of those languages into C, with automated Quickcheck-style checking to make sure the translated code generates the same log messages as the source code.

For me, the experience of starting out with something extremely flexible, with a REPL, easily mocked components, and tons of "kitchen sink" functionality makes me feel free to experiment quickly. Once I have something that looks like it's working, I iteratively remove reliance on dynamic features or built-in libraries.

At the end I have a Python function that looks a lot like a C function, but I got to that function a lot faster and more comfortably than I would have had I started in a pure C workflow. And from there, it's a quick build step to convert that function to a C function.

I haven't found a good common name for this pattern, but I think of it as "plastics and metals", analogous to industrial design. Even though you know a component will end up being made of steel, lots of the design questions are more easily and cheaply solved by making a similar thing in plastic. The plastic of course won't stand up to "production" load, but it usually doesn't have to do so.


> "SPJ in an excellent series of talks lists these advantages of types... the biggest thing left out of clojure..."

> "The biggest merit he says is in software maintenence, and i really disagree with just a lot of this. it's not been my experience; the biggest errors are not caught by these type systems; you need extensive testing to do real-world effectiveness checking."

I think Rich is dead wrong on types not providing good tools for software _maintenance_.

He is attempting to discredit types because they do not catch all errors, or the toughest errors; that's fine, I don't agree with people that would say types do that for you.

I think dynamism is great for speed of development and flexibility; but my impression from trying to debug and maintain Clojure code is that one has to spend time tracing back the source of errors and the shape of maps (pre-spec).

Those issues are somewhat mitigated by having a good type system, and what you lose in flexibility by going with types is a trade that I think returns much more than what you give up.


No, I think his point is valid. Some of the biggest errors are subtle failures in the implementation of business logic. Cases where what the code does and what it looks like it does fail to line up.

Static typing can be brought to bear on those (encode your business logic in the type system), but the parts that it fails to cover still need good testing. Plus, how do you test that your types correctly encode your business logic?

I'm still more in the static camp than out of it, but I can recognize that the problem is not as cut and dried as we want to believe. There are reasons this pendulum has swung back and forth over the decades, and I don't see anything to suggest those reasons have changed.


> how do you test that your types correctly encode your business logic

Types, as most commonly used, do not _directly_ encode business logic; they just provide constraints, i.e. it is not an all-or-nothing situation. You can still write unit tests.


What you have to demonstrate is that static typing catches a statistically significant amount of additional errors over plain unit tests. Nobody has been able to show this to be the case so far in practice.


Requiring statistically significant studies pretty much guarantees no provable claims can be made for any side, since that amounts to a social science laden with shaky assumptions and ambiguous modeling. Since it is a standard of proof that no one can achieve, I don't see why it's worth mentioning here.

But to go along with this for a moment, I liked your response to this paper here https://www.reddit.com/r/Clojure/comments/73q8c2/a_largescal... ;

Though I'm not sure how we're deciding what is "statistically significant" in contrast to what the researchers have decided is statistically significant. Were their calculations incorrect?

However, you claim to reveal their "real conclusion", in contrast to their Actual Conclusion, in which they do indicate that "Among functional languages, static typing is also somewhat better than dynamic typing."

Still, I do not place much stock in these studies for the aforementioned reasons.

I also think you have to be careful about where you have placed the "burden of proof". (You've placed it on requiring proof that static types provide X benefit. One can just as well turn this around and require proof that eschewing static types provides X benefit; so we cannot a priori place such demands in either direction.)


The null hypothesis has to be that both approaches are equally effective. There simply isn't any evidence to suggest otherwise.

At this point, it's premature to discuss whether static typing or dynamic typing affords benefits. The first step would be to look at a large set of real world open source projects written in different languages. If we see empirical evidence that projects written in certain types of languages consistently perform better in a particular area, such as reduction in defects, we can make a hypothesis as to why that is.

For example, if there was statistical evidence to indicate that using Haskell reduces defects, a hypothesis could be made that the Haskell type system plays a role here. That hypothesis could then be further tested, and that would tell us whether it's correct or not.

I agree that these studies are hard to do, and that they'll always be imperfect. However, that's still the best tool we have for approaching this empirically.


> The null hypothesis has to be that both approaches are equally effective. There simply isn't any evidence to suggest otherwise.

Sure, I think that is in the ballpark of an acceptable null hypothesis, for a given experiment. I'm not sure one can identify a null hypothesis that is universal among all experiments, but this captures the idea, sure.


Ehh, semantics. "How do you ensure that your constraints are correct?" is effectively the same question. Any nontrivial set of constraints will need to be validated, and validating those constraints is a pretty big challenge that the compiler by definition cannot help you with.


>"The biggest merit he says is in software maintenence, and i really disagree with just a lot of this. it's not been my experience; the biggest errors are not caught by these type systems; you need extensive testing to do real-world effectiveness checking."

I think I fully agree with Rich Hickey on this, and I have mentioned the same before here on HN. The biggest and most serious errors are not caught by the (statically checked) type systems.

I agree that simple type errors, which on a dynamic system without compile-time checks are only caught at runtime, will make you say "oh, I need to go back and correct this, damn it". But this is a minor annoyance that takes little time compared to the bugs I'm mentioning in the previous paragraph; bugs that can take days to resolve.


I would say the main point of types is to prevent uninteresting bugs. For me that's a selling point because I don't want to be distracted by those kinds of bugs.


What kind of errors do you think are not caught with static typing? Usually people who say this aren't familiar with languages that have type classes...


>What kind of errors do you think are not caught with static typing?

Citing awj's post above:

"Some of the biggest errors are subtle failures in the implementation of business logic."

Other examples:

- bumping into unknown bugs of a library you are calling
- implementation of an external (library) function behaving differently than the documentation
- various memory leak problems
- locking issues
- race conditions

etc


>"Some of the biggest errors are subtle failures in the implementation of business logic

Static typing can prevent errors in business logic.

>various memory leak problems - locking issues - race conditions

All are bugs that static typing has had tremendous success in preventing. Have you any experience with an HM-like type system?


>All are bugs that static typing has had tremendous success in preventing.

Please do elaborate, it could be a great post.

But if you mean to say that locking issues can be prevented because you can apply STM (software transactional memory) by using suitable monads, then, well, I can also have STM in a dynamic language, even applying it by wrapping code in an "atomic" context (e.g. see the STMX library for Common Lisp) so the statements contained within are performed atomically (and thus guaranteed to either work or roll back).

As for memory leaks, Haskell (if that's the HM language you had in mind) automatically manages memory, so I would guess this isn't a problem at all. However, automatic memory management is orthogonal to "static vs dynamically typed".

Thus, I'm curious; and yes, I don't have enough experience with HM-type languages, which should be the way to go if one wants to use static typing...


> I can also have STM in a dynamic language

How can you roll back failed transactions without explicitly typed effects?


Clojure tracks and rolls back the STM-bound variables according to the STM rules; it is not intended to prevent you from doing other effects inside those transactions; you are supposed to have an understanding not to do that in the transaction blocks.

It helps a bit that most Clojure data structures are immutable/persistent by default.


In other words Haskell's STM prevents bugs that Clojure's merely supposes the programmer not to have written.
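
To make the distinction concrete, here is a minimal Haskell STM sketch (assuming the stm package); the point is that the transaction body lives in the STM type, so arbitrary IO does not type-check inside it:

    import Control.Concurrent.STM (STM, TVar, atomically, newTVarIO, readTVar, writeTVar)

    -- Move money between two accounts; the whole block commits or retries as a unit.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      fromBal <- readTVar from
      toBal   <- readTVar to
      writeTVar from (fromBal - amount)
      writeTVar to   (toBal + amount)
      -- putStrLn "logging..." here would not type-check: IO actions cannot
      -- appear in STM, so no irreversible effect can sneak into a transaction
      -- that the runtime might roll back and re-run.

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 25)
      print =<< atomically (readTVar b)   -- 25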


What the two replies (at the time of writing) to this comment have missed is that maintenance and avoiding errors are not the same thing!


I submitted this link before I had watched the whole thing. As someone who has only dabbled in Clojure I think there are a lot of interesting ideas in there but found the type-system bashing pretty off-putting.

I am now watching his "Simple Made Easy" talk [1] after hearing it recommended on a few functional programming related podcasts. Again, really interesting stuff, but I encountered another cheap shot at typed functional programming ("You can't use monads for that! Hurr hurr hurr").

Given how well received these talks seem to be by people that enjoy programming with advanced type systems, I would have really expected a more balanced discussion and some acknowledgement of the trade-offs between dynamic and statically typed functional programming.

[1]: https://www.infoq.com/presentations/Simple-Made-Easy


The new Conj talk is certainly an interesting look at one man's (or one community's) view of static typing. However, as much as I admire Rich, some of the points he made don't resonate with me, particularly the one about how compile-time checks to catch minor bugs in syntax are not a particularly useful feature of static typing. I certainly disagree. As someone who writes Clojure all day long right now for a living, I am constantly dealing with runtime errors that are due to minor typos in my code that I have to track down, and this time would be greatly saved by having a compiler tell me "on line 23 you looked for :foo keyword in a map but you meant to type :foobar, so that's why that was nil" and many other similar woes.
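
A tiny sketch of that failure mode (the map and keywords are invented): a mistyped keyword lookup doesn't fail, it silently returns nil, and the error only surfaces later, far from its cause.

    (def order {:foobar 42})

    (:foo order)          ;; => nil, no error -- the typo goes unnoticed here
    (+ 1 (:foo order))    ;; => NullPointerException, far from the real cause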

I love Clojure but I really miss static type checks.

The other item in his talk I do not agree with: he says (slightly paraphrasing) "in static typing, you can pattern match on something 500 times, but if you add a case, you have to update those 500 matches to handle the new case, when really they don't care about this new case; only the new logic needs to consume this special case, so it's better for the producer to speak directly to the consumer". Well, in languages like OCaml, Swift, and Haskell, it is a feature that pattern matches must be exhaustive. This prevents bugs. In most cases, I'd expect that if I add a case to an enum, the chances are good my existing logic in pattern matches should know about it. Maybe not all of them, but a lot of them will. It's nice to have the compiler guide you to those places.

I certainly like how fast I can write programs in Clojure, and I like the minimal amount of code that makes refactoring and rewriting fairly straightforward since there is not a lot of time investment in the existing number of lines, and I like the incredible elegance of Clojure's approach to functional programming.

But I do miss having much greater compiler assistance with typos, mis-typed argument order to functions, mis-typed keyword names, etc. It would really save a lot of time.


> As someone who writes Clojure all day long right now for a living, I am constantly dealing with runtime errors that are due to minor typos in my code that I have to track down, and this time would be greatly saved by having a compiler tell me "on line 23 you looked for :foo keyword in a map but you meant to type :foobar, so that's why that was nil" and many other similar woes.

I wonder if this is because it really takes a quantum leap in one's development style to go from <insert your previous programming language> to Clojure/<insert your favourite Lisp>? As long as your environment allows for effortless evaluation of the code you're writing, you'd be getting this feedback no slower than the edit/save/compile/retry cycle.


If your typos are triggered by UI events, then you often won't see these problems until interacting with your UI (I work mainly in Clojurescript). Further, these typos may not get noticed at all for a long time if a code path is never taken. Of course, that's what unit tests are for. But writing tests takes time too. I'm not sure the trade-off is worth it: the time I spend writing those tests could be spent writing in a more statically typed language that would catch some of these things without needing tests at all. (Besides, writing tests for UI stuff is pretty hard.)

I am griping, really, because I cannot stress enough how nice it feels most of the time to write Clojurescript. But in complex projects, there is no doubt that a lot of time gets spent on things that wouldn't need to be spent if the language had even a very basic type system to back up the syntax for some things.


My team recently settled on TypeScript instead of ClojureScript, as TS is the safer bet, more familiar, more consistent with the existing project's tooling, etc. But man... I've taken a handful of files and written them in both TS and CLJS. CLJS is just so much shorter and more elegant. I sometimes think we made the wrong decision.


ClojureScript is great with Reagent or re-frame... If you write Angular, use TypeScript. If you use React, ClojureScript! It's a match made in heaven.


Yeah. I've built toy apps with re-frame, and really liked the way the code looked. But my team is pretty junior other than me, and I wasn't sure if ClojureScript would work well for us as a team. VS Code is our editor of choice, and it is just a really good environment when paired with TypeScript.

Also, my experience with Rails really has me fearful of doing any serious, big work, in a dynamic language.


Which UI library are you using? Not claiming to be an expert, but I always found it easier to test programs when logic is completely decoupled from event flow. But yeah, UI can be a PITA.

Also, isn't clojure.spec useful for describing and asserting the shape of data taken and returned by functions?


clojure.spec is useful for a lot of things, but unless you are adding spec/assert to nearly every destructuring or "get" or "get-in", it's still easy to get nils running through your data transformations because you mistyped a keyword or something.

Also, there is no good answer for asserting the value of a function passed to another function; the return values of functions can be spec'd but they are not included in an assert test.


> the return values of functions can be spec'd but they are not included in an assert test

I agree this is a shortcoming, but that is why this library exists: https://github.com/jeaye/orchestra
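
Roughly the difference, as a sketch (the function and spec below are invented), assuming Orchestra's drop-in replacement for clojure.spec.test.alpha: its instrument also checks :ret and :fn specs, not just :args.

    (require '[clojure.spec.alpha :as s]
             '[orchestra.spec.test :as st])

    (defn total [prices] (reduce + prices))

    (s/fdef total
      :args (s/cat :prices (s/coll-of number?))
      :ret  number?)

    (st/instrument)   ;; also validates :ret, unlike clojure.spec.test/instrument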


Still reading your comment, but after the first paragraph, I would kindly suggest looking at clojure.spec. It's helped me immensely in similar problems.


I suppose you'd have to use spec/assert for every instance of destructuring or "get" or "get-in" to avoid common mistakes. That's a lot of asserts everywhere.


I don't understand this comment.

I spec types, and then I spec the functions that need those types. But not all the functions, just the heavily used ones.

I usually don't instrument the spec'd functions unless I'm actively debugging.
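
Roughly that workflow, as a sketch (all names invented):

    (require '[clojure.spec.alpha :as s]
             '[clojure.spec.test.alpha :as stest])

    ;; spec the "type"...
    (s/def ::id pos-int?)
    (s/def ::email string?)
    (s/def ::user (s/keys :req-un [::id ::email]))

    ;; ...then spec a heavily used function that takes it
    (defn send-welcome! [user]
      (println "welcoming" (:email user)))

    (s/fdef send-welcome!
      :args (s/cat :user ::user))

    ;; only switched on while actively debugging
    (comment (stest/instrument `send-welcome!))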

edit:

After having a minute to think on it, do you mean to catch a typo in the use of get, get-in, etc.? I haven't tried that.

I suppose you could wrap get, get-in with a nil check or something.


> I suppose you could wrap get, get-in with a nil check or something.

Indeed, I suppose the solution would be to write wrappers around common getters that allow you to pass a spec to the query and have them automatically assert that everything is what you expect.
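
One possible sketch of such a wrapper (get-in! is an invented name, not an existing library function): a get-in that asserts its result against a spec, so a mistyped keyword fails loudly instead of quietly producing nil.

    (require '[clojure.spec.alpha :as s])

    (s/check-asserts true)   ;; s/assert is a no-op unless asserts are enabled

    (defn get-in! [m path spec]
      (s/assert spec (get-in m path)))

    (get-in! {:user {:name "Ada"}} [:user :name] string?)   ;; => "Ada"
    (get-in! {:user {:name "Ada"}} [:user :nmae] string?)   ;; throws: nil fails string?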


I really like Rich's views and find Clojure very interesting as well. That said, as a Java shop with a JavaScript frontend, nowadays the bulk of the complexity in our code base seems to accumulate in the frontend, due to the mixed skill levels of the team and the lack of opinionated structure in the language. This leads to some rather messy code that even skilled devs are afraid to touch because of the lack of feedback from the IDE that a refactor is working without loose ends.

The same problem with the same people just doesn't happen in the backend, and I attribute that to static typing and IDE maturity. We have started to adopt TypeScript and are seeing improvements already.

We just have to live with the fact not all developers working in the code are mature enough to avoid language and code organization pitfalls. Refactoring should be mostly a safe endeavor, even if only structurally.

This is the main reason I wouldn't suggest Clojure for our team.


I think it's something else, as well. Rich even mentions it in his talk: languages like Java (which I'm reading to mean "statically typed") are great at mechanical tasks. Front end programming is mostly filled with mechanical tasks: scaffold this structure/layout. Wire up these events. Make this thing blue/bold/etc. Change the state when these events happen. It's fairly predictable in structure in line-of-business apps, at least once you're following an intelligent structure, e.g. the Elm architecture.

UI/Front end dev, IMO, can gain quite a bit from static typing. I'm a huge fan of clojurescript, it's what I reach for whenever I want to work on something, but I'm super excited about ReasonML for the future of my team; we struggle with our JavaScript code base right now due to the lack of imposed structure and feedback for our weaker developers.

I love Clojure and I think it makes sense in a lot of domains; most of my back end development is "take this data, transform it according to some nebulous business rules, and poop it out to some other place," which Clojure is amazing for. It's great for applications that don't require a lot of "wiring", and require a lot of "flow". UI programming is, for the most part, wiring things up. It's not that Clojure/Script is not up to the task (I think e.g. re-frame, and the stuff being done with Fulcro, is amazing) but I definitely see the benefits of static typing more in that domain.

And like Rich said, if you're doing UI it will usually completely dominate the problem space you're working in. So pick the right tool for the job. I'm not convinced TypeScript is the way exactly, but like I said, ReasonML and Elm are super promising.


I agree that there is definitely added discipline needed to succeed well in large dynamically-typed projects. I also think that learning to build large projects in such languages is like running your marathon training high in the mountains, so when you get back to sea-level your body feels the joy. You are forced to write very clean code in Clojure if you want to easily maintain it later. That's a great skill that translates to any other language where less discipline might still get you far.


> but found the type-system bashing pretty off-putting.

Why do you think it is type system bashing?

He is justifying why he didn't add types to Clojure. In his experience they add more complexity than they are worth.

The reason he talks about it at all is there are a lot of static typing enthusiasts who talk about static typing being a game changer.

In my experience static typing is a ±2-3% productivity influencer. You get a bit better IDE experience and refactoring is easier. On the other hand, I've also found I need to refactor my C# code far more often than my Clojure code.


Gotta take the good with the bad. There's tons of knowledge and wisdom to be gained from the FP folks, but sometimes they do take cheap shots and carry the bias of their community.

I.e., it’s easy to hate and joke about things like SQL databases and JSON when you live in your own utopian fairy land where everything is Datomic and EDN.


I believe the quote you're referencing about monads is "this is meant to lull you into believing everything I say is true, because I can't use monads for that" (referring to an animation of a stick figure juggling)


Rich is always a great speaker. Sometimes he gets a little esoteric and you hear all the monad-loving neckbeards giggle in the audience. Then again, he invented a Lisp that runs on the JVM. Usually, though, he is very pragmatic and real-world. I love how often he says things like “the type checker in my compiler doesn’t matter to the users of my program” or “the perfect, most beautiful search algorithm doesn’t matter if it can’t fit into a web page and work when your user hits enter”.

So... even if you don’t care about Clojure or functional programming I’d certainly suggest listening to some of Rich’s talks. He’s a very sharp guy but his format is very friendly.


I have been to a local Clojure conference once. Most of the people who started practising Clojure were inspired by Rich Hickey's functional programming talks. I'm wondering how many other BDFLs inspire users to adopt something new just with talks!


I didn't even know about Rich Hickey's talks and "Simple Made Easy" when I decided to start learning Clojure. I came to the conclusion that I needed to learn a Lisp after learning Emacs and dabbling in writing small elisp functions. I had no idea, and was really surprised when I found out how awesome Lisp can be, so I looked at the current state of Lisp at the time. And then, after learning a bit of Clojure and Clojurescript and seeing the things people were building with it, I quit my job. I really wanted to use Clojure full-time. Never before in my life have I had this urge to learn a language and build things with it.


Are people without a degree in type systems even allowed to express an opinion on dynamic vs static? Because if their opinion is automatically viewed as inferior whenever a person with a degree chimes into a discussion, isn't dynamic typing stuck in the limbo of not having competent defenders, where the only path for a defender to become competent is to spend years of their life learning the opposite thing?

I think (as a person without a degree in type systems) that the relative success of Erlang and Clojure is a testament that the pragmatic approach works, and that is not in any way evidence that static typing is inherently bad or can't work.


One thing that was a personal shocker for me to hear, not from just anyone but from Matthias Felleisen himself: "My research group has investigated the topic for almost 3 decades, and we came to the conclusion that Hindley-Milner doesn't really work". I was really baffled. That was at ClojureWest 2016. Here's the talk itself: https://www.youtube.com/watch?v=XTl7Jn_kmio. That talk is not about Clojure - Felleisen doesn't even use Clojure. That happened before Clojure.spec was announced. I couldn't really figure out what exactly he was talking about. Not until later, when I tried Spec.


Erlang & Elixir have a very interesting approach to typing. The support for pattern matching in function definitions provides a kind of structural typing for functions. And they both embrace typespecs for type analysis through a separate program analyzer. So a lot of the benefits of static typing are there without the static types.


Clojure has destructuring, which isn't quite pattern matching, but it does help to detect wrongly shaped inputs. Also, Dialyzer is amazing, and I wonder if clojure.spec can be used like that.
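
A minimal sketch of map destructuring (the function and keys are invented): the expected shape is visible in the signature, though a wrongly shaped input comes through as nils you then have to notice.

    (defn ship! [{:keys [address carrier] :or {carrier :ups}}]
      (str "shipping via " carrier " to " address))

    (ship! {:address "10 Main St"})   ;; => "shipping via :ups to 10 Main St"
    (ship! {:adress  "10 Main St"})   ;; typo: address destructures to nil, no error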


Clojure has pattern matching too, as a library. Much of the interesting functionality of Clojure comes from libraries (core.async, clojure.spec, core.match, etc.).

The matching library is good. I wouldn't use it in a tight performance loop, but as a productivity booster it's very nice.
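
A minimal core.match sketch (the data shapes are invented), assuming the org.clojure/core.match dependency is on the classpath:

    (require '[clojure.core.match :refer [match]])

    (defn describe [resp]
      (match [resp]
        [{:status 200 :body b}] (str "ok: " b)
        [{:status 404}]         "not found"
        :else                   "something else"))

    (describe {:status 200 :body "hi"})   ;; => "ok: hi"
    (describe {:status 500})              ;; => "something else"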


Without a degree? I think that work experience in both dynamic and static languages is enough to offer a perspective. I don't have a degree in type systems but have done a lot of work in both Clojure and also C++ and Swift. I still can't decide if I prefer dynamic or static, they have nice tradeoffs.


Relative success? In what sense? As far as the jobs market is concerned, Erlang/Elixir and Clojure have failed to make an impression. OK, Walmart, Bleacher Report, etc., but search Indeed.com by job title and you'll get what I mean.


That's why it's called "relative", isn't it?

Erlang and Clojure are undoubtedly a success for me in terms of productivity.

Erlang has undoubtedly been a success in the telecom industry since the '90s, where extreme reliability is a basic requirement. Isn't static typing promoted as a path to achieving reliability "once it compiles"? Go figure.

IMO, the fact that the majority of developers have been indoctrinated into a false OOP paradigm is an unfortunate historical curiosity rather than an indicator of success.


According to the Stack Overflow survey, Clojure developers are the best paid in the industry. What miserable failures they are.


When you say degree are you specifically referring to an academic certificate? Honestly, real world experience and decades of software engineering trump anything you can get from a university.


I agree, but not everybody does, because I find this sentiment repeated in such debates in different forms. For instance: https://i.warosu.org/data/g/img/0504/16/1442936748844.png


I understand that Rich Hickey is a fantastic developer, a productivity multiplier, and an industry leader, but I have a few issues with his portrayal of types.

In this talk he has repeatedly cherry-picked the most negative examples from every type system, while ignoring the more positive, simple, and elegant type system features. E.g., he talks about languages which represent product types as `int * int * string * string * ...`, i.e. as nameless clumps of data, while ignoring that those same languages usually support much richer record types which are of course product types with field labels.

Then he talks about Java types being non-composable while ignoring generics and row polymorphism in other languages. He talks about Clojure maps being powerful while ignoring the same maps with the same power in static languages. He talks about type system complexity while ignoring the mental burden of keeping the types in your head in a totally dynamic system like Clojure. Ironically, he even talks about how you can handle five to seven arguments mentally before getting lost, while ignoring that in a dynamically-typed system you're left to your own devices handling much higher cognitive loads.

Another beef I have here is that he repeatedly talks about pattern-matching in 500 different places and having to go and update them all, while ignoring the fact that _no one does that;_ we actually write modular code with data structures kept as abstract as possible and try to provide powerful, general-purpose functions which manipulate them, instead of exposing all the variant cases for raw pattern-matching. PM is great, but it's still important to limit its use for scalability reasons.

I don't know to what extent he has explored the type systems of ML-family languages. But he certainly does not present them accurately here. I would love to see him just do a debate-style talk with someone really well-versed in type theory as well as practical systems, say, Yaron Minsky of Jane Street.


Rich would probably do very well in a debate-style talk. The art of argumentation is very subtle. If you think there'd be a blowout that entirely discredits the dynamic typing camp, that would rather be like expecting a debate that completely invalidates one side of the Republican/Democrat debate. There are simply core values each side holds that are very hard to persuade someone out of. Mostly, what people do is "talk to their base/tribe", and then snipe the other side with cheap shots.


I'm not even looking for a winner-take-all argument. I just want someone to be able to immediately correct Rich when he says something wacky about static typing. E.g.,

'You don't have labelled arguments!'

'You have had labelled arguments since Standard ML, i.e. since at least the '80s, e.g.:

    $ poly
    Poly/ML 5.7 Release
    > fun add { num1, num2 } = num1 + num2;
    val add = fn: {num1: int, num2: int} -> int
    > add { num1 = 1, num2 = 0 };
    val it = 1: int


Yeah. He's aware of that. I think he's generally talking to Java crowds, so the static typing experience of the general programmer is pretty bad.

That said, I liked this talk and agreed with much of it.


There are options on the JVM outside of Java, from Kotlin to Scala to Whiley to Eta, or even OCamlJava which might be nearing a release. An alternative to dismissing static typing entirely based on Java, is looking at other languages which have less verbose, more elegant type systems on the JVM. RH is ignoring that aspect completely. Understandable, based on his worldview and experience, but not very accurate.


> wacky about static typing

No more wacky than the claims people make about dynamic types, e.g.

"You can't have refactoring with dynamic types"

Refactoring was invented in Smalltalk, etc.


Yeah, but there's 'people' i.e. randos on the internet, and then there's Rich Hickey giving a keynote at a major conference. It deserves a better standard of accuracy.


I don't think he has explored Haskell's type system that deeply, since he mentioned Haskell as a complex language along with Java and C++. Haskell has a very simple core.


I think he's familiar enough; you also don't want to mistake the trees for the forest.


Can we please get this man a Wikipedia page now?

https://news.ycombinator.com/item?id=5619680


Wow, that man Artem Karibov quoted in the link has some problems.


Any transcripts yet?


I always enjoy Rich's and Cognitect's increasingly futile attempts to bait and switch the Clojure community into using his expensive proprietary database, Datomic.

I love Clojure, it's a beautiful language, but its march has stalled, perhaps even reversed, and I lay the blame for that squarely on the anti-community practices of Cognitect, especially surrounding Datomic.


> its march has stalled

Clojure, being a Lisp, has probably exceeded and overachieved its goals when it comes to popularity. Sadly, probably no Lisp will ever become mainstream.

And for the same reasons that Vim and Emacs will never become more popular than IntelliJ and Sublime: people tend to choose something easy to start with, an out-of-the-box thing, not something with a small, simple core yet an extremely extensible and rich ecosystem. Most of the innovation in Clojure-space happens outside of Cognitect, driven mostly by individuals.

Look at Clojurescript. It hasn't stalled; au contraire, with every release it's getting better. Pick any large Clojurescript project - PrecursorApp or Circle CI - and look under the hood. Compare the Re-frame and Fulcro projects with other popular Javascript frameworks. Check out Re-Natal, Lumo and Planck.

Right now Clojurescript is probably the most pragmatic choice if you want to build web apps using a functional language. GHCJS and Purescript (which appeared a few years before Clojurescript) still haven't quite reached popularity. Elm, which is 5 years old (same as Clojurescript), still feels a bit experimental. ScalaJS - I don't even want to talk about Scala. Fable/FunScript/WebSharper? Microsoft has failed to convince even its own herd to start using F#; largely, C# is still in use. And those guys prefer Typescript - not really functional. Only ReasonML may someday become a true competitor to Clojurescript, simply because: a) Facebook is behind it; b) it ain't a Lisp.


Is anybody forcing you to use Datomic? No.

Is Cognitect doing something to Clojure that is beneficial exclusively for Datomic? No.

Has Cognitect banned you from making pull requests that improve Clojure? No.

Has Cognitect banned you from forking Clojure and fixing whatever you think they've done with some evil intent? No.

So wtf are you on about?


> Has Cognitect banned you from forking Clojure and fixing whatever you think they've done with some evil intent? No.

I don't understand this objection. The value in a free software project, and especially of a programming language, is primarily in its community and the expectation of a future around the project, not in the existence of some code somewhere. It's extraordinarily hard to build a community around a fork, and if you need to fork to do reasonable things, that's a legitimate criticism of the people running the original community.

(That said, if you only need to fork to do unreasonable things, that's good community management. I don't pay nearly enough attention to Clojure to know what's actually happening - but I have seen both of these cases in other free software communities.)


> It's extraordinarily hard to build a community around a fork

It follows quite naturally that the reasoning behind a fork doesn't have enough community support, which only means one thing: you're wrong about the Clojure community's opinion on the state of things around Clojure.


I'd believe that if it were ordinarily hard to fork. But it's disproportionately hard to fork because of network effects. It's true that if the community approves of the current direction, a fork won't succeed. But that doesn't mean that if a fork doesn't succeed, it implies the community approves.


Adding to that, there are a few forks of popular projects that were quite successful: io.js, libreoffice, mariadb.


Blink, Ubuntu, neovim, LibreSSL, Plex, XOrg, etc, etc.

If you're dedicated to it and correct about your choices, forking doesn't seem to hamper anything.


"It's extraordinarily hard to build a community around a fork"

This may sound snarky, but I mean it earnestly: It's extraordinarily hard to build a community. Period.


AFAIK Clojure doesn’t accept pull requests.


"send a patch" sounds so 90s, but you're right.


People that give you programming languages for free need to pay their bills in some way.


It could be argued they'd make more money from consulting with much wider adoption of a free Datomic. The problem in the Clojure community is much wider: it's not interested in wider adoption, competing for mindshare, or rallying around a web framework to attract new users. I think it's partly a Lisp thing.


You mean that they want to make some money?


Do you imply that, in general, the (perfectly valid) desire to make money would justify bad behaviour towards a community?

In general, if you have a community around your company, you can treat it properly, or you can do it Oracle-style (pushing MySQL, OpenOffice, OpenSolaris, etc. into forks or death).

So the question is only whether Cognitect is good or bad for the community, not whether they make money or not. Making money is not a good excuse for bad behaviour. And while we are at it, doing voluntary work isn't a good excuse for bad behaviour either, but that's another topic.

Having said that, I don't see where Cognitect is bad for the community. Rather, they seem to interact very well with the community.


> Do you imply that in general, the (perfectly valid) desire to make money would have justified bad behaviour towards a community?

But the OP made insinuations that the act of selling software was de facto negative to the community. It is only necessary to point out that making money is not automatically harmful to the community.



