A Year in Clojure (taylorwood.io)
313 points by tosh 6 months ago | hide | past | web | favorite | 156 comments

> I wondered why there’s a reduce in clojure.core but no fold? (It’s just a different arity of reduce that’s missing the initial state.)

There is literally an arity of reduce which skips the initial state. Check out `(doc reduce)`.
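A quick illustration at the REPL:

    ;; two-arity reduce: no init value, the first element seeds the fold
    (reduce + [1 2 3 4])     ;=> 10
    ;; three-arity reduce: explicit init value
    (reduce + 100 [1 2 3 4]) ;=> 110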

> No tuple type? Vectors are tuples. “What if one of the vectors is of a different length?” I gasped as I clutched my pearl monads.

`s/tuple` is the answer to that question.
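For example, a small sketch with clojure.spec (the `::point` spec name is made up here):

    (require '[clojure.spec.alpha :as s])

    (s/def ::point (s/tuple double? double?))
    (s/valid? ::point [1.5 2.5]) ;=> true
    (s/valid? ::point [1.5])     ;=> false (wrong length)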

> This ties into another concept that was new to me: applying a sequence of arguments to a function.

You're not applying the sequence to the fn. This is "function application", so you're applying the fn to the sequence. In Clojure, you have some data, you apply a fn to it, then you have some transformed data. Rinse and repeat. Also, check out `(doc apply)`.
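For instance:

    ;; apply the fn + to a sequence of arguments
    (apply + [1 2 3])       ;=> 6, same as (+ 1 2 3)
    ;; leading args before the final sequence are also allowed
    (apply str "id-" [1 2]) ;=> "id-12"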

> I’ve read/heard the Clojure designers encourage the use of namespaced keywords, but I rarely encounter them outside clojure.spec definitions.

As other people have done in the comments here, I make a case for namespaced keywords here [1].

> I can’t talk about data in Clojure without talking about clojure.spec!

Please be sure to check out Clojure spec's instrumentation, along with Orchestra [2] to further improve the experience. My team uses `defn-spec` for basically every fn in both our front-end and back-end code; all of the data is spec'd, with shared specs between front-end and back-end, and we get full instrumentation during development and testing. Thanks to macros, that can disappear completely in production.
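Roughly, from memory (the fn and specs here are made up; see the Orchestra README for the exact macro shape):

    (require '[orchestra.core :refer [defn-spec]]
             '[orchestra.spec.test :as st])

    ;; return spec after the name, each arg paired with its spec
    (defn-spec tag-id string?
      [prefix string?, n int?]
      (str prefix n))

    (st/instrument) ;; during dev/test; omit in production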

1: https://blog.jeaye.com/2017/10/31/clojure-keywords/ 2: https://github.com/jeaye/orchestra

The value of Orchestra cannot be overstated. You'll quickly find the holes in clojure.spec, have a "wtf?" moment, feel intense disappointment, then discover Orchestra and never look back.

I liked Clojure in 2014/2015 but like Scala and Groovy, interest has fallen way off. Seems like the JVM language wars are over. I still think everyone should learn LISP. I just don't think Clojure was that awesome.

Java 8 isn't awesome, but it was good enough to kill Scala and Groovy.

Can you point to the problems you had with spec? Isn't it a burden to define specs for everything?

As to namespaced keywords, I was skeptical at first, and all my spec definitions used unqualified keywords. But as usual, if Rich does things a certain way, there are good reasons behind it, even if they don't immediately seem apparent. As my code base has grown, I found that namespaced keywords make a lot of sense, not just because of clojure.spec. They give you a convenient way of naming things uniquely, which lets you have data structures that combine several kinds of data without worrying about naming clashes.
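A tiny made-up example of what that looks like in practice:

    ;; two notions of "name" coexist in one map without clashing
    (def order
      {:customer/name "Acme Corp"
       :product/name  "Widget"
       :product/price 9.99})

    (:customer/name order) ;=> "Acme Corp"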

You don't notice their utility in code examples, or small projects, but in large projects they are very useful.

From what I have seen in this article, and the HN thread, and in general every time there is a discussion about Clojure or dynamically typed languages, people are only speaking about tiny examples and even more tiny projects. I can’t really imagine myself working on a million line project written in Clojure or another language without any type safety. Happy to be wrong, but I’d like to hear the experiences of people actually working on this kind of big project rather than pointing to toy projects of less than 10k lines maintained by tens of people... From my experience dynamic typing can be awesome for small prototypes, but to be production ready you need to spend much more energy on testing.

More on the topic I can say that I don’t really see anything compelling that you can’t easily do in F# with a lot more safety in this article...

> I can’t really imagine myself working on a million line project

Any project of this size is too big and is better off being broken up.

You need tests no matter what kind of typing you are using.

The best thing about Clojure is that it is immutable by default. Start writing pure functions that transform your data step by step until you send it on its way with a side effect. I don’t miss types much at all - immutable maps, lists, sets and pure functions get the job done and it is fun to program.
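A sketch of that style, with made-up data:

    (require '[clojure.string :as str])

    ;; pure steps over immutable data, side effect at the very end
    (->> [{:name "ann" :score 7} {:name "bob" :score 3}]
         (filter #(> (:score %) 5))
         (map (comp str/upper-case :name))
         (run! println)) ;; the side effect: prints ANN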

Static typing fans hardly ever acknowledge how difficult the types make it to read the code. Too much focus on that one time when someone made a typo and there was no unit test.

This has been argued ad infinitum, nobody is changing their mind. WTF am I doing here?

How is this F# code difficult to read?

    let add n1 n2 = n1 + n2
Where do you spend time reading the types? Yet it’s fully typed, and if you pass an int and a string, or some type for which the + operator is not defined, you’ll get a compile-time error.

This. "Static typing fans hardly ever acknowledge how difficult the types make it to read the code. Too much focus on that one time when someone made a typo and there was no unit test." Well said.

Same question to you, what do you find difficult to read in the code that I posted in the previous comment?

And, point well made about not needing to know the types but still getting compile assistance because of the types.

Hey, sorry, I meant to post my comment under the parent comment of yours. I didn't mean to state your code was difficult to read, I was thinking of all the Java code I've seen over the years...

>Static typing fans hardly ever acknowledge how difficult the types make it to read the code

I think types make code much more understandable. I like them to be explicit so there's no mental overhead of thinking "what type might this be" - and the larger the codebase, the more this pays off.

Million loc systems are a symptom of cultural disease, they are not something to celebrate. Take a step back and seek the root cause of the problem. The whole point of Java was to let companies throw bodies at a problem and not have the result segfault or get hacked. It succeeded, and now all those bodies are generating all those lines, just as intended. Clojure shows us, I think, a different way. One in which a small team with a small amount of money can out perform a billionaire's body shop.

Not to be disagreeable (because I think I agree with your main point, even though I've never used Clojure, or any similar language, in anything but toy programs), but we were writing million line projects in C++ long before Java came out ;-)

There were really 2 main points to java (as I understand it). The first was to have a statically typed language that had a relatively fast runtime that could easily be ported to many platforms. Indeed, the killer application for Java, IMHO, was the idea of implementing that run time in hardware (and it's a shame that Sun never really pushed that angle very hard).

The other main point of Java was to codify a set of "best practices" as a kind of "defence against the bad programmers". The intent was to reduce the difficulties of hiring programmers by making the programming language less expressive (compared to C++, for instance). IMHO, this largely failed as programmers are creative and will always find ways to be expressive. So on one hand we got C++ template crack and on the other hand we had abstract factory factories.

Basically, though, I don't think there was ever a real intent to allow Java (the language) to make it easy to scale development to larger teams, or larger code bases. The success that Java had in getting into large enterprises just meant that the normal Java user was interested in that, because it's the environment that they were working in.

The idea is attractive to enterprise people because they already have a lot of people and they want to get a lot done in a short period of time. Most managers (and many influential programmers) do not understand the loss of productivity that accompanies communication overhead from having larger teams. The large code bases are the necessary result of programmers trying to isolate themselves from the hundreds or even thousands of other programmers that they are working with. By building complex boundaries, they can reduce the communication complexity at the cost of increasing the program complexity. This is usually a good trade off if you are stuck having to work with an insane number of people.

BTW, in case you think that it is only non-technical people who fall into this trap, I will provocatively mention 2 words: micro services. The way micro services are usually designed and implemented, they are usually the very definition of premature subsystem decomposition. This premature decomposition is chosen so as to allow a few people to "lock in" design decisions and reduce the need to communicate with a whole bunch of people who might ruin their architectural vision. Basically, same problem, same result. Technical people (who didn't live through the CORBA years) drink it up like the yummy, yummy Kool-Aid it is, though ;-)

From my experience, Java's killer feature was and always has been that you can throw significantly less capable programmers into the pool and not have such obviously catastrophic outcomes as with C or C++. This mostly comes down to taking memory management off the table, although the smaller set of features in Java (compared to C++) is surely also a factor. I suspect that platform portability is a bit like database portability - it sounds nice but in practice most projects will have committed to a platform up front.

Technical people understand that a technology that can drastically reduce LOC is a game-changer: you can hire far fewer programmers - these need to be better than your bucket-brigade Java folks of course, but that's OK because you're saving money on the quantity of them; you need fewer managers to coordinate everyone; the product itself will be better because it's had fewer fingers in the pie, and internal comms during development will have been better because the team was so much smaller. No, the big problem with this is that management in big corps simply do not believe that these sorts of savings are possible - it just seems too good to be true, and they are too risk-averse to rejig their organisations to the required extent; they also fear being beholden to a small priesthood of very clever technical people from which there is no way back - eg we can't ship our Clojure codebase back offshore to some commodity programmers because they won't be able to make head or tail of it.

In the 80s people were already developing multi-100kloc programs in Lisp. The operating system of the MIT Lisp Machine grew to something like 500kloc Lisp. Later versions (end 80s) of it were more like 1.5MLOC of Lisp.

Common Lisp was designed to be a successor to such a Lisp and was supposed to support complex and/or large applications. Code density can be a bit higher than C++'s (say, by a factor of 3) for larger programs.

In the late 80s / early 90s Evans&Sutherland developed a high-end 3d design software in mostly Lisp - the Conceptual Design and Rendering System (CDRS) - many of the cars at that time were designed with it (from Ford, Jaguar, German brands, ...). A designer work environment cost more than a million USD per seat - including a high-end graphics system from E&S. The 3d software was written in at least 150kloc C++ and 400kloc Lisp.

Other CAD systems written in Lisp are said to have been much larger, like iCAD or the design system from PTC (said years ago to be 7MLOC of Lisp). iCAD for example was used at Boeing for the development of turbines, wings, internal stuff - it's even said that there was a complete model of some Boeing in iCAD - which used a kind of object-oriented language on top of Lisp for the parametric construction of technical things. Similar applications and languages still exist - for example GenDL: https://gitlab.common-lisp.net/gendl/gendl

There are a bunch of things which make this possible: Lisp compilers with various compile-time warnings, extensive runtime error handling facilities, object-oriented programming, good development environments, ...

> There are a bunch of things which makes this possible: Lisp compilers with various compile-time warnings, extensive runtime error handling facilities, object-oriented programming, good development environments, ...

I guess one issue might be that most developers that get introduced to Lisp, do so in primitive ways.

So they never get to learn about that tooling.

Even with Clojure, not everyone is willing to pay for Cursive.

I try not to get too deep into these kinds of discussions, so I'll just say that first, I'm pretty happy with a fairly large system written in Clojure and ClojureScript. Second, given Clojure's expressiveness, I find it difficult to imagine a system with a million lines of code.

Not a personal experience, but more of an easy to remember existence proof: https://www.ptc.com/en/products/cad/elements-direct/modeling is made of several million lines of Common Lisp and has been developed over decades.

I find the counter-arguments to your discomfort in this thread very strange. Some big problems just require big solutions. A language might indeed be best suited for smaller problems, but that doesn't mean that bigger problems are a "cultural disease". For Clojure in particular, I recall Hickey saying something to the extent that for the problems where Java is successful, Clojure should be successful too. Maybe this rhetoric has changed, but I think it's also related to people's experience that what takes 750k lines of Java takes under 1000 in Clojure, for some problems, so you get this bias that 750k lines of Clojure is "wrong". No, it's just that the equivalent Java would be a factor of 10-100 bigger. If a language can support modularity, it should be able to scale.

More at addressing your discomfort, I'd be curious to know what you mean by "language without any type safety". Dynamic typing isn't the same thing as untyped, and some dynamic languages (like Lisp) support a notion of compile time that lets you have type warnings, optimization, etc. that you might be used to from static compilers... as far as testing goes, the big testing pushes in industry have come from people dealing with giant C++ and Java projects more than the dynamic language crowds, because more energy for testing is needed in general. Whether dynamic languages need more or less of it could be interesting, but because of the variations in what we call "static languages" and what we call "dynamic languages" it might not be useful to compare things so broadly. Instead we could compare language against language, and take into account usual dev practices. Having a REPL to leverage while developing, testing, and debugging changes things in a big way, for instance.

I really, really don’t think that Clojure is 750 times more dense than Java. Probably F# can approach between 5 and 10 times the density of C# under the right conditions. The claim of 750 times less code is simply false in my opinion. Any strong proof of your extremely strong declaration?

You're right that Clojure isn't 750 times more dense than Java in even the 99% case, I'm not trying to claim that. I probably should have used something more conservative to make my point (that dynamic languages will usually result in significantly less code for the same problems and that might itself cause suspicion of large projects entirely by dynamic language fans rather than take on the question directly of can their language scale to projects millions of lines big). The particular project I was thinking of in this case is Hibernate ORM, which somehow managed to weigh around 750k lines last I remember. It's a bit of a facetious comparison, I'll admit, to point to a few Clojure projects I know of that seem to solve the same core problem better in around 1.5k lines. Even if you agreed that they did, I think we'd both agree it was because of language features apart from dynamic typing (your own experience of two static languages differing by as much as 10x implies as much). When the problem domain is related to symbolic computing and metaprogramming, languages that support symbols and macros are going to have a huge advantage, especially when you start generating code you would have written by hand otherwise.

I haven't worked on any significant Clojure codebases but I have worked in large codebases in other dynamic languages and it's a nightmare. Maintaining a Rails app, for example, after it gets beyond a certain size is just punishing.

The problem with Rails isn’t so much its dynamic nature as much as it is how everything is implicit, and it’s really hard to track down where certain functionality is happening. I doubt Clojure would be so bad, as it’s explicit. Immutability also helps.

It's a nightmare to modify a moderately large Rails app.

The article does address this, though maybe not authoritatively, under the "Refactoring" header.

> I’ve felt this pain a few times and the only conclusions I’ve drawn are to use clojure.spec for non-trivial code, especially around domain boundaries, and try even harder to solve very small problems in ways that can compose to solve larger ones.

This seems reasonable to me. A lot of where refactoring gets painful is when you have bits of code being reused across wide swaths of the program. Keeping modules small limits the opportunity for that to happen, and makes the scope of impact for a change easier to conceptualize without leaning on a compiler.

The problem I see there is, it seems to be exceedingly rare that you can get a whole team of developers to be so disciplined - and all it takes is one lapse in discipline for it all to fall apart.

The only issue I thought about would be when using JSON, as I guess this would lose the namespace info. Did you have issues with that?

Here's some valid json that might be useful:

    {"signup/username": "John",
     "signup/password": "pg4pres"}
Then, because you can pass in a function to get called on the keys, just check for a regex like (.+)/(.+) and turn it into a keyword. Here I'll write a function like that real quick using a repl with a nice ux: https://s3-us-east-2.amazonaws.com/photoblobs/Screen_Recordi...
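A sketch of such a key fn (assuming Cheshire, whose parse-string accepts a key fn as its second argument):

    (require '[cheshire.core :as json])

    (defn ns-keyword [k]
      (if-let [[_ kw-ns kw-name] (re-matches #"(.+)/(.+)" k)]
        (keyword kw-ns kw-name)
        (keyword k)))

    (json/parse-string "{\"signup/username\": \"John\"}" ns-keyword)
    ;=> {:signup/username "John"}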

Sorta! Cheshire gives you a key-fn entry point. For small JSON objects you may just want the same namespace and then that is sufficient. Otherwise you can look up the right namespace in that fn or walk that tree (I’ve used specter for this) to correctly annotate.

I keep my data in a JSON database (RethinkDB). This is not a problem at all: namespaced keywords are converted to strings and the conversion is well-defined in both directions.

> "With function specs, I could assert the inputs and outputs to specific functions at runtime; not quite as reassuring as compile-time but better than nothing!"

For 10 years I've seen many developers who have only really used statically typed languages like C++ and Java, now finding the "joy" of dynamically typed languages and all the "freedom" they give you, only to realize within a year or two that, wait a second, verifying those types at compile-time was actually pretty useful. I've also seen mass migrations away from languages like Ruby, Python and Clojure towards OCaml, Haskell, and F#, because people really enjoyed the functional programming aspects but wanted their static typing back. After writing the 10,000th unit test that only exists to ensure that you spelled some function/method/hash-key correctly, it gets a little tiresome. Personally I've settled on TypeScript, which has nice IDE integration in VS Code so that it validates your types live while you code, while still giving you the flexibility of a dynamically typed language (it's all still JS at the end of the day).

> After writing the 10,000th unit test that only exists to ensure that you spelled some function/method/hash-key correctly, it gets a little tiresome.

One of the things I liked about Lisp when I first saw it was that, since it uses kebab-case and (usually) full words, I could just turn on my standard English spell-checker. After all, if "Programs must be written for people to read, and only incidentally for machines to execute", then it makes sense that I should be able to run a normal spell-checker on them. The only reason that was infeasible in languages like C is because they used (for both technical and cultural reasons) names like "memcpy".

Whatever field I'm working in already has its own vocabulary. I don't need to create a new <domain>-in-<proglang> vocabulary for a program about it. Or if I do invent an abbreviation that's so useful that I want to use it in my source code, I'll probably also want to use it in my documentation, webpage, emails, etc.

Most large programs that I've worked on, in other languages, have at least a couple misspellings in func/var names, which even Haskell's type system won't catch. "Referer" was clearly not written by someone with their spell-checker turned on for source code!

Once a month, I find myself wishing Clojure had static typing, before dramatically rediscovering why not having it can be such a relief. The reasons for this are expressed much more articulately than I can here: https://lispcast.com/clojure-and-types/ ... Personally, for the kind of work that I do, I've found clojure's compromises make my life on net easier, and my code, on net, more reasonable.

I'm finding that it really depends heavily on the domain I'm working in. Roughly speaking, if I'm dealing with "business problems" type stuff - grab data, report on it, keep a paper trail, etc. - I want a very loose and flexible programming environment with a REPL and dynamic typing and all that. Python, for example, treats me very well when I'm wearing my "data scientist" hat.

OTOH, when I'm doing more "building infrastructure" type work - implementing a data store, writing a compiler or interpreter, stuff like that - I start getting more interested in rigidity and formality. Static languages treat me well in these situations. The more rigid, the better - I'll prefer Scala to Java, for example, specifically because it gives me more tools in the type safety department.

I pretty much agree with this. When I can get away with it performance-wise (and I almost always can for personal projects), this is a big part of why I love using Perl 6.

I recently rolled my own (very tiny and specific) build/deployment system for a project. Initial prototyping was a breeze, but once I had things more or less nailed down, I started adding types and, for example, was able to leverage the type system to make sure it can only be deployed listening on IP addresses in the private range. The next step is to rig up some sort of Zerotier integration to ensure I only make it available on my private networks.

Yes, I wish I had clarified that I meant that Clojure beats everything else for me for what you describe as "business problems" and also, I've found it quite enjoyable for writing simple games. I'm afraid I'm not well qualified to comment on what "infrastructure" type work would be like, and what is best for it.

I think nowadays the comparison is vs gradual typing.

I can't say I've personally seen any evidence of "mass migrations" of this sort. However, I can personally say that after 5 years of working with Clojure and with several large Clojure projects under my belt now, I'm starting to realize that I'm losing confidence in using Clojure for large projects and the feeling of how _maintainable_ that project will be in 2-3 years or longer. I think Clojure hits a sweet spot (at least for _me_ personally) for small-to-medium sized projects though, and I don't hesitate to use it in those cases.

The thing I've _consistently_ seen with every large Clojure project I've been a part of is that as the project grows, it becomes harder to make sense of the types used across the project. On the last couple of these projects I've been on, Schema was used to annotate many types, but I've never found this to be quite enough and I still remain firmly convinced that gradual typing just doesn't work well enough generally. I think that with a _very_ highly disciplined team of experienced developers it could probably work well. But we don't all get the opportunity to work on such teams. What I've personally seen is developers are all too willing to either add a partial type spec/annotation (with probably some s/Any's lazily thrown in where there should most definitely _not_ be), or just flat out not adding any at all ("Why would I need it here? This is very simple code and it's obvious what is happening here!" ... sure, maybe it is to you now... what about in 6 months? Or what about to one of the other developers on the team?).

When discussions about static vs dynamic typing come up (which is almost always a fruitless argument in my experience), I've noticed that static typing proponents tend to get overly focused on the idea of "correctness". This has always disappointed me. Correctness is certainly an important benefit (though I think it tends to get exaggerated a bit much); however, to me the idea of documentation in the form of a type definition is _much_ more valuable over the longer term, especially so for large projects. Developers are usually quite lazy, especially when it comes to documentation, and often when looking at a piece of code in a large code base, the only documentation about the values being passed to a function is going to be the type definitions (aside from reading the code itself of course).

> every large Clojure project I've been a part of is that as the project grows, it becomes harder to make sense of the types used across the project.

Are there languages you've seen that don't show an ugly side as projects grow to a large size?

You see these migrations as companies grow past certain engineer counts if they start out with dynamic languages. Sometimes they might even modify the language itself to bolt on types on it.

Ex: Facebook and adding types with Hack & Flow. Dropbox's work to add types to python. Uber used to be a python and nodejs shop, and now new stuff is done in golang and java. Many nodejs shops go from pure js to typescript or flow.

Gradual typing is useful for companies that don't want to immediately migrate everything at once. If you have the right policy of 'type everything you touch', enforced with a linter, a codebase gets migrated relatively quickly.

My question is why anybody thinks that writing a project in a monolithic style is a good idea. Any large project can be broken down into small independent modules, and there are tons of benefits to doing that aside from being able to track types in it.

Independent modules are easier to reason about, they're reusable, and they can be maintained on independent schedules. The biggest advantage though is that this approach allows you to split teams up into smaller groups responsible for the individual modules. One of the biggest problems in maintaining large projects is communication overhead, and typically it's very difficult for teams with over 5-6 people to be productive.

So, if your project outgrows limits of dynamic typing, chances are it should be split up instead of using static typing as a crutch.

> using Clojure for large projects and the feeling of how _maintainable_ that project will be in 2-3 years or longer

Maintainable in what sense? And large in what sense?

I typically work on 1-10 million lines of Java code solutions split up into maybe 20-50 separate components.

These 20-50 separate components have untyped boundaries between them.

Set up correctly, these untyped application boundaries rarely cause problems - of course you have to put a bit more thought into architecture, documentation, and verification, but it’s not particularly onerous and we have been doing it successfully for decades now.

I’d say anecdotally errors caused by this lack of typing are well under 1% of total errors.

Every programming environment has downsides and Clojure is no exception, but I still think that using Java instead of Clojure is not better, simply because you have 3-10x the source code to maintain and the number of bugs correlates with the number of source code lines. On the top, I quite often see null pointer exceptions in Java when in theory that should never happen because we are statically typed.

Pick a more modern language than 20 year old Java, such as Kotlin, and you get the best of both worlds: static typing and type inference.

Yeah, it has been on my list to check out for a while. If it has the same interop with Java as Clojure then I will give it a try.

I'm not sure why you're contrasting Clojure and Java, but nowhere was I recommending using Java over Clojure (nor would I ever offer such a recommendation). Certainly there are much, much better choices for a statically typed language out there.

People usually don't just choose languages: the VM, the libraries, and dependency management are all chosen together. Btw, this is why Rich started Clojure on the JVM. If you stay in JVM land, I quite often hear that developers do not want to try Clojure because of the lack of types (which refers to static typing in this context). Good luck implementing a Hadoop project in OCaml or Haskell.

It seems like you're jumping to conclusions way too quickly here.

Instead of that, it would be better to respond directly to the specific points the OP made, rather than move the argument to a bunch of new topics.

OP did not say anything about other languages, be it Java or otherwise. OP is just describing experience with closure.

In order to understand clojure better, it would be good to not shut out feedback.

This is exactly the point: you can't just choose languages because of type systems; there is a lot more to these decisions. OP has experience with Clojure (which has closures), but these experiences do not exist in a vacuum; there are tradeoffs that you take with every language.

>On the top, I quite often see null pointer exceptions in Java when in theory that should never happen because we are statically typed.

In this sense Java is actually dynamically typed: there's no way of knowing at compile time whether an object is actually an instance of that object or is null. In languages like Kotlin, Haskell and OCaml, which specify in the type system whether or not something can be null and force the user to check before using something that can be null, null pointer errors don't happen.

> In this sense Java is actually dynamically typed: there's no way of knowing at compile time whether an object is actually an instance of that object or is null.

That is an extremely unusual definition of dynamic typing. If I have an Integer it might be null but it won’t be a BeanFactory (unless someone did a reflective call or wrote some bytecode or otherwise subverted the type checker).

Well, in my ideal world nulls are not part of Integers or Floats or BeanFactories.

Static type systems have scopes in how much they type. Some newer ones add nullable annotations, like kotlin or swift, others don't have it yet.

I hope java gets native support for @nullable, but for now we have stuff like error prone and @nullable annotations.

Clojure goes to great lengths in the design-space to craft the puzzle pieces – immutability, lambdas, simplicity, blessed primitives for state and identity – in such a way that Clojure's idioms are centered on clarity of thought instead of guard rails. This is why Clojure works (for people willing to go deep) in many places where Js/Ruby/Python don't.

> finding the "joy" of dynamically typed languages and all the "freedom" they give you ... verifying those types at compile-time was actually pretty useful

Indeed one of the "joys" is that one can apply that verification a la carte, and possibly with different approaches [0][1]. Similarly, one may desire to dispatch on type [2], but might sometimes want arbitrary dispatch [3].

> After writing the 10,000th unit test that only exists to ensure that you spelled some function/method/hash-key correctly

This too is solvable with a la carte tooling [4].

Everything has trade-offs. Static type systems give a fixed set of benefits with a fixed set of costs. Some folks prefer to use their experience and judgement to choose when, where, and how to pay the associated costs (and that freedom of choice itself has a cost). It's okay to have different preferences.

[0] https://github.com/plumatic/schema

[1] https://clojure.org/guides/spec#_entity_maps

[2] https://clojure.org/reference/protocols

[3] https://clojure.org/reference/multimethods

[4] https://clojure.org/guides/spec#_testing
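A sketch of the difference between [2] and [3] — dispatching on type via a protocol versus arbitrary dispatch via a multimethod (the names below are illustrative, not from any of the linked guides):

```clojure
;; Type-based dispatch: a protocol, implemented per record type.
(defprotocol Area
  (area [shape]))

(defrecord Circle [r]
  Area
  (area [_] (* Math/PI r r)))

;; Arbitrary dispatch: a multimethod can dispatch on any function of its
;; arguments, not just their types — here, a keyword lookup.
(defmulti ship-cost :region)
(defmethod ship-cost :eu [_] 10)
(defmethod ship-cost :default [_] 25)

(area (->Circle 2.0))     ;; => 12.566...
(ship-cost {:region :eu}) ;; => 10
```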

Function specs are not about compile time verification: they are about checking that the object you got is the thing you thought you had.

Compile time verification is nice if you know all your inputs at compile time. At some point, data from elsewhere comes in. Specs are an answer that preserve the composability goals of Clojure (who cares if I have a bit more information than I strictly need?) and allow you to validate them at runtime with useful error messages.
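For instance, a minimal sketch of that runtime validation (the spec names here are illustrative):

```clojure
(require '[clojure.spec.alpha :as s])

(s/def ::age pos-int?)

(s/valid? ::age 42)  ;; => true
(s/valid? ::age -1)  ;; => false

;; s/explain prints a human-readable description of why a value failed.
(s/explain ::age -1)
```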

spec has incidentally made Clojure's compile-time error messages better! But that's just because one program's compile time is the compiler's runtime, and the compiler is running spec verifications on your code (which is, after all, just data!) during its runtime :-)

Would love to see some evidence of a mass migration to Haskell... anywhere.

Would not call that a massive migration but


This is a problem that is very well suited to Haskell, but it is a subset of general software development. Here the substantial overhead of making robust types is worth it: independent teams are making rules for this spam filter, and typing can enforce order on them, since the rules are probably easily categorized by type of filter. There are also quant developers who enjoy writing in OCaml, and AI devs too. But beyond major companies using statically typed languages for some specialist teams, we don't see companies switching wholesale. I am still waiting for the day $bigCompany says "we have hit so many type problems in our code that we have decided switching to statically typed languages will save us tons of money and are now requiring it".

It is really not in a company's interest to ever do a 'mass migration', except in dire or takeover situations.

I meant in the industry (as I believe the original comment intended), not one large enterprise migrating.

Well, in industry... you know that Clojure and Haskell folks have more in common than different. A lot of the same folks move back and forth and care about the same things, just with different ways of getting there. I think the Clojure and Haskell communities are very similarly sized and highly correlated: any language graph that includes both Clojure and Haskell shows them very close to each other.

Lest you forget, both fight against Java: https://trends.google.com/trends/explore?date=today%205-y&ge...

Industry is not doing a 'mass migration' to clojure, haskell, or any other language not in the top 5.

So the OP was wrong to claim a 'mass migration' in the first place

there is no such thing so don't get your expectations too high

Specs in Clojure are not static typing. Satisfying spec does not mean a value contains only items described in spec.

It is an implementation of a much larger messaging idea: the message receiver only cares about the data it needs and nothing else.

Rich Hickey said himself "If a delivery truck comes to deliver a TV to you, do you care what else is on the truck? No, you are only interested in your TV" so the messaging idea is exactly right, and they weren't looking to implement strict static typing. In Clojure, Rich Hickey and others like Stuart Holloway speak of not "complecting" your code by passing around data structures other than uniquely namespaced maps where functions can take what they need and operate on it without having to mess with anything but data.

so instead of

      data Person = Person String String Int Float String String

you have :person/name ....

By using simple data structures you can work on the data much more easily and even interoperate with other services and languages, while still checking that the data you are fed conforms to what you need.
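As a minimal sketch of that style (the map and `greet` function here are illustrative, not from the original comment):

```clojure
;; A person as a plain namespaced map rather than a positional record:
(def person
  {:person/name  "Ada"
   :person/email "ada@example.com"
   :person/age   36})

;; A function destructures only the keys it cares about and ignores the rest,
;; so it doesn't matter what else is "on the truck":
(defn greet [{:person/keys [name]}]
  (str "Hello, " name "!"))

(greet person) ;; => "Hello, Ada!"
```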

You can get that with ad hoc structural typing or by transforming into canonical nominal types that contain the subset of data you care about.

I’m not sure what these comments are arguing against, but it’s not compile-time analysis.

Thank $DEITY that impracticality is always because you’re not using the One True Static Typing School!

Seriously: you’re describing a feature that only exists in a small subset of languages and only a small subset of contexts at that. OCaml and Go do this but not for data, just for methods.

Absolutely. I wish I could structurally type a db query function in every statically typed language that doesn’t have it. Instead I end up with the challenge of coming up with the smallest number of (what is essentially) User1, User2, etc. and hope my queries and subqueries can always fill one of the existing shapes which is a significant pain point.

The language used in this thread so far has made it seem like runtime specs are the solution to this “only care about part of the bag” problem rather than what I consider a general feature of dynamic typing that certainly can be expressed statically as well.

I can’t parse that second paragraph. Can you elaborate?

Runtime specs absolutely also validate. And you can run them at test time. And you can put arbitrary code in them if you want. And sometimes you can statically validate them.

What I’m saying is you get ad hoc structural typing on everything, including data, which you’ve already said is a good thing, so I’m not sure what we’re arguing about or if we’re even arguing :)

EDIT: ah! You edited the paragraph. Simple question: where does this ideal static typing thing ship? Like, how do I do this on my machine right now? What package do I install?

If I could go back, I would’ve rephrased my original post so that it didn’t seem like it existed just to contradict the upstream posts.

People were talking about using runtime specs to annotate and validate the subset of information a function may care about. Which is certainly something I wish I could do in more languages, though ideally as much at compile time as possible.

Of course, runtime specs in Clojure let you make more assertions than just structure, so structural typing as seen in TypeScript, for example, certainly don’t go all the way. It just helps address what I thought was being discussed.

What is the ideal middleground? I think one possibility would be if Clojure was statically typed to begin with yet had the syntax of spec/schema.

Though I’m not sure we’re talking about the same thing anymore.

We're not. But: good news: people are experimenting with verifying specs at compile time! And since specs come with quickcheck built in, you get a lot of "free" (in developer hours) "compile time" (actually: development time) verification. I appreciate that is not the same thing as a proof system :-)

You can use spec to verify that "a value only contains items described in the spec". The core library doesn't make it easy, but spell-spec [1] contains some useful helpers.

[1] https://github.com/bhauman/spell-spec

Almost every week a new article comes out from some language index saying Python is at the top or overtaking Java or C or C++ to reach the top very soon. I think the evidence is simply not there to support that static typing is somehow preferred by the average programmer.

There is plenty of evidence showing that the trend is toward static typing, not dynamic typing.

Starting with the fact that most dynamically typed languages today are progressively adding type checking (spec in Clojure, gradual typing in Groovy, etc...) while no statically typed languages are going the opposite way.

> Starting with the fact that most dynamically typed languages today are progressively adding type checking

At the same time, industrially-popular static languages have added dynamic escape hatches. The trend is really toward optional static type checking, with dynamic languages moving to opt-in typechecking and static languages moving to opt-out.

I agree, and interestingly I see it in my own Groovy code evolving over time. I look back at old code and it is full of needlessly dynamic types. Then I look at what I write now and I have just gravitated towards typing everything more and more. It's still great to be able to fall back to dynamic typing within local context where there is no loss of clarity from doing so. But putting a "hard shell" around the outside - function definitions, variable declarations, data types etc. makes me feel so much better about the maintainability and interpretability of the code. Sometimes I think I should just use Kotlin, but then I would miss the ability to drop seamlessly into dynamic mode when that makes sense. Especially in the REPL it is super nice.

> Sometimes I think I should just use Kotlin, but then I would miss the ability to drop seamlessly into dynamic mode

You can use Apache Groovy for just the glue code and Gradle build scripts, and use a statically-typed language like Java, Kotlin, or Scala for the actual system you're building -- that's what most programmers do.

When statically typed language C# added a dynamic keyword, that's a language going the other way.

Of course, no-one actually thinks of C# as being a dynamically typed language despite its dynamic keyword, nor does anyone think of Apache Groovy as being a statically typed language despite its @CompileStatic annotation. In the everyday world, C# is used for building performant systems and Groovy is used for glue code and build scripts.

Languages tend to go both ways nowadays. Go first shipped with both static typing (though without generics except for builtin datatypes) and dynamic typing (though using the wordy interface{} marker).

> while no statically typed languages are going the opposite way.

Static languages added dynamic capabilities with reflection and run time code generation a long time ago.

I think there's a very good chance that this is being driven by the flap around data science, and the big push toward teaching programming in schools.

Assuming, for the sake of argument, that that's true, it means that we've got to think about how you define "programmer". Does it mean anyone who uses a programming language, or does it mean something more like "software developer"?

TIOBE, for example, effectively measures popularity among the former group, not the latter. Which means that, in the case of Python popularity, it's going to be picking up on all sorts of things: analysts and BI people ditching Excel in favor of Pandas, ops people ditching piles of shell scripts in favor of Ansible, graduate students choosing Scipy over R or Matlab, 7th graders with Pi-tops, ...

I had a similar experience when I spent some time playing with Elm. Being able to (re)model the structure of your program and have the compiler verify it is absolutely freeing.

That is pure and utter FUD I'm afraid. My team has been working with Clojure for over 8 years now. We have tons of code in production, and we don't have any problems maintaining these projects. Furthermore, there's absolutely no empirical evidence to support the notion that static typing results in statistically significant reduction in defects, speed of development, or ease of maintenance. Considering how long both disciplines have been around, and the sheer amount of software written in each, that's quite the elephant in the room. I also have no idea who the people migrating from Clojure to OCaml, Haskell, or F# are. I don't think I've seen a single story about a company doing that.

My experience is that dynamic typing is problematic in imperative/OO languages. One problem is that the data is mutable, and you pass things around by reference. Even if you knew the shape of the data originally, there's no way to tell whether it's been changed elsewhere via side effects. The other problem is that OO encourages proliferation of types in your code. Keeping track of that quickly gets out of hand.

What I find to be of highest importance is the ability to reason about parts of the application in isolation, and types don't provide much help in that regard. When you have shared mutable state, it becomes impossible to track it in your head as application size grows. Knowing the types of the data does not reduce the complexity of understanding how different parts of the application affect its overall state.

Immutability plays a far bigger role than types in addressing this problem. Immutability as the default makes it natural to structure applications using independent components. This indirectly helps with the problem of tracking types in large applications as well. You don't need to track types across your entire application, and you're able to do local reasoning within the scope of each component. Meanwhile, you make bigger components by composing smaller ones together, and you only need to know the types at the level of composition which is the public API for the components.

[REPL driven development](http://blog.jayfields.com/2014/01/repl-driven-development.ht...) also plays a big role in the workflow. Any code I write, I evaluate in the REPL straight from the editor. The REPL has the full application state, so I have access to things like database connections, queues, etc. I can even connect to the REPL in production. So, say I'm writing a function to get some data from the database, I'll write the code, and run it to see exactly the shape of the data that I have. Then I might write a function to transform it, and so on. At each step I know exactly what my data is and what my code is doing.

Where I typically care about having a formalism is at component boundaries. Spec provides a much better way to do that than types. The main reason being that it focuses on ensuring semantic correctness. For example, consider a sort function. The types can tell me that I passed in a collection of a particular type and I got a collection of the same type back. However, what I really want to know is that the collection contains the same elements, and that they're in order. This is difficult to express using most type systems out there, while trivial to do using Spec.
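Something like the following sketch captures those two properties with spec (`my-sort` is a hypothetical function standing in for any sort implementation):

```clojure
(require '[clojure.spec.alpha :as s])

(defn my-sort [coll] (sort coll))

;; :fn relates the conformed :args and :ret, so we can assert semantic
;; properties: same elements (by frequency), and in order.
(s/fdef my-sort
  :args (s/cat :coll (s/coll-of number?))
  :ret  (s/coll-of number?)
  :fn   (fn [{:keys [args ret]}]
          (and (= (frequencies (:coll args)) (frequencies ret))
               (or (empty? ret) (apply <= ret)))))
```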

Our team (not at liberty to say which company, unfortunately) moved away from Clojure back to Scala. More accurately, we built a couple of production services in Clojure, sunsetted them, and haven't used Clojure since: the most vocal Clojure enthusiast moved on to another project, and after about a year of experience the rest of us were kind of meh on using it in the future and went back to Scala. (Although certain things were quite fun, it was a fantastic learning experience, and I would highly recommend at least getting familiar with REPL-based programming.)

It wasn't horrendous (and it's not like Scala is absolutely amazing for us), though maintenance was something of a pain point. We might use Clojure again, especially with some new blood who are Clojure enthusiasts.

These days I'm less and less certain of what proactive role a language plays in team success. That is to say I think the arrow of causation is usually pointed in the wrong direction. A well-functioning, excellent team will, given sufficient latitude, converge on a language that suits a team well. This language is often different for each team. Introducing a "good" language is unlikely to save a team.

I'm in complete agreement here. Ultimately you want to use the language that the team enjoys working in. Different people have differing experience, aesthetics, and pain points. You want to pick the language that most devs on the team prefer using.

>What I find to be of highest importance is the ability to reason about parts of the application in isolation

This is a huge factor I'd like to emphasize and write about in a follow up post, having worked on many Clojure projects at different companies now... Another Year in Clojure :)

I've maintained some large Clojure codebases I didn't write, and the easy ones shared this quality. The more painful ones were reminiscent of large OOP codebases and honestly I would've preferred they'd been written in Java.

OP was mistaken to claim 'mass migrations' of any kind - that was clearly made up.

alternatively, I'm always receptive to reports about experiences.

I'm very skeptical about demands to produce "empirical evidence" for or against any sociological programming practice, because it tends to be a method to place the burden of proof on the opposing side while reserving the default position for one's own side - which just turns out to be a way to normalize one's own beliefs.

The only reason I talk about empirical evidence is because I constantly see sweeping claims regarding efficacy of static typing. If somebody makes broad claims of one approach being strictly superior to the other, then the burden of proof is indeed on them to support said claims. The null hypothesis is that both approaches are effective, and it's in line with all the currently available research on the topic [1].

Furthermore, fixating on static typing as the one defining feature of a language amounts to concept space reduction. In practice, there are many complex factors that contribute to overall quality of the language that have to be considered holistically.

[1] https://danluu.com/empirical-pl/

> I can even connect to the REPL in production

Programming is not my day job, so please correct me if I'm wrong - but every project I was involved in so far - having access to a production DB is a big no no.

Or does the project you referred to not contain any sensitive data, so having direct access to the production system is deemed 'safe'?

Different companies do it differently, but I'm not aware of any that make it absolutely impossible to get at production data (that hasn't been encrypted with a key they don't have) and program instances, just some make one (reasonably) jump through many more hoops than others. It's very much a double-edged sword so access control, auditing, etc. needs to be done carefully. (Privacy By Design.) The value can be immense -- e.g. debugging and hotloading a patch on a spacecraft 100 million miles away (http://www.flownet.com/gat/jpl-lisp.html)

My team both develops and supports the applications that we build in production. When issues are reported, it's often necessary to check the database to resolve it. Being able to check what data the application is pulling during requests is very useful for troubleshooting.

The only example of a Clojure team migrating to Haskell is the Duckling team, a little while after Facebook acquired them. Among other things, they wanted a language with better support within Facebook.


Incidentally, that's a perfectly valid reason to migrate, but it's not the one the parent suggests. :)

> The other problem is that OO encourages proliferation of types in your code. Keeping track of that quickly gets out of hand.

I'm curious how Clojure avoids this. In my large Clojure projects, there aren't "types" in the sense of static typing, but the shapes of data still exist, the types are there, but they are in my head or in other forms than that which a compiler can check. I still have to remember that a map is a certain "type" of data with certain keys/values that are also of certain types, and a large project might have many of these "types", even if they are not explicit. That is equally hard to maintain for me -- harder, really, since it's too easy to make mistakes.

The main difference I find is that with the Clojure approach you have a common set of data structures that you operate on using common functions from the standard library. I find that it's usually easy to tell what type of data you're working with by seeing how these functions are used. When I see functions like map, filter, iterate, and so on, I have context for what the intent of the code is, and this helps me know what type of data I'm working with. At the same time I'm mostly working in a local context thanks to immutability, so I don't have to worry that a piece of data might get changed out from under me via a side effect.

I also find that you learn patterns for structuring code that make it easier to track the types. For example, I avoid renaming keys or restructuring data except at the edges of the application. Adding Spec or Schema at component boundaries is also helpful, as you can see what the shape of the data was at the API level and work your way through the helpers from there.
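For example, a boundary spec might look like this sketch (the `::user` spec, its keys, and `ingest-user` are hypothetical):

```clojure
(require '[clojure.spec.alpha :as s])

;; The shape of the data at the component's API level:
(s/def :user/id uuid?)
(s/def :user/email string?)
(s/def ::user (s/keys :req [:user/id :user/email]))

;; Validate at the edge; inside the component, work with the plain map.
(defn ingest-user [user]
  (when-not (s/valid? ::user user)
    (throw (ex-info "invalid user" (s/explain-data ::user user))))
  user)
```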

There's also a big benefit of working with data as opposed to objects. Each class is its own ad hoc DSL with custom behaviors. Knowing what one class does tells you nothing about what another does. This approach doesn't scale well for large applications where you have thousands of classes, each with its own behaviors.

I think I would definitely benefit from blog posts or whole books on Clojure style specifically, rather than the many constant re-introductions to its syntax and data structures (like the one in this article). Your thoughts have come closer to explaining a maintainable approach to the language than most of what is out there. In my experience, despite working for years professionally with Clojure, I've never successfully felt like I wasn't fighting the language because of the constant and unexpected nils and other odd wranglings of data that were not intended and not easy to track down at runtime. The flexibility of the dynamic typing and the many APIs for manipulating data can make it easy to restructure your values in unintended ways, and this to me is a meaningful problem unique to the language. I don't have the problem in a Java app, for example, of a Foo type accidentally becoming a Bar type or even some other "type" that was never defined but now has a unique data shape because I made a typo somewhere or used the wrong library function, etc.

Regarding nil, it's considered bad practice to use it as a value in maps.

Right, which is why I said “unexpected nils”.

This, so much. After some 3 years spent doing Clojure, having a Haskell codebase is a totally new level of stress reduction.

All of a sudden you can stop worrying about "what am I getting in this function" and actually concentrate on the business logic.

(No, tests can't cover everything. Yes, we still have to write tests, but now they are all useful. And the claim that types hinder you from making changes is a lie; it's quite the opposite — it's much easier to change things since the compiler covers your ass.)

> All of a sudden you can stop worrying about "what am I getting in this function"

I think what you're saying is something like "this function guarantees that the thing I get here conforms to this shape" and I do like that. Something I don't see a lot of people express is just the simple "I know what the shape of this thing is by looking at the signature". Whenever I find myself reading other people's Clojure code, it feels like I end up spending more time figuring out what is being passed into a function. I have to look around to find the origin of a value. I wanted to tear my hair out the first time I tried reading Ring's source code to figure out how it worked. With F#'s Suave, I can tell what each function gets and what can be done to it--it's self documenting. In Clojure, as the original author you can play around in the repl full steam ahead because the types are in your head at the moment. Those who come after you have to assemble the puzzle with less information.

I think you've really summed it up well here. I have exactly the same concerns when looking at large Clojure codebases. It gets incredibly tiresome having to do this and even at the end of it, I'm still left with that nagging doubt "did I miss some detail somewhere?". Hopefully there's a unit test or three to help put my mind at ease but, at least at my work, that's really quite the pipe dream.

Couldn't agree more, but what I don't understand is that Python is plagued by this problem and nobody seems to have any issue with it. Trying to understand large Python codebases drives me crazy.

I think a lot of Python engineers don't yet realize they have this problem. It's a classic issue in predominantly OO languages because you often get so caught up in defining objects you lose sight of control flow and data flow. Toss multiple inheritance and metaprogramming on top and it's basically hopeless.

I usually put these into the description so the next engineer has a better time tracking what is what.

It is funny, I usually know what I am getting in this function in my Clojure code. OCaml (or Haskell) gives entirely different comfort to me, it tells me what makes sense as input and output type for my function. The downside is that I need to define different functions for different types.

  utop # let avg a b = (a +. b) /. 2.0;;
  val avg : float -> float -> float = <fun>

  utop # avg 2 4;;
  Error: This expression has type int but an expression was expected of type float

  utop # avg 2.0 4.0;;
  - : float = 3.

  user=> (defn avg [a b] (/ (+ a b) 2))
  #'user/avg

  user=> (avg 2 4)
  3

  user=> (avg 2.0 4.0)
  3.0

I think it is a matter of taste which one you prefer. With property-based testing and specs you can go very far in terms of pre-runtime checks, and there was some data on bugs per line published via GitHub where it turned out that Clojure did not do too badly in comparison to statically typed languages.

I don't know about OCaml, but Haskell has typeclasses (which look like "polymorphic interfaces"?) so you can define functions generically and put a typeclass constraint to make them work on a certain class of types. E.g. your avg function would be of type:

avg :: Num a => a -> a -> a

Where your "a" is a number (that is, a type that implements the Num typeclass)

This would work out of the box on floats, ints, etc

The signature is not really helpful. Can you show me how you implement the actual function?

If you have operators that are typed, like + and +., how can you make this work with Num?

Also the + operator is typed with Num; from a Haskell repl:

  λ> :t (+)
  (+) :: Num a => a -> a -> a
In the case of average we need /:

  λ> :t (/)
  (/) :: Fractional a => a -> a -> a
So our type is going to be constrained by Fractional instead of Num. So an implementation could be (from here [1]):

  import Data.List
  average :: (Real a, Fractional b) => [a] -> b
  average xs = realToFrac (sum xs) / genericLength xs
[1]: https://stackoverflow.com/questions/2376981

Really like TypeScript. You can code an algorithm with very dynamic types, then expose its API with very rigorous typing (TS's type system can get pretty advanced).

I think the happy medium is a dynamically typed language that gives you the option to type or at least have a way to check typing. Java isn't really statically typed at all since it is necessary to cast in daily usage. Truly statically typed languages seem really cool but you don't see a lot of major things being done with them due to the overhead.

I have found too that when you are programming functionally as in Clojure you don't get bit by shared state things not being what you expected as everything you are writing should be data in, data out and easily reasoned upon. Nil handling in Ruby is a huge problem as people are chaining methods on mutable objects which becomes complicated, and this is the source of most bugs in Ruby. This is somewhat mitigated lately with safe navigation operator.
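Clojure's `some->` gives similar nil-safe chaining to Ruby's safe navigation operator — a small illustrative sketch:

```clojure
(require '[clojure.string :as str])

;; some-> stops threading as soon as any step returns nil:
(some-> {:user {:address {:city "Oslo"}}}
        :user :address :city str/upper-case)
;; => "OSLO"

(some-> {:user nil} :user :address :city str/upper-case)
;; => nil, rather than a NullPointerException
```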

That is why TypeScript is appealing as you can use typing when you'd like to but still take in "unsafe" JavaScript or write in JS. Unlike if you were using Elm and had to write a wrapper to conform all JS code coming in.

Spec in Clojure is very nice as you can enforce and conform types where you need it, and also make generative testing for free. If there is some mass migration to OCaml, F# and Haskell I have yet to see it because very few businesses use them and the communities are much smaller.

does TypeScript allow you to write fewer tests? does it have dependent typing abilities, or something like Clojure spec? to me it looks more like a slightly better Java, but you still need to write lots of tests

Does anybody know, or (better) have any source on, how common typescript is on the backend?

I got the Clojure bug at some point not long ago. Decided I’d write a crawler and some data munging stuff directly in Clojure since it’s all about data processing.

Crawlers naturally want to be stacks with pipelines, and expressing them as tail recursions over URLs curried into transducers, etc., was easy enough conceptually but difficult in reality, because you want crawlers to be stateful. I think in Clojure you end up hacking state by attaching stuff to the outputs of your functions and building up more complicated objects down the line.
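One common way to carry that state explicitly, rather than attaching it to outputs, is a `loop`/`recur` over a frontier and a visited set — a rough sketch, with `fetch-links` as a hypothetical function you'd supply:

```clojure
(defn crawl
  "Visit up to max-pages URLs reachable from seed; returns the visited set.
  fetch-links is assumed to take a URL and return a seq of URLs."
  [seed fetch-links max-pages]
  (loop [frontier [seed]
         visited  #{}]
    (if (or (empty? frontier) (>= (count visited) max-pages))
      visited
      (let [[url & more] frontier]
        (if (contains? visited url)
          (recur (vec more) visited)
          (recur (into (vec more) (fetch-links url))
                 (conj visited url)))))))
```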

I discovered that stuff which is cake in SQL or a language with data-frame support is often hard in Clojure, e.g. joining data, aggregation, etc. The few Clojure libraries for this sort of thing are complicated to use because they have a mechanical sympathy with how Clojure executes, e.g. https://github.com/nathanmarz/specter/

So I wrote some of my own libraries to get over this hurdle. I then realised that there isn't a single decent statistics library that is still maintained, and that the Java bindings were far from simple to use.

At that point I gave up on clojure for my use case. That’s not to say that there isn’t something for which it’s awesone. But in my opinion it isn’t data processing.

For the data processing tasks my go-to tool is Manifold[1]. Depending on how the application is structured, you can keep state in atoms, the system map when using Component[2] or vars with Mount[3]. Manifold streams asynchronously join the state transformations together while allowing control over backpressure and threading.

On the data analysis side things aren't rosy on the JVM. I'd delegate that part to something which actually has all the tools built-in, e.g. Julia/Python+Numpy/R. Apache Arrow[4] is a nice project to facilitate the dataframe interop.

[1] https://github.com/ztellman/manifold

[2] https://github.com/stuartsierra/component

[3] https://github.com/tolitius/mount

[4] https://arrow.apache.org/

I think the Clojure answer to stateful data storage is "use Datomic". There's a big impedance mismatch between the functional world and the stateful (RDBMS) world. Datomic extends the functional world all the way down to data storage.

You can certainly use SQL, and it's not any worse than any other language (and in some ways a little better), but IME it's not Clojure-simple, either. It's still the most awkward part of all my Clojure programs.

Clojure excels when you can avoid mutating state, and I think that's why people especially love it for web apps. It fits almost perfectly into that model.

Can you explain a little more why you think state is hard to manage in Clojure? I've done a fair amount of data processing in Clojure, and I'd consider it one of Clojure's strengths.

> I then realised that there isn't a single decent statistics library that is still maintained, and that the Java bindings were far from simple to use

I kind of found a similar problem with Scala. In theory yeah, you can use any Java library from Scala. But what was happening was I was continuously hitting the impedance mismatch between the scala collection and data types and the Java ones. Sometimes it was happening implicitly and doing automatic conversions of massive amounts of data without me knowing and I had to do all kinds of non-idiomatic stuff to get Scala to perform well. My Scala started off much prettier than my Groovy code usually is, but by the time I finished it was an abomination. At that point I gave up and went back to Groovy because while it lacks many of the functional features, idiomatic Groovy "just worked" and gave me most of what I was looking for in terms of a "better Java".

> My Scala started off much prettier than my Groovy code usually is, but by the time I finished it was an abomination

> Sometimes I think I should just use Kotlin, but then I would miss the ability to drop seamlessly into dynamic mode

Have you considered Clojure for this use case? That might alleviate your problems with using Kotlin and Scala, without having to resort to a single-target language like Apache Groovy. After all, unlike what Clojure, Kotlin, and Scala all offer now, we're never going to see an edition of Groovy that targets Javascript, Android, or native CPU.

I've always been intrigued by Clojure but it feels like a huge leap to dive into a fully Lisp-style, completely functional language.

I guess single target isn't a particularly big problem for me; the JVM runs everywhere I need it to.

I would love to read more about "code as data" in general. I would appreciate any resources you can provide. Thanks.

I took a course in Racket in first year (required for all first year math/CS students at my school) and I've spent some time in the Clojure community as well. "Code as data" is an oft-repeated selling point of Lisp and Lisp-like languages.

When you actually start using these languages, you find yourself tempted to leverage this property and try writing lots of macros. But then you go back to the community and everyone tells you "the first rule of macros is you don't write macros." It turns out that if you use macros all over the place then your code becomes impenetrable. The problem is that macros allow you to circumvent the expected order of evaluation and produce your own novel syntactic structures. This puts the lie to the old claim "Lisp has no syntax." In reality, Lisp has tons of syntax; it's just informal and buried within the definitions of macros.

It parallels the problem put forth in Jo Freeman's famous piece, "The Tyranny of Structurelessness" [1]. Any group that purports to adopt a structureless organization ends up having a hidden, informal structure. This is the way of Lisps as well.

[1] https://www.jofreeman.com/joreen/tyranny.htm

There's also one very easy-to-understand part of the first rule. A macro is still a piece of logic, and functions express logic very well, so do as much as you can in functions at the meta-level, where you can enjoy ease of debugging and everything else; then coat it with macro sugar.
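A toy sketch of that advice (all names here are hypothetical): the branching logic lives in a plain, testable function, and the macro is only a thin coat of sugar that delays evaluation of the branches:

```clojure
;; Ordinary function holds the logic: easy to call, test, and debug.
(defn unless* [test then-thunk else-thunk]
  (if test (else-thunk) (then-thunk)))

;; The macro only adds sugar: it wraps the branches in thunks
;; so they are not evaluated eagerly.
(defmacro unless [test then else]
  `(unless* ~test (fn [] ~then) (fn [] ~else)))

(unless false :yes :no) ;=> :yes
```

If `unless` misbehaves, you can debug `unless*` directly at the REPL without fighting macroexpansion.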

Clearly languages have common idioms. Lisp just has most of them implemented in terms of very basic idioms. Saying “macros are bad because you can overdo it” is a little silly. Sure: prefer plain old data manipulation over syntax manipulation. But when you actually want syntax manipulation, it’s pretty darn nice.

Equivalent arguments exist for Ruby. Ruby isn’t bad because someone wrote a DSL that should’ve been a method and maybe a hash.

That was a fantastic read, so much insight into group dynamics!

I think https://www.cs.kent.ac.uk/people/staff/dat/miranda/wadler87.... 'Why Calculating is Better than Scheming' also makes this point; I'm going to check out this Jo Freeman piece... never saw it before, thanks!

The author is one of the principal folks behind Haskell. But I can’t help but point out that I promise you you’ve never heard of the five or so languages he mentioned as an alternative approach, and somehow this Lisp thing is still around. I suppose that which is dead may never die :-)

> But I can’t help but point out that I promise you you’ve never heard of the five or so lanaguages he mentioned as an alternative approach

I actually had heard of at least KRC and Miranda even before the first time I read the paper, but even if I hadn't I would immediately be able to think of several languages with some or all of the additional features over the Lisps he was comparing them to, for which he describes them as superior to Lisp for the purpose, including some modern Lisp derivatives.

Pattern matching, mathematical notation, and static typing with user-defined types are all features that have become more, not less, common in languages since the paper was written.

To be fair, Haskell is the Common Lisp of lazy functional languages. You don't hear much about Maclisp or Zetalisp anymore either.

Was the Racket course based on How to Design Programs[1]? How was it?


Yes, it was! The course is called CS 135: Designing Functional Programs [1]. I really enjoyed it, though I think most of the first years hated the course. Most people come to school having experience with imperative languages like Python and Java. When they got to recursion in Racket they really struggled!

I happen to own a copy of HTDP which I bought for the course. It's a suggested textbook but not required at all. I haven't had time to go through it though.

[1] https://www.student.cs.uwaterloo.ca/~cs135/

Sibling comment provides a great resource, but I think a good place to start is explaining macros.

At the end of the day, every line of code you write is some data in a bespoke data format. Most programming language implementations leverage this, and academic compiler courses make you do the same work. A program lexes, parses, modifies... that data structure and eventually rewrites it to some other programming language, usually something lower level.

If you exposed that underlying structure to the programmer, you give them a lot of power! The simplest way to see this is that any time you write a highly repetitive bunch of code, wouldn't it be great if you could ... write some code that wrote the code for you? You know how to systematically express the thing you want: you can probably say it a lot more crisply in a few lines of (especially functional) programming languages.

Once you get rid of simple repetitive stuff, you realize that you can do so much more with this. You want advanced pattern matching? Sure: you can go implement it as a library. async/await? Library. Type checker? Library. New convenient way of defining functions? Library. New syntax for matrix math? Library. Et cetera :-)

People in other programming languages have figured this out too. That's why we have code generators. The difference is that code generators are far enough removed from what you're actually doing (because e.g. they're glorified string concatenation) to be quantumly less useful. Generally speaking advanced IDEs will do this sort of thing: actually parse the code you're working on and let you do fancy tricks with it. Lisp is like having that power, all of the time.
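As a minimal (toy) taste of "new syntax as a library", here is a hypothetical `infix` macro that rewrites its argument before evaluation:

```clojure
;; Rewrites (infix (a op b)) into (op a b) at macroexpansion time.
;; The argument arrives as plain data (a list), so destructuring
;; works on it just like on any other collection.
(defmacro infix [[a op b]]
  `(~op ~a ~b))

(infix (1 + 2))  ;=> 3
(infix (10 * 4)) ;=> 40
```

A real pattern-matching or async library is this same idea, scaled up: data in, data out, before the compiler ever sees it.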

I messed around with Clojure for a couple of days to see what a Lisp is like but never got into any of the advanced features. What could I build that would illustrate the capability you are talking about?

It's hard for me to provide a good example without knowing some of the stuff you're in to but here's one of my favorite macros: https://github.com/lvh/caesium/blob/master/src/caesium/bindi...

It takes a symbol and figures out the name of a C function and how to most efficiently call it (with type hints for perf). If you screwed that up in one location it might have security consequences. But I _can't_ screw it up, because instead of copy-pasting that code a gazillion times as I would have in Python, I just figured out what I want once and then use that functionality a gazillion times :)

As an exercise you could do on a REPL what a macro would do. Say you have a file containing “(1 + 1)”. Reading this (using slurp and read-string) would yield a list; eval-ing this list would however fail, as it’s not valid Clojure. To make it eval-able you transform the list (all the operations of the language are at your disposal, as you’re just handling data) into (+ 1 1). It’s this operating on data between read and eval that macros do.
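That REPL exercise might look roughly like this (using `read-string` on a string literal rather than slurping a file, for brevity):

```clojure
(let [form  (read-string "(1 + 1)")                        ; the list (1 + 1)
      fixed (list (second form) (first form) (last form))] ; the list (+ 1 1)
  (eval fixed))
;=> 2
```

Everything between `read-string` and `eval` is ordinary list manipulation, which is exactly the space macros operate in.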

Homoiconicity is the googlable term. The first Lisp book that blew my mind is SICP, nowadays free: https://sicp.me/sicp.pdf I think you need the big picture to appreciate the depth of homoiconicity and meta-circularity. It's a big one, but well worth it IMO.

I remember finding it pretty revelatory the first time I saw LET defined as a macro like

    (defmacro let (bindings &body body)
      `((lambda ,(mapcar #'first bindings) ,@body)
        ,@(mapcar #'second bindings)))
Seeing it use MAPCAR like that just made something click for me for whatever reason.
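The same trick works in Clojure, sketched here as a hypothetical `my-let` (Clojure bindings come as one flat vector, so they get paired up first):

```clojure
(defmacro my-let [bindings & body]
  (let [pairs (partition 2 bindings)]          ; [x 1 y 2] -> ((x 1) (y 2))
    `((fn [~@(map first pairs)] ~@body)        ; params: the binding names
      ~@(map second pairs))))                  ; args: the binding values

(my-let [x 1 y 2] (+ x y)) ;=> 3
```

The expansion is just an immediately-invoked fn: `((fn [x y] (+ x y)) 1 2)`.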

To add on, Racket in particular has some very slick syntax for writing these kinds of things:

    (let ([id expr] ...) body ...)
    ((λ (id ...) body ...) expr ...)

I definitely liked this about Racket but I did also find the ellipses to be a bit magic to begin with...

As with all things, the trick to chase the magic away is to implement them. I recently made a little term rewriting library using racket/match syntax but with the pattern/templates specifiable at runtime. Ellipses were a bit confusing to begin with but when you break list patterns down into cons pairs + a cons-like pattern binary combinator to represent ellipses, it started making sense. The unexpected thing for me was that I thought nested ellipses were going to be the tough part, but it turns out they actually just... work.

Stuart Sierra has an interesting talk on homoiconicity and its history: https://www.youtube.com/watch?v=o7zyGMcav3c

Paradigms of AI Programming by Peter Norvig is a beautiful illustration of code as data.

I learnt loads translating the Lisp to Clojure.

Download for free. Thank you, Mr. Norvig.

The concept of code as data became intuitive to me after reading "The Nature of Lisp".


I’ve been about 6 months in Clojure, and I am finally starting to like it. Coming from decidedly non-Lisp languages, it’s been a struggle. For example, immutability is possible, but not natural, in most other languages; in Clojure, it’s intrinsic. I also often scratch my head at the way things are scoped and the whole concept of purely functional programming. Clojure is very powerful and terse, but what I dislike the most, just like with Ruby, is the culture that the code should be self-documenting and self-explanatory. In all the code I have been maintaining this whole time, comments are rare, whereas in Java, C/C++, JS, etc., comments are plentiful and encouraged. I feel like with Clojure, I am expected to understand the code just by reading it. What gives?

Consider comment inflation.

Comments should only be needed when things are especially complex and if they are everywhere their average value goes down. Most codebases of good quality that I've seen have not had particularly detailed comments--except where you really need them. The mere existence of a comment already hints that the code that follows needs to be carefully understood before you can touch it!

There might be boilerplate comments before each function to describe the function and explain the parameters, but other than that, comments are reserved for the tricky interactions. And in many cases even that information can be embedded in the name of the function and its parameters!

Good code reads well because it's built on top of lower level blocks that are simple and well-defined. And those are simple and well-defined probably because they are also built on top of even simpler things. This removes the need for a lot of mundane comments.

I don't think Clojure is more exceptional with regard to this than other languages. But I think a higher percentage of Clojure code is of good quality, and that probably is because Clojure attracts good programmers who care about these things.

Most of the time, a comment can be substituted for a well-named function.

As a trivial example, something like:

    (int (* 1000 seconds))  ;; seconds -> milliseconds
Can be replaced with:

    (seconds->milliseconds seconds)
Without loss of clarity.

Ideally, comments should mostly be used to explain 'why' rather than 'what'.
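For completeness, the hypothetical `seconds->milliseconds` helper above is a one-liner; the name carries what the comment used to:

```clojure
(defn seconds->milliseconds
  "Converts a duration in seconds to whole milliseconds."
  [seconds]
  (int (* 1000 seconds)))

(seconds->milliseconds 1.5) ;=> 1500
```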

> I feel like with Clojure, I am expected to understand the code just by reading it.

Yes, you are. Outside of example code, this seems reasonable to me. If you can't understand the code just by reading it and a short docstring, it's a sign the code is too complex. There are cases where such complexity can't be helped, but they're not common.

I don't really find that the community is against documenting code, and majority of popular libraries have doc strings for the public functions at least. In general, I do find that reading through decently written Clojure code tends to be easier than languages like Java or Js. The code tends to be a lot more declarative, there are a lot less syntax quirks, and functions are typically kept short. Meanwhile, majority of the code is written using pure functions, so you can figure out what they do without needing a lot of context about the rest of the program. I also tend to just run things in the REPL any time I'm not sure what a piece of code might be doing. So, while I'm definitely a proponent of having good documentation, I find that I have a much easier time reading through Clojure code than most languages I've used.
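As a small sketch of the docstring convention (the function here is hypothetical), the documentation travels with the var and is retrievable at the REPL:

```clojure
(defn clamp
  "Returns x limited to the inclusive range [lo, hi]."
  [lo hi x]
  (max lo (min hi x)))

(clamp 0 10 42)       ;=> 10
(:doc (meta #'clamp)) ;=> "Returns x limited to the inclusive range [lo, hi]."
```

This is what `(doc clamp)` prints at the REPL, so well-documented libraries are self-describing during interactive development.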

A lot of people agree that while Clojure has a lot of powerful things going for it, documentation is not one of them, eg: https://www.youtube.com/watch?v=1YCkOo5Y4Oo

It's definitely overwhelming in the beginning, but after a while docstrings + reading code hits a very good tradeoff for me. And just use comments for weird or unexpected behaviour.

There are also efforts to put more focus on documentation though: https://cljdoc.xyz/

I am not sure why "Moving from X to Y" posts are so popular?

Maybe someone with some more Google-fu, or simply a way to query news.ycombinator, could do a search for the string in the header...

    a = [C, C++, Clojure, Python....] look below for a bigger list

    {Moving from <a> to <a>}

and while you are querying

    {<a> is considered harmful}

It seems like news.ycombinator is littered with posts like these.

I don't want them to stop or anything; it just would be interesting to see someone with, like, an SQL database of news.ycombinator posts, to see how many were posted in the last year or so and look at them all, so I can binge-read them, because quite honestly they are the better posts on this site.

Bigger list of A here https://gist.github.com/brianherman/64a2800ebd1907c6bba4f6eb...

I think well written posts in this category (I thought OP is an example) explain philosophical differences in an enlightening way, rather than evangelize or advocate that everyone else should also move.

This. I often wonder "Is it worth learning X? Will I enjoy it? Will I feel productive in it?" And reading these kinds of posts can be helpful in making that decision.

there is such a list for Go here


The best article of this kind that I have seen is by far


FromXToY posts are generally garbage; probably 1 out of 1000 is actually insightful and not simply cheerleading.

Thanks, man, I’ll definitely look at this. I’ll probably make a meta post, but this is a good start.

I remember reading that article, man... it takes me back. They say nostalgia is a drug.
