Clojurescript has been the real hero for me coming from a heavy Javascript background. The UI patterns are basically 1:1 with the functional standards in Javascript land, but you're working with a language that has immutable data structures by default (I can ditch Immutable.js) and the core library has all those goody functional helpers I'd include ramda or lodash for.
But in addition to that, shadow-cljs is truly incredible, kudos to Thomas Heller. When developing, my editor is connected to the browser app through a REPL, and I can switch namespaces and sort of TDD new code, or debug an issue by cracking into the actual pipeline and interactively spelunking. In JS land, everything is transpiled, so really you have to insert debugger statements, refresh the page or re-trigger the action, catch the breakpoint, and work that way. If you're developing new code you can't try out a function in the console if it uses some transpiled feature, so you write a test or do the debugger thing.
It takes some dedication to get there, and you don't need emacs, even though it's really fun to learn and get proficient in. I use Spacemacs and am constantly learning some new package that's installed to help me. I recently switched from parinfer to paredit, and it's reaalllllly cool. With REPL-driven development and structural editing you can achieve this kind of mind-meld with your development process. I don't think that necessarily makes it better than Javascript, but if you're into stuff like that, there's a really high skill cap for optimizing your development workflow.
And really, at its core, Clojurescript is like my perfect Javascript. There weren't any new concepts for me to learn, as I had been programming Javascript functionally for some time; it's just everything I wanted in Javascript without the friction and the bolt-on libraries.
For developer happiness, it has a ton to offer, and there's always something else to dig in to.
ClojureScript is basically an s-expression version of the good parts of JavaScript with lodash built in. It’s pretty cool. That said, if you don’t mind losing the benefits of s-expressions and macros, you can get mostly the same experience in vanilla JS. And even then there’s sweet.js which can get you part way there.
Every few weeks I see a Clojure post on HN, and almost always there's a battle going on in the comments over whether the language is too clever, impractical, a toy, poorly designed, etc. My experience with clojure (a ~1500 loc library) has been quite good. I won't say it's intuitive for those coming from imperative/OO backgrounds, but when it clicked it really clicked. If a team decides to invest the time to learn clojure(script) for part of their stack I'd say it's a phenomenal tool. Please don't knock a language because of a team management failure.
The question is motivated by the observation that a set of REST webservices, and a set of React views that consume them, are so different that there isn't any code reuse. I'd ask you to note the parallel structure between them. They are two sides of the same data sync problem.
Yes, of course there are common bits that can be factored out. The problem is that most people have never so much as thought about it, because of the language barrier, and other barriers – for example teams are structured around this artificial divide of frontend and backend. You need a new architecture to take advantage of the parallel structure. GraphQL is just the beginning.
React.js, if you squint, is also a step in this direction – React.js on the client can bind to a DOM, or run in Node and stream strings to a socket. In other words React decoupled HTML rendering from platform APIs by making it data driven. Data is universal and can be manipulated on any platform, even pencil and paper. What other platform APIs can be replaced by something data driven? I know from my study of Haskell that the answer is probably: all of them.
I think that is a mistake, because data is universal but types and methods are not. Consider a value {"a": [42, 43]} - it could be mutable, immutable, persistent, optimized for ordered traversal, optimized for random access lookups, optimized for insertions.
From one perspective types and methods aren't universal, but data is, because once information arrives over the wire you aren't even initially sure if it is what you think it is.
However, when backend and frontend code exists in the same context for analysis, aren't there enough certainties to know the space of possibilities for what you're going to receive? How much uncertainty precludes the useful sharing of types and methods?
Well if I have EDN or JSON on the wire, and have the same portable Clojure library running on both sides, doesn't that pretty much accomplish the same thing as serializing actual object instances? Or is that what you meant all along?
You can have the shared source files in the same project with the CLJS front-end and Clojure back-end. This way, at development time, there is no friction related to updating different projects or release artifacts, you just live-reload the updated namespace in frontend & backend.
A common use case for the shared code is manipulating the application's domain data structures. Sharing the code for these manipulations is routine. In SPA apps you frequently do more than just render views for the backend, and then you typically want to have the backend's internal representation of the data at hand even at the frontend.
You can call it a library use-case of course, depending what unit of modularity you choose to call a library. But this way you don't have to update 3 projects for each change (app-backend, app-frontend, app-shared) and you can very flexibly refactor stuff into the shared namespace without breaking the normal flow of development.
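As a hedged sketch of what such a shared namespace can look like (the namespace and function names here are invented for illustration), a single .cljc file compiles for both the Clojure backend and the ClojureScript frontend, with reader conditionals isolating the platform-specific bits:

```clojure
;; app/shared/money.cljc -- hypothetical shared namespace
(ns app.shared.money)

(defn round-cents
  "Shared domain logic: the same source compiles for Clojure and ClojureScript."
  [amount]
  (/ (Math/round (* 100.0 amount)) 100.0))

(defn now-ms
  "Platform-specific code stays behind reader conditionals."
  []
  #?(:clj  (System/currentTimeMillis)
     :cljs (.getTime (js/Date.))))
```

Both builds can require app.shared.money directly, so refactoring into the shared namespace is just moving code, not publishing a new artifact.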
It's pretty awesome now that graph query languages (graphQL, etc) are so popular: Lots of reuse possibilities when you use the same query language for updating the browser state and also updating server side actions.
Not really. Although Clojure and Clojurescript are both dynamically typed, Clojure is strongly typed while Clojurescript is weakly typed.
Clojure:
(+ "1" 1)
ClassCastException java.lang.String cannot be cast to java.lang.Number clojure.lang.Numbers.add (Numbers.java:128)
Clojurescript:
(+ "1" 1)
"11"
They may be close, but that closeness is exactly the reason for concern. There are a million ways that small semantic differences like this can completely fuck you and leave you in a debugging nightmare. I would rather use javascript, AKA the worst language ever invented, than a language that claims to be cross platform but with semantics that change depending on the platform.
Here is just one example of what client/server code-sharing makes possible: Hyperfiddle apps are fully CLJC which means literally the same code runs in the browser for view things, the JVM for web service things, inside Datomic Peer for database things, also Node and soon lambdas. Since they all share the same code, they can coordinate I/O automatically and transparently. Never code network side effect ever again! http://www.hyperfiddle.net/
> literally the same code runs in the browser for view things, the JVM for web service things
No, it does not. The same Clojure code may be the source for both execution environments, but that doesn't mean that the code that runs (JVM bytecode or JavaScript in the browser) is remotely similar, follows the same rules, errors under the same conditions, or otherwise behaves similarly.
That's not an argument against code sharing. But please read the post you're responding to thoroughly before you post "I can write it once and run it anywhere" without a whole lot of asterisks after "anywhere".
You're correct that there are differences between the JVM and JS execution environments. However, in practice you're very unlikely to run into them. My team has been relying heavily on cross-compilation for about three years now, and we have yet to run into a scenario where this kind of problem actually came up.
That Clojurescript snippet will issue a warning; it behaves that way for compiler performance reasons, if I'm not mistaken.
Still, you get immutability by default and a nice standard library of functions. Think of lodash/fp and Immutable.js together, but better, since lodash and Immutable.js can't really be used together anyway.
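A tiny illustration of that combination, using only the core library (the data here is made up):

```clojure
;; Immutable by default: "updates" return new values, originals are untouched.
(def user {:name "Ada" :tags ["admin"]})

(def user2 (update user :tags conj "owner"))

(:tags user)   ;=> ["admin"]
(:tags user2)  ;=> ["admin" "owner"]
```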
> than a language that claims to be cross platform but with semantics that change depending on the platform.
This is not true. On the contrary, Clojure[script] claims to embrace the host's semantics and platform; it's just that there is this capability to share some code between Clojure and Clojurescript, which might be useful for some parts of your application. I don't think this is a killer feature of Clojure, btw.
you get a warning from the compiler in ClojureScript when type coercion happens:
cljs.user=> (+ "1" 1)
⬆
WARNING: cljs.core/+, all arguments must be numbers, got [string number] instead. at line 1
"11"
You can also use Spec and Schema to validate data at the edges, so that you don't end up with unexpected inputs. I highly recommend doing that for any non-trivial projects.
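For example, a minimal Spec sketch for validating data at an I/O edge might look like this (the spec and function names are invented for illustration):

```clojure
(require '[clojure.spec.alpha :as s])

(s/def ::id pos-int?)
(s/def ::email string?)
(s/def ::user (s/keys :req-un [::id ::email]))

(defn ingest
  "Reject malformed payloads at the boundary instead of letting them
  propagate into the application."
  [payload]
  (if (s/valid? ::user payload)
    payload
    (throw (ex-info "Invalid user payload"
                    (s/explain-data ::user payload)))))
```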
There probably is, but I haven't used clojure for at least 4 years so I'm not the person to ask. Regardless, a linter might flag things like that, but in my experience the most pernicious problems wouldn't be caught, because they exist outside the application at I/O boundaries. Things like API or database calls, etc.
I like verbosity in a programming language. It makes the code pretty easy to read, compared to lambda-heavy or otherwise terse languages.
If you are not working for a startup, the majority of the time you will be maintaining legacy code or fixing bugs. I would take easily understandable verbose code over "clever" concise code every time.
I've worked with Java for over a decade professionally. Most of the verbosity in it is just noise, and does not provide any meaningful information.
Clojure code is far easier to maintain for a number of reasons. The code is declarative, so it separates what's being done from the implementation details. The first step of code maintenance is to understand the intent, and it's much easier to do that with declarative code. Immutability means that the code is largely referentially transparent, so the cognitive load of understanding a particular piece of code remains constant as the project size grows. This is not the case for imperative languages where you pass references to shared mutable state all over the place. The syntax is much smaller and more consistent, meaning that you have to learn fewer rules and quirks to understand the code. There are fewer chances of code being misinterpreted. Finally, you have the REPL, so you're able to run any code you're not clear about right from the editor in the context of your application.
My team moved from Java to Clojure about 8 years ago, and we find that it's much easier to maintain Clojure projects than it was for similar scope Java projects. We deliver faster, we have far less defects, and we're able to make changes much more reliably than we ever could with Java.
> My team moved from Java to Clojure about 8 years ago, and we find that it's much easier to maintain Clojure projects than it was for similar scope Java projects
Could it just be that you are all better developers than you were 8 years ago, and the language doesn't really make a difference?
We're obviously better developers than we were 8 years ago since we've had a lot of practice in that time. However, my team has hired many people in that time, and we also regularly hire co-op students. We see that new employees are able to write better code in Clojure as well regardless of their experience. We've also found that it's easier to train beginners to be effective with Clojure than it was with Java. The language is smaller, more consistent, and encourages good patterns out of the box.
Studies have found that bug count is roughly proportional to program length, across languages. Saying you prefer verbosity essentially means you prefer more bugs. 500 lines is generally less understandable than 40 lines. There may be cases where terseness can be too extreme, but I don't see it here.
Is there some particular aspect of the Clojure code here that you think is overly clever, or hard to understand? This Clojure code uses only one lambda, and in a straightforward way.
I've written a lot of Java, and a moderate amount of Clojure, and if I had to place a wager on which version had fewer bugs, I'd definitely bet on the Clojure. Especially if there were the possibility that it was related to threads.
We could write this in assembly language, and it'd take 50,000 lines, and probably have lots of bugs. The salient point is not (just) the lower line count, but that when code is shorter, that's a good indicator that it's written at a level of abstraction that fits the problem.
I looked over the first example in the article and I don't think this assessment stands up. The Java example could easily be written with reflection and end up quite a lot shorter than the Clojure example.
Sure, you can trade off static typing for terseness. This isn't a Java vs Clojure issue. You can make code even terser by abandoning documentation. Even more terse by abandoning tests!
I don't have time to fully dive into the second example, but at first glance it looks like poorly written Java. You can write Java in a functional style! It can look a lot like Kotlin or Scala or even Clojure, and I generally prefer to write it that way.
Badly written code looks terrible in every language.
And studies show that the number of lines of code a developer puts out in a day is basically constant across all languages. This is usually cited as an argument for more expressive languages.
But if bug count is proportional to LOC and LOC per day are constant across all languages, then bugs per day will also be constant across all languages.
You can write shit code in any language. I used to think Java made it harder to write shit code, but the project I'm on right now has made me reconsider this opinion.
If bug count is proportional to LOC and LOC per day are constant across languages, then bugs per day might be the same. But you'd still expect features to be getting implemented at a higher rate, and you'd expect a lower bug count per feature. That's ultimately the metric that's most interesting from a business perspective.
IME, the bugs are also easier to deal with in the more expressive language. They tend to be things like faults in the business logic or gross edge cases that people are likely to catch in code review or QA. Whereas the bugs in languages like Java seem to typically be really annoying things like off-by-one errors, comparing Integers with ==, and goofy run-time type errors that sail past the compiler because of weak static type checking when generics are at play, and also past code review because people aren't expecting to have to review for type errors when they're using a static language.
> But you'd still expect features to be getting implemented at a higher rate
Yes, I'm inclined to agree, but I actually haven't ever seen a study which compares the same project implemented in different languages to establish in toto the variance in SLOC. It could be the case that in the main for the same project the differences between languages wash out, as different languages may have different advantages and disadvantages that are more likely to tell on a substantive project.
What you suggest seems reasonable, but I simply don't know it to actually be the case.
> But if bug count is proportional to LOC and LOC per day are constant across all languages, then bugs per day will also be constant across all languages.
Of course -- but the important thing is that bug count per feature/app will be lower in a more expressive language.
> In my experience ease of reading code is not a function of verbosity, but a function of familiarity.
You would think so, sure. I used to write Perl code, and I was extremely familiar with it; it used to be pretty easy for me to hack together a script. Going back after a couple of months and trying to understand it, though, was a totally different animal.
You can argue that it was my fault for writing bad Perl code, but ask any old farts who ever had to write Perl.
Exactly. The Java code looks artificially bloated (who uses 8 spaces for indentation anyway?!). The CookieMap class is completely unnecessary, for example: it extends j.u.Map with a couple of operations that could easily be done with Java 8 streams (it looks like an older version of the language, too; the diamond operator isn't used).
There could be a counterpart here that Java as a language is so complicated that people often come up with these design patterns because they THINK they are necessary.
Comparisons of programming languages must be done at equal levels of competence: it simply doesn't make sense to compare code written by a junior who has just completed some online course with code written by an experienced, well-educated engineer, because no conclusions can be drawn from that.
The fact that people come up with such code very often is a general characteristic of the modern IT job market, where demand is so high that it eliminates all possible qualification barriers. This particular code is written this way not because Java is too complicated, but because the author did not know the minimum required for professional software development in this language, or wrote it this way on purpose.
Except nobody actually writes Java code like this. Nobody implements the Map interface. It's just stupid code. The myth of Java bloat only serves people writing silly blog entries and others hung up on "best practices" from 15 years ago. It doesn't seem to have any practical basis.
Sure they do. Every Java code base I've seen, including stuff written in the past year, using Java 8, with a mix of senior and junior developers (i.e., both those who cut their teeth on Java < 8, and those who learned on >= 8), was bloated. I'm thinking of one thing I'm dealing with now: 30 classes, 1800 lines (admittedly, using wc -l, so variable declarations and getters/setters are counted, but that is still visual space I had to skim through to figure out what it was doing; oh, and that was after I removed some of the more obvious cruft), all of it proper OO... or I could have gotten it done in < 200 lines of Node in 1-3 files. All it does is accept a REST call, validate a bunch of parameters, check whether the result is cached, and if not, make a request elsewhere and cache the result.
The myth of the myth of Java bloat only serves people defending Java academically, and those hung up on defending a language that encourages and defaults to bloat in any sort of shared coding environment. It doesn't seem to have any practical basis.
I've implemented the Map interface maybe half a dozen times in my career in Java, and many more in Clojure (using reify and friends). It's often very convenient to offer a map interface to some remote service or data store, as well as in situations where I needed very custom caching or a specialized algorithm. The harder the thing I'm working on, the more motivated I am to present its API as something standardized if possible. This lets my users spend their complexity budget on the feature I'm offering and not on some random API I threw together.
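In Clojure, a hedged sketch of that pattern might present a remote store behind the standard lookup interface via reify (the fetch! function below is a stand-in for a real remote call):

```clojure
(defn fetch!
  "Pretend this hits a remote service or data store."
  [k]
  (get {:a 1 :b 2} k))

(defn remote-map
  "Expose the store through Clojure's standard lookup interface, so callers
  can use plain `get` instead of learning a custom API."
  []
  (reify clojure.lang.ILookup
    (valAt [_ k] (fetch! k))
    (valAt [_ k not-found] (or (fetch! k) not-found))))

;; (get (remote-map) :a)          ;=> 1
;; (get (remote-map) :z :missing) ;=> :missing
```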
Generally at least in Java or similar languages it's better to compose rather than implement or extend, i.e. HashMap works fine for most cases, and you may not really need to implement java.util.Map if you're building something "special" anyway. For frameworks and library authors, I can see more of an argument for it, I just generally have never subclassed collections except in certain cases such as building an LRU map which Java doesn't have.
There are places stuck with Java 6 or 7. Java 8 is still relatively recent for the Enterprise world.
Java 8 was a special version. Lambdas, stream API, diamond operator and new exception catching syntax together eliminated 50-80% of noise in Java code. The language is still verbose, but much less than pre-8.
(Still, I prefer reading Clojure. The form of Java language still causes too much structural noise in the codebase.)
> Java 8 is still relatively recent for the Enterprise world.
Most actual surveys put Java 8 penetration at 70 - 80%. Dig deeper and the <20% of projects not on Java 8 aren't under active development and are purely in maintenance mode.
This is what makes the entire exercise a myth. People may want to believe this stuff but again it has no practical basis.
How are those surveys done? I'm not sure if the type of companies I'm thinking about are answering surveys, or otherwise participate in broader developer community.
I was pretty much spearheading the use of Java 8 in one company ~2.5 years ago, but I know some teams actively developing there only upgraded a year ago, and I'm willing to bet the main customer still didn't...
Ah, ill will or using Java as a punching bag was not my intent at all. I was working on this task, and the Java code I referenced was based on a popular Gist to accomplish the same task that I needed to accomplish, with many comments of users adapting (copy/paste programming) the same code for their usage. I felt an 'in the wild' solution was comparable to my novice-level Clojure solution (those more well versed in Clojure would likely write a much better implementation of what I did).
This was meant to be a real world use case where I found my problem solved in Clojure turned out much better than it would have been solved in another language running my choice environment (JavaFX), instead of some arbitrary contrived examples (see Rosetta Code for tons of that).
Maybe the other languages should rename what they are doing, as Read Eval Print Loop comes from Lisp; read, eval and print don't have the same semantics in any other language (read especially).
Anyone who writes Lisp without taking a moment to add a docstring to every definition would never get a job offer from me. (Not to mention comments as appropriate.)
I don't think I would withhold a job offer over a missing docstring, especially given how trivial most of these functions are, paired with their self-explanatory names. I'd actually rather have a namespace docstring explaining the intended scope and API than function docstrings for trivial functions. I've found that function schemas/specs and reasonable names eliminate 90% of docstring material. Whether the remaining 10% is worth a string above and beyond the fn name is a case-by-case deal.
Comments are fine when the code is doing something unexpected or that is very terse. In other scenarios, comments are just a land mine to be armed when the code changes and the comment isn't perfectly updated. Bugs largely come from developer expectations being broken (mostly by one of: misunderstanding data shape, some API detail, some language feature, or miscommunication on the feature with the owner) and stale comments are a contributor to this which can be avoided in many cases.
Clear and descriptive function names, concise and non-clever code, and a judicious use of well-named and concise helper functions are all way more valuable than docstrings. IMHO docstrings should be used as a last resort if the problem domain is truly complex enough to warrant it.
Docstrings for trivial, well-named functions just get in the way, and if you feel the need to add a docstring to a complex function just after you've defined it, then I think that might be a hint that you should refactor the code to make it more clear and readable instead.
An ns docstring that describes the intent of the api and docstrings for the major public interface functions are a good idea, but a blanket "docstring for every function" rule is a crutch to make up for unreadable, poorly-written code.
If you have to write a comment to explain every definition, you've failed to make the code sufficiently readable and simple to understand IMHO. Code-level comments can become outdated without eagle-eyed code-reviewers. The code itself can never lie.
Comments are useful for explaining the implementation logic of a function or method, for instance, or for generating API documentation. And file-level comments explaining the purpose of the class or module are always handy.
LISPs seem to me the best candidates for an alternative to the traditional text editor. I could imagine a mind-map sort of view which would allow you to expand and collapse sub-trees etc. Does anything like this exist? To me, that would be reason to experiment with something like Clojure.
There is a long history (going back to the 1970s) of structure editors for Lisp and there was a discussion of them on HN recently. Interlisp and Interlisp-D were the primary proponents of this approach; I used to use D-Edit when I worked in Interlisp at PARC, though it wasn't really for me as it heavily depended on the mouse.
IMHO structure editing emphasizes the wrong part of code development. In practice collapsing graph structure is less important in code as opposed to, say, JSON. Whereas you want to be able to use your eye to hop around, and free-form alignment sometimes helps to show parallel constructs.
Remember also that in a structure editor even your comments have to be part of the code, and can only be inserted in places where an expression won't change the flow of control (e.g. you can't do (if (* this is a comment *) some-condition result)).
Similarly, I would wholeheartedly recommend the Common Lisp macro system (having discovered how to use it myself only a few days back). The power it gives to the programmer is unmatched by any other system I have ever used (except maybe Forth).
The Clojure macro system is very similar to the Common Lisp one, arguably with some improvements. The Lisp languages in the Scheme/Racket family, on the other hand, take a different (more safety-oriented) approach to macros.
> very similar to the Common Lisp one, arguably with some improvements
Quite the opposite. Common Lisp macros are totally unrestricted, they don't auto-qualify names with the package. Also, there are user-defined reader macros in CL, unlike Clojure.
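The auto-qualification being referred to is Clojure's syntax-quote, which resolves symbols against the current namespace; a small sketch:

```clojure
;; In Clojure, symbols inside syntax-quote are namespace-qualified, which
;; prevents the expansion from accidentally introducing unqualified names
;; that collide with the caller's locals (auto-gensym, `sym#`, covers the rest).
(defmacro unless [test & body]
  `(if ~test nil (do ~@body)))

;; `(inc x) reads as (clojure.core/inc user/x) in the `user` namespace,
;; whereas Common Lisp's backquote leaves symbols unqualified and relies on
;; packages and explicit gensyms instead.
```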
Clojure is something I have been wanting to experiment with, but not having types makes it very difficult for me to reason while coding. It's just me. I am used to thinking in types.
So speaking as someone who has been on both sides of that fence (and, honestly, prefers static or optional typing; Dialyzer for Erlang is probably my favorite approach there), I think a large part of that comes down to how types are used.
In a very OO language, where you're encouraged to create a complex type for every function/method contract (i.e., I have a type of RoomMeasurement, that internally contains a list of Measurement interfaces, each of which is in fact implemented as a MeterMeasurement, which wraps a double), static typing is very, very necessary, because it's not at all obvious what a function takes. And you need thorough API documentation because how another developer has chosen to represent things is not obvious (that is, you are trying to interoperate with a library that doesn't understand your RoomMeasurement, but does work with just a list of measurements, but they have to be in imperial, not metric, and how do you get the list of measurements from your RoomMeasurement, and convert them? Do you have to write a function, is there one already, does it take the RoomMeasurement, does it take the list of MeterMeasurement, does it just take a single MeterMeasurement? Etc)
When sticking with simple types, though, it becomes a lot easier to reason about, and you can get away with just comments, or very slightly more complicated types. A list of measurements is just a list of doubles... but maybe dropped into a tuple where the first element is the unit (i.e., roomMeasurements = {meter, [4.22, 5.7, 3.1]}). And all you have to do is find or write a meterToFoot function (and since it's just data, it doesn't matter which, because there's no hidden stuff that needs tweaking) and apply it as a map.
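Sketching that idea in Clojure, since that's the thread's lingua franca (the names and the unit tag here are made up for illustration):

```clojure
;; A measurement set is just plain data: a unit tag plus a vector of numbers.
(def room-measurements [:meter [4.22 5.7 3.1]])

(defn meter->foot [m] (* m 3.28084))

(defn to-feet
  "Convert a tagged measurement set to feet by mapping a plain function
  over plain data -- no hidden object state to tweak."
  [[unit values]]
  (if (= unit :meter)
    [:foot (mapv meter->foot values)]
    [unit values]))
```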
Now, is that perfect? No; even if you as a developer choose to apply type information, another developer can choose to ignore it. But I think that's rarer; more common is when people don't think to supply type information at all, and pass around just (per the example), an array of doubles. You can do that in a statically typed language too, though.
I think the key difference is that people coming from a statically typed, OO language, to a dynamically typed FP language, can either end up creating the same complex data types (which are a nightmare to deal with even with static typing, but doubly so without), or they see all the examples, embrace the simpler data structures...and then don't actually supply the necessary typing information.
The reason, then, that I think static typing -is- good, is because it makes it harder for me to ignore/forget to handle the typing I have provided. That is, I may have a 'metricToImperial' function, that takes in a type tagged array of doubles, and a desired type, and determines and applies the appropriate function. But I can still forget to include a necessary conversion in the resulting case statement ('whoops, I called it with a lb to g conversion over here, and I forgot to implement that one'). It's times like that I really like optional/inferred typing; many places I don't need to check for typing, because it's obvious, both to the developer, and to the compiler if it can infer types...but I can still make mistakes, per that. Of course, that brings me to unit testing...
For web projects, the benefits provided by Clojure on the backend may not be very large; there are already good languages with extensive frameworks available (Django, Rails, etc.). It arguably may seem hard to choose Clojure with its minimalistic libraries when these frameworks provide an "easy" [1] way to get running with a full-blown admin interface. Furthermore, backend code tends to be dependency-heavy in the sense that you need dependencies you'll definitely not write yourself: a library to interface with your database, something for cryptography / passwords, etc.
Looking at the frontend (React) side, however, things are different. The JavaScript ecosystem is a mess. From the viewpoint of a React developer, there are lots of libraries which vary widely in quality. react-router is an interesting example here: it has had 4 (?) breaking changes so far, each replacing the entire API. There's a ton of mental overhead for the normal React developer trying to write a "simple" app.
Ironically, developers start rolling their own stuff. Instead of using a form library which tightly couples your components to your redux state (redux-form), you start writing your own. Instead of coupling your entire views to graphql via apollo, you start doing it differently, your way.
This is where ClojureScript is a game changer. If your app differs just slightly from a (very) vanilla CRUD app and whipping some libraries together doesn't do the trick, you start writing custom stuff. When writing custom stuff, you want a programming language which is a) well thought through (great standard library, immutability, sane concurrency) b) predictable and c) productive. ClojureScript has all three while JS has none.
We (Merantix) are currently developing a medical image viewer in ClojureScript and prototyped two separate versions: one in JS with React, another in ClojureScript with reagent and re-frame. Even though it is dependency-heavy (webgl stuff), ClojureScript turned out to be the superior choice: immutable data structures at its core which ironically perform better than Immutable.js, and way higher developer productivity due to fewer random bugs and more interactive development (REPL).
Using Clojure on the backend now seemed like an obvious choice: We can reuse and share code from the frontend and more importantly, all our developers are "full-stack" in the sense that everyone can at least understand what's going on "on the other side" (backend / frontend) of the stack as it's literally the same codebase.
The learning curve is significant but the advantages are tremendous. I wholeheartedly recommend learning Clojure even if you're not allowed to use it at your job. It sounds cliché, but it will make you a better programmer for sure.
I recommend everyone to once in a while try to debug Clojure code written by somebody else. Afterwards you will understand that this is a write once, read never language. It is an unmaintainable mess of overly clever recursive subroutines. It has some nice experimental ideas for concurrency. But basically all useful ideas are also available in Java nowadays. I wouldn't waste my time on it.
I really like and use Clojure professionally, but I have become wary of the extraordinary time I spend dealing with runtime issues because of the dynamic typing. I hope the future of core.typed is bright. I know it is being very seriously worked on. It can't come soon enough for me. Nowadays, I prefer to write in any statically typed language even if it means more lines of code, just for my own sanity.
For Clojure, check out Ghostwheel [1] - a lightweight DSL for writing specs.
If you want proper static typing though, ReasonML might be a good choice. Static, compiles to js and native, super easy to learn, and there’s an experimental Lisp frontend with Clojure-like syntax if you can’t live without paredit.
Author of Ghostwheel here – clojure.spec is certainly not a replacement for static typing, but it goes a long way to covering many of the same use cases, in fact longer than one might think at a cursory glance.
With Ghostwheel you write your function specs similar to how you'd write a type signature and you get automatic generative testing (including higher order function support) and side effect detection which – when combined with spec instrumentation (+ the upcoming evaluation tracing for the test execution) – can often tell you quite precisely where you screwed up in a much more immediate and granular manner than a simple unit test or mucking about in the REPL could. It really is a quite different experience from plain Clojure.
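For a rough idea of what this looks like in practice, here is a minimal sketch of a Ghostwheel-style function spec, reconstructed from memory of the project's README (the function name is made up): the gspec vector lists the argument specs followed by `=>` and the return spec, in roughly the shape of a type signature.

```clojure
(ns example.core
  (:require [ghostwheel.core :refer [>defn =>]]))

;; >defn takes a gspec vector of [arg-specs... => ret-spec];
;; Ghostwheel derives generative tests and instrumentation from it
(>defn ranged-rand
  [start end]
  [int? int? => int?]
  (+ start (rand-int (- end start))))
```

The spec reads like `int? -> int? -> int?`, but unlike a static signature it is exercised at test time with generated inputs rather than checked at compile time.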
That being said, I'd love types in addition to this and I'm keeping a keen eye on ReasonML.
> Clojure is a dynamically typed language and always will be
I don't entirely agree with the "always will be," as Clojure is a very creative language that inspires a wide variety of experimentation, and core.typed is currently being actively worked on (as a PhD dissertation no less by the original author of core.typed), so there is ample room to think the future will offer good static typing abilities for Clojure.
If you also count the lines of the additional tests needed to check what happens when you're passed something you didn't expect, then I'm not sure the statically typed version ends up longer.
On the contrary, nil punning seems exceptionally well suited to a dynamic language. In general, I'm only burned by nil propagating into Java, almost all Clojure code seems to handle nil appropriately.
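A quick illustration of nil punning and of where it bites, assuming JVM Clojure: most core sequence functions happily treat nil as an empty collection, so nil often flows through a pipeline harmlessly until it reaches something strict.

```clojure
;; nil punning: core sequence functions treat nil as empty
(first nil)            ;; => nil
(map inc nil)          ;; => ()
(count nil)            ;; => 0

;; but nil can silently propagate until it hits something strict:
(inc (get {:a 1} :b))  ;; => NullPointerException on the JVM
```

This is exactly the Java-boundary problem mentioned above: pure Clojure code mostly shrugs nil off, while interop and arithmetic do not.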
I'm unsure of a good way to represent optionality in a dynamic language without a static type system. Or rather, doing so with the tools available -- core.match vs real pattern matching -- seems rather un-ergonomic.
I'm curious about your thoughts of what would be better.
You are right regarding the dynamic language being, well, dynamic. I haven't put much thought into it, and I can't come up with a viable way of eliminating nil without some type system strategy (Maybe/Either monads in Haskell, Option in Rust, etc.). It's kind of a bummer, because I love Clojure as a Lisp dialect, but I really appreciate the software projects I've written in the past that don't crash due to a forgotten nil check.
Yeah, I think many Clojurians would mildly agree some changes to how nil is handled would be nice... but this is such a fundamental behavior that it's very hard to justify modifying the language at this point for just a minor benefit.
I love the clojure community efforts, but clojure.spec is so confusing and bloat-y to me. :(
It reminds me of Frama-C and specification of C programs, which isn't exactly what I would want to do all the time to have some safety guarantees on my program. I feel that a strong type system would provide way more benefits.
Much as I love Clojure, I agree that clojure.spec, while very powerful, has a mess of a UX. That's why – shameless plug – I wrote Ghostwheel [1], which, to me, turns it into a whole other thing, especially when gen-testing, spec-instrumentation and tracing are used together.
Having inferred types in addition to this would be even better, but types are no replacement for generative testing or vice versa.
They really complement each other quite nicely, but also have a large overlap in terms of how much they can reduce the need to do manual debugging and enable clean changes/refactorings with minimal effort.
The claims regarding JVM bloat are largely exaggerated, especially now that the JVM has introduced modules. However, if it's not your thing, it's worth noting that ClojureScript happily runs on Node. Here's an example of how easy it is to get up and running: https://github.com/yogthos/mastodon-bot
I'm working on my first Clojure project and find JVM to be painful (it starts very slowly and eats lots of RAM). I'd appreciate any tips on making JVM non-bloated.
If start time and memory usage is important to you, go with ClojureScript on Node instead.
If you do stick with the JVM, it tends to trade memory for speed, unless you tune it otherwise. For startup speed, start it once, connect a REPL, and keep it running. A started REPL on a JVM instance can be flushed very quickly when you need a blank slate.
Don't know about bloat fixes, but what does your workflow look like that JVM starts are annoying? My Emacs CIDER REPL comes up in 4 seconds (MB Pro 16G) and it's often many hours (even days sometimes) between restarts.
But yes, slow starts are annoying if you're starting a JVM frequently. Can you change your workflow so that's not needed? Check out Component, Mount or Integrant.
We're using Integrant already, so yes, I can mostly work in a REPL without restarting. But the restarts are more like 15 seconds. I'll have to measure and see what's so slow. My dev platform is Cursive on a 16G MBP.
And while I'm commenting, yes, we have a reason to run on JVM (a lot of legacy Java code we probably want to interface with later on). So thanks to other commenters for suggesting Node.js and Common Lisp, but not possible. :)
Don't underestimate Erlang's capacity for eating-up all the RAM on a machine in very quick time. Those lightweight processes start to add up if left unchecked. At least the JVM establishes an upper limit on memory use from the outset.
Thankfully Clojure will go nowhere beyond a few edge places. I saw a decent-sized project written in ClojureScript that had to be rewritten once the original authors moved on, as new hires struggled to get anything done. Small features took enormous amounts of time.
Your experience appears to be an outlier. Lots of companies are using Clojure for large scale projects, and my own team has been happily using it for nearly a decade now. The fact that your team struggled to get anything done probably says more about your team than Clojure to be honest.
Thanks to the rise of SOA, basically any language, no matter how obscure, can claim multiple corporate users for what are ultimately inconsequential projects.
But as long as we're using personal anecdotes, I saw several teams using Clojure at Amazon when it was more popular. Less than two years later, all of them that I knew of (I was a Clojure user too, interested in its use in the company, so I was keeping track) were at some stage of abandonment or rewrite into more boring languages. If you're lucky enough to keep your team small and ideologically aligned, it might work for your team, but I have yet to see a large company make a significant bet on it (as in more than a few one-off teams) and come out ahead.
Most companies don't really publish their tech stacks in the first place. However, there are a number of consulting companies, such as JUXT and Metosin, who built their entire business on Clojure consulting. Clearly there is a market for it, and if you look at the clients of these companies it's quite clear that it's not just some one off projects.
Again, my personal experience is that my team grew from 3 to around 20 people now all working exclusively with Clojure, and we've never had any problems onboarding, or being ideologically aligned.
Just because it allegedly didn't work out at Amazon doesn't really translate into sweeping assertions you're making about it.
The fault here is not the language. No new technology should be introduced into a team by a single person. It must be a group decision and everyone must participate.
Clojure and Clojurescript are great but they have a steep learning curve. You have to weigh the advantages and the inconvenience of every tech
That can happen with pretty much any technology if you hire people who can't learn new things.
The current de facto tech stack for building SPAs in ClojureScript seems to be Reagent + Re-frame (or something built on top of them). It's conceptually so close to React + Redux that I have a really hard time imagining a Redux guru who wouldn't be productive with Re-frame in two weeks.
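For anyone curious how close the mapping to Redux actually is, here's a minimal sketch (the event and subscription names are made up): a re-frame event handler plays the role of a reducer, a subscription plays the role of a selector, and components are plain functions returning Hiccup data.

```clojure
(ns example.app
  (:require [re-frame.core :as rf]))

;; event handler ~ Redux reducer: a pure function of app-db + event
(rf/reg-event-db ::inc
  (fn [db _] (update db :count (fnil inc 0))))

;; subscription ~ Redux selector over the single app-db
(rf/reg-sub ::count
  (fn [db _] (:count db 0)))

;; component ~ connected React component, written as Hiccup data
(defn counter []
  [:button {:on-click #(rf/dispatch [::inc])}
   "Count: " @(rf/subscribe [::count])])
```

The concepts line up one-to-one (dispatch/action, handler/reducer, subscription/selector), which is why the two-week estimate for a Redux developer seems plausible.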
Then of course you can have jQuery developers who absolutely cannot work with React and probably never will, and possibly vice versa (never seen that tested). Back in the day AngularJS was cool but much of the code written with it was horrible, because people wouldn't work through the tutorial which explained how to use it as a tool, not a footgun.
I believe that if a company wants to do software by throwing as many bodies at it as possible, they need to embrace that fully and openly. Going back to pure Java might be a good idea, as the language enables progress to be made in a "factory coding" environment pretty much by design.
(Not a judgement; this is a legit use case, even though not an environment I like to work in.)
You would think, but they measured what they need and they are sure that Elixir is the right fit for their use case. They aren't a CRUD company, like 99% of others. They are doing freaky things with multiple freaky orchestrations with multiple freaky OLD-WORLD institutions. Rails ain't gonna cut it.
Fair enough. I personally think that companies for which the choice of technology is gonna be the primary factor to success are as common as unicorns. I have never seen a company fail because they picked the wrong language/tech stack. But hey, if they've measured, they probably know what they are doing.