Sometimes I do wonder why people decide to use a new language for a critical piece of their product while still learning it on the go.
"My experience is that when you tackle big problems, that go beyond simple execution but require actual strong engineers, hiring will be a problem, there's just no way around it. Choosing people that fit your development culture and see themselves fit to tackle big problems is a long process, integrating them is also time consuming. In that picture, the chosen language isn't a huge deciding factor."
As to the issue of change, you go with something else when the current stack is bad and/or unable to do something you need. Colin Steele wrote about this when he decided to switch Hotelicopter to Clojure. He first wrote about how the existing PHP stack needed to be re-written:
"For example, at that point, the site ran out of one ginormous subdirectory with hundreds of PHP files scattered like chunks of gorgonzola on your salad, sticking to one another with tenacious glee. There was a “lib” directory, which you think would hold much of the supporting library code, but a good fraction of that actually lived in “site”, and some in “server”. The previous programming staff had felt it good and worthwhile to roll their own half-assed MVC framework, including a barely-baked library for page caching (which broke and took the site down at regular intervals), and components for database abstraction that only worked with - wait for it - MySQL. Every single goddamn file was littered with SQL, like bacon bits on this demonic salad. There was a “log” directory, but the search logs weren’t kept there, they were in “server”. Etc., etc. It made you want to eat a gun."
There are certainly situations where the codebase calls for a complete re-write, and, as the writer says in the quote above, finding good engineers is hard, and the choice of language is a side issue compared to the difficulty of finding good engineers.
Language may not be a huge deciding factor, until it is.
My question is focused more toward the existing team that probably has some pretty good skill in, let's just say, Java and OOP, and is now turning 180 degrees to a purely functional language like Clojure, as opposed to Scala (not that I'm a big fan of Scala or anything like that; it's just an example).
If it were that easy to learn new languages, we wouldn't have hiring problems: most job openings require the candidate to know a specific language, however much we would like to believe otherwise.
"He didn't do that because he doesn't like Java."
Your words imply that somehow the preferences of the top engineers or CTO should not matter. But why should those preferences not matter? If we are talking about people who are talented, then we can start with the assumption that those preferences are probably the distilled wisdom of many years of experience with particular styles of development. We don't need to re-enact the millions of debates that have happened regarding whether Java is good or bad (One side shouting "It is verbose!" the other side shouting: "Type enforcement is good", etc, etc, until the last syllable of recorded time). It is enough to know that good engineers have preferences and those preferences probably have some benefits.
You also wrote:
"My question is focused more toward the existing team..."
What if the current team is terrible? Again, referring to Colin Steele: he fired/allowed-to-leave the entire team that existed when he first arrived.
What if we turn your question around and ask it from the other side: is change ever justified? I'm guessing you would say "Yes".
"What if the current team is terrible" is not the focus of my question, so the example from Colin is arguably out of context for this discussion. I'm going to leave it at that, because there's no point discussing it: anyone could have switched the underlying technology from Java to Ruby, fired all the Java developers, and hired Ruby developers.
Colin's is a very specific example, written by the man himself, and the justification is sound. Having said that, I could pick some obscure language tomorrow (not that Clojure is obscure these days), force my own preferences through as long as I can reach the goal, and then claim in an article that my personal preferences were the best thing, even if that weren't the case.
But nobody knows the truth... right?
I'm not trying to be negative here, but at the same time no human is willing to admit his/her mistakes, to be honest. Especially when there is plenty at stake.
The reason some companies have a "hiring problem" is that they require X years of experience with framework Y in language Z, even though language Z takes a month to master and framework Y another two weeks.
Why are they doing it? Because HR does the hiring, and because "hiring is a problem" HR gets a bigger budget for hiring, instead of the development department getting a bigger budget for training.
This article motivated me to consider Clojure for a new business project. Clojure can use the JVM infrastructure, which is a big advantage. Java has become too boring for me (C# as well).
Because flying by the seat of your pants, making shit up as you go and putting something into production without any real understanding of what the fuck is about to happen is one hell of an adrenaline rush.
I keep hearing this, or statements along these lines. But I never see an explanation for this. Performance? Managing memory from other processes? Bit twiddling required to create device drivers? Something else? Are these insurmountable?
- ability to manually access memory
- ability to use assembly and translate said assembly into the high-level language
- ability to install interrupts
You probably also want the ability to precisely constrain the amount of memory a block of code uses.
I suppose there are other nice to haves as well: e.g., GC will need to be fairly controllable.
I see no practical reason why a Lisp can't be used to write an OS, but that Lisp will need a few extensions from the usual modern Lisp capabilities.
Not at all: Lisp machines used custom silicon implementing Lisp in hardware; I believe even the microcode was Lisp.
> I suppose there are other nice to haves as well: e.g., GC will need to be fairly controllable.
If you're coding the hardware to the language, parts of the GC can be hardware-supported directly.
> I see no practical reason why a Lisp can't be used to write an OS
Especially since it has been used to write OSes, multiple times.
Can you elaborate as to exactly what you mean here?
int variable = 0xdeadbeef;

mov eax, [variable]   ; load the C variable into eax
int 0x3               ; trip an interrupt, maybe read from an I/O port
mov [variable], eax   ; store the result back into the C variable
What I would like to be able to do in Lisp is to execute the semantic equivalent of the above code fragment. I've considered hacking it into SBCL, but I have had higher priorities so far.
(mov :ax :ds)
With all the new excitement around Clojure, I'm just trying to understand the theoretical and practical limits of the language.
It's primarily useful for sovereign applications (which precludes small utilities) that do not require high memory density (which precludes databases).
I'm quite fond of Clojure's model, but some tradeoffs had to be made.
Is this solely because it runs on the JVM? Because per the alioth comparison tests, Java 7 uses much less memory than Clojure too.
That said, Clojure is quite fast compared to Python, Ruby et al, but this is primarily a side effect of running on the JVM. FWIW, Java is even faster.
Only after detaching it from the JVM. Forcing all apps to run on JVM would obviously be too limiting.
> And could Clojure be fast enough to power the actual OS itself too?
It is definitely possible to get C-grade control over machine code with a Lisp, as evidenced by GOAL/GOOL. Not sure though if it is worth it.
Well you could also have machine code be a lisp (as did many lisp machines, which used lisps from the microcode up).
I'd go one level above: make a HIL bridge between standard x86 drivers and your lisp machine code.
Why not simply add support for concurrent programming as a library instead?
Clojure's concurrency primitives are libraries. You don't have to use them; it's perfectly OK to use, say, kilim or java.util.concurrent, and many do.
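For reference, the java.util.concurrent utilities mentioned above are directly usable from any JVM language; here's a minimal sketch in plain Java of the kind of lock-free structures involved (class and variable names are mine):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class CountHits {
    public static void main(String[] args) throws InterruptedException {
        // A lock-free map plus an atomic counter from java.util.concurrent;
        // both are callable as-is from Clojure via interop.
        ConcurrentHashMap<String, AtomicLong> hits = new ConcurrentHashMap<>();

        Runnable worker = () -> {
            for (int i = 0; i < 1000; i++) {
                // computeIfAbsent is atomic, so this is safe under contention
                hits.computeIfAbsent("home", k -> new AtomicLong())
                    .incrementAndGet();
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(hits.get("home").get()); // 2000, never less
    }
}
```

Nothing Clojure-specific is required; the point is simply that these primitives live in a library, not in the language.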
Why not provide both immutable and mutable versions of the same data types?
The mutable variants of these structures are already there in java.util, and the stdlib is designed to interact well with java interfaces like List, Map, etc. 
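To illustrate the point about java.util: the mutable structures are already there, and a read-only view is the closest plain-Java analogue to immutability (it is only a view, not a persistent structure like Clojure's). A small sketch, with names of my own choosing:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class MutableVsImmutable {
    public static void main(String[] args) {
        // The mutable variant lives in java.util...
        List<Integer> mutable = new ArrayList<>(Arrays.asList(1, 2, 3));
        mutable.add(4); // in-place update, as usual in Java

        // ...and an unmodifiable view gives read-only semantics.
        // Note: a view, not a copy -- changes to `mutable` still show through.
        List<Integer> frozen = Collections.unmodifiableList(mutable);
        try {
            frozen.add(5);
        } catch (UnsupportedOperationException e) {
            System.out.println("frozen rejects mutation");
        }
        System.out.println(mutable);
    }
}
```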
Also when writing real-world software, what about the effort required to align the multitude of Java libraries that assume an imperative environment with Clojure?
I've used nontrivial Java libraries in Clojure: it looks about the same as the Java code, only shorter and with fewer parentheses. Yes, fewer.
Can these libraries be used easily, safely and with the same performance?
Yes. Obviously the Clojure parts won't run quite as fast as the Java parts, but interop is complete, concise, and well-designed. InvokeVirtual is InvokeVirtual either way.
Meanwhile Clojure makes obvious improvements over CL in reader forms for readability: the vector, set, map, and regex literals make it much easier to write and understand. It may not be as pure as some other Lisps, but it's a competent addition to the family.
 A property I miss in Scala. :-(
I would beg to differ. The features you note are pretty subjective and not obvious improvements at all. I've done a few small but non-trivial prototypes in Clojure. I cannot stand the syntax for literals, or the mangled hell that is the literal for lambda. At least CL lets you define your own reader macros -- something Clojure cannot do (yet). The features you mention are not improvements over CL, IMO.
In general, Clojure code seems to have less nesting, which makes it easier for me to read and parse. You're honestly the first person I've heard express a dislike for the reader forms, so I thought it was universally liked.
I think you're right: the lack of configurable reader macros is a problem. I'd also point to the lack of tail recursion (and the consequent mucking about with (recur) and (trampoline)) as a notable flaw in Clojure. Its error messages are pathologically malicious. On the other hand, I think Clojure's packaging environment, thanks to lein and clojars, is quite good. There also seems to be more consistency in Clojure coding... style? preference? than in CL, which I attribute partly to its young age and small user base, but also, perhaps, to a more opinionated set of attitudes around mutability and types.
Regarding the #() form, I almost never reach for it, partly because it never seems to work as expected. (fn) and (partial) feel more natural to me, so I've never taken the time to understand how #() works.
I also prefer Lisp-1s in general, although I understand that's a more contentious differentiation.
Either way, my CL experience is minimal, so it was wrong of me to claim these as obvious improvements. I'll defer to your expertise here: it sounds like you've used CL enough to understand it better.
If I hate all food but pancakes, then my opinion on steak is hardly relevant.
I dislike Clojure because it is a cheap knock-off of Common Lisp, with no upsides that I'm aware of - besides trendiness. The article mentions one serious flaw - Clojure barfs Java stack traces, instead of serious debug information (the way Common Lisp does - with restarts, etc.) Another flaw is the lack of reader macros. But what is the point of listing said flaws? I could go on and on, and no one will care a whit. Why? Because Clojure is a product of immature minds who piss on the past work of serious people (the Common Lisp community) simply for the sake of faux-novelty. In this, it resembles Newlisp and other backwards steps disguised as progress. It is, in fact, best understood as yet another product of the idiotic language-of-the-day mentality which gave us Dylan, Python, Ruby, and the many other shoddy "infix Lisps."
> hates all languages that aren't the Symbolics flavour of common lisp
This is patently untrue, as anyone who actually bothers to read my articles knows well. Symbolics (or rather, the MIT Lisp machine architecture their products were based on) was simply an example of a Lisp system done well. Clojure, a thin veneer on the Java cesspool, is not. To run with your analogy, I am a lover of steak, who is upset by tofu peddlers' success in passing off their cheap trash as real meat.
In my opinion, Clojure doesn't try to one-up Lisp. Rich Hickey recognizes the great value of Lisp and wants to share these features with a wider audience -- I can't say I would have ever taken an interest in Lisp if it weren't for Clojure in fact.
Check out Clojure's rationale page (http://clojure.org/rationale). Rich says he wanted a language that is:
* for Functional Programming
* symbiotic with an established Platform
* designed for Concurrency
I think you misunderstand the designers of these new languages. Most of them aren't trying to make something "trendy" (that should be obvious by how many esoteric hobby languages there are). They are trying to solve problems. And if some people other than the author of the language can get some use along the way, well then that's just great.
Clojure isn't the product of "immature minds" either. It's the product of one mind, and one that's spent many years thinking about good language design and many years working with software in industry.
Gratuitous reference: http://news.ycombinator.com/item?id=80662 (check out pg's first comment on that post)
Symbolics was the future. CS has in many ways regressed since then, mostly because of the stupendous success of the worse-is-better Unix/C mindset, coupled with the lack of DARPA funding, the AI winter, and the rise of cheap Wintel CPUs. At Goldman Sachs, I was incredibly fortunate to work alongside one of the original Symbolics guys (listed here: http://www.asl.dsl.pipex.com/symbolics/legacy.html ). He complained long and hard about how the C/C++/Java family of PLs were "absolute shit", how OOP was doomed from the get-go, and why functional would be back someday. This was back in the 1999-2000 timeframe, when Sun was in the driver's seat and everywhere you looked it was OOP or bust. Now, ten years hence, functional is all the rage, and you've got Clojure, Functional Java, Scala, several functional JS libs... all that's old is new once again! Who knows, with all this VC $$$ searching for a home, we might actually see the resurgence of a LISP Machine. Symbolics 2.0 FTW!!!
See this thread on my site, the story of how I kicked the Mathematica habit:
What tool do people use to write Lisp Web code? Is it GNU Emacs or something else? How compatible are Emacs Lisp and SBCL?
Programming Clojure (2nd edition) by Stuart Halloway http://pragprog.com/book/shcloj/programming-clojure
Clojure in Action, by Amit Rathore http://manning.com/rathore/
The Joy of Clojure, by Michael Fogus and Chris Houser http://joyofclojure.com/
I found it to be the best beginner book on Clojure and one of the best written programming language books I've come across in a while.
Programming Clojure would be a good followup. I found that Clojure Programming gives the "how" of the language and Programming Clojure gives the "why", if that makes sense.
Clojure in Action is good, but it is definitely an "In Action" book. Depends on your learning style. I've only skimmed the Joy Of Clojure. It looks really good too but I don't think it's targeted at beginners. Could be wrong.
That said, I'm personally of the opinion that compile-time static type verification isn't quite as powerful as is oft conjectured, in terms of safety and performance. Sufficiently expressive type systems pay some of the same costs as run-time verified types in terms of indirection, and in practice the difference rarely seems significant.
To be more explicit: I assert that C (and, for that matter, Java) do not have strong type systems, in the sense of power, not rigidity. There are huge classes of bugs in kernel code that could be caught by static type verification and can't, because the C type system simply isn't powerful enough to express the invariants required. Yet, somehow, people write successful operating systems in C, and they work not because of the type system but because of careful thought and thorough testing.
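Since the parent names Java alongside C, here's a small illustration of the kind of invariant neither type system expresses out of the box: two IDs that are both plain longs can be swapped silently, and only a thin wrapper type (my own hypothetical names) restores the distinction at compile time:

```java
public class WeakTypes {
    // Both IDs are plain longs, so the compiler cannot tell them apart.
    static String lookup(long userId, long orderId) {
        return "user=" + userId + " order=" + orderId;
    }

    // Thin wrapper types make the invariant checkable at compile time.
    static final class UserId  { final long v; UserId(long v)  { this.v = v; } }
    static final class OrderId { final long v; OrderId(long v) { this.v = v; } }

    static String lookupSafe(UserId u, OrderId o) {
        return "user=" + u.v + " order=" + o.v;
    }

    public static void main(String[] args) {
        long user = 7, order = 42;
        // Arguments swapped by mistake: compiles and runs without complaint.
        System.out.println(lookup(order, user));
        // lookupSafe(new OrderId(order), new UserId(user)); // would not compile
        System.out.println(lookupSafe(new UserId(user), new OrderId(order)));
    }
}
```

This is the sense of "not powerful enough" above: the bug is caught only if you do the type system's work by hand.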
Even more to the point: We have written operating systems in dynamically-typed Lisp before, even down to the microcode level. The Symbolics Lisp machines should illustrate that it is perfectly possible to do what the parent asserts is infeasible.
All this said, I certainly wouldn't write an OS in Clojure; though I love the language, the JVM just wouldn't be an appropriate substrate. The instant someone announces an LLVM Clojure compiler, though... ;-)
Although, one thing is not clear to me. It reads to me like you are saying that indirection incurs only some of the cost of dynamic typing, so why do you follow with "and in practice the difference rarely seems significant"? I can't connect the latter part; clearly I have missed something.
I am of the opinion that you can write complex systems in C because, although as an abstraction of a complex system it is itself complex, there is no incidental complexity as is found in C++. Its small size makes it simple for those who have understood the system it abstracts. I still believe that a more expressive type system would reduce some amount of effort and uncertainty when testing and using C.
Do you consider Clean, ATS, Haskell and OCaml type systems to be expressive? These languages are expressive with powerful type systems and tend to be very fast, often without leaving idiomatic code. In my experience I have been saved many times by helpful type systems and follow them like gradients, allowing me to hold more without getting lost in the minute details of correct piping, invocation and application. I find I am much slower with dynamic typing and pressed with a feeling of paranoia.
I can't explain why preferences are often expressed so viscerally, my only guess is that there must be something biological that influences what type of type system you prefer.
I also love the concision and flexibility that comes with Clojure's idiomatic eschewing of type information: it helps me focus on functional composition instead of the particular data. Both have their advantages. Java just makes me angry. ;-)
Regarding the difference in cost: I meant to say that many "strongly typed" compilers are not yet smart enough to elide many of the run-time indirections and safety checks that dynamic languages must use. Really good type systems, like Haskell's, are different: precomputing finite-domain functions into lookup tables, finding fixed points, etc.
This is not a domain I understand very well, but the comments I've read from language folks (for instance, the Dart VM designers) suggest that type checks in particular have a relatively small impact on performance. Polymorphism still leads to things like vtables, etc, and as I understand it modern x86 is pretty good at handling these cases.
Again, I know very little about actually writing compilers/vms, so if you have further comments I'd be interested to hear 'em!
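The vtable-style dispatch mentioned above can be sketched in plain Java (names are mine); every call through the interface is resolved via a method table at run time, which is exactly the indirection modern CPUs and JITs handle well:

```java
public class Dispatch {
    // Calls through this interface go via a method table
    // (invokeinterface at the bytecode level). The JIT usually
    // inlines monomorphic call sites, which keeps the cost small.
    interface Shape { double area(); }

    static class Square implements Shape {
        final double side;
        Square(double s) { side = s; }
        public double area() { return side * side; }
    }

    static class Circle implements Shape {
        final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Square(2), new Circle(1) };
        double total = 0;
        for (Shape s : shapes) {
            total += s.area(); // target resolved at run time per element
        }
        System.out.printf("%.2f%n", total); // 4 + pi, about 7.14
    }
}
```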
 That said, Haskell's type system drives me nuts in its proliferation of types which are almost but not quite compatible; nothing worse than trying to use two libraries which will only interact with their own particular variant of a String or ByteArray.
The slowdown in dynamic languages comes from more than type checks, though. It exists because dynamic languages are like some crazy awesome dream world where anything can change at any given moment and things are not necessarily what they seem. So VMs must be extra vigilant, checking many things: assignments, exceptions, whether the object is still the same class, whether it still has the same methods, etc. The term "dynamic" is almost an understatement! Add to that boxing and the possibility of heterogeneous collections, and slowdowns are the price to pay for all that flexibility. It makes things very hard to predict, both at the compiler level and at the CPU level in terms of branching, which alone is very costly. Dynamic-language programs are not easily compressed. There are ways around that, and languages like Clojure offer a sort of compromise: by locking certain parts down in a solid reality, structures (in the sense of regularities) coalesce and can be used to speed up parts of your code. You can choose where to trade flexibility for speed.
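A rough analogy for those implicit checks, written out explicitly in Java: code that routes everything through Object has to box values and test types before each use, which is approximately what a dynamic VM does on every operation behind the scenes (variable names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class DynamicChecks {
    public static void main(String[] args) {
        // Roughly what a dynamic runtime does implicitly:
        // values are boxed, collections are heterogeneous,
        // and every use needs a runtime type test.
        List<Object> dynamicish = new ArrayList<>();
        dynamicish.add(1);        // boxed Integer
        dynamicish.add("two");    // a String can sit right next to it

        int sum = 0;
        for (Object o : dynamicish) {
            if (o instanceof Integer) {   // explicit runtime check
                sum += (Integer) o;       // checkcast + unbox
            }
        }
        System.out.println(sum);

        // The statically typed version needs no per-element checks,
        // and the primitives are never boxed at all.
        int[] staticish = {1, 2, 3};
        int sum2 = 0;
        for (int x : staticish) sum2 += x;
        System.out.println(sum2);
    }
}
```

This is only an analogy; real dynamic VMs use inline caches and other tricks to skip most of these checks on hot paths, which is part of the "ways around that" mentioned above.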
You're right, subtype polymorphism does incur a cost, but modern CPUs can handle it well. With parametric polymorphism and value types you get no runtime hits, and some free theorems to boot.
That said, how you think has to have some influence. I have never found static types to be constraining; I actually feel like they allow me to more easily plan future consequences. I suppose I trade implementation freedom for the ability to build consequence trees of greater depth and quickly eliminate unproductive branches.
One thing I don't quite understand is how expensive the protocol system is; e.g., if I extend a type with a new protocol at run time, perhaps concurrent with the use of an object of that type, how does the compiler handle it? IIRC protocols are handled as JVM interfaces, so it may just be an update in the interface method table which is resolved... by invokeVirtual, right? I imagine you could pay a significant cost in terms of branch misprediction for the JVM's runtime behavior around interfaces...