And didn't tail call optimization only become standard in Lisps with the advent of Scheme?
In any case, at Standard Chartered we didn't have tail call optimization with our Haskell dialect either. It didn't matter too much in practice, because you should be using combinators anyway. And when you are calling foldr or map, you do not care that somewhere hidden away they are implemented with a loop in C++, as long as they behave right.
Well, SC's proprietary compiler is probably different, but GHC is self-hosted (the runtime system is in C, but the compiler is implemented in Haskell), and the map and fold functions in the Prelude are recursive. Here's the source code for `map` in the Prelude:
map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = f x : map f xs
But it is definitely still true that explicit recursion is discouraged as being too 'low-level' for most Haskell code and it's preferable to use higher-order functions instead.
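To make the contrast concrete, here is a small sketch in Java (chosen since the thread is about the JVM; the method names are hypothetical, for illustration only). It shows the same distinction: explicit recursion consumes JVM stack on each call, while combinator style hands the looping strategy off to the library, which is free to implement it however it likes, as long as it behaves right.

```java
import java.util.List;

public class Combinators {
    // Explicit recursion: "low-level", and on the JVM each call consumes
    // a stack frame, so a long enough list would eventually overflow.
    static int sumRec(List<Integer> xs) {
        return xs.isEmpty() ? 0 : xs.get(0) + sumRec(xs.subList(1, xs.size()));
    }

    // Combinator style: the fold's looping strategy is the library's
    // problem; the caller only cares about the result.
    static int sumFold(List<Integer> xs) {
        return xs.stream().reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3, 4);
        System.out.println(sumRec(xs));   // 10
        System.out.println(sumFold(xs));  // 10
    }
}
```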
Indeed, and as far as I know the Scheme JVM implementations (which I haven't looked at in a long while) either do it per the spec and are slow (SISC reputedly did, and it died about the time I might have started using it), or go through contortions like Kawa to do the best they can. I don't know about JScheme; it was dead before then. And I just noticed Bigloo will compile to the JVM, bringing the list to four.
I seem to recall that whether SBCL does TCO depends on compiler settings. Which means you probably don't want to rely on it in general, unless you're okay being locked to a specific implementation and specific optimization settings that may or may not seem magical to the uninformed user.
And really I think the "classic" Lisp way to loop is `loop`, not recursion.
Both, I think (from long-ago memories). The "modern" `loop` macro (which is probably Turing complete, like I seem to remember people saying `format` is :-) is, I think, a relatively new thing; I overheard a lot of discussion about its design in the early '80s.
Although it probably had precursors, mainline Lisp, now Common Lisp, is decidedly multi-paradigm; there were even sops thrown to FORTRAN programmers, as I recall, probably dating back to when there were only a very few computer languages in existence (heck, LISP's first implementation, on a vacuum tube computer, was as FORTRAN subroutines). So overt constructs like `loop` are, in theory, just as idiomatic as recursion.
The --full-tail-calls flag is not on by default, partly because it is noticably slower (though I have not measured how much), and partly I think it is more useful for Kawa to be compilatible with standard Java calling conventions and tools.
Well, for some definition of just fine. Well-implemented TCO is a performance boost, not a hit, on most platforms. The lack of TCO built into the JVM means that JVM languages like Scala and Kawa generally have to roll their own on top of the JVM, resulting in a performance hit.
Performance is irrelevant I think to the discussion. The claim is being made that Clojure and ABCL don't do full tail call elimination because of stack restrictions in the JVM. That sounded unlikely to me. Kawa does full tail call elimination and it's a JVM language. Hence, this claim can't be true, right?
Kawa fakes it. The JVM doesn't support it, so if you use a normal function call you don't get tail call elimination.
You can avoid calling functions and instead do your own stuff, but that doesn't change the fact that the JVM doesn't support it.
That is what they mean by performance: you lose performance because you can't do naked function calls in those cases.
Since, except for mutual recursion (where detecting tail calls correctly is difficult), the benefit of tail call elimination is a performance boost, it gets skipped when the cost of implementing it kills your performance.
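One common way JVM languages "roll their own" is a trampoline. This is a minimal sketch of that general technique (the names are hypothetical, and this is not a claim about Kawa's actual mechanism): instead of making a tail call directly, each step returns either a final value or a thunk for the next step, and a driver loop runs the thunks at constant stack depth. The cost is exactly the one described above: you've replaced a naked function call with an object allocation plus a virtual dispatch per step.

```java
import java.util.function.Supplier;

public class Trampoline {
    // A step is either a finished result or a deferred tail call.
    sealed interface Step permits Done, More {}
    record Done(long value) implements Step {}
    record More(Supplier<Step> next) implements Step {}

    // A tail-recursive countdown written in trampolined style: the
    // "recursive" call is wrapped in a Supplier instead of made directly.
    static Step countDown(long n) {
        return n == 0 ? new Done(0) : new More(() -> countDown(n - 1));
    }

    // The driver loop: stack depth stays constant regardless of n.
    static long run(Step step) {
        while (step instanceof More m) {
            step = m.next().get();
        }
        return ((Done) step).value();
    }

    public static void main(String[] args) {
        // Deep enough to overflow the JVM stack if written as direct recursion.
        System.out.println(run(countDown(1_000_000)));  // prints 0
    }
}
```

Handling mutual recursion this way is what's genuinely awkward: both functions have to agree to return `Step`s through the same driver, which is why it's called out above as the hard case.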
Originally, though, it didn't. See the current language in the link you provided, emphasis added:
Kawa now does general tail-call elimination, but only if you use the flag --full-tail-calls. (Currently, the eval function itself is not fully tail-recursive, in violation of R5RS.) The --full-tail-calls flag is not on by default, partly because it is noticably slower (though I have not measured how much), and partly I think it is more useful for Kawa to be compilatible with standard Java calling conventions and tools. Code compiled with --full-tail-calls can call code compiled without it and vice versa.
The fact that it didn't originally is irrelevant. And the fact that you have to provide a flag because of performance is irrelevant. It still remains that Kawa appears to do full tail call optimization on the JVM. So it can't be that Clojure doesn't do it because the JVM doesn't permit it. So what's really going on here?