Hacker News
Is Scala really more complicated than Java? (artima.com)
38 points by fogus 2499 days ago | 32 comments

As a personal project, I've built a simple search-driven website using Scala + Lucene. The article doesn't make a fair comparison of Scala vs. Java complexity. The examples in the article focus on Scala's strengths, e.g. for-comprehensions, closures, Range classes, etc. It is true that these features are superior to their Java equivalents. However, Scala is still too awkward in its implementation of annotations, generics, and type inference in general. For example, Scala 2.7.5's annotation problems make integration with the Java Persistence API (JPA) an ad hoc hack. Scala 2.8 still has serious open bugs on annotations. Generic types are fundamentally complex, and while Java attempts to abstract away some of the complexity related to co/contravariant types, Scala requires a deep understanding of the nuances in practical applications of generics. ML type inferencer in Scala is great when it works (in simple situations) but it is a headache to figure out _why_ it causes problems in the cases when type inference doesn't work as intended. Unfortunately, when type inference doesn't work as expected, one is forced to hard-code type casts, which defeats the whole purpose of a type inferencer.
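To make the last point concrete, here is a minimal, made-up sketch (not from my project) of the kind of case where the inferencer gives up and the type has to be spelled out:

```scala
object InferenceDemo extends App {
  // Easy case: the inferencer works out List[Int] on its own.
  val xs = List(1, 2, 3)

  // Harder case: with a bare Nil as the fold's seed, scalac infers
  // the accumulator as List[Nothing] and rejects the lambda, so the
  // element type has to be written out explicitly.
  val doubled = xs.foldLeft(List.empty[Int])((acc, x) => (x * 2) :: acc)

  println(doubled.reverse)  // List(2, 4, 6)
}
```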

"ML type inferencer in Scala is great when it works (in simple situations) but it is a headache to figure out _why_ it causes problems in the cases when type inference doesn't work as intended."

I've heard this many times. I think learning some formal type theory might help. It worked for me, helping me get my head around Haskell and make sense of type error messages.

"learning some formal type theory might help" -- understanding formal type theory is not a prerequisite for understanding Java. If you are suggesting that knowledge of formal type theory is needed to be an effective Scala programmer, then, as I pointed out, Scala is more complex than Java.


" then as I pointed out, Scala is more complex than Java."

I didn't comment (either way) on this.

I was just trying to say learning type theory worked well for me when I was learning languages with powerful type systems (Haskell in my case), especially in understanding type errors. It may or may not work for you. It is just an experience report/suggestion.

I don't care if Scala is more complex than Java or not. Apologies if you thought I was making a claim to that effect.

No apologies needed, I thought your comment about learning formal type theory and your book recommendation were valuable. My comment was based on the premise of this thread (per the title) and the premise of the article.

An interesting thing about this article is that it makes me think about just how natural the English fragments within mainstream (C-like) languages are.

Loosely speaking, C-like English would be: "For each tree in the grove, paint on the stem"

But what would a Scala-like sentence be??? "trees in the grove - paint circles on their stems"???
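For fun, both phrasings translate fairly directly into Scala (grove, paint, and stem are invented names):

```scala
case class Tree(stem: String)

object GroveDemo extends App {
  val grove = List(Tree("oak"), Tree("birch"))

  def paint(stem: String): Unit = println(s"painting $stem")

  // "For each tree in the grove, paint on the stem" (statement by statement)
  for (tree <- grove) paint(tree.stem)

  // "Paint the stems of the trees in the grove" (expression-oriented)
  grove.map(_.stem).foreach(paint)
}
```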

That might show that idioms from C got into your English, not the other way round:

"For every cookie in the jar; I ate that cookie" vs. "I ate all the cookies from the cookie jar" vs. "The cookie jar is empty"

(the last one assumes that the cookie jar was type-safe and contained nothing but cookies)

Using something that requires you to understand a couple big ideas ("formal theory", as you put it) is usually a lot less complex than using something that doesn't, but requires you to remember a bunch of inconsistencies and exceptions to rules. It's easier to remember concepts than details.

I haven't used Scala, but it's certainly a more general rule. Prolog or Make, for example, are damn simple once you understand the model they're based on - they're just fundamentally different from (say) Java or Python. Make has a reputation of being weird and complicated* because people assume its complexity adds to whatever model they're familiar with, but actually it replaces it entirely. Once you understand make's underlying model, everything you need to know to use it would comfortably fit on one page.

* Except the GNU make extensions added to bend gmake into a more 'conventional' tool / make it work with the autotools. The original, non-GNU make is vastly simpler. (Same with gawk vs. regular awk.)

"make it work with the autotools" ? The auto* family of tools have a ton of complexity to make things work with non-GNU tools. Automake would be vastly simpler if there wasn't the specific design goal of working with non-GNU make.

That didn't come out right - I meant that sometimes people get the impression that make is incredibly complex because they look at makefiles generated by the autotools. Those makefiles are typically very different from hand-written ones.

Those makefiles wouldn't have to be as complex if automake only targeted GNU make. A lot of the messiness is there to support conditionals.

That's because conditionals are implicit in the underlying selection of targets. Trying to force explicit conditionals into the language gets cumbersome and ugly. You don't see explicit 'if's in Prolog, either.

I disagree. Writing multiplatform makefiles without explicit conditionals is cumbersome and ugly, mainly because the problem space contains so many exceptions. In this case, purism isn't really practical.

Unfortunately, many many essays have been written attempting to describe e.g. monads, which are actually a really simple concept, just very abstract.

The evident difficulty the general programming public has had getting to grips with even this topic would make me pessimistic as to the efficacy of your advice.

That is why I said formal type theory (which is very different from describing monads in the essays for the "general programming public").

Working through Pierce's book "Types And Programming Languages" (aka TAPL) is more what I had in mind. It is very clearly written and not that difficult (I have no formal Computer Science education and had no trouble with the book).

Monads aren't really very difficult when learned from the Type Theory perspective (vs from weird essays comparing them to elephants, space stations and so on).
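To back that up: from the type-theory side, a monad is just two operations (unit and bind) plus three laws. A minimal Scala sketch (the names Monad and OptionMonad are mine):

```scala
// A monad is a unit (pure) plus a bind (flatMap) obeying three laws:
// left identity, right identity, and associativity.
trait Monad[M[_]] {
  def pure[A](a: A): M[A]
  def flatMap[A, B](ma: M[A])(f: A => M[B]): M[B]
}

object OptionMonad extends Monad[Option] {
  def pure[A](a: A): Option[A] = Some(a)
  def flatMap[A, B](ma: Option[A])(f: A => Option[B]): Option[B] =
    ma match {
      case Some(a) => f(a)
      case None    => None
    }
}

object MonadDemo extends App {
  // Left identity: pure(a) flatMap f == f(a)
  val f = (n: Int) => Option(n + 1)
  println(OptionMonad.flatMap(OptionMonad.pure(41))(f))  // Some(42)
}
```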

I was specifically addressing this bit.

" but it is a headache to figure out _why_ it causes problems in the cases when type inference doesn't work as intended"

In my experience, knowing basic type theory helps this "figuring out" process.

'would make me pessimistic as to the efficacy of your advice.'

Fair enough! I just posted what worked for me. YMMV :-)

I do recall hearing about some mismatch between Scala and Java generics (both ultimately use type erasure, due to a JVM limitation). Scala generics on their own are fine.

2.8 is supposed to improve both, I think.

The type inference is local (and unification-based), not Hindley-Milner (ML). I'm pretty sure Scala's type inference is more complicated (and less awesome as a result).
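A small made-up illustration of that locality: Scala 2 infers each parameter list left to right, so the same lambda needs an annotation in one position but not the other, where Hindley-Milner would infer both:

```scala
object LocalInference extends App {
  // Single parameter list: both arguments are checked together, so the
  // lambda's parameter type is still unknown and must be written out.
  def applyOnce[A, B](a: A, f: A => B): B = f(a)

  // Two parameter lists: A is fixed by the first list before the second
  // is checked, so the lambda needs no annotation.
  def applyCurried[A, B](a: A)(f: A => B): B = f(a)

  println(applyOnce(3, (x: Int) => x + 1))  // annotation required here
  println(applyCurried(3)(x => x + 1))      // inferred
}
```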

I know that's totally beside the point, but I'd have coded the example like this:

    print """
        Happy Birthday To You
        Happy Birthday To You
        Happy Birthday Dear XXX
        Happy Birthday To You
    """

Your solution is:

    print"""Happy Birthday To You\nHappy Birthday To You\nHappy Birthday Dear XXX\nHappy Birthday To You""" -> 103 chars

This came up on the Clojure mailing list:

    (dotimes[x 4](println"Happy Birthday"({2"Dear XXX"}x"To You"))) -> 63 chars

40 fewer characters.

A nice trick! And it can be derived directly from the "definition" of the song (that is, from the way the song is constructed). Great!
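The same map-lookup-with-a-default trick reads nicely in non-golfed Scala too (a sketch, not from the thread):

```scala
object Birthday extends App {
  // Line 2 (zero-based) is the odd one out; everything else defaults.
  val closing = Map(2 -> "Dear XXX").withDefaultValue("To You")
  for (i <- 0 until 4) println("Happy Birthday " + closing(i))
}
```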

Code that makes use of features unfamiliar to the reader appears more complicated. Of course, it may be that Scala is more complicated than Java.

I don't know Scala, but it is my understanding that it provides all of Java's functionality (possibly with different syntax) for the purpose of interoperation, as well as its own features. If that's true, then the language is more complicated than Java, even if the code you actually write is not.

Great concise summary at the top of the article. More writers should do this!

People complain about OCaml because it doesn't use parens around function arguments or { } around blocks. Basically, programmers don't like to learn anything new, which is very sad for people who are supposed to be technology-led and love embracing new ideas.

I don't think that's it. I love learning new stuff, but if the new stuff is somewhat like but still different from the old stuff, I have a much harder time giving it its own place.

For instance, in Python I'm not usually tempted to put ';' at the end of a statement, except on lines with function calls. For some weird reason that f(x); comes out completely automatically.

And every time I see my fingers do that I have to remove it again, and plenty of times I simply miss it.

It's like a violin that you've gotten used to over the years, the next violin may be a very good instrument but it will never be the one that you have this finely tuned connection with that you built up over the years. That takes years!

So true! Loved your violin example. Let me share my experience: I used to code in BASIC when I was in school, so shifting to C++ in college was a pain in the ass. Later I learnt Python by choice, although initially the indentation stuff freaked me out because I had gotten used to C syntax. Recently, while learning Scheme, I felt handicapped. Once we're used to certain languages, familiarity makes the task seem easy. I found the Java code easier than the Scala, maybe because I'm unaware of Scala syntax. Almost everyone in my college codes in C++ simply because they're so used to it, and they have problems when switching languages!

PS. I'm much happier now that Python has replaced C++ as my primary language :)

You do realize that f(x); is a valid statement in Python?

Sure, but the ';' can be lost without penalty.

It's not about 'not learning anything new'. There are plenty of new concepts in languages that don't use archaic or oddball conventions just for the sake of being different, especially if there isn't a technology driving the adoption (like there is with Objective C).

Especially in things like Scala; ':=' for assignment is from Pascal, and OCaml, to me, feels like a really verbose Lisp.

Programmers are much less likely to pick up a new tool if its syntax is so radically different from what they normally work with that there's a steep learning curve just to become accustomed to it.

Sure, both languages have some interesting features, but there really isn't anything driving the change beyond that. Scala lacks a decent web application framework (Lift is rubbish, I'm sorry), and they seem to be targeting that space. OCaml is interesting, but I don't see any major new technology driving its adoption.

OCaml's syntax is denser than C. There is less to type when you write f 1 2 3 vs f (1, 2, 3), and it gets even better once you look at the larger features like function definitions.

Bit surprised you think Lisp is more dense. I just wrote a Lisp compiler in Lisp, and found it more verbose than ML (although only by a little bit). I find the lack of syntax in Lisp very annoying, particularly since it's not necessary -- you can write macros over an AST, which is how camlp4 works.

> Lift is rubbish

Could you elaborate?

I would be highly interested in the background of the friend who motivated this article. Is he a Java-only person? Does he develop in/know multiple languages?

I'm asking because, just looking at this snippet, I had no problem understanding it at a glance (which rates it as not complex), given that I know and was reminded of Haskell's and Python's list comprehensions, Ruby's block structure, and printf (don't hit me if these things are entirely different in Scala, but this is how I read it, and apparently it was close enough to understand what the snippet does).

Complicated how? The number of unfamiliar features (Scala has stuff that Java does not)? The size of the language specification (Scala's is much smaller than Java's)?

I think there are more abstractions available compared to Java. That means there are possibly more things to understand before you can know exactly how a piece of code gets executed. On the other hand, having more abstractions also means the code can be more explicit and demand less "simulating in your head" in order to understand what it does.
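A tiny made-up example of that trade-off: the abstraction asks you to know what filter and sum do, but in exchange there is nothing left to simulate in your head:

```scala
object AbstractionDemo extends App {
  val orders = List(120, 30, 75, 200)

  // Loop style: you replay the iteration mentally to recover the intent.
  var total = 0
  for (o <- orders) {
    if (o > 100) total += o
  }
  println(total)  // 320

  // Abstraction style: the intent is stated directly.
  println(orders.filter(_ > 100).sum)  // 320
}
```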
