Why we love Scala at Coursera (coursera.org)
134 points by saeta 1100 days ago | 180 comments



It's good to see Scala getting some love after all those "i hate scala" rants (many of which seemed to be written by people who haven't spent much time with the language). We're using it at work for essentially all new development. Although I have complaints, I can't think of another language or environment that I've used that's been so satisfying and free from overall irritation.

In practice, we don't get code that's unreadable, we don't wait ages for compilation, and we gain all the functional and typesafe goodness.


We've had the same experience with Scala. It's the most enjoyable language I've ever coded in. We switched away from Java for most new development a while back already. The anti-Scala rants are just confusing to us. Most of the people complaining just seem to be random non-Scala developers recycling other peoples' criticisms.


I've gone Ruby->Java->Scala->Java->Scala over the last five years.

Scala was terrible two years ago. All the tools were broken, and every point release broke binary compatibility, so that jar dependency hell was amplified immensely.

Now, there's things I don't like, but it all mostly just works. It seems like this started around 2.10. So I just appreciate a language that is more expressive than Java, with all the things I like about the jvm.


I hear Java people say "I love scala" a lot. I do not hear people coming from other environments saying this.


I've come from lots of environments, and you're correct, I won't say "I love Scala". What I will say is: right now, if you are targeting the JVM, there aren't any better alternatives (I believe in strong typing).

Maybe someday Kotlin or Ceylon or Java 9 will be better but for now Scala is the best choice.


At a guess, base rate fallacy and typing. The Java community is huge, so even a small percentage of happy Scala converts translates into a large absolute number. People from dynamic languages (Ruby, Python) are probably averse to static typing, which puts them off.


It certainly does sound like a step up from Java, while retaining most (all?) of the good things.

Being a "better" alternative to Java (edit: as kasey says, targeting the JVM) doesn't sound like a bad place to be :) .

It's on my "to-try" list but the little development I do nowadays is on the .NET platform.

Which other environments do you think would benefit from trying Scala, or are bashing Scala? C / C++ ? Node? Edit2: it seems Ruby or Python people wouldn't like Scala. Me, I like static typing.


My team at a previous job went from perl / c / python to scala, and liked it better.


I was a python guy before I found scala FWIW


They looked at it once, couldn't understand it, and smashed their screen.


What is the point of articles like this? I'm not trying to be snarky; I really am trying to understand why a company would go out of its way to advertise the technology it's using.

Sometimes we see blog posts from devs at companies (often pretty senior) who write things like "We went with Clojure, and we've been really happy." That's clearly a geek-to-geek thing. But what is this? I mean, it's also geek-to-geek, but it's more "official." Users don't care -- they might have no idea what's even being discussed.

Honestly, it's hard not to read this kind of thing as, "We had a whole bunch of meetings where we debated what language/technology/platform to use. There was bitter, acrimonious warfare, but we don't want to be a company that is about warfare, so we're writing this post that makes the case for why the people on the side of the angels won the argument."


For one, it's a good way to attract hires. As they mentioned, Scala developers are not as common as say, Java developers, so a post like this is a way to announce to Scala developers looking for work that Coursera is a potential employer - many people would view being able to use a language they like as a positive about a job.


I see it also as a good way to repel people hating Scala. For example, after reading this article I'm sure that I won't even try to apply for a job at Coursera: zero time wasted for both me and them.


Ah, of course. And of course to crb200 below as well.


When I first approached writing this article, I had a hard time. We have learned a lot from other companies sharing their experiences and we wanted to give back. We are doing this now because we think we have enough content to regularly write posts that will be interesting to other developers. Although we are very interested in open source projects and giving back, open source projects require significant amounts of time and energy (something at a huge premium). Because of this, we have instead adopted a policy of open source snapshots (see the README at: https://github.com/coursera/js-libraries-snapshot) for code contributions, as well as sharing what other lessons we have learned through other channels. I have spoken at AWS re:Invent, as well as written about other successes at betacs.pro, my personal blog.

As for your warfare speculation, I am happy to say it is just that. We have not had any warfare whatsoever here at Coursera. That simply is not our style. Instead, we tried out a variety of different programming languages. Through organic internal growth, we have come to adopt Scala and Play! not because of a decree but because we have found it helps us get the most done quickly, and reliably.


>> As for your warfare speculation, I am happy to say it is just that. We have not had any warfare here at Coursera.

This is not true. I have it from multiple independent sources within the company that it was largely an acrimonious coup, led by a couple of engineers. It's one thing to defend your position. It's a wholly different matter to willfully lie.


It is in their best interests to attract other developers and companies to the Scala community and ecosystem. The more that well-known companies advertise their adoption of [x] technology, the more comfortable others will feel in doing the same.

Coursera said as much: the Scala + Play community is modest in size.


Because the engineers are in charge.

Coursera was founded by programmers and is dominated by programmers.

I'm not saying that's always a bad thing - Google is dominated by engineers and it is an engineering company.

But I have been surprised by how few people (zero, from what I can see) they have hired who specialize in education and learning, which I thought was the business of Coursera.


"Coursera was founded by programmers and is dominated by programmers."

I doubt Daphne Koller or Andrew Ng were (production) programmers. They probably wrote enough Matlab or whatever for their teaching research needs like most academics. (imo, glad to be corrected).


> I really am trying to understand why a company would go out of its way to advertise the technology its using

Simple - hiring.


The point is to attract new developers who may be interested in Scala to Coursera as well as showing their expertise in the language. I definitely think this works, but it's a very obvious strategy as many others have called out. I think a better tactical recruiting strategy would be to open source some Scala based projects. If you look at Coursera's GitHub repository[1], they aren't giving anything back compared to a company like Mesosphere[2] that offers multiple open-source Scala projects. As a developer, I want to work where people are building cool things, not where a team is married to a language or platform.

[1] - https://github.com/coursera

[2] - https://github.com/mesosphere


Marketing. They want everyone to take Odersky's courses on their platform.


Everyone should take a Functional Programming class (I'd dabbled in it, but took Odersky's class last year). I'm not programming in a functional language, but it's changed the way I think about certain coding tasks (for the better).


The course is not planned for 2014 yet. If it was for purely marketing reasons, they would have at least given some indication on a new upcoming session.


many reasons:

1. "a company" is not monolithic entity, it's a collection of people. a lot of the people at coursera are clearly scala enthusiasts, and blog posts about how well your (especially non-mainstream) language worked for you are a popular way of participating in the programming community. likewise, someone with decision making power decided that posting articles like this on the official company blog was part of the culture they were trying to foster at coursera (or, alternatively, that the company is small enough that people just collectively decided that it would be a nice idea for developers to do that).

2. even viewing it in pure utility terms for coursera-as-an-entity, it makes a lot of sense to encourage scala adoption. the economics of open source make it so that the more people are invested in your particular ecosystem, the better it works for everyone.

3. as other people have pointed out, it makes a great hiring advertisement for people who like using scala and want a job where they can do that. also, tying in to point 1, it advertises coursera as the sort of company where developers can post articles like this to the company blog, which implies a company culture that values both its developers and their "do things the right way" mentality.


These articles are advertisements. It's a combination of growth hacking, seo, recruiting, etc for companies that wouldn't otherwise be relevant here so they force it.

Exploitation is a good word for it - the submitters often have no actual interest in this community except for what they can take from it. We get a lot of it now. Flagging helps keep them off the front page depending on how many coworkers or friends the company has jacking their votes up.


"Refactoring a statically typed language is easier than refactoring an interpreted one; modifying existing PHP, and even Python, is a difficult chore that engineers shy away from because modifications are likely to create more bugs than they fix."

Over the years I've worked pretty deeply with both static (C++/Java/Scala) and dynamic languages (Python/Ruby). I simply don't agree.

The biggest thing that contributes to refactorability has nothing to do with type safety. Instead unit test coverage, by a mile, is the thing that makes code easy to refactor. In my experience issues of type safety are rare. The same types of bugs are going to crop up in Scala, Python, Java, or even Fortran unless you have tests to help you discover those issues.

I've long felt that type safety is largely a solution to a problem that doesn't really exist. Not in a hugely meaningful way at least. Logic errors are generally independent of type issues, and as such it is code quality and unit test coverage, more than any specific language feature, that lead to highly flexible (and therefore refactorable) code.


> The biggest thing that contributes to refactorability has nothing to do with type safety. Instead unit test coverage, by a mile, is the thing that makes code easy to refactor

Type Safety is Unit Test Coverage.

Typing is a statically checked test of correctness of your program.

Languages like C give static typing a bad name. Types are not things like 'float', 'int', 'double'. Types are 'degrees', or 'radians'. Types are 'buy', or 'sell', 'price' or 'yield'. Types guarantee that your code executes as you expect, and prevent you from writing incorrect code in the first place.

Is it 100% effective? No. But there's a reason that high level statically typed languages (Scala, Haskell, OCaml) usually work correctly after they compile.
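The degrees/radians point above can be sketched in Scala with value classes. This is a minimal sketch with hypothetical names (`Degrees`, `Radians`, `toRadians` are illustrations, not anything from the thread):

```scala
// Wrapper types make the unit part of the type; extending AnyVal makes
// these value classes, avoiding most of the runtime boxing cost.
case class Degrees(value: Double) extends AnyVal
case class Radians(value: Double) extends AnyVal

def toRadians(d: Degrees): Radians = Radians(d.value * math.Pi / 180)

val half = toRadians(Degrees(180.0)) // Radians(3.1415...)
// toRadians(Radians(1.0))           // does not compile: found Radians, required Degrees
```

Passing `Radians` where `Degrees` is expected becomes a compile-time error rather than a silently wrong answer.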


> Types guarantee that your code executes as you expect, and prevent you from writing incorrect code in the first place.

If you write a Rational class and implement its + method as

    def +(other: Rational):Rational = new Rational(0)
then no amount of static typing will save you. At least without dependent types.

Static typing gives you knowledge about what kind of object you're dealing with without the need to run the code. It's almost completely useless if the "kinds" supported in the type system are as broad as 'int', but it does help quite a lot if the type system lets you differentiate between cm and inches.

It still won't help you at all if you have a logic error - and this is where unit tests are valuable.

Personally I find a mixture of Typed Racket and Racket with contracts and unit tests to be the right thing when it comes to producing as error-free programs as possible. I was also pleasantly surprised with Smalltalk, which has no static typing, but because it runs all the time and every piece of code becomes "live" the moment it's written you can use introspection in place of static typing with very similar benefits.

Anyway, static typing is a valuable tool and one way of making software less wrong but it's important to know there are other tools and to use them all when it makes sense.


> then no amount of static typing will save you. At least without dependent types.

Sure. Static typing also doesn't help with misunderstanding requirements, or a faulty test in a recursive function. But strong typing, especially in languages where there is no such thing as a NullPointerException, does eliminate entire class of problems. On the other hand, with dynamic languages, you are often a typo away from a runtime error.
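A minimal Scala sketch of how the "no NullPointerException" point works in practice, using hypothetical names:

```scala
// Map#get returns Option[String], never null, so the compiler forces
// the caller to decide what happens when the key is missing.
val users: Map[String, String] = Map("u1" -> "Ada")

def greet(id: String): String = users.get(id) match {
  case Some(name) => s"Hello, $name"
  case None       => "Hello, stranger"
}
```

Forgetting the `None` case is a compiler warning at the call site, not a runtime crash in production.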


> usually work correctly after they compile

I still don't buy in to this trope.

Q: What is true of every single bug in production?

A: It passed both your type checker and your tests.


Look at it the other way. In fact, you can do a blind test. Try writing in a strongly, (well[0]) statically-typed language for a while, and count the number of times your code fails to compile due to a type error.

Each of those would likely have been a runtime error in a language like Python.

Just earlier today, I had to write a short Python script (I normally write Go), and I was bitten by this. I'm used to the compiler telling me up-front when I try to concatenate a string and an integer, or when I misspell a variable name.

Yes, there are other ways around this (and proofreading your code before you write it is a good practice, in addition to tests). But static typing can make development much faster[1].

Even if you claim that you would have caught all of those bugs in unit/integration/system tests (which I simply don't believe - no real-world project has test coverage that's that good), it's much better to catch bugs at compile-time, because (unless you're writing C++) your tests take much longer to run than your code does to compile.

[0] I qualify this to rule out C, which has weak typing as well as memory management issues to complicate things, and Java, which just has a terrible type system period.

[1] In some sense this is all a moot point, because any difference in development speed and/or code correctness between statically-typed and dynamically-typed languages is dwarfed by one's familiarity with the languages in question. The only real way to test this directly would be to use a statically-typed language that allows disabling of all compile-time type checks as a compiler flag or pragma, but few languages do this in a way that'd be straightforward to do a meaningful blind test.


Look at it the other way. In fact, you can do a blind test. Try writing in a strongly, (well[0]) dynamically-typed language for a while, and count the number of times your code fails at the REPL due to a type error.

Each of those would likely have been a compile time error in a language like Haskell.

Just earlier today, I was writing some C++ (I normally write Clojure) and I was bitten by this. I'm used to evaluating code in the REPL as I work, telling me up front when I try to concatenate a string and an integer, or when I misspell a variable name.

Yes, there are other ways around this (and proofreading your code before you write it is a good practice, in addition to tests). But dynamic typing can make development much faster[1].

Even if you claim that you would have caught all of those bugs in a type system (which I simply don't believe - no real-world project has types that are that precise), it's much better to catch bugs at run-time, because (unless you're writing Go) your compiler takes much longer to run than your REPL does to eval.

[0]: I qualify this to rule out Python, which has weak typing as well as mutability issues to complicate things, and JavaScript, which just has a terrible REPL period.

[1]: In some sense this is all a moot point, because any difference in development speed between statically-typed and dynamically-typed languages is dwarfed by one's familiarity with the languages in question. The only real way to test this would be to use a dynamically-typed language that has an optional type system as an external tool, luckily quite a few languages do this in a way that'd be straightforward to do a blind test (Typed Racket, Typed Clojure, Erlang w/ Dialyzer).


> Each of those would likely have been a compile time error in a language like Haskell.

Why do you consider this a hindrance? Haskell has a lot of issues, but that's really not one of them. You can write a Haskell program and be relatively confident that it's going to work right in a way you simply won't be with, eg, Java. And you don't have to wait until compiling. With ghc-mod and vim/emacs configured, you'll be notified upon saving if you blew up something in the current file. You don't need to run your code in a REPL to find out about it, although GHCi is pretty good. You spend a lot less time figuring out what goes wrong at runtime, although I'll grant you that the tooling is poor enough that you have a bad time when it does happen.

> I qualify this to rule out Python, which has weak typing

Excuse me? You rule out Python due to weak typing and mutability but disqualify Javascript due to its REPL? This must be sarcasm. I have no comment about Javascript's REPL (which one? node's?), but Python is most certainly not weakly typed, not in the way "weak typing" is commonly understood. As for mutability, they are comparable (except that Python has at least the good grace to throw an exception at runtime if you attempt to access an attribute which does not exist).


You can have a REPL in a statically typed language too---Haskell has one.


> You can have a REPL in a statically typed language too---Haskell has one.

And so does Scala...


GHC's REPL wraps all expressions in a giant do-block with an IO monad. It's practically useless without the meta-language of the colon commands. That said, truly modern versions of GHC have deferred type checking for REPL use, which is impressive, useful and, ultimately, a simple emulation of a dynamic language.

Never mind that not even Python or Ruby, and especially not JavaScript, provide sensible useful REPLs with sane code reloading semantics.

People who shit on crappy type systems turn around and then shit on crappy REPLs in broken dynamic languages. I'd take a statically typed language over a language with a bad REPL any day.


I tried Go after a few years of Ruby, and compile-time errors is what annoyed me the most. Bzzt that variable is defined but not in use, fail. Bzzt that function is defined but not used, fail. Bzzt that library is included but not used, fail. Felt like I was getting a rap across the knuckles with each failure as I was trying to learn Go.

The try/fail/modify/repeat cycle of REPL-driven-development is what appeals to me. I "get" the value of strongly typed languages, but I'd prefer to opt-in to strictness, rather than have it forced upon me.


Haskell will give you REPL-driven development and strong typechecking (optional in the latest Haskell compiler).

BTW, Go is not that strongly typed. It seems Google engineers don't like strongly typed languages.


Interesting, I felt the same way, and then something changed: rather than feeling punished by the type checker, the type checker became my ally, teaching me about my mistakes at compile time.

Other than for quick scripts I have no interest in dynamic languages now, fully converted to the Typed side ;-)

Ruby and Scala are quite similar in some ways: terse syntax, functional, MOP capable (implicits), mixins (traits), etc., but yes, you have to prove to the type checker that your intentions are correct/sound.
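A tiny sketch of the trait/mixin similarity, with hypothetical names; mixing in a trait is roughly analogous to including a Ruby module:

```scala
// A trait with a concrete method can be mixed into any class,
// much like a Ruby module's methods become available via include.
trait Greeter {
  def greet(name: String): String = s"Hi, $name"
}

class Service extends Greeter // Service now has greet() for free
```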


>In some sense this is all a moot point, because any difference in development speed between statically-typed and dynamically-typed languages is dwarfed by one's familiarity with the languages in question

I dunno, I have 11 years of python experience, and almost a year of haskell experience. I am more productive in python when doing small (<100 lines) scripting type tasks. But I am noticeably more productive in haskell when writing bigger things. Haskell has a relatively crappy web ecosystem, yet I'm still more productive with snap+postgresql-simple+digestive-functors than I have ever been with any python framework (from zope way back in the day to django and flask). Tasks that are more skewed to haskell's strengths like highly concurrent network servers make the difference even more pronounced.


So yes, they would be runtime errors in Python. But that's not such a catastrophe, actually. The way I write Python is by being in an IPython shell, writing short functions, unit-testing them as I go. So the development-time cost of those runtime errors is not very high, and in my experience offset by the flexibility and interactiveness. Writing in a functional style without much mutable state is really the thing that I find saves development/debugging time.


> The way I write Python is by being in an IPython shell, writing short functions, unit-testing them as I go.

One can write Scala the same way in a Scala shell.


That's a tautology. By that reasoning, don't bother writing any tests, because all your bugs in production will get past them anyway!

My bugs got past my type-checker, but only 100 of them. You have 1000 because the 900 my type checker caught, your dynamic language didn't. Then you go and write unit tests for them, and spend a day writing code that my compiler generated automatically for me.


> By that reasoning ...

My reasoning in no way implies that. I simply said that I'm sick of this argument that "if it compiles, it works!" Nobody who knows what they are talking about actually believes that, especially not people who design and build statically typed languages!


If you can provide statistics on bugs being an order of magnitude higher in dynamic languages to back up your figures here, it would be a far more convincing argument. I'm sure type safety catches a few bugs and prevent some from being made, but does it catch as many as you think?


Not to start a needless back and forth, but I've worked in both static and dynamic languages on projects large and small as well. I do believe that unit tests are the biggest improvement by a long stretch for re-factorability, but a good type system simply eliminates whole swaths of error cases, and modern systems don't require a lot of overhead either.

Another advantage for some strongly typed languages are the complex, yet safe refactorings they enable automatically by a good IDE. This is not universal though, and is a serious downside to picking Scala over Java.


Type safety dramatically reduces the number of unit tests required. Since unit tests often depend on the structure of the code, that reduces the amount of test code rewritten and modified over refactors and improves the stability of the tests, which in turn makes refactoring easier.


If you write unit tests that make assertions that would be caught by a type checker, you're writing bad tests.


If you don't understand why a dynamic language needs more tests than a typed one, you probably haven't worked with typed languages. The difference is dramatic and clear.

Just one example, code coverage must be 100% for a dynamic language in order to avoid runtime catastrophic failures due to simple typos - like letter transposition. Sure, 100% test coverage is a nice goal, but it's rarely met. In a typed language, you can be 100% sure this won't happen with zero tests.


That's a great point!

It's also what makes refactoring so much easier. If I need to rename a method obj.f() to obj.g(), then I can do so safely knowing that no lingering references to f() remain in my code base (else they would fail to type as f() is no longer a valid method).

As you say, 100% code coverage is unrealistic. Having maintained large applications in both static and dynamically typed languages, I feel that statically typed programs are far easier to manage for this reason - that if my code compiles then it means there exist no errors of a certain trivial class, such as typos and references to classes or methods that have been removed.

There are still other errors to worry about, but mitigating those errors can be done effectively without needing to approach 100% code coverage.
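The rename scenario above, as a minimal sketch (hypothetical names):

```scala
// Suppose f() was renamed to g(); every stale call to f() now fails
// to compile, so no lingering references can survive the refactor.
class Account {
  def g(): Int = 42 // renamed from f()
}

val a = new Account
a.g()    // fine
// a.f() // compile error: value f is not a member of Account
```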


All unit test assertions can be caught by a type checker.


Assuming you are sarcastically dismissing typed language defenses, as if people promoting typed languages have the position that tests are not necessary.


No, I'm simply stating the fact that there are type systems capable of encoding assertions.


Your type checker catches array indexing bugs? Impressive.


No, but Scala encourages FP, so when you:

    val fooList = List(1,2,3)
    fooList.map(...)
you'll _never_ hit the types of bugs you're referring to.

If you fooList(3), then of course you're hosed, but that's not the type checker's fault ;-)
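And if you do need positional access, the standard library offers a typed escape hatch: `lift` turns indexing into an Option instead of a thrown exception. A small sketch using the same list:

```scala
val fooList = List(1, 2, 3)

fooList.map(_ * 2) // List(2, 4, 6): no index in sight
fooList.lift(1)    // Some(2)
fooList.lift(3)    // None, rather than IndexOutOfBoundsException
```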


I don't agree even with the sentiment of the original post, but there is an argument that while you can't get rid of array indexing bugs with a type checker (contracts might help here) you can design your collection types such that things like indexing above or below an array are compile time caught, as is indexing into the "wrong" array location.

Whether it is worth the trouble to do this in the type system probably depends on how good your language's type system is and how expensive array indexing bugs are to catch/fix.


Actually, in the original (circa 1968) Pascal, it did.

In Pascal, the size of an array was part of the type of the array. IIRC, you simply could not write an array-index-out-of-bounds bug. This made it a somewhat reasonable choice for a medical device, where an array index out of bounds could be catastrophic. However...

You also could not create a variable-sized array. There was no way even to talk about the type of such an object. (Yes, I ran into that professionally once.) I think this is why Turbo Pascal (and maybe others?) softened that.



I worked deeply with both static and dynamic languages as well. I do feel that it's easier to refactor in scala than ruby, especially in large projects.

In Ruby, you have to have close to 100% test coverage, because if you don't you can't have any confidence when you refactor. For example, in a Ruby project, even a simple delegation call needs to be tested; otherwise you may break it without knowing when you rename the method it delegates to.

You don't need to write such test in a Scala project because there is no business logic in a simple delegation and the compiler will check the rename for you, actually the IDE will do the rename refactor perfectly for you.

In general, some of the unit tests devs write in Ruby projects are not testing business logic but type matching. For type-related refactoring, they actually have to refactor both the code and the unit tests. The dilemma here is: if you are changing your unit tests, how confident are you that they still cover your refactor?
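The delegation point, as a minimal sketch (hypothetical names):

```scala
// The delegating method contains no business logic; the compiler
// verifies the call, so renaming Engine.start without updating Car
// is a compile error, not a silently broken delegation.
class Engine {
  def start(): String = "vroom"
}

class Car(engine: Engine) {
  def start(): String = engine.start()
}
```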


I've only used static languages and therefore never had to make unit tests. Unit tests should exist for certain business rules, not every line of code.


...sorry to have to ask, but this was sarcasm, right?


Lol, never? Types won't check your whole business logic; it's irresponsible not to write them (unless you're doing something really small).


> I've long felt that type safety is largely a solution to a problem that doesn't really exist.

Sure, assuming you never have to evaluate, convert, merge or do anything with that data. Then you might have some issues when you have expectations on the behaviours of those actions.


Am I missing a big problem with dynamic typing? Wouldn't clearly defined naming conventions solve part of the problem?


They'd need to be not just clearly defined but enforced; wouldn't it be cool if we could write a tool that automatically enforces them? Hmmm... :)


To put some meat on the discussion, what would be an example of a logic error which would be introduced by a refactoring?


My preferred language these days is Clojure, which is dynamic, but I don't agree that most errors have nothing to do with type safety.

Most errors are fairly random. There isn't a pattern to them. Misspellings, misconceptions. Whether your type system catches them depends on the strength of your type system in addition to how you represent things. (Hashmaps vs. records, whether the get call on a Hashmap returns a nil value or an error when the key's not found.)

A strong type system makes it likelier that errors are caught earlier. This may or may not be justified, depending on the application. For some problems, the reduced error-to-failure distance is considerable and it's a huge win.


>Over the years I've worked pretty deeply with both static (C++/Java/Scala) and dynamic languages (Python/Ruby). I simply don't agree.

How did you use scala? Did you write java code in scala as most people with a C++/C#/java background do? Because that would completely explain the rest of your post. In particular, this statement:

>In my experience issues of type safety are rare

I've been doing pretty exploratory, "refactor the hell out of it every other day" kind of coding lately. I would literally rather not program than have had to do it in a language other than haskell. If I were to record this process, I am sure I would come up with several hundred type errors being caught over the course of a week.

>The same types of bugs are going to crop up in Scala, Python, Java, or even Fortran unless you have tests to help you discover those issues.

The obvious example of that being a bad assumption is that static type systems can completely eliminate unexpected NULL errors. It can eliminate "oops I used the count as the X co-ord by mistake" style errors. It can eliminate a huge class of errors that java programmers don't recognize are in fact type errors.

>Logic errors are generally independent of type issues

Which is why it is so nice to let the computer deal with type errors and be able to concentrate on logic errors instead of constantly having to worry about type errors manually.


>How did you use scala? Did you write java code in scala as most people with a C++/C#/java background do? Because that would completely explain the rest of your post.

Nope.

My issue here is that types, once defined, rarely change in any significant fashion. Refactoring is (generally) an exercise in mutation of logic, not so much the underlying types. When types do change the code that operates on them drastically changes as well. We're no longer refactoring at that point, we're re-writing and then the issues more or less disappear.

>The obvious example of that being a bad assumption is that static type systems can completely eliminate unexpected NULL errors.

What? In Scala it is perfectly possible to pass an uninitialized reference to a piece of code. The type checker most definitely isn't going to catch that. It can merely verify that the reference itself is of the right type. This is actually a common runtime error in those languages.

>It can eliminate "oops I used the count as the X co-ord by mistake" style errors.

Just as long as the count and the X co-ord are of different types. Which I'd guess isn't always the case.

> It can eliminate a huge class of errors that java programmers don't recognize are in fact type errors.

I will concede that static type systems do catch some errors that aren't caught by dynamic languages. My belief is that the class of errors they catch are among the most trivial, and most easily caught in testing (unit testing or otherwise). Thus I simply don't see big gains in either runtime correctness or refactorability. Maybe slight ones, but nothing worth the loss of expression provided by more dynamic languages. This is true even of softer type systems such as Hindley-Milner (although type inference continues to improve, so there might be some middle ground to explore in the future).

There is no doubt some philosophy at play here. The discussion here is much broader than refactoring alone. I for one find that giving up some formal proof about the correctness of the program is worth the freedom dynamic languages provide.

I'll leave my favorite quote on the subject:

"Static type checking limits programs to only expressing things that the static type system can prove are OK, as opposed to expressing things that the human programmer believes are OK. That's the source of both the expressiveness loss with statically checked languages, and the class of runtime type errors which are unique to dynamically checked languages." - Anton van Straaten


> What? In Scala it is perfectly possible to pass an uninitialized reference to a piece of code.

NULLs are a non-issue; that's what Option and FP constructs (map, flatMap, etc.) are for.
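For example, a minimal sketch (hypothetical lookup, made-up names) of how Option plus map/getOrElse replaces the null check entirely:

```scala
// Hypothetical lookup that may find nothing; the "nothing" case lives in
// the type, so callers are forced to deal with it.
def findEmail(user: String): Option[String] =
  Map("alice" -> "alice@example.com").get(user)

// map/getOrElse instead of a null check; no NPE is possible here.
def emailDomain(user: String): String =
  findEmail(user).map(_.split("@")(1)).getOrElse("no-email")

// emailDomain("alice") == "example.com"
// emailDomain("bob")   == "no-email"
```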

> I will concede that static type systems do catch some errors that aren't caught by dynamic languages.

Dynamic languages catch NO errors at all, period. You yourself must maintain your own ad-hoc type checker (unit tests, with 100% coverage) in order to prevent the class of errors that statically typed languages catch from manifesting at runtime.

That's a great quote; it actually backs Scala, given that you give up pretty much nothing in terms of expressiveness (compared to Ruby, Python, Groovy, etc.) while retaining all of the safety ;-)


>My issue here is that types, once defined, rarely change in any significant fashion. Refactoring is (generally) an exercise in mutation of logic, not so much the underlying types. When types do change the code that operates on them drastically changes as well. We're no longer refactoring at that point, we're re-writing and then the issues more or less disappear.

That is the exact opposite of my current experience. I'm changing types constantly. As I flesh out more code, I realize I needed to pass an (Account, Client) not just a Client. Now the compiler tells me every single line in every single file where I need to fix the code to handle that.
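A toy version of that refactor (Account/Client are stand-in names from the comment, the rest is made up): widen the signature and the compiler enumerates every stale call site:

```scala
case class Account(id: Long)
case class Client(name: String)

// Before the refactor this took just a Client; after widening the
// signature to (Account, Client), every old call site becomes a
// compile error that the compiler lists for you.
def describe(pair: (Account, Client)): String =
  s"${pair._2.name} (account ${pair._1.id})"

// describe(Client("acme"))  // no longer compiles; compiler points here
val fixed = describe((Account(1L), Client("acme"))) // "acme (account 1)"
```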

>What? In Scala it is perfectly possible to pass an uninitialized reference to a piece of code

Because scala has to make bad concessions to java compatibility. Scala itself is fine, but java opens the door to problems. I did not say scala accomplished this, I said static type systems can. Use ocaml or haskell.

>Just as long as the count and the X co-ord are of different types. Which I'd guess isn't always the case.

But you control the types, that is the point. So if you choose not to use the type system, then obviously it protects you from very little. That is not a problem with the type system, it is a problem with you. You could write a program using only strings if you really hated yourself, and you would thus get absolutely no benefit from the type system. But nobody would be silly enough to blame the type system.

>My belief is that the class of errors they catch are among the most trivial, and most easily caught in testing (unit-testing or otherwise). Thus I simply don't see big gains in either runtime correctness or refactorability. Maybe slight ones, but nothing worth the loss of expression provided by more dynamic languages.

I do not believe your earlier claim to not have been writing java in scala now. If you think there is a "loss of expression", you do not have experience with a modern statically typed language.

>"Static type checking limits programs to only expressing things that the static type system can prove are OK

And if you have no idea how much you can express with the type system, then you draw bad conclusions from that.


> Logic errors are generally independent of type issues

Eh ... what?! Is that some kind of joke?

https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...


I enjoyed reading this article, but does anyone who doesn't already understand what they're saying understand after reading something like:

"Play's reactive core and asynchronous libraries (e.g. WS) integrate seamlessly with other powerful concurrency primitives in the ecosystem such as Akka’s actors. Combining for-comprehensions with composable futures makes asynchronous concurrency look like straightforward synchronous code."

I get the gist, but I had to go and search for what for-comprehensions were. According to [0], they're a syntactic sugar over map. Then, if I search for composable futures (I know those words separately, but can only guess about them used together) I seem to only get results for Akka 2.0. The first result I click on is for a 167 page book for $23 USD. Maybe that is my fault, as the first result is actually to Akka documentation, but when I click there there is no mention of the word composable. The next two links are to slides of a presentation by the author of the first link I mentioned, and then to a video of the presentation. The fourth is to an early access of the same book, the next a blogspam to the video presentation. I gave up here.

For an article that is arguing for scala in comparison to PHP, Python, and Go, I didn't walk away with anything more than "sounds nice, but is it really?"

Code example would've helped.

[0] http://tataryn.net/2011/10/whats-in-a-scala-for-comprehensio...

Edit: Calling it "an article that is arguing for scala in comparison to PHP, Python, and Go" is a misrepresentation. It's really just about why they like scala, but they do do plenty of hand-wavy comparisons with other languages.

Edit 2: Okay, I see the Akka documentation does have a section "Composing Futures" that I missed.


A for-comprehension from our production code:

  for {
    createResult  <- Graph.createPerson(person) 
    _             <- Search.putAll[Person](createResult \ "updated_nodes")
    _             <- Search.deletePersons(createResult \ "deleted_nodes")
  } yield {
    Created(createResult \ "node")
  }
That code asynchronously updates both ElasticSearch and Neo4j. Those functions all return Scala futures containing JSON values. First the new object is created in Neo and then the index in ES is updated accordingly. This is in a controller action in a Play project where it's critical to keep everything non-blocking. If everything is successful, Play returns a 201 result to the requestor, otherwise there are error handling functions elsewhere that kick in.


Scala derail: You might have an unintentional happens-before relationship in that example: the deletePersons call will end up in the flatMap of the putAll[Person] call, which means it won't be started until the putAll finishes, when by the looks of it you could do both at once (only the createPerson needs to happen first).

You might want to do something like this:

  for {
    createResult  <- Graph.createPerson(person) 
  } yield {
    val a = Search.putAll[Person](createResult \ "updated_nodes")
    val b = Search.deletePersons(createResult \ "deleted_nodes")
    for ( _ <- a; _ <- b ) yield Created(createResult \ "node")
  } 
(notice starting the computations outside of the second for-comprehension, then just waiting for them inside)


Good catch. :) Our production code actually does handle the relationship correctly, I just wanted to simplify in my example.


What happens if the process dies in between createPerson and the search stuff? Are the Neo4j results kept around w/ the search index left stale, or is there some sort of wrapping transaction around it all?


    val plusOne = for(x <- List(1,2,3)) yield x+1
desugars to:

    List(1,2,3).map(_+1) // or longhand, map(x=> x+1)
and:

    for(x <- List(1,2,3); y <- List(4,5,6)) yield x*y
desugars to:

    List(1,2,3).flatMap(x=> List(4,5,6).map(_*x))
So, composable futures means you can chain a bunch of futures together in a for comprehension and yield the successful or failing result without blocking.

The essential point is that if you have some Thing that implements map and flatMap, it can be chained through a for comprehension.
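To make that concrete, here's a deliberately tiny made-up container; since it has map and flatMap with the right shapes, for-comprehensions desugar onto it just like they do for List or Future:

```scala
// A minimal box type; nothing special about it except that it defines
// map and flatMap with the shapes the for-comprehension desugaring expects.
case class Box[A](value: A) {
  def map[B](f: A => B): Box[B]          = Box(f(value))
  def flatMap[B](f: A => Box[B]): Box[B] = f(value)
}

// Desugars to Box(2).flatMap(x => Box(3).map(y => x * y))
val product: Box[Int] =
  for {
    x <- Box(2)
    y <- Box(3)
  } yield x * y
// product == Box(6)
```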

Only just dipping my toes in Haskell, but I must say that Scala is a truly wonderful language, one that makes my daily dev a whole lot of fun.


Would this be equivalent to:

    plusOne = (x+1 for x in (1,2,3))
And then the second example:

    mult = (x*y for x,y in zip((1,2,3), (4,5,6)))
In python?


The first one's equivalent, yup. The second isn't quite: two generators in a Scala for-comprehension give you the cross product (9 elements here), whereas zip pairs elements up positionally (3 elements); the closer Python would be (x*y for x in (1,2,3) for y in (4,5,6)). Not sure what Python does under the hood, but your examples are nicely concise ;-)

In Scala I guess we could go zip-style and do:

    List(1,2,3) zip List(4,5,6) map{case(x,y)=> x*y}
but I was trying to demo for comprehensions and their relation to map/flatMap for the OP.
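One subtlety worth flagging: two generators in a for-comprehension give the cross product, while zip pairs elements positionally, so the two forms aren't interchangeable in general:

```scala
// Two generators: every x is combined with every y (cross product).
val crossed = for (x <- List(1, 2, 3); y <- List(4, 5, 6)) yield x * y
// crossed == List(4, 5, 6, 8, 10, 12, 12, 15, 18)  (9 elements)

// zip: elements are paired positionally, then multiplied.
val zipped = List(1, 2, 3).zip(List(4, 5, 6)).map { case (x, y) => x * y }
// zipped == List(4, 10, 18)  (3 elements)
```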


It made perfect sense to me, perhaps because I code in Scala for a living (even though I don't use either Akka or Play (yet)). It reminds me of the old comment about how unreadable French is because it looks nothing like English.


I don't think the target audience of the blog post was experienced Scala developers (that would just be preaching to the choir!) so the use of unexplained jargon should be kept at minimum to actually reach the audience.


This is a thing about Scala which gives me pause: it looks like gazing into the guts of the compiler puts one on a course towards a nervous breakdown.

http://www.youtube.com/watch?v=TS1lpKBMkgg


Please don't miss the larger point of this talk (and others by the same speaker); programming languages in general are not what they should be according to this speaker. If anything, though, he seems to agree that Scala is a good choice compared to the plausible alternatives:

    "At the very least, I can endorse Scala relative to the set of existing
     alternatives pretty strongly."

    "I judge things compared to what could be, and not what is. [...] It's not
     good enough to be the strongest of the little kids brawling when you could
     be the big kid next door."


Actually Haskell is quite nice and does a better job. Practical too.


I am a great fan of Haskell.

But the tooling and libraries of the JVM are a significant real-world practical benefit. Scala can get away with a lot of nasties for that. (In fairness, Scala is very well designed; the nastiness is mostly to do with Java compatibility.)

Plus it allows Java programmers to start writing code almost immediately.


And yet the whole first part of his talk was problems with scala that are not problems in haskell. And many of those problems exist precisely because of java compatibility, which he acknowledges and says is not a good enough reason to do things poorly. By his own measure it seems like he should be using haskell.


I'm curious, what languages do you consider impractical?


Assuming practical is some kind of bar for "I'd use this at a company where I'd be deploying the project into some kind of online production" - there are a number of experimental/research languages that are better for offline research/analysis and don't have the ecosystem Haskell has.

I'm not going to mention them by name because if you don't know of them, then it doesn't matter and I don't want to slur them.


I asked someone in the know, and apparently Paul Phillips is always like that. He has an interesting background -- was a pro poker player for a while.


Also happened to be the guy who wrote the most Scala code on the planet, and he quit Typesafe last year.


Well, so what? Simon Marlow, Haskell's lead compiler developer in recent years, quit, leaving arguably a much larger gap than @paulp leaving Typesafe, since SPJ is wrapped up in leading some English education initiative.

Anyway, shit happens, gaps are filled. Languages, once established, are larger than any one developer, no matter how brilliant the individual may be.

It's worth noting that @paulp is as active as ever in Scala, working on his new collections library and contributing to Scala now from the outside -- he may have left Typesafe but has not in any way left Scala. As he says, he has a "sickness" for language perfection, and fortunately for us Scala users, Scala is the target of his illness ;-)


>Simon Marlow, Haskell's lead compiler developer in recent years, quit, leaving arguably a much larger gap than @paulp leaving Typesafe

Quit what? He is still a ghc developer. He quit his job at Microsoft Research to go work at Facebook, but that doesn't seem quite comparable to quitting a job working at the official scala company on the official scala compiler.


Marlow is _full-time_ at Facebook, bye bye. There's a difference between being a ghc developer and THE ghc developer. He's more of an advisor now than a committer, look at his ghc commit history, lots of comments, few commits.

Not saying he's joined the dark side like Erik Meijer, but he's also not actively _working_ on the compiler as he did before; that's for the new guy(s).


>Marlow is _full-time_ at Facebook

And? Before that he was full time at Microsoft. He was never an employee of any sort of official ghc company working on ghc full time.

>Not saying he's joined the dark side like Erik Meijer

What dark side and what is wrong with Erik?


> Also happened to be the guy who wrote the most Scala code on the planet

I love the way he assumes that. I think he may be surprised to learn the sheer scale of some production Scala systems which aren't part of the Scala class library or compiler.

Besides, he put together a list of obscure corner cases that no practicing Scala developer actually seems to care about.


I think the Paul Phillips stuff is both overblown by people outside of the Scala community and dismissed too freely by those inside of it. Paul has a very pessimistic world view and a very high standard of perfection, and he's spent tons of time in the guts of an extremely complicated system. Listening to some of his talks, it is easy to assume that he thinks Scala should be nuked from orbit.

On the other hand, while the ParSeqViewLike example is a bit of an inside joke (I think), it doesn't take a whole lot of doing to push the Scala collections library into weird and terrifying behaviour. I'm a practising Scala developer and I do it at least monthly. For people with experience of other, better-designed collections systems, the Scala collections library feels heinous to use, and if you ever have to dig into the code, good luck.

If the Java stream api is implemented well and has a high take up rate, I expect lots of people to come to Paul's point of view with regards to the terribleness of the standard Scala library.


> It doesn't take a whole lot of doing to push the Scala collections library into weird and terrifying behaviour

I'm genuinely curious to see examples you've seen in production code. I've used the collections for years coding full-time and haven't encountered anything like that.


An example I use a lot because it is terse. From MapLike.scala:

    def apply(key: A): B = get(key) match {
      case None        => default(key)
      case Some(value) => value
    }

From HashMap.scala:

    def get(key: A): Option[B] = {
      val e = findEntry(key)
      if (e eq null) None else Some(e.value)
    }

That is to say that the default behaviour of a Scala hash map is to create a new object for every access (notice I say access, not for every insert), even when I ask it to pretty please give me the one that doesn't have the null-safe Option code involved.


Doing an extra object allocation (which will likely be elided by escape analysis in the JVM) is hardly "weird and terrifying behaviour". It is an implementation detail which has absolutely no bearing on the behaviour or functionality of the map.


In my use cases escape analysis does not elide these accesses out (and I was unable to reason about why not). Further, I had an immutable map that had ~hundreds of items in it that was generating hundreds of millions of objects. I found that weird and terrifying especially given that I was specifically asking it not to give me Options back.

I frequently hear that memory/performance are "implementation" details and don't have bearing on functionality. For my use cases that isn't true.

Even if it was, the surprisingly upside-down nature of that API (dereference the option instead of wrapping the reference) is itself weird.

This is just the most terse example of the oddities in the library I've encountered. Others include streams being created (and not GC'd) when they weren't necessary, collections losing performance characteristics across monadic calls (IndexedSeqs passed into functions as Seqs use LinkedLists for builders instead of Vectors), etc.

Finally, I fundamentally disagree with the idea that eager evaluation should be the default in the collections library. Views mitigate this somewhat, but after working with saner libraries, having to remember that every time is tedious (though I'll grant that is a debatable point).

If your software is not performance critical, or you aren't implementing your own collections libraries, maybe you don't encounter these problems. But for the standard library, it is a problem.


So, to summarize, the collections library has "weird and terrifying behaviour" because there are cases where the performance is not quite as good as you expected?

You might be interested in the AnyRefMap work being done here: https://groups.google.com/forum/#!topic/scala-internals/R4fT... Perhaps you could add your voice to the discussion if you have concerns around performance.


Another way to summarize might be to say, "If you discount the many examples of incorrect behaviour that paulp mentions, and you don't care about performance, and you are comfortable enough looking through the extremely deep collections hierarchy code to diagnose the reasons for these problems, then the standard scala collections library is almost as good as collection libraries available in other languages".

The AnyRefMap may be a good solution to many of my issues with the default map implementation once it is widely available. It won't help with the more general lack of cohesion in the library though.


apply?


apply is the method that gets called when you use parentheses directly on a value and nothing else, e.g. val x = myMap("myKey")
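A minimal sketch of that desugaring (hypothetical map):

```scala
val m = Map("myKey" -> 1)

// These two lines are the same call: the bare parentheses desugar
// to an invocation of apply.
val direct   = m("myKey")        // 1
val explicit = m.apply("myKey")  // 1
```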


I encountered them on a daily basis when I was doing Scala, and you see these Scala collections WTFs pretty much on every other line when you are prototyping in the REPL. I've come to the conclusion that the only way the Scala collections library will give me confidence is when my head can reason more than 100 types at the same time while having distinct types for every value in the universe.

Scala's problems are real and structural. There's very little you can do about it now, with all these innocent newcomers buying into the lies of the vested interests.

Speaking of lies, besides those Paul Phillips points out, you hear this nonsense about how great Scala's explicit type declarations are, how they make your function signatures clearer, etc. These are all lies; the truth is that the type inference algorithm can't unify a type because of all the type variance, type bounds and type views. All these funny emoticon symbols are actually hints telling the type inferencer to go up or down or sideways when looking for the most generic type that satisfies a type signature.

Another lie is that Odersky will keep showing you kiddy pictures and extolling how small Scala's grammar is, while sweeping under the rug that Scala's many features aren't orthogonal: they are halfway-in-between compromises, or have surfaces of interaction with other features that are far too large.

To list a few, these are my favorites:

1. implicits. Explicitness in function declarations is good, but when you call them it's better to hide all these unknown implicit params/type conversions from you, so you can't reason about your code.

2. _. There are 12 different ways you can use them. They are not shortcuts, they are conflation of concepts.

3. The interplay between classes, case classes and traits. It's very hard for me to put this one in word, there are just so many corner cases.

4. Java/Scala interop. There's no interop. There's only 1 way op from Scala to Java.

5. case classes are just ADTs. Nope, not letting me have a param-less case class or subclass a case class doesn't make them ADTs.

6. Companion objects are sold as singleton replacements. They are only singleton replacement as a side effect of having no other suitable place to put your implicit kludges.

7. Type safety. You can't guarantee type safety if you allow mutability. Period.

8. Java compat is simultaneously sold as an advantage and blamed when problems arise. Why can't Scala just use the damn bytecodes and avoid the entire Java standard lib?


It would actually be more convincing if it wouldn't sound exactly like the last dozen "I never used Scala, let's just point out some things I read on the internet about it" people.

Why not just try Scala for a while, instead of making things up?


> I encountered them on a daily basis when I was doing Scala.

Your assumption that my criticism of Scala is because I hadn't tried it is about as valid as my assuming that your disagreement with me is because you haven't tried hard enough.

I had tried Scala on multiple occasions since 2.8 came out. Every time for about 1 month to 5 months. It's impossible to explain how messed up Scala is without writing a whole book about it so I admit it's hard for me to convince you. God I miss SML and Haskell.

Maybe this guy can:

http://yz.mit.edu/wp/true-scala-complexity/


So how would you solve his "issue"? (Assuming that you have understood his problem at all.)


I wouldn't bother. I'd just go ahead and use the 2 distinct APIs directly, and then after a while, I'd realize my code looks more and more Javaish, and then I'd switch back to Java.


Ah, ok. So you haven't understood the problem, but wanted to say something. Great.


I mean I wouldn't bother to shoehorn a Scala wrapper on Java's array.


Yes, but that's not the point of the article, right?


Thanks for the link; if you can ignore the somewhat hysterical delivery, that talk is really fascinating. It's more about what a programming language should be (in his opinion) than what Scala is not. He quotes my favourite aphorism from Wittgenstein as well:

The limits of my language define the limits of my world


Here are the slides if anyone is interested. I love the ParSeqViewLike trait on the 3rd slide. There are tons and tons of these Scala WTFs throughout the talk. It's really interesting.

http://www.slideshare.net/extempore/a-scala-corrections-libr...


Wow that talk is an eye-opener. Thanks for the link.


All of the praise here for Scala can be applied, verbatim, to Java itself. Some of the concerns are no different from Java's (UTF-8), or are actually not a problem there (IntelliJ vs. Eclipse IDE support).

While the comparison to dynamic languages is valid, this is a rather shallow discussion of scala itself.


"Combining for-comprehensions with composable futures makes asynchronous concurrency look like straightforward synchronous code."

I would really enjoy seeing this in Java.


I'd like to see the Scala code too. I've played with both, and in the end both versions of the code are pretty complex, just complex in different ways.

It's a complex problem with a lot of edge cases and tricky error cases that need to be handled properly, which is where the elegance of the language or the programming paradigm usually goes down the drain.

I'd like to see a fully functional snippet of code that gracefully handles errors in both languages.


OK. Something like:

  val fNotifications: Future[List[Notification]] = // get notifications from the database
  val fPosts: Future[List[Post]] = // get posts from a different database
  
  // If getting notifications fail, log the error and pretend there are no notifications
  val safeNotifications = fNotifications rescue { t: Throwable => log.error(t); Future(Nil) }

  // If getting posts fails, log the error and try retrieving stale posts from the cached post server
  val safePosts = fPosts rescue { t: Throwable => 
    log.error(t)
    // If the cached post server fails too, just pretend there are no posts
    cachedPostsDatabase.getForCurrentUser rescue { t: Throwable => log.error(t); Future(Nil) }
  }

  val facebookPage: Future[Page] = for {
    notifications: List[Notification] <- safeNotifications
    posts: List[Post] <- safePosts
  } yield {
    // Code here executes when both/all outer futures are ready
    renderFBPage(notifications, posts)
  }

  // We can return the future or call more rescues on it or combine it with more futures
is a thumbnail sketch of how you'd implement the facebook timeline in Scala if you were doing so. There are even better abstractions that I've started using (functional reactive programming specifically) but none of them come with the standard library, so I thought that was a bit unfair.


    for {
      a <- doSomething1()
      b <- doSomething2()
    } yield {
      foo(a, b)
    }

Use a Future + \/ monad.


Just curious, but have you used Java 8 much? Or barring that, something like Quasar? Quasar in particular is about writing async code as if it were sync code...


Java's future API is inadequate - you can only block on a future or poll it, not ask to be notified when it completes, so you end up needing a thread for each future which loses a lot of the benefits.
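For contrast, a small sketch of the Scala side (the Promise is only there so the sketch has something deterministic to observe): you register a callback on the future, and no thread sits blocked waiting on it:

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.{Failure, Success}

// Register a completion callback instead of blocking on the future.
def firstResult(): String = {
  val done = Promise[String]()
  Future(21 * 2).onComplete {
    case Success(n)  => done.success(s"got $n")
    case Failure(ex) => done.failure(ex)
  }
  // Await only at the very edge of the sketch, to observe the result.
  Await.result(done.future, 5.seconds)
}
```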

I haven't used Quasar specifically, but I did use other Java extensions that add some Scala-like features for a while (Lombok and the Checker Framework). The thing is, once you're using those you're effectively using a different language ("extended Java") with its own bugs - and the community for that language is much smaller than the one for Scala.


It is a weird omission, not least because it seems that anyone that wants to use Futures ends up extending it in the same (but incompatible) way. Guava has ListenableFuture, Netty has its own version, etc.

I'm not sure how this happened, but I just use Guava's ListenableFuture and am happy again.


It was omitted because it's a complex design issue. Someone using the future would assume that if they added a callback, it would run on the same thread that they called it from. That's not possible in the general case: the thread they called it on may not have an event loop and so could not be directly notified.

It's better to leave this to frameworks or libraries that have their own event loop mechanism to handle callbacks.


> not ask to be notified when it completes

This is trivial to add (Guava offers it as ListenableFuture), but either way, it's a library concern, not a language one, so I don't think it makes sense to pit Java and Scala over this.


The standard library matters, particularly because Java doesn't let you smooth over these incompatibilities with implicits.

But you're right, I should've said that the syntax (while a vast improvement over previous Java) is not as nice as for/yield, and the lack of higher-kinded types in Java means you can't write functions that handle Futures and other types with similar semantics generically.
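A sketch of the higher-kinded point with a hand-rolled Functor (not any real library's API): one generic function works over Option, List, or anything else you supply an instance for, which plain Java generics can't express because they can't abstract over the type constructor F[_]:

```scala
// Minimal hand-rolled Functor over a type constructor F[_]; no library.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object OptionFunctor extends Functor[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
}
object ListFunctor extends Functor[List] {
  def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
}

// One generic function, usable for any F that has a Functor.
def double[F[_]](F: Functor[F])(fa: F[Int]): F[Int] = F.map(fa)(_ * 2)

// double(OptionFunctor)(Option(3)) == Some(6)
// double(ListFunctor)(List(1, 2)) == List(2, 4)
```

The same idea is what lets libraries write combinators once and reuse them across Future, Option, Try, and so on.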


Fair enough, the syntax is rather pleasing.


More than syntax: immutable, functional idioms are pretty good defaults, and shorter both syntactically and semantically.


> All of the praise here for scala can be applied, verbatim, to Java itself.

Ah, but users of plain Java cannot write verbose articles using made-up jargon that sounds very scientific and deep and conveys an aura of competence and significance about a website that could work just as well using PHP or Perl and CGI.


No, not really.


My favorite quote from the article: "Python probably takes the cake for being regular. PHP gets cake in the face… Scala is somewhere in-between."


If you have complete control over your programmatic infrastructure and you are getting PHP 'cake in the face' you shouldn't blame PHP. You're just a shitty programmer.


I helped develop a PHP application used by about 30,000,000 people. It consisted of well-organized objects that followed proper design patterns. Our API was easy to use and understand.

Although PHP was an effective, decent language, it wasn't the best tool for our project. It lacked certain functional features, brevity and syntactic sugar that one might find in Python.

The Web is full of emotional, profanity-laden debates pitting one language against another. I don't think those are interesting or useful. But it is productive to critique languages' relative strengths and weaknesses based on how they can be applied to a business problem.

Like the engineer from Coursera, I determined that PHP was, indeed, a "cake in the face," considering my needs and circumstances at the time.


Yeah, for example: PHP's accidental broken support for octal numbers is definitely the fault of all the people who use PHP, not a problem with the language.


> Scala’s compiler is very sophisticated–it runs over 25 phases

Sophisticated is not the first term that comes to mind.


The quote "I trust my team with this power, but can you?" resonates with me, and is also why I similarly insist on removing restrictions and permissions on most tools etc. I trust my team. They are smart and reasonable people (who are allowed to f-up sometimes).

There is an unrestricted handbrake in every car; that doesn't mean people pull it when it isn't necessary.


It would be interesting to read a similar article from a company that uses Python as their core programming language (Dropbox, for instance) to contrast with this one. Python might not be as popular as Java, but I was under the impression that it offered a rich enough platform/ecosystem to accommodate a wide spectrum of software projects. Concerning the issue of type checking, I understand that it's great to have a compiler take care of that for you, but I'm pretty sure unit tests and a bit more care also go a long way when using a scripting language like Python; besides, how hard can it be to sprinkle a bit of "type(<object>) is" in your code where type checking is required? (smile)

I wonder if vert.x was also considered. I don't see any mention of it in the article.


I'd say there's a world of difference between having automatic compile-time checks and having to be "extra careful" plus needing more unit tests.

AFAIK, Google doesn't use dynamically typed languages for any of its projects (except where their hand was forced - they inherited the codebase in an acquisition), and they sure as hell are experienced in large-scale software development (for small to medium projects, the drawbacks of Python etc. aren't as apparent).


I'm not sure what you consider google 'projects'; from what I heard, a lot of their 'sysadmin' type of scripting is in python (probably moving to go now)


By "projects" I mean code delivering actual software products (like for example the search, gmail etc.), not some auxiliary admin scripts.


"pretty sure unit-tests and a bit more care" - it's better to focus on unit-testing behaviour and to spend that care on things that need a human's input, though.

"where type checking is required" - it's better to single out cases where it is not required, this is how it's done in C# for example - quite useful for integration with other ecosystems.


Good to see some love for Scala after all the recent rants. Unfortunately, I think it's going to be short lived. Most shops where Scala would be an option will soon have a production release of JDK8 available to them, at which point Scala's tradeoffs really only become acceptable to those who already have significant investment in the language or who think category theory is a reasonable companion to an industrial language.


JDK8 is a very nice release and I hope it will bring Java a few years ahead, but there are so many things still missing from Java that I shriek every time. From big things like no type class support to tiny things like one file per interface/class, the Java experience continues to be frustrating in the extreme.


Which is why I said "Scala's tradeoffs" would become unacceptable. Java 8 is basically "close enough" to "The Good Parts" of Scala that most people probably won't be willing to tolerate all of Scala's warts.


> Java 8 is basically "close enough" to "The Good Parts" of Scala

Ehh ... not really. And I honestly don't see the gap between Java and Scala closing. It's widening at a frightening rate.

Java's warts are far worse than Scala's warts. So I'm not seeing why that should be a point either.


Java 8 would make a very poor Scala substitute, and only offers a small subset of what is provided by Scala.


I enjoyed this article because it stated without a whole lot of bias why Scala was the right tool for the job.

Python could have fulfilled some of the requirements, such as the lighter-weight concurrency; however, it would have come at the expense of ease of deployment.

I disagree with type safety being integral to refactoring, but each team has their own culture and this seems to fit theirs well.


From an experience standpoint, isn't the fact that you'd have to teach people a new language coming in the door detrimental to the use of that language?

Comparing a company that takes in people that have been coding Ruby/Python/Java/whatever for 5+ years, versus one that takes in people that don't have Scala experience but teaches them on the job, wouldn't the former move faster and generally have more experience, making it better to hire from a large pool of experienced people (in a popular language) than a small pool (in a more niche language)?

This, of course, assumes (rightfully, I think) that the "carryover" effects of programming experience aren't substantial enough to claim that 1 month of Scala experience is better than 3 years of Ruby experience, and that the "advantage" Scala gives over these more popular languages (real or imagined) isn't substantial enough either.

I love Coursera and all, but I'm curious what people think about this.


Teaching someone a programming language (e.g. Scala) is not equivalent to teaching someone to program.

If you take someone who has 5 years Ruby/Python/Java experience, and then spend a month teaching them Scala, they don't miraculously forget everything they know about functions, algorithms, data structures, encapsulation, interfaces, problem-solving, maths, etc.

I started my current job (at a Financial Institution where I write 99% Scala) with no Scala knowledge, but good programming fundamentals and experience with functional programming. A month later, I was a competent Scala programmer.

This isn't unique to Scala - if you have a language you like, which you think offers advantages over the competition, then it is worth training people to use it and then using it.


That makes sense. This is the kind of "carryover" effect that I talk about in the parent comment, so maybe I'm wrong in that there isn't a lot of carryover where in actuality there is.

I just doubt that learning a new language automatically gets you anywhere near 50% of the competency of someone who has been using it for 5+ years (would you agree with that?), and the ability to hire people who can apply that full level of competency (i.e. how big the hiring pool is) is important in deciding what language to use.


I partially agree with you.

I have been using Scala as primary programming language for more than a year. I have been using Java for almost 5 years before that.

Yes, the learning curve for Scala is a bit steep, but once I "got it", I felt productive enough to make up for the time I lost learning a new language.


Think about it like this: If you switched to a Dvorak keyboard and suddenly were typing 90% faster, jumping over to a co-worker's machine with a QWERTY keyboard doesn't suddenly render you ineffective.

It might take a moment to switch gears, but it's all there.


I think you're right in the short term, but long term you may be more productive with Scala. -Devil's Advocate.


I agree and disagree. The inherent qualities of a more recent language (especially one like Scala which seems to be built upon a good foundation) may make a team more productive in the long term despite the initial cost.

However, what also counts in the long term is the structure of the codebase, and I'd argue that people with more experience in a language are better suited to build more long-term-viable codebases with less technical debt just because they know the language better. But perhaps that's simplistic.


I've used them all: PHP, Ruby, Java, Scala, Node and Python. I'll stick to Python because the syntax is awesome and it's easy to read. Scala looks so weird to me. If I were really dependent on the JVM (which is indeed a great environment), I would honestly just stick to Java and enjoy the superior IDE support compared to Scala's.


> Refactoring a statically typed language is easier than refactoring an interpreted one

Compiled vs. interpreted is very different from dynamically typed vs. statically typed! Yes, I can't name any statically typed language that is interpreted (or that has an interpreter actually used in production, to be more exact), but we're still talking about different things.

I guess this is some kind of "semantic typo", but... really? Am I supposed to trust the opinion of whoever wrote this?!


"Refactoring a statically typed language is easier than refactoring an interpreted one ... even Python is a difficult chore that engineers shy away from because modifications are likely to create more bugs than they fix."

Well said. I think refactoring being "fun" is more important than coding being "fun" for projects that aren't throw-away code (unlike data analysis, for example).


Weird, why say all that about how it's better than Python/PHP/etc but then not say why they chose it over Java?


I suspect most of their target audience has a dim view of Java. Also, scala's official website has the advantages over Java.

(If you're asking what Scala's advantages over Java are: less verbosity when reading (general lack of syntactic noise, case classes, the uniform access principle), covariance/contravariance for generics, code that's much more explicit about mutability (thanks to val/var), pattern matching, more flexible syntax (particularly for/yield as used with e.g. futures), and a more powerful/flexible type system (e.g. the typeclass pattern).)
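A quick illustrative sketch of a few of those features (the names here are invented for illustration, not taken from the article):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Case classes give equality, copy, and pattern matching for free.
case class User(id: Int, name: String)

object Demo {
  // Pattern matching destructures the case class directly.
  def label(u: User): String = u match {
    case User(_, "Grace") => "found Grace"
    case User(id, _)      => s"user $id"
  }

  // for/yield sequences Futures without explicit callback nesting.
  def total(): Future[Int] =
    for {
      a <- Future(40)
      b <- Future(2)
    } yield a + b

  def main(args: Array[String]): Unit = {
    val u = User(1, "Ada")
    // copy produces a modified immutable value; no boilerplate setters.
    val renamed = u.copy(name = "Grace")
    println(label(renamed))                 // found Grace
    println(Await.result(total(), 1.second)) // 42
  }
}
```

The equivalent Java would need a hand-written class with equals/hashCode, a visitor or instanceof chain instead of the match, and explicit callback chaining on the futures.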


But one of the Scala drawbacks they cite is "arcane syntax". Another one is all the "advanced features" than can "weave a tangled web of code undecipherable to even its authors".

That's very much against the usual alleged benefits of Scala vs. Java that you've listed.

That's why I was very interested in the article, a "real world" take on Scala that really felt doctrine-free... but I found it odd that they didn't mention anything about Java. Java would seem at first blush to solve all of the concerns they listed: compile time, arcane syntax, counterproductive advanced features, primitive IDE support, training needs. While at the same time preserving most of the benefits: type safety, concurrency with Play/Akka, JVM ecosystem.

So it was just a little weird to not mention the elephant in the room.


They probably didn't even consider it. At least, I hope not.


The quote about refactoring ease in Scala is interesting, since I imagine the experience would be the same in plain old Java - refactoring in any statically typed language is much easier when you are comparing to PHP, Python and Javascript.


In fact it is a little dishonest. Refactoring in Scala is considerably worse than refactoring in Java due to the complexity of the language.


True, I'm curious what (if any) IDE the quote-author is using for refactoring. Or perhaps they are just referring to the simplicity of something like being able to rename/edit a method signature and have the compiler catch all the references to the old name immediately.


Does anyone know when the Functional Programming and Reactive Programming classes will reopen?


No, they haven't released any new information about them yet.


I think they really love it because they have plenty of time to take courses online while they wait for their Scala code to build.


I didn't know ascii architect was a job.


I think this is a nicely balanced article. Good to the see the pros and cons clearly laid out.


The whole "live reload" capability of Play is a gimmick because, in real life, making a code change in a medium-to-large project is like watching grass grow: every change you make takes minutes to compile and reload.


SBT 0.13.2 will ship with a smarter incremental compiler: https://github.com/sbt/sbt/issues/1010

This avoids pessimistic recompilation in many cases, in particular when editing the routes files in Play.

Test results:

https://gist.github.com/jroper/6374383
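For anyone wanting to try it, the new name-hashing incremental compiler in sbt 0.13.2 is opt-in; assuming the setting name from the release notes, enabling it is a one-liner:

```scala
// build.sbt: opt in to sbt 0.13.2's name-hashing incremental compiler.
// Setting name taken from the sbt 0.13.2 release notes; verify against your sbt version.
incOptions := incOptions.value.withNameHashing(true)
```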


The Scala compiler is slow. However with some care over code organisation you can limit what needs to be recompiled in most cases. Read and heed this blog post: http://www.chrisstucchio.com/blog/2014/bondage_and_disciplin...


If a ~1 second recompile entails watching grass grow, you have some serious Miracle-Gro on hand ;-)

SBT sub projects are the ticket to sane Scala development. If you just lump everything under a single project, sure, then it's grass growing time o_O

Modularize your code and then play> ~run (i.e. incremental compilation) and relax -- the "Scala is slow" mantra is so 2011.
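For anyone who hasn't set this up, a minimal multi-project layout in build.sbt (project names here are invented for illustration) looks something like:

```scala
// build.sbt: split the codebase so sbt only recompiles the sub-project you touched.
lazy val commonSettings = Seq(
  scalaVersion := "2.10.4",
  scalacOptions ++= Seq("-feature", "-deprecation")
)

lazy val core = (project in file("core"))
  .settings(commonSettings: _*)

lazy val web = (project in file("web"))
  .dependsOn(core)               // editing web never triggers a core recompile
  .settings(commonSettings: _*)

lazy val root = (project in file("."))
  .aggregate(core, web)          // "compile" at root builds both in dependency order
```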


How large? We have well over 100k lines of code (Scala), and it's not a serious issue. The sbt 0.13 incremental compiler is much better than 0.12's. Also, using sbt multi-projects helps quite a bit (even when running all at once).


I hope you're right. I really do.


Much smaller - around 25k+. I gave up after becoming frustrated by the atrocious compile times. I haven't used sbt 0.13, though I don't expect it to work miracles given how inherently complex the Scala compiler code is. According to Paul Phillips, a core Scala committer, there are large portions of the compiler code that nobody touches because nobody understands them. And as per him, this, along with a bunch of other limitations, guarantees that you won't see huge gains in compile times unless they rewrite the whole compiler...


> there are large portions of the compiler code that nobody touches because nobody understands it

Could you link to one of those "large portions"?

> and as per him this along with bunch of other limitations guarantees that you won't see huge gains in compile times unless they rewrite the whole compiler...

You mean like https://groups.google.com/forum/#!topic/scala-internals/6HL6... ?


Nice find! Didn't know they were so far along with Dotty.

Will be interesting to see how a potential merge works out (assume Scala 2.13 at the earliest).



