

Scala Best Practices - century19
https://github.com/alexandru/scala-best-practices/

======
brudgers
Scala affords imperative, place-based, von Neumann-style programming by design,
because it is intended to ease the pain of working Java programmers, not create
more. See Odersky's _Scala by Example_ [1].

He follows "show, don't tell" [2] to meet people where they are. Odersky
recognises that marginally better is still better; Scala was designed
pragmatically.

To an outsider, lists like this make it appear that entering Scala's language
community is hard. There are just so many mores. [3] This is unfortunate
because via Odersky's Coursera class, Scala has one of the best onboarding
processes among contemporary languages.

[1] [http://www.scala-lang.org/docu/files/ScalaByExample.pdf](http://www.scala-lang.org/docu/files/ScalaByExample.pdf)

[2]
[https://en.m.wikipedia.org/wiki/Show,_don%27t_tell](https://en.m.wikipedia.org/wiki/Show,_don%27t_tell)

[3]
[http://learncodethehardway.org/blog/AUG_19_2012.html](http://learncodethehardway.org/blog/AUG_19_2012.html)

~~~
bad_user
Hello @brudgers,

The intent of this list isn't to scare people or to make them follow _my_
conventions.

When working within larger teams, regardless of language, you need a list of
do's and don'ts. And powerful or unfamiliar languages are especially
problematic for rookies - because they need to make choices (e.g. what
abstraction should I use to model this or that and before anybody mentions
Python's philosophy - as an ex-Python developer I must say that it has led to
horrible non-orthogonal choices).

I do not believe in imposed choices - mimicking what other people are doing
without understanding the reasoning behind it can lead to really awful
hairballs, as anybody who has worked in an enterprise-ish team can attest.
For example in this rule I'm trying to undo the damage done by another "best
practice" that people are mindlessly following -
[https://github.com/alexandru/scala-best-practices/blob/maste...](https://github.com/alexandru/scala-best-practices/blob/master/sections/2-language-rules.md#24-should-not-define-useless-traits)

Also, many rules in that list are not necessarily related to Scala - does a
language protect you from horrible, convoluted, overly big functions that do
too many things? Of course not. Does a language keep you from storing dates
without timezone info? Of course not.

It's a work in progress though, feedback appreciated.

~~~
brudgers
I know you are, as Hanselman says, "helping people fall into the pit of
success". I guess the question is, can this list really be highly opinionated
and canonical and remain practically useful?

The criticism of "best practices" lists is that everything becomes a "core
principle". By which I am suggesting that "best practices for Scala"
encompasses both Scala and everything in programming. Everything except,
unfortunately, the prime directive: get something working first. Which in the
moment often means ignoring best practices and just writing code.

There are places where Java's date/time classes are appropriate, because they
are both sufficient and the convention...and because the ways in which
Joda-Time can accommodate obscurities derived from esoteric abstractions are
manifold. Joda-Time is a best practice for people who know it well and who can
reasonably predict that future maintenance will be done by qualified persons,
i.e. people who know Java well, and not necessarily Scala.

Being built on top of Java, Scala inherits best practices from it.
Acknowledging these as such and handling them with a pointer will keep the
list DRY...or rather DRO [don't repeat others].

I am thinking that in programming, the form in which best practices are
embodied is not a list, because lists evolve into wikis as a corollary of
Greenspun's tenth rule [1]. I think best practices are better implemented as
frameworks, because frameworks can be highly opinionated without being open
for debate.

~~~
bad_user
I agree, this is why I placed: Rule 0.1 - MUST NOT follow advice blindly (and
I meant it)

> _get something working first_

Yes and no. Getting something working first introduces technical debt. If the
developers are responsible enough to go back later and fix it, it's OK, but in
larger teams (over 3 people) this tends not to happen, and then you're left
with shitty components that become a liability, for which the reasonable thing
to do would be to throw them away and restart from scratch - which also never
happens, because management doesn't agree.

In the context of a startup or personal projects, of course, things get
shipped fast, mistakes are made, components are fixed later or rewritten,
etc. But some things are easy to do once you get used to them (e.g.
immutable data-structures) and some choices are more natural to make once you
know what you're doing (e.g. Actors vs Future vs Rx).

This is what I'm trying to pass on to colleagues - do stuff, but don't be
superficial about it, because there's nothing worse than slapping an "if" in
there that fixes one bug and causes 3 others. If you need to make
compromises, make them, but at least know why you did and have a plan for
fixing it.

~~~
brudgers
None of those things are Scala specific. Nor are they solved by reading a
list...even reading _Code Complete_ is just a starting point. There's still a
big gap to synthesis and execution.

Philosophically, isn't a list of best practices a form of technical debt? I
mean all the burden of tooling it up has been kicked down the road. Then
again, building an entire framework might be premature optimization.

------
ajones
I'm a big fan of the guide that Twitter put together titled Effective Scala:
[http://twitter.github.io/effectivescala/](http://twitter.github.io/effectivescala/)

For those who want to learn Scala, Twitter has also put together a set of
lessons called Scala School:
[https://twitter.github.io/scala_school/](https://twitter.github.io/scala_school/)

~~~
bad_user
Useful resource, but as often happens with lists of best practices, I do not
agree with some things Twitter is giving advice on.

For example -
[https://twitter.github.io/effectivescala/#Control structures-Returns](https://twitter.github.io/effectivescala/#Control%20structures-Returns)

They say that "return" can be used for "reducing nesting". First of all,
that's a lie - the presence of a return doesn't mean that the branch no longer
exists, it only means that it isn't explicit, and far more effective for
reducing nesting would be to break those functions into smaller ones. I happen
to believe that usage of "return" in Scala is never appropriate and that it
was a design mistake; at the very least they abstained from adding "break" or
"continue". Scala is an expression-oriented language and really, if you're
going to write Scala as if it were Java, then just pick Java, because Scala,
contrary to other people's opinions, is not a better Java - it actively works
against you if you do things like that.

I tried outlining the rationale in this rule, which I consider to be
non-optional (hence the usage of MUST, instead of SHOULD :)) -
[https://github.com/alexandru/scala-best-practices/blob/maste...](https://github.com/alexandru/scala-best-practices/blob/master/sections/2-language-rules.md#21-must-not-use-return)
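
To make the argument concrete, here's a small sketch (hypothetical function, invented names - not code from the guide) of the same logic written with early returns versus as a single expression:

```scala
// Hypothetical example - the names and logic are invented for illustration.
// Java-style early returns:
def scoreWithReturn(user: Option[String]): Int = {
  if (user.isEmpty) return 0
  val name = user.get
  if (name.isEmpty) return 0
  name.length
}

// Expression-oriented style: every branch is an expression,
// so no return is needed and the branching stays explicit.
def score(user: Option[String]): Int =
  user match {
    case Some(name) if name.nonEmpty => name.length
    case _                           => 0
  }
```

Both do the same thing, but in the second version there's no hidden control flow to trace.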

------
bad_user
Hello, I'm the author of this list - I compiled it primarily for my
colleagues, but hopefully it will help others.

It isn't complete yet and I have more stuff to add and some stuff still needs
clarification, like Rule 4.3 (just created an issue for it).

If you have suggestions, feedback, things you disagree with, I'm happy to make
edits or to accept issues and pull requests :-)

~~~
century19
Hi bad_user,

I saw this on Twitter and, just by scanning it quickly, I learned some things,
so that's why I posted it here - thanks for sharing.

Can I ask about rule 5.2. SHOULD mutate state in actors only with
"context.become"?

How would you use become to avoid using vars to keep state in an Actor like
this?

    class Account(var account: AccountDetails) extends Actor {
      def receive = {
        case msg: UpdateAccount =>
          account = msg.account
      }
    }

~~~
wernerb
Actors are good at encapsulating state, therefore using mutable variales
inside an actor through is very natural, and one might even question if an
actor is needed if it does not have any mutable state inside it.

However, 5.2 does not concern the elimination of vars, but rather the better
management of multiple states (finite state machines).

Your example, just like the example in 5.2, has only a single state, so
context.become isn't strictly necessary. The author makes clear that when you
have multiple states, a collection of top-level mutable vars becomes hard to
maintain. I ran into this problem a few times and I am glad it gets mentioned
in the guide.

I agree though, that a better example would include at least two states.
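
Without pulling in Akka, the mechanism can be sketched like this (hypothetical messages, library-free - the real thing would use `context.become` and `Receive`): the actor's behavior is a replaceable message handler, so each state's data lives in the handler's parameters rather than in top-level vars.

```scala
// Made-up message protocol, purely for illustration.
sealed trait Msg
final case class UpdateAccount(details: String) extends Msg
case object Freeze extends Msg

class Account {
  // The current behavior; `become` swaps it, mimicking context.become.
  private var behavior: Msg => String = active("initial")
  private def become(next: Msg => String): Unit = behavior = next

  def receive(msg: Msg): String = behavior(msg)

  // State 1: active. Its data (details) is a parameter, not a var.
  private def active(details: String): Msg => String = {
    case UpdateAccount(d) => become(active(d)); s"updated to $d"
    case Freeze           => become(frozen(details)); "frozen"
  }

  // State 2: frozen. Transitioning here discards the active handler.
  private def frozen(details: String): Msg => String = {
    case _ => s"account frozen at $details"
  }
}
```

With two states the payoff is visible: each handler only deals with the messages valid in that state.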

~~~
bad_user
I'll add a better example, thanks for the feedback.

------
jfb
Just as C++11 looks worlds different from (and better than) C++ circa 1990, I
imagine that the same dynamic will play out with Scala, which I tend to
analogize as Java++. Certain idioms will develop; certain language features
will be deprecated but, by the nature of the platform, will not be able to be
removed; &c.

I find some of the idioms used by the more experienced Scala devs on my team
distasteful: I hate hate hate the semantic confusion between the Option type
and sequences that leads people to map over Options (or the dual use of
isEmpty/nonEmpty). I would always prefer pattern matching, or getOrElse, for
instance.
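
For what it's worth, the competing idioms look like this (a toy sketch; all three compute the same result):

```scala
val port: Option[Int] = Some(8080)

// Treating Option as a zero-or-one-element container:
val viaMap = port.map(_ + 1).getOrElse(80)

// Keeping the branch explicit with pattern matching:
val viaMatch = port match {
  case Some(p) => p + 1
  case None    => 80
}

// fold bundles both branches into a single call:
val viaFold = port.fold(80)(_ + 1)
```

Which reads best is exactly the kind of taste question being argued here.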

I think I'm just a fan of smaller languages; I certainly understand -- and
commend -- Dr Odersky's approach of a pragmatic extension of a (bad) industry
language, just as I appreciated Stroustrup's; but given my druthers, I'd just
as soon be working in something perhaps less pragmatic but certainly smaller
and less warty.

If anybody knows what language that is, please tell me. Or invent it.

~~~
bkirwi
You might prefer Kotlin?[0] It's going in the same direction as Scala, but not
quite so far.

By the way, I'm curious what you mean by "the semantic confusion between the
Option type and sequences that leads people to map over Options". Mapping and
'flatMapping' are very general things in Scala, and apply to many types that
are not collections, like Futures / Observables / etc.

[0] [http://kotlinlang.org/](http://kotlinlang.org/)

~~~
jfb
It's unfamiliarity with the idiom that makes me uncomfortable, to be perfectly
frank. I'll get used to it.

I won't get used to how alien and ultimately weak Scala's support for
algebraic data types is, and that's not changing. This comes down to the
affordances that Odersky and the Scala team have decided to favor, which is OO
flavored in a way that I don't like. I vastly prefer the ML style:

    
    
      data Foo = Bar String | Baz
    

to the sealed class/case nightmare of Scala. Again, this is not a technical
nit -- this is an idiomatic and personal one.

~~~
bad_user
Yeah, but the cool thing about Scala is that you get to pick the kind of
polymorphism you want, depending on whether it's the list of nouns or the
list of verbs that will evolve.

Like, those case classes are also classes that can have polymorphic methods on
them. So for example, you can do this:
[https://gist.github.com/alexandru/2604948978c497adc8e2](https://gist.github.com/alexandru/2604948978c497adc8e2)
- instead of doing this:
[https://gist.github.com/alexandru/41acda9bdd694bec38b2](https://gist.github.com/alexandru/41acda9bdd694bec38b2)
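
Roughly, the two styles look like this (a hypothetical sketch with made-up types, not the code in those gists):

```scala
// Variant 1: the operation lives on the class.
// Easy to add new "nouns" (shapes) without touching existing code.
sealed trait Shape { def area: Double }
final case class Circle(r: Double) extends Shape {
  def area = math.Pi * r * r
}
final case class Square(side: Double) extends Shape {
  def area = side * side
}

// Variant 2: the operation pattern-matches over the cases.
// Easy to add new "verbs" (operations) without touching the classes.
def perimeter(s: Shape): Double = s match {
  case Circle(r)    => 2 * math.Pi * r
  case Square(side) => 4 * side
}
```

The sealed trait makes the match exhaustive, so the compiler warns when a new noun breaks an existing verb.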

Which variant is better depends on context - both versions have tradeoffs,
depending on the direction of evolution. Also, Scala's blend of OOP with FP is
best in breed. OCaml, for example, is like having two type systems in the same
language, whereas Scala is more of a turtles-all-the-way-down kind of thing,
which I like.

~~~
jfb
_Also, Scala's OOP blend with FP is best in breed. In Ocaml for example is
like having 2 type-systems in the same language, whereas in Scala it's more
like a turtles all the way down kind of thing which I like._

This is a perfectly reasonable stance to take; I think my Objective-C
background would have me argue that it makes more sense to keep two distinct
paradigms syntactically distinct, but that's a matter of taste. I haven't done
anything serious with OCaml -- perhaps that's a direction I'll explore after
my Haskell toy project is finished.

Thanks for the replies, you've shed much light. And the OP is great, too; I'll
definitely be keeping it close at hand.

------
bkeroack
Seems like an awful lot of rules to hold in one's head. Scala just looks like
a giant tumor of complexity to me.

~~~
bad_user
What other people find complex, I personally find refreshing. Could you please
describe why it looks like a giant tumor of complexity?

There's a difference between true complexity (i.e. complected things that
lead to accidental bugs) and things just being unfamiliar. I try to stay away
from the former, while not shying away from the latter - since I picked up
Scala about 3 years ago, my knowledge has expanded with a lot of useful
abstractions that help me do my job better, abstractions that weren't natural
or popular or properly implemented in the other languages I worked with. This
isn't to say that Scala doesn't have true complexity in it, but then again,
what language doesn't?

~~~
bkeroack
a) Martin Odersky has admitted as much[1].

b) The compiler internals are reportedly a mess[2]--likely due at least in
part to the extreme number of different syntax variants.

c) I find rule 4.1 in OP interesting ("SHOULD avoid concurrency like the
plague it is"). According to Odersky's Coursera course[3], one of the main
advantages of functional programming as enabled by Scala is easy, safer
concurrency (reduced/absent mutable shared state). Yet here we see a
presumably seasoned Scala veteran basically saying that this is a failure and
we should avoid concurrency whenever possible because it results in too many
bugs.

1. [http://www.infoworld.com/article/2609013/java/scala-founder-...](http://www.infoworld.com/article/2609013/java/scala-founder--language-due-for--fundamental-rethink-.html)

2. [https://www.youtube.com/watch?v=TS1lpKBMkgg](https://www.youtube.com/watch?v=TS1lpKBMkgg)

3. [https://www.coursera.org/course/progfun](https://www.coursera.org/course/progfun)

~~~
frowaway001
It's always funny to see how these things get interpreted. :-)

a) There is a lot of work going on at the foundations of the language – most
of the stuff is unlikely to be felt by users.

b) Compilers are hard, there is very likely no "nice" compiler out there
(except for toy languages). This has almost nothing to do with syntax
variants. Nevertheless, there are multiple groups which are working on
improving the codebase.

c) Concurrency is hard to get right, especially "traditional" approaches like
synchronizing, locks, etc. If it's not necessary, don't use it. Otherwise use
the right abstractions for your problem, as mentioned in the next paragraphs.

------
tempodox
I'm not so sure what to think of Scala as a language, but I think the naming
and formatting recommendations in this document have merit beyond a single
language.

 _There are only two hard things in Computer Science: cache invalidation and
naming things._

Truer words were seldom written. Just compare the naming culture in Common
Lisp with that of OCaml, for example. Worlds of difference.

And don't even get me started on the “standard” way of formatting in C family
languages...

~~~
rtpg
I've heard this a lot, but is cache invalidation as hard a problem as, say,
distributed systems like Bitcoin?

~~~
tempodox
I thought of a single process as the frame of reference. Once you enter
distributed systems, complexity (and consequently hardness) can achieve
arbitrary levels.

------
century19
Hi again bad_user,

Glad your collection of best practices got the attention and discussion it
deserved today. I notice you posted it to HN 8 days ago and got nothing.
Unpredictable, eh?

One more request: with your best practice "3.2. MUST NOT put things in Play's
Global" - could you add an example of the correct way to "come up with your
own freaking namespace"?

I assume putting authentication in there like SecureSocial does is ok?

------
vkjv
"And if a public API is not thread-safe for some reason (like the usual
compromises made in software development), then state this fact in BOLD
CAPITAL LETTERS."

I prefer to use the `Unsafe` suffix for this, to make it abundantly clear at
the call site. E.g., `updateUnsafe`.

I also use the `Unsafe` suffix when defining an HTML template rendering method
that does not do escaping.
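
As a minimal sketch of the convention (a made-up class, just for illustration):

```scala
final class Counter {
  private var count = 0

  // NOT THREAD-SAFE: callers must synchronize externally.
  // The Unsafe suffix flags that contract at every call site.
  def incrementUnsafe(): Int = { count += 1; count }

  // Safe entry point for concurrent callers.
  def increment(): Int = synchronized(incrementUnsafe())
}
```

The point is that the violated guarantee is visible in the name, not buried in a doc comment.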

------
jxm262
This is so awesome! I've been slowly learning Scala and was looking for an
"Effective Scala"-type book.

Thanks for making this!

~~~
rbehrends
Just keep in mind that it's an opinionated list. I routinely and intentionally
violate several of these rules and have no intention of stopping that.

Much of it will depend on your application domain and needs. For example, if
you're dealing with matrices, avoiding destructive updates will only give you
a performance hit without an increase in maintainability.

~~~
jxm262
Which section are you referring to? 2.1, 2.2 maybe?

Sorry for the simple question, but I think I'm missing something. Wouldn't
avoiding a destructive update give better performance? And the trade-off would
be higher maintainability? With a mutable structure you're 'destroying' by
reusing the same locations in memory and replacing values, versus, say, an
immutable structure returning a completely new structure (new memory
allocations).

But to your main point, yes I won't take this as the canonical source of
coding standards. I just like to read through stuff like this because it gives
me alot of areas to explore and research. Like why do this vs that, what's the
tradeoffs, etc..

<<edit>> Whoops, looks like I have my logic/terminology flipped! Saw this on
SO, which cleared things up a bit.

[http://stackoverflow.com/questions/6964233/what-is-a-destruc...](http://stackoverflow.com/questions/6964233/what-is-a-destructive-update)

Also just noticed the + vs += in some of the mutable collections, which helps
to explain.
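
The difference in a nutshell (a quick sketch): on immutable collections `+` returns a new collection, while on mutable ones `+=` updates in place:

```scala
import scala.collection.mutable

// Persistent update: `+` leaves s1 untouched and returns a new set.
val s1 = Set(1, 2)
val s2 = s1 + 3

// Destructive update: `+=` mutates m in place.
val m = mutable.Set(1, 2)
m += 3
```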

~~~
rbehrends
> Which section are you referring to? 2.1, 2.2 maybe?

Much of section 2 and 4. Also, I'm not using Play, Akka, or any other web
frameworks (since I don't do web stuff), so those don't apply to me and I
don't have a basis for evaluating them.

> Wouldn't avoiding a destructive update give better performance?

No. In the absence of destructive updates, you either need to copy matrices or
use a data structure that typically incurs a log(n) overhead with a fairly
large constant.

~~~
bad_user
> _In the absence of destructive updates, you either need to copy matrices or
> use a data structure that typically incurs a log(n) overhead with a fairly
> large constant._

That is not true - first of all, _log(n)_ in our industry means _log2(n)_.
Scala's Vector, for example, has _log32(n)_ access, which, given that the max
size is Int.MaxValue, can be treated as constant as far as algorithmic
complexity is concerned. Persistent HashMaps are implemented as hash array
mapped tries (instead of self-balancing binary trees), which again means less
than _log2(n)_. And Scala's immutable Queue, for example, is _O(1)_ for both
enqueue() and dequeue() - it's just that every "n" operations it has to
reverse a list of pending items, which takes _O(n)_, but given that it happens
every "n" operations, it is amortized.
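
A quick sketch of what that buys you in practice - "updating" a persistent structure never invalidates older references:

```scala
import scala.collection.immutable.Queue

val q0 = Queue(1, 2, 3)
val q1 = q0.enqueue(4)        // q0 is unchanged; q1 shares structure with it
val (head, rest) = q1.dequeue // amortized O(1)

val v0 = Vector(1, 2, 3)
val v1 = v0.updated(0, 10)    // effectively-constant (log32 n) "update"
```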

It's not algorithmic complexity that bites you; algorithmic complexity is
totally fine. The real problem is more low-level, in the indirections caused
by the use of pointers. Depending on memory usage patterns, persistent
data-structures can stress the GC enough to increase the number of
stop-the-world pauses. Also, in a multi-threading context, when you keep
pounding on the same reference holding an immutable data-structure, you drive
the contention to the root of that tree, and that is less efficient than a
specialized mutable data-structure that can distribute those locks over
multiple buckets... but still far better than synchronizing on a standard
mutable data-structure.

Actually, which is better in terms of performance depends on context. For
example if you have shared reads, then an immutable data-structure is better
because you do not need to synchronize those reads.

Also - I did work on a web service that was hit with tens of thousands of
requests per second, and because of the architecture we built (asynchronous
single producer, multiple consumers), usage of immutable data-structures
wasn't an issue and it was much saner. AND most people working on most
projects never end up with such hard optimization issues.

Therefore fear-mongering about something that in practice really is not an
issue is not helpful, especially given the sanity that usage of immutable
data-structures brings. See this other rule I added -
[https://github.com/alexandru/scala-best-practices/blob/maste...](https://github.com/alexandru/scala-best-practices/blob/master/sections/3-architecture.md#33-should-not-apply-optimizations-without-profiling)

~~~
rbehrends
> That is not true - first of all, log(n) in our industry means log2(n)

Which, for a 1000x1000 matrix, means a factor of about 20, not to mention the
significant constant overhead from not using a contiguous memory layout.

> It's not algorithmic complexity that bites you, algorithmic complexity is
> totally fine.

I didn't say anything about algorithmic complexity. Note that I was writing
log(n), not O(log(n)) in particular and pointing at the constant factor, too.
A computation that takes two days instead of half an hour does make a very
real difference to me.

> I did work on a web service that [...]

That's why I was careful to point out how different application domains
matter. Web services are not numerical computations are not computer algebra
are not big data.

~~~
bad_user
> _A computation that takes two days instead of half an hour does make a very
> real difference to me_

IMHO, such a big difference only arises when big differences in algorithmic
complexity are involved.

Again, I must mention that I'm of the opinion that if you know what you're
doing, then it's fine - but you must be able to defend your choice, as an
argument like "_gosh, heard from the Internet that immutable data-structures
are slower_" is not acceptable, given the extra sanity they bring. Plus,
depending on context (e.g. web services dealing with parallelism combined
with I/O), immutable data-structures might actually improve throughput.

Your use-case is entirely legitimate, therefore that rule on usage of
immutable data-structures is a SHOULD (optional), rather than a MUST
(required). If you wanna do it, then do it - after all, the ability to get
down and dirty is one thing I like about Scala, too bad that it doesn't have a
GC-less version :-)

~~~
rbehrends
> _IMHO, such a big difference only arises when big differences in algorithmic
> complexity are involved._

No, this arises as the result of large constant factors, too. As I pointed
out, log2(1e6) is approximately a factor of 20 alone and you pay additional
constant overhead for having suboptimal memory layout and allocations.
Remember, while functional programming languages love trees and linked lists,
the hardware loves contiguous arrays. This is what the L1 and L2 cache and
prefetching logic of your typical CPU are optimized for.

------
danielbarla
I'm not very current on the Scala tooling - is there a reasonable lint tool
which enforces these types of suggestions?

~~~
adriaanm
We've been working on [https://github.com/scala/scala-abide](https://github.com/scala/scala-abide). Our aim is to provide a nice
platform for others to develop and distribute style checking rules. We (the
core scalac devs) intend to focus on keeping the abide platform up to date
with the compiler, letting the community evolve the set of rules.

Due to resource constraints, we haven't made a big push in advertising it yet,
but we're happy to work with you if you'd like to start implementing rules.
Have a look at recent PRs for some examples.

~~~
danielbarla
Thanks for the info, and kudos on the good work in general!

Abide looks quite nice, although I'm a very occasional user of Scala so far,
so I'm not sure I'd be very good at coming up with and implementing rules.
I'll definitely spread the word to some friends who use Scala more frequently,
and may be able to contribute.

~~~
adriaanm
Thanks! Please don't hesitate to ask (ideally on scala-internals) if you'd
like some pointers to get started with abide.

------
chrisloy
This is a really helpful guide, which I can see myself coming back to a lot.

I can't express how relieved I am to see someone else directing Scala
developers away from the Cake Pattern. I've only ever seen it result in messy,
confusing code.

------
jrobn
Scala Best Practices:

1. Don't use Scala.
2. Rule #1 is blatant opinion.

~~~
bad_user
Hello, I'm the author of this list, which I hope to grow to be something more
than a single-author thing.

Until now I've done non-trivial amounts of work on the job in Java, C#, C++,
PHP, Python, Perl, Ruby, Javascript and Scala and I've played in my free time
with half a dozen others, like Scheme, Clojure, Rust and Haskell.

Take what you will from this, but I have found Scala to be the most sane thus
far. My opinion may change in the future, since I don't really have loyalty
to programming languages, but regardless - I find it more constructive and
better for your knowledge to verbalize your dislikes with clear arguments;
maybe there's something you're missing.

~~~
jrobn
I don't have any particular loyalty either. But, in my opinion, and after
taking considerable time looking into scala, it's not my preferred language.
The syntax can quickly devolve into soup. It's entangled with Java too closely
(unlike say clojure). And last time I tried it compile times were...awful. All
of this is my opinion and if scala works for you that's great. This guide is a
great start for scala devs not to shoot themselves in the foot.

~~~
bad_user
> _It's entangled with Java too closely (unlike say clojure)_

In some ways it is saner than Clojure btw ...

For example Clojure devs pride themselves that their language doesn't have
variables, but oh wait, it has Vars and function definitions must be
dereferenced on call sites and those Vars can be modified wherever, whenever,
as Clojure does not have the notion of " _val_ " or " _final_ ". And of course
you can do thread-local bindings for those Vars and that's why people have
used it for things such as dependency injection, as a poor man's implicit
arguments that do not appear in the function's signature. I don't even know if
those persistent data-structures could be implemented in Clojure, as you need
final's semantics for that and Clojure's data-structures are actually
implemented in Java.

I also like Clojure's protocols a lot, but they've got limitations as compared
to type-classes from Haskell or Scala. For example in LISP lazy by-name
parameters are modeled by means of macros, which is OK, but then if you want a
macro as part of an interface, tough luck (this is a LISP vs ML thing).
Protocols aren't doing interface inheritance either. For example in Scalaz (a
Scala library bringing in many goodies from Haskell) the Monoid type-class
inherits the interface of Semigroup, because a Monoid really is a Semigroup, I
don't think anybody can argue against it and it isn't "complecting" anything.

The process of learning Clojure has been very frustrating for me. I get it
that the language doesn't have loops, I don't do loops in Scala either, but
the absence of loops is the least interesting part of going functional -
absence of local mutable state is not solving a big problem. The more
interesting part, for which I found little guidance, is modeling changes (in
user input, or the various data sources that we use). In Scala we use more and
more concepts and design patterns coming from Haskell, whereas I found Clojure
to be maybe a little too pragmatic for my taste - for example Clojure devs
aren't modeling Monads, a fact that is apparent starting with the standard
"map", "filter" and "mapcat", builtin functions that work only on the builtin
collections; Clojure doesn't even have some kind of Numeric protocol for
building your own things that work with the numeric operators; and I hate
things like the sorted-set using java.util.Comparable (what's up with the lack
of abstractions in the standard library btw?).

Of course I could go on - and I'm sure Clojure developers will jump to rectify
me - Clojure has a lot of cool things in it and that's why I keep on learning,
because some day I might have an _aha_ moment, plus everybody should learn a
LISP, just like everybody should learn a functional statically typed language
(be it Haskell, Ocaml or Scala).

~~~
willismichael
I'm somebody who enjoys tinkering around with Clojure, I really wish that I
could use it more at my day job, and I'm not particularly interested in Scala.
Still, I have to thank you for this well thought out response. So often we
have language wars that consist of two sides each with only a cursory
knowledge of the other side's technology.

I like how pragmatic Clojure is, but I think I can understand when you said
that it's "maybe a little too pragmatic". The more I learn about Haskell
(which is still very little, to be honest), the more I wish that Clojure had
more in common with Haskell. On the other hand, the new work on transducers to
me seems to solve some of the problems that you bring up about map, filter,
etc., in that transducers work with any manner of data sources and sinks.

~~~
bad_user
Yep, I also like the idea behind transducers and I hope to see them in other
languages as well. As I said, Clojure has some really sweet things in it,
which is why I keep tinkering with it, because I want this exposure.

For example I like how Clojure people are doing what they call "light-weight
data modeling". In Scala and Haskell we get too focused on types, sometimes we
go overboard, forgetting that the data should stay reasonably decoupled from
short-term business needs.

So there's something really refreshing about Clojure's approach (which is in
general LISP's approach to doing stuff). On the other hand I really like
having a potent compiler that can help me deal with accidental complexity,
which is why people will never agree on which is better, because it depends a
lot on context (i.e. the kind of problems you're working on).

------
devondjones
First best practice? Don't use Scala.

