Making the move from Scala to Go (movio.co)
325 points by maloga on Jan 24, 2017 | hide | past | favorite | 367 comments



Since they cite me (and my essay from 2014) as part of their decision-making process, I want to throw in 2 cents here. They ended up deciding on Go, whereas I have ended up preferring Clojure. Yet I agree with a lot of what they say, so I'll try to clarify why I reached a different decision than they did.

I understand what they mean when they write:

I think the first time I appreciated the positive aspects of having a strong type system was with Scala. Personally, coming from a myriad of PHP silent errors and whimsical behavior, it felt quite empowering to have the confidence that, supported by type-checking and a few well-thought-out tests, my code was doing what it was meant to.

There are times when I appreciate the strict type-checking that happens in Java. I do get what they mean. But there are also a lot of times when I hate strict type-checking, in particular when dealing with anything outside the control of my code: whimsical, changing 3rd party APIs that I have to consume for some business reason, or 1st party APIs that feel like 3rd party APIs because they are developed by another team within the same company, or because for some reason we cannot fix the broken aspects of some old API that was developed in-house 6 years ago. Because of this, I have become a proponent of gradual typing. If I am facing a problem that I have never faced before, I like to start off without any types in my code, and then, as I understand the problem more, I like to add in more contract-enforcement. This is what I attempted to communicate in my essay "How ignorant am I, and how do I formally specify that in my code?" [1]

I think everyone who works with Clojure sometimes misses strict type-checking. Because of this, there have been several efforts to offer interesting hybrid approaches that attempt to offer the best of all worlds. There is Typed Clojure for those who want gradual typing, and, more recently, there is Spec. Given what I've written, you might think I am a huge fan of Typed Clojure, but I've actually never used it for anything serious. The annotations are a little bit heavy. I might use it in the future, but for now, I am most excited about Spec, which I think introduces some new ideas that are exciting for Clojure and that I think will eventually influence other languages as well.

Do watch the video "Agility & Robustness: Clojure spec" by Stuart Halloway. [2]

I also sort of understand what they mean when they write this:

No map, no flatMap, no fold, no generics, no inheritance… Do we miss them?

There are times when we all crave simple code. Many times I have had to re-write someone else's code, and this can be a very painful experience. There are many ways that other programmers (everyone who is not us, and who doesn't do things exactly like we do) can go wrong, from style issues such as bad variable names to deeper coding issues such as overuse of Patterns or using complex algorithms when a simple one would do. I get that.

All the same, I want to be productive. And to be productive in 2017 means relying on other people's code. In particular, it means being able to reliably rely on other people's code -- using other people's code should not be a painful experience. Therefore, for me, in 2017, one of the most important issues in programming is composability. How easy is it for me to compose your code with my code? That is a complex issue, but in general, languages that allow for high levels of metaprogramming allow for high levels of composability. Ruby, JavaScript, and Clojure all do well in this regard, though Ruby and JavaScript both have some gotchas that I'd rather avoid. In all 3 languages, I find myself relying on lots of 3rd party libraries. I use mountains of other people's code. Most of the time, this is fairly painless. But there are some occasionally painful situations. With Ruby I run the risk that someone's monkeypatching will sabotage my work in ways so mysterious that it can take me a week to find the problem. And JavaScript sometimes has the same problem when 3rd parties add things to a prototype, perhaps using a name that I am also using. So far I have had an almost miraculous time using Clojure libraries without facing any problems from them. It's this issue of composability that makes me wary of Go. While I sometimes crave a language that simple, I can't bring myself to give up so many of modern languages' best features.

[1] http://www.smashcompany.com/technology/how-ignorant-am-i-and...

[2] https://www.youtube.com/watch?v=VNTQ-M_uSo8


> If I am facing a problem that I have never faced before, I like to start off without any types in my code, and then, as I understand the problem more, I like to add in more contract-enforcement.

This seems to be a widespread sentiment.

In practice, I've found that prototyping anything remotely complex without types is so painful that I'd rather settle for an inferior design than try to come up with the best possible abstraction.

In Haskell, I come up with a coherent skeleton without having to implement mundane details, hit a wall in the design space because of some case I didn't think of, come up with a better idea and then go back to the code and refactor with confidence. The type system always guarantees that my prototype is coherent as a whole. And I can do that dozens of times.
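A minimal sketch of that workflow (the domain names like Order and Cents are invented for illustration, not from any real project): every body except one is `undefined`, yet GHC still checks that the whole design composes.

```haskell
data Customer = Customer { custName :: String, vip :: Bool }
data Order    = Order { customer :: Customer, items :: [(String, Int)] }
newtype Cents = Cents Int deriving (Show, Eq)

priceOrder :: Order -> Cents
priceOrder = undefined            -- mundane details deferred

applyDiscount :: Customer -> Cents -> Cents
applyDiscount (Customer _ True) (Cents n) = Cents (n * 9 `div` 10)
applyDiscount _ c = c

-- The skeleton typechecks today; it becomes runnable once the stubs
-- are filled in, and refactoring any type ripples out as compile errors.
invoice :: Order -> Cents
invoice o = applyDiscount (customer o) (priceOrder o)
```

Changing the shape of Order here would immediately flag every affected definition, which is exactly what makes the "dozens of iterations" cheap.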

In Clojure, even with spec, I'd have to implement all my functions fully before being even able to test the design as a whole (sure, testing individual functions works fine in the REPL.) And after hitting a wall, having to reimplement everything every time is just too much work.


> In practice, I've found that prototyping anything remotely complex without types is so painful

Just to offer a counter-point, I have a Clojure project here with 3k LOC, without using spec/schema. All I have is 700 LOC of tests. The tests enforce semantic meaning, along with (some) contracts. I miss types from time to time, but it is nowhere near as bad as you mention. Against me is the fact that my app is mostly self-contained, and written all by me. I am fairly certain the project would be at least 30k LOC if I wrote it in Java.

I think the value of types only comes into play when there are many people working on a single code base. A REPL, tests, and integration tests will take one a long way before reaching their limits.

I don't buy the argument that types are useful for large codebases, because 1. types don't enforce meaning -- meaning is far more complex than types, and Haskell style types only work as far as they enforce meaning; 2. types have limits too. More accurately, humans have limits. Working with large codebases where there are 1000s of types is hard, which it shouldn't be, because that was the type system's selling point all along.
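For what it's worth, point 1 can be made concrete in a few lines of Haskell (a sketch with made-up names): both definitions have the same type, the checker accepts both, and only a test pins down the intended meaning.

```haskell
average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)

-- Same type, wrong meaning: the type checker is equally happy with this.
averageWrong :: [Double] -> Double
averageWrong xs = sum xs * fromIntegral (length xs)
```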

Cleaner abstractions of code, separated by strongly enforced protocols (types), is the way to go, I think.


Have you tried what I'm talking about, i.e. prototyping something by specifying the types and iterating on the core design first before implementing large chunks of the required functionality?

It's probably hard to see the benefits without having tried it first.

This [0] is one of the bigger Clojure projects I've done (around 2.5kloc), also without spec/schema, and I really didn't dare refactor much, even when having a clear understanding of how things could be done better. It was just too much work.

With types I'd go through 10 design iterations before settling on something I'm satisfied with, and even halfway through the project changing things radically isn't a problem (I've worked on a 50+ kloc Haskell backend service, and changing core data structures used pervasively throughout the codebase was a 10min job, literally.)

[0] https://github.com/pkamenarsky/tellme


I see what you are trying to say. You are saying Clojure is not the right tool to do top-down design. I agree with it. It is however a very good tool to do bottom up design. See https://www.youtube.com/watch?v=Tb823aqgX_0


I don't think this is top-down design really, just an overengineered OOP monstrosity.

Top-down doesn't mean that you have to model the universe first before getting to model your problem :)


Today I caught a bug in a macro-expanding code walker, where it was expanding the wrong form. The syntax being walked is (foo-special-operator x y z . rest). x and z are ordinary forms that need to be expanded; y is a destructuring pattern (irrelevant here). The walker was expanding z in the place of x: that is to say, expanding z twice, and using that as the expansion of both x and of z. That's simply due to a typo referring to the wrong variable.

The type system would be of donkey all use here, because everything has the correct type, with or without the mistake. The code referred to the wrong thing of exactly the right type.

A lot of code works with numerous variables or object elements that are all of the same type, and can be mixed up without a diagnostic.


So the issue you faced was that you used z twice, and x not at all? In that case, linear types (http://edsko.net/2017/01/08/linearity-in-haskell/) may have caught that.


x not being used at all can be caught by a simple unused variable warning.

In this situation, the variable's value was used somewhere, so that didn't go off.


If you used newtypes for z and x it would have been caught.
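A hedged sketch of that suggestion (HeadForm/TailForm are invented names, standing in for the walker's x and z): wrapping the two operands in distinct newtypes means referring to the wrong variable stops being "exactly the right type".

```haskell
newtype HeadForm = HeadForm String deriving (Show, Eq)
newtype TailForm = TailForm String deriving (Show, Eq)

expandHead :: HeadForm -> String
expandHead (HeadForm s) = "expanded:" ++ s

expandTail :: TailForm -> String
expandTail (TailForm s) = "expanded:" ++ s

-- expandHead (TailForm "z")   -- now a compile-time error, not a silent bug
```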


> 1. Types don't enforce meaning

Why do you think this?

> Haskell style types only work as far as they enforce meaning.

Doesn't this contradict your statement above or are you saying Haskell style types never work because they don't enforce meaning?

    getUser :: IO (Maybe User)
The above function enforces that getting a user can fail and you must contact the outside world to get a user.


Alright... What I meant to say was:

Types don't enforce all meaning; i.e., the enforcement of contracts through types goes only as far as the types mean something to the problem you are applying them to. It does not cover all the complexities of the problem, or the way the code is changed in the future.

EDIT: This is also why it is easy (and nice) to implement parsers in strictly typed functional languages, because parsers are well studied theoretically. The problems in the real world are not studied well enough for contracts enforced via types to work completely.
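A toy illustration of why parsers fit typed FP so well: the entire contract of "a parser" fits in one type. This is a sketch, not a real library like parsec.

```haskell
-- A parser consumes a prefix of the input and, on success, returns a
-- value plus the unconsumed remainder.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

-- Match one expected character, returning it and the rest of the input.
char :: Char -> Parser Char
char c = Parser $ \s -> case s of
  (x:rest) | x == c -> Just (c, rest)
  _                 -> Nothing
```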


Types don't enforce meaning; types are a tool I use to enforce consistency of meaning along certain important dimensions. I actually find that even more important when the situation is messy, because I'm likely to initially mischaracterize some aspects of it, and when I go to change things it's very useful to be told what's now inconsistent.


I get what you are saying. What I'm trying to say is that typing takes too much from me in terms of complexity overhead; I'm better off without it. I've found this to be true in practice. As I said before, I write tests to do what you say types do -- for me, that is enforcing meaning. Types do allow for easy refactoring, and I think that is a weakness of untyped languages.


> The above function enforces that getting a user can fail and you must contact the outside world to get a user.

That seems sort of backwards. It enforces that the caller be able to handle failure (and similar for IO). It may well be that "getting a user" doesn't do either (e.g. `pure (pure defaultUser)`)
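To spell that out (User and "guest" are invented here): the type only obliges the caller to handle IO and the possibility of failure, while this implementation does neither.

```haskell
data User = User { userName :: String } deriving (Show, Eq)

getUser :: IO (Maybe User)
getUser = pure (pure (User "guest"))   -- i.e. return (Just (User "guest"))
                                       -- never fails, touches no outside world
```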


Cannot proto-type without type; how ironically circular. :)


> In Haskell, I come up with a coherent skeleton without having to implement mundane details, hit a wall in the design space because of some case I didn't think of, come up with a better idea and then go back to the code and refactor with confidence. The type system always guarantees that my prototype is coherent as a whole. And I can do that dozens of times.

Could you go into a bit of detail about this approach? Do you mock functions out and just specify type definitions, and fill in functionality as you go?


I'm about 5% into a side project implementing Erlang-like actors in Haskell and it provides an example of this workflow.

https://github.com/tel/hotep

Note that src/Hotep.hs exports a bunch of undefined functions. I've been able to verify that these types all make sense even without implementing a thing. As time goes on I may learn that the implementation drives the types somewhere else and then the compiler will make refactoring easy.

However, already I've gone through about 5 iterations of this design which drove me to debug some structural questions about the whole affair and also dive deep into Erlang documentation to determine how they solved problems. These explorations and their results are encoded into the types.

At this point I'm beginning to consider implementation and I can keep filling out just partial implementations against these types. I'll probably make 2 or 3 toy implementations to test out the ideas again with more strength before moving on the final ones. The whole time the types will be guiding that development and helping it move quickly.

Key to this whole affair is the need to describe types utterly before a completely successful library is made... and also the ability to defer the burden of providing type-checking code for as long as desirable. Haskell supports this wonderfully—even more wonderfully with things like Typed Holes and Deferred Type Errors which enable a really great interactive experience I haven't yet needed to employ.


I've used Scala, Clojure, and Go. I found Scala to be too feature-rich for its own good.

It was fun to write (Look at me! I just spent two hours figuring out how to compress this old Java 7 function into a one-liner in Scala!), but reading someone else's Scala was almost as mind-numbing as reading another programmer's C++.

Go is... meh. Quick to learn, easy to write (and read), but you quickly hit a plateau as far as personal productivity goes. I can see the value when working on large teams, but on my personal projects (for which I have limited time) my own productivity is paramount (not to mention I want a language that's fun to use) :)

Thus I've found Clojure is my ideal language for the time being. It strikes a good balance between power and simplicity (and at this point I find it more readable even than Go).


The way you describe your experience with Scala makes me think you only had a very superficial look at it.

At its core Scala is very simple & the syntax is very regular, far more so than Go or Java and a lot less complex than C++. It's the most expressive typed language on the JVM, so if you like to think in types & you're on the JVM it's your best option.

Clojure is untyped, I hear many people praising it but I don't know any big project done in Clojure. So if you're doing short-lived projects I'm sure it can shine but for software that will be around for more than 5 years I would stay away from it. Btw, if misused, just like Scala, Clojure code can be extremely cryptic.

Go likes its superficial simplicity, syntactic irregularity & stubbornly refuses to accept that PL design has evolved since the '80s and '90s, but I'm sure it's appealing to people who are used to languages from that era.


Go is a language that is a bit tedious to write, no doubt about that, but it's very easy to read. I spend a lot of my time reading other people's code and I really appreciate that.

The fact that Scala as a language allows something like SBT to not only be created, but accepted, means I don't want anything to do with it.

I've suffered long from the Ruby ecosystem's mentality of "look at what I can do!" of self-serving pointless DSL's and frameworks and solemnly swore to myself to stay away from cute languages that encourage bored devs to get "creative".

It's about trade-offs, I guess. Go definitely appeals to a lot of people, and not all of us are unaware of the amazing "progress" that has been made in the 80's and 90's. Awesome progress that brought us Java, SOAP, C++, Javascript-on-the-server, and a slew of other tech some of us want to stay far, far away from.


> Go is a language that is a bit tedious to write, no doubt about that, but it's very easy to read. I spend a lot of my time reading other people's code and I really appreciate that.

Figuring out 1000 lines of code that could have been 10, with verbosity caused by a lack of generics, is not going to help you understand code quicker. Figuring out what 10 lines of Scala do may take more time compared to 10 lines of Go, but that's not a measure of velocity; the information density of Go is just too low. At least 10 lines of Scala fit on my screen; 1000 lines of Go don't.

Code style issues imho are a team issue, if you do reviews these issues can be managed.

> not all of us are unaware of the amazing "progress"

Look at Rust, at least they did their homework. With Rust out there I can't see any reason to use Go except maybe their crappy GC.


I like Rust too. I rewrote in Rust a parser I had originally implemented in Go and got almost an 8x speed-up. Having said that, the two have almost no overlap for me. I don't see how Rust replaces Go in 9/10 of Go use-cases, and vice-versa.


> The fact that Scala as a language allows something like SBT to not only be created, but accepted, means I don't want anything to do with it.

FWIW, we agree and continue to use Gradle, despite making investments in Scala.


> Awesome progress that brought us Java, SOAP, C++, Javascript-on-the-server, and a slew of other tech some of us want to stay far, far away from.

When people talk about progress from PL design, they aren't talking about any of those things. If you notice, all of those things were made in industry, not in PL research/academia. Not to mention that those languages are also ones that ignored the PL design progress! (although C++ finally seems to be adding some ideas from PL research in C++17)

They're talking about things like parametric polymorphism, dependent types, modules, macros etc. (I've mostly been reading about work in the types/ML family languages, but I'm sure there's progress been made outside that as well)


> The way you describe your experience with Scala makes me think you only had a very superficial look at it.

I worked with it daily for 2 years and also took Odersky's Coursera courses on Scala. I liked it more than Java (7), though I find the syntax aesthetically offensive (and I realize that's subjective). Ada (I worked in an Ada shop for 3 years before moving to the JVM) managed to have a robust type system without introducing the sort of syntax wtf-ness that Scala seems to need.

I actually recommended against using Scala at a later job simply because I thought the learning curve would be beyond most of the people I was working with (I didn't phrase it quite like that when mgmt asked me for my opinion, of course). Learning to write idiomatic Scala takes time. In that regard it's an expensive language to use unless you hire folks who already have experience with it.

As far as Clojure's dynamic typing goes, you can use libraries like plumatic/schema to add some checking where you need it (interfaces, etc), and for whatever reason I tend to have less trouble (as far as runtime type issues go) with Clojure than I do with Python or Ruby.

But hey, no language is perfect. I kind of wish Haskell clicked for me the way LISP seems to, since on paper it seems to check all the boxes -- but I just can't seem to get very proficient with it (or I'm just not willing to invest the time at this point).


> though I find the syntax aesthetically offensive

Well, that's indeed your opinion; whenever I write code in a language with old-fashioned statements I die a little inside. I have a lot of experience with Ada too: at its core it's still a procedural language prohibiting good abstractions; it's very safe but also extremely verbose.

I learned Scala after I learned Haskell, ~7 years ago; maybe that's why I had a different experience. Since I learned Haskell I think in types & transformations, and it has made me a much better programmer. Clojure has sort of the same mindset, but I would call it 'shapes & transformations'.


> Clojure is untyped, I hear many people praising it but I don't know any big project done in Clojure.

Not sure your standard of big, but:

1. Most of Climate Corporation's (https://climate.com/) backend is written in Clojure, and they deal with truly massive amounts of imaging data and parse/munge it to be useful for their applications.

2. Various places use Clojure for various things; no details, but some of them are likely big: https://clojure.org/community/companies and http://dev.clojure.org/display/community/Clojure+Success+Sto....

I'm sure people with first hand experience will chime in too.


> Go likes its superficial simplicity, syntactic irregularity & stubbornly refuses to accept that PL design has evolved since the '80s and '90s, but I'm sure it's appealing to people who are used to languages from that era.

Not really. The largest proportion of Golang users come from relatively modern interpreted languages like Python and Ruby, which are widely used in server environments. I'm unable to find the survey results, but I do recall this trend surprised Go's original authors, who were originally seeking to replace C++ & Java.


I guess Scala being "the most expressive typed language on the JVM" is true in the sense that it has a ton of features (OOP, FP, Exceptions, null Java backwards compatibility, etc.), but that's just too many features for a coherent language.


It's funny that both Java & C# seem to be picking up many Scala features in their latest & future versions: traits, lambdas, tuples, pattern matching, case classes, closed hierarchies, declarative generics... Your language isn't incoherent if you can build features on top of each other, which is exactly what Scala does.


When I read comments like this I wonder if the person posting is trying to justify their own development decisions or seriously trying to "enlighten" the person they're responding to.


Can you elaborate on why you think that Clojure has a higher level of composability than a static typed language like Scala?


Rich Hickey (creator of Clojure) has talked about this. Type-specific lingo prevents one from applying common patterns of transformation. Check https://www.infoq.com/presentations/Simple-Made-Easy


That is a large subject, for sure. It deserves a good essay. I'll try to write something tonight. I'll post it to my blog and I'll leave a note here.


FWIW Python is also a great choice for composability. gevent is the only widely used library that really monkey-patches, and with it (to answer the question in OP) you can write essentially any function as a pseudo-goroutine (greenlets that yield to the event loop when reading from a queue which allows you to apply backpressure). And you can still access the vast realm of Python libraries; any that use sockets will automatically yield to the event loop on blocking operations. The criticisms from the recent Python complaint thread are valid, but it's still a great language for using other code.

For JS, rarely do people mutate Object.prototype these days; everything's a functional library. So almost all libraries you use will be good actors. It's also a good choice, though for data management and interop with scientific/vector/tensor operations, it's still hard to beat Python.


Interesting to hear. I like the idea of gradual typing, as I'm coming from a predominantly Python background and sometimes want to add types as I go. Perl 6 looks really promising here. Curtis "Ovid" Poe has a good YouTube video on this. He starts with the Fibonacci function, which can easily go wrong depending on a variety of inputs, and keeps adding type restrictions, such as that it has to be a positive int within a range of numbers... I dunno, something along those lines.


As it turned out, more flexibility led to devs writing code that others actually struggled to understand.

This is what happens in almost every language. Niftiness and the prospect of impressing your coworkers distort the cost-benefit calculation. This is in addition to the true costs appearing months or years after the code is written, involving the interaction of complex factors, like the increased cost of debugging.

"Clever" should be regarded as a limited resource. Also, the shop should encourage a culture where being "clever" about making code easier to understand is valued above all else.


This (lengthy) quote comes to mind. If you think it is interesting, please read the whole EWD.

EWD 340 (Prof. Edsger Wybe Dijkstra) [1]:

"The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. In the case of a well-known conversational programming language I have been told from various sides that as soon as a programming community is equipped with a terminal for it, a specific phenomenon occurs that even has a well-established name: it is called “the one-liners”. It takes one of two different forms: one programmer places a one-line program on the desk of another and either he proudly tells what it does and adds the question “Can you code this in less symbols?” —as if this were of any conceptual relevance!— or he just asks “Guess what it does!”. From this observation we must conclude that this language as a tool is an open invitation for clever tricks; and while exactly this may be the explanation for some of its appeal, viz. to those who like to show how clever they are, I am sorry, but I must regard this as one of the most damning things that can be said about a programming language. Another lesson we should have learned from the recent past is that the development of “richer” or “more powerful” programming languages was a mistake in the sense that these baroque monstrosities, these conglomerations of idiosyncrasies, are really unmanageable, both mechanically and mentally. "

[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD03xx/EWD340...


I think it is interesting that you left out the very next few sentences, which provide very relevant context:

"I see a great future for very systematic and very modest programming languages. When I say “modest”, I mean that, for instance, not only ALGOL 60’s “for clause”, but even FORTRAN’s “DO loop” may find themselves thrown out as being too baroque."

While I agree with the general sentiment, it's very important to take anything Dijkstra says with a huge grain of salt. He was a mathematician first and foremost. He obsessed about things like a mathematician. Such people, while very smart, make for very unproductive software engineers. They also have a way of sounding deceptively smart and thoughtful, when really they're just talking out of their ass. Beware.


I agree with Dijkstra: the looping constructs of FORTRAN's DO and ALGOL 60 were too baroque. Dijkstra's comments were written in 1972; at that time FORTRAN IV's and ALGOL 60's loop semantics were a mess and were common sources of errors.

Without qualification, Dijkstra was one of the greatest computer scientists in our field's short history. While I don't agree with every single one of his ideas, I would encourage budding computer scientists and professional programmers to look over what he accomplished. He most certainly was not unproductive.

Wikipedia says that Dijkstra is known for:

- Dijkstra's algorithm (single-source shortest path problem)

- DJP algorithm (minimum spanning tree problem)

- First implementation of ALGOL 60 compiler

- Structured analysis

- Structured programming

- Semaphore

- Layered approach to operating system design

- T.H.E. multiprogramming system (one of the first operating systems)

- Concept of levels of abstraction

- Concept of layered structure in software architecture (layered architecture)

- Concept of cooperating sequential processes

- Concept of program families

- Multithreaded programming

- Concurrent programming

- Concurrent algorithms

- Principles of distributed computing

- Distributed algorithms

- Synchronization primitive

- Mutual exclusion

- Critical section

- Generalization of Dekker's algorithm

- Tri-color marking algorithm

- Call stack

- Fault-tolerant systems

- Self-stabilizing distributed systems

- Resource starvation

- Deadly embrace

- Deadlock prevention algorithms

- Shunting-yard algorithm

- Banker's algorithm

- Dining philosophers problem

- Sleeping barber problem

- Producer–consumer problem (bounded buffer problem)

- Dutch national flag problem

- Predicate transformer semantics

- Guarded Command Language

- Weakest precondition calculus

- Unbounded nondeterminism

- Dijkstra-Scholten algorithm

- Smoothsort

- Separation of concerns

- Program verification

- Program derivation

- Software crisis

- Software architecture

This quote by galactipony is really offensive to me:

> Such people, while very smart, make for very unproductive software engineers. They also have a way of sounding deceptively smart and thoughtful, when really they're just talking out of their ass. Beware.

It is quite clear to me that Dijkstra wasn't "just talking out of [his] ass."

Learn more about Dijkstra at https://en.wikipedia.org/wiki/Edsger_W._Dijkstra or in any resource on the history of Computer Science.


I know about Dijkstra.

I didn't say Dijkstra was talking out of his ass in this case. I'm saying you often can't tell when people like him are talking out of their ass, as they sometimes do. Who would think the inventor of all these algorithms would ever utter a half-formed thought, on a whim? Unthinkable! Yeah... no.

Another distinction you fail to see is that somebody who is great at finding the best or optimal algorithms (i.e. a great theorist) isn't necessarily a productive programmer. To the contrary, perfectionism and productivity are at great odds.

Dijkstra was heavily at odds with real-world programming as it was done, to the point of isolating himself with his work. If we followed his opinions on how to program, we wouldn't get much of anything done.


> Another distinction you fail to see is that somebody who is great at finding the best or optimal algorithms (i.e. a great theorist) isn't necessarily a productive programmer.

I suppose what you're saying here is that there are a lot of business tasks which don't require a great theorist.

I don't think Dijkstra would disagree.

But still, calling Dijkstra an "unproductive programmer"? Really? I'll take one Dijkstra's algorithm over a dozen web apps. And if I can have a patent on it, I can even make a solid business case for that choice.

> If we followed his opinions on how to program, we wouldn't get much of anything done.

And yet, how many man-years of engineering effort could we waste having bad theorists who can quickly hack out LoB code try to re-invent Dijkstra's algorithm?

Perhaps software engineering is a very wide field, and it takes all types?


> To the contrary, perfectionism and productivity are at great odds.

I've learned this the hard way in my career. I always have to fight against taking the extra 20 hours to perfect something when it only took me 1 hour to get to 95% and 95% is more than good enough for the particular task.


I agree with todd8, but would like to add that the next paragraphs were not added for two simple reasons: the quote itself was already too long and I felt it summarizes well enough the gist of the argument: programming languages should not allow 'clever' tricks.

But let's discuss the 'for clause' and 'do loop': these constructs were made specifically for one kind of simple loop. They are not a systematic solution for an iterative process. To me it seems Dijkstra specifically aims for languages such as LISP (which, with the renewed interest via Clojure, is one of the oldest successful (semi-)functional programming languages).


"I agree with todd8, but would like to add that the next paragraphs were not added for two simple reasons: the quote itself was already too long and I felt it summarizes well enough the gist of the argument: programming languages should not allow 'clever' tricks."

It still seems intellectually dishonest to leave it out. Clearly, what Dijkstra in 1972 considers too "clever" may in fact be tools that are now basic building blocks of everyday software. We can all agree that "too clever" is bad. We can't agree on what "too clever" is.

"But, lets discuss the 'for clause' and 'do loop': these constructs were made specifically for one kind of simple loop. It is not a systematic solution for an iterative process."

I'm fairly sure (from what I remember him writing) that he doesn't like it because you can do the same thing with existing constructs, so you'd be adding complexity to the language that isn't strictly necessary. History has shown that actual programmers prefer having for loops like Algol.

"To me it seems Dijkstra specifically aims for languages such as LISP (which, with the renewed interest from Clojure is one of the most oldest successful (semi-) functional programming languages)."

Probably not, or else he would've talked about LISP in a different manner (he does talk about it in an earlier paragraph).


"Programmers prefer loops like Algol":

Yes, programmers prefer it, but the general structure:

`for (init-statement ; boolean-continuation-expression ; iteration-statement) statement;` is syntactic sugar for a specific imperative process (with the iteration and ending expression appended at the end of the block). It does not generalize to other imperative processes, it cannot be transformed into a meaningful expression and it invites 'clever' programmers to do 'too much' in the various for-clauses.

Wrt LISP, do you mean this part? That seems to align well with my standpoint: use very few basic principles and be stable.

"The third project I would not like to leave unmentioned is LISP, a fascinating enterprise of a completely different nature. With a few very basic principles at its foundation, it has shown a remarkable stability. Besides that, LISP has been the carrier for a considerable number of in a sense our most sophisticated computer applications. LISP has jokingly been described as “the most intelligent way to misuse a computer”. I think that description a great compliment because it transmits the full flavour of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts."


> We can all agree that "too clever" is bad. We can't agree on what "too clever" is.

Of course we agree. Too clever is "stuff I'm too lazy to understand". What we don't agree is on the definition of I; everyone has their own binding for that symbol, which carries a context for different types of stuff we are too lazy to understand.


Interesting take. 'Too clever' for me is: 'using underlying concepts which have semantics that have a poor mental overhead versus applicability within the domain'.


My working definition of "too clever" is:

If you have to spend time explaining 'what' the code is doing, it is "too clever".

If you have to spend time explaining 'why' the code is doing something.. then "them's the breaks working in a complicated world".

Code should only be as complex as the problem domain.


We can't agree on what "too clever" is.

It's not absolute. It's something to be negotiated at a particular shop.


"I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras."

-- Alan Kay, "The Computer Revolution Hasn't Happened Yet", 1997 OOPSLA keynote


Could you explain how exactly this quote is applicable to Dijkstra's quote? Which part do you think is arrogant?

Alan seems to make an overly broad statement, which is funny, but also weak.


Correct me if I'm wrong, but wasn't Dijkstra known for never actually using a computer? He wrote code with pencil and paper. There's a gulf between academic code and the needs of the day-to-day programmer. That said, I agree with his statement about one-liners. Code should be parsimonious. It should never be a puzzle to understand what the code is doing.


I hereby correct you. You are wrong.

Dijkstra decided to program using pencil and paper much later, and mostly because he (like most good programmers) already knew the solution before writing it down.

I correct you again for implying he wrote purely academic code. This is plainly false: for example, he specified part of the ALGOL 60 language and implemented one of its first compilers. He also implemented one of the first multi-layered (ring-based) OSes.

And then you mention the day-to-day programmer is not in need of academic code. To the contrary! Dijkstra argued that the software crisis (the gap between what computers can do, and what they are actually doing) is caused because day-to-day programmers do not have the right tools and knowledge at hand. In EWD 340, Dijkstra argues this is caused by the mental overhead caused by clever tricks and languages allowing these tricks.

As a professional, I am constantly relating to concepts and code which have an academic basis. Examples come from type theory, lambda calculus, Paxos, MapReduce, queueing theory, compression algorithms and many more. I have seen many programming languages, written assemblers, interpreters and compilers, and worked professionally with imperative, OOP, functional and (higher-order) logical languages, so I dare to say that programming is not about what you can do with the language, but what the language does with you.


While this may have a lot of truth to it, the ironic thing here is you are posting this in response to a Dijkstra quote urging us down the path of humility.


Humility sometimes isn't, as currently discussed in https://news.ycombinator.com/item?id=13480255.


Tricks are fun: little puzzles. But they don't belong in code unless they both (1) significantly (relative to the task) save time and/or resources and (2) are very well explained in comments, e.g. with the full non-clever version left in the code.

I know I'm preaching to the choir here. I know we've all come back to our own code six months later and had to puzzle it out.


Good advice.

More on cleverness from an old AI textbook (Artificial Intelligence Programming by Charniak, Riesbeck and McDermott):

"1.12.5 Cleverness

Avoid it. Clever tricks ... [Lisp specific stuff omitted.]

To paraphrase Samuel Johnson, we advise you to look over your code very carefully, and whenever you find a part that you think is particularly fine, strike it out."


Agreed, but they also complained about map() and flatMap(). I have to think that eventually every developer can understand the more straightforward monads, functors, and Either - which, along with type aliases, IMO, can make code a lot more readable.

I think the more esoteric features should be reserved for complex code, especially if it's possible that those features can prove runtime correctness at compile time and result in easily-composable modules. I wouldn't expect more than one or two developers to need to develop such a subsystem, however, so the "clever" is used in isolation.


The full comment is actually:

> No map, no flatMap, no fold, no generics, no inheritance… Do we miss them? Perhaps we did, for about two weeks.

So they weren't complaining about map and flatMap, they just missed them.


They only missed these features for 2 weeks?

Good for them! Their problem domain is likely so simple and neat that it fits Go's limited built-in types well, and does not lead to frequent copy-paste programming. If so, Scala was overkill.


Lucky bunch indeed. I'm 9 months in and still missing map(), flatMap(), let alone more advanced FP. I guess this is a matter of personal preference, but I definitely consider:

  users.filter(_.active).map(_.email)
to be easier to read (and not a pain in the ass to type) than:

  emails := make([]string, 0)
  for _, u := range users {
    if u.active {
      emails = append(emails, u.email)
    }
  }
  return email
The first style is more explicit about what you want to do, the second style is more explicit about how you want to do it. Go made imperative vogue again, but for me it feels like a throwback all the way to Turbo Pascal. Sure, the Go compiler is really fast, but Turbo Pascal would beat it hands down - on 20MHz machines with 1MB of RAM.


> Sure, the Go compiler is really fast, but Turbo Pascal would beat it hands down - on 20MHz machines with 1MB of RAM.

With more features (TP 7)!


completely agree. There is a certain beauty to the first version; once you've learned the concepts, it seems a shame to have to write the second one.


The second example is vastly more readable in terms of understanding what it is supposed to do.

I mention this as neither a programmer on either Scala nor Go.

If it is of any relevance, I've been writing code for about 20 years across C, Java, Javascript, PHP and shell scripts. I'm sure other FP developers will roll their eyes to a traditional developer like me but the one-liner you mention is simply not self-explanatory for an outsider when reading it.


"I've been speaking italian, french and spanish for 20 years. I'm sure people who speak germanic languages will roll their eyes to a traditional latin-languages guy like me, but the german sentence you mention is simply not self-explanatory for an outsider when reading it."


The second one is so readable it contains a subtle bug nobody noticed.


What 'subtle' bug ?

It returns 'email' instead of 'emails' which I assume was a typo, which of course the compiler would complain about, are you claiming that a variable name typo is somehow a 'subtle bug' ?

Only other thing I see directly is that 'users' is undefined, but this is a code snippet after all, not a working program.


I think kaoD's point is that more code will always produce more errors, and more code (in the name of "readability") can give more surface area for bugs.


But that would be caught by the compiler. The error would be variable not defined.


Out of curiosity, what specifically do you find unreadable about the first example?


As simple as a distributed, horizontally scalable SQL database, perhaps? (https://github.com/cockroachdb/cockroach).

Those dumb Go programmers and their toy programs (https://quicknotes.io/n/1XB0).


I have been programming since the mid-80s. Many of the features in modern languages weren't always available outside research-institute walls, and yet we managed to deliver all sorts of software with the tools we had at our disposal.

However, now that those features are part of the majority of mainstream languages, I surely don't want to go back to how I used to write software in the first two decades of my career.


Developers have been using C and a bunch of other imperative languages for years. They obviously created software two orders of magnitude more complex than CockroachDB (or Docker, or Kubernetes, or etcd). We used to write entire operating systems in assembly, heaven forbid. Just look at the PostgreSQL codebase (all C) and compare it to CockroachDB.

Being "a better C" was the original battle cry of Go. It's definitely better than C (well, at least for some things), but that doesn't mean all the complex C software people used to write consists of toys.

Go is obviously not a useless toy language either. But that doesn't mean you won't miss features from more expressive languages. The amount of calls for generics from Go users tell a different story.


> The amount of calls for generics from Go users tell a different story

I think it is from Non-Go users who would use Go if only it had generics.


Some of it, probably. But some are probably using Go out of necessity (your company enforces Go), or because they still think Go is their best pragmatic option, lack of generics notwithstanding.


I do competitive programming using Go and sorely miss "generics" so I'm pretty sure that your thought is not entirely correct.


Software can be complex without having very complex types


The guys who wrote CockroachDB never, afaik, tried to write it in Scala, and never told us whether they miss flatMap, generics, or something like that.

The guys from Movio did; their problem domain is likely different.


A for-each loop is only slightly longer, more familiar (just about every language has it), and more flexible - it covers map, flatMap, and reduce. So why have three or more specialized constructs when one will do?

There are cases when the restrictions of map() and reduce() are necessary, most famously for a map-reduce in distributed programming, but that's not really relevant for implementing a simple function.


Map, fold, reduce and friends are statements of intent, and reading a line of transformations composed of multiple maps and such is generally much clearer than reading a series of loops in which you have to figure out intent by looking at what's going on inside each loop.

Also, error density is relatively constant per line of code according to most studies (this is a very general statement, but we can assume that most code just isn't trying to be clever). I've forgotten to add(), append() and drop items inside a loop far more than I've made mistakes inside more functional methods.
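To make the intent difference concrete, here is a small Python sketch (hypothetical data, not from the thread) contrasting the two styles; the append step in the loop is exactly the spot that's easy to get wrong:

```python
# Hypothetical example: extract the emails of active users, two ways.
users = [
    {"email": "a@x.com", "active": True},
    {"email": "b@x.com", "active": False},
    {"email": "c@x.com", "active": True},
]

# Imperative: the reader must trace the loop to recover the intent,
# and the append inside the loop is an easy place to slip up.
emails = []
for u in users:
    if u["active"]:
        emails.append(u["email"])

# Declarative: the filter/map pipeline states the intent directly.
emails2 = [u["email"] for u in users if u["active"]]

assert emails == emails2 == ["a@x.com", "c@x.com"]
```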


I would argue that for loops are imperative, map/fold/etc are more declarative. That leads to various benefits such as less code, fewer bugs, fewer off-by-one errors.

And, man, tail recursion in a language with function head pattern matching (ML family, Erlang) is so much easier to read than any complicated for loop.

(Update: realized afterwards that "foreach" isn't quite the same as "for", but the larger point still stands, mostly.)


No it doesn't, because map, flatMap, and reduce can be lazy, distributed across local processors, or even the network.

They describe the intent of what to do with the data, not how it is done.

Just like using SQL versus an old xBase database to access data records.
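For what it's worth, Python 3's built-in map() illustrates the point: it is lazy, describing what to do with the data rather than doing it. A small sketch (the expensive() function and call log are hypothetical):

```python
# Demonstrate that map() describes work without performing it.
calls = []

def expensive(x):
    calls.append(x)   # record each actual invocation
    return x * x

m = map(expensive, [1, 2, 3])
assert calls == []    # nothing has been computed yet

result = list(m)      # forcing the iterator runs the work
assert result == [1, 4, 9]
assert calls == [1, 2, 3]
```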


And a goto statement is even more general, covering all kinds of looping constructs, exceptions and even the eliminating the need for functions! Why not just use those?


For loops are yet another of these useless abstractions introduced by the academia, this time straight from the design-by-committee language ALGOL 60.

We all know real programmers used gotos.


Technically, reduce covers map, flatmap, filter, etc. It's every bit as flexible as a for-each loop (the only difference is that it implies no side effects, though it does not guarantee their absence if the language doesn't).

The very reason those other functional constructs were split out was not need but -clarity-. And that same clarity is the reason to use one of those constructs instead of a for-each loop.
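As a rough sketch of that generality, map and filter can be derived from reduce in a few lines of Python (the my_map/my_flatmap/my_filter names are hypothetical):

```python
from functools import reduce

# Sketch: map, flatmap and filter expressed in terms of reduce (fold),
# showing that reduce is the more general construct.
def my_map(f, xs):
    return reduce(lambda acc, x: acc + [f(x)], xs, [])

def my_flatmap(f, xs):
    return reduce(lambda acc, x: acc + list(f(x)), xs, [])

def my_filter(pred, xs):
    return reduce(lambda acc, x: acc + [x] if pred(x) else acc, xs, [])

assert my_map(lambda x: x * 2, [1, 2, 3]) == [2, 4, 6]
assert my_flatmap(lambda x: [x, x], [1, 2]) == [1, 1, 2, 2]
assert my_filter(lambda x: x % 2 == 0, [1, 2, 3, 4]) == [2, 4]
```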


The problem with for-each in a lot of languages is that it's a statement, not an expression, so it only works with mutable data.


I'm definitely in camp map/filter/fold, but if you instantiate a list/accumulator, fill/operate on it inside the loop, then consume it, you're essentially writing pure code. The fact that you're doing it on top of mutable foundations doesn't matter (at that scale).


The trouble is that safe mutation and dangerous mutation look very similar. It's possible to write a function that performs safe, locally encapsulated mutation, sure - but it's much harder for a reader to confirm those properties in code review or when debugging compared to just seeing that the code doesn't do any mutation at all.


As long as you get your mutable code right. If there are any unintentional holes in your instantiate/operate/consume box...


There just isn't that much to get wrong in pure for-each loops:

    def filter_broken(widgets):
        broken_widgets = []
        for widget in widgets:
            if widget.is_broken():
                broken_widgets.append(widget)
        return broken_widgets
The actual errors with loops and mutable objects come from complex nested loops, continues/breaks (multi-level ones especially!), extra boolean conditions, and sharing mutable objects between parts of the code base.

I think that overstating the supposed risks of for-loops doesn't help anything. The problem is not that you're going to mess up a simple for loop. It's the ways that your code doesn't compose well that are more problematic. Oh, and the verbosity sucks too.


Hm. Wouldn't your code reverse the order of the list?

Maybe that's right, maybe that's wrong, but it's indeterminate from the code whether it's desired, whereas a filter function and a reverse function would make it explicit.


Append is a function that adds it to the end of the list. (I think my code is working Python, but I haven't actually run it).

Btw: I'd venture to guess that you're not a Python person, and that's why it's not obvious that it constructs the list in the same order. In imperative languages (where Python isn't the perfect example, b/c it has filter and list comprehensions), you get so used to these loops that you don't have to worry about questions like that.

If you wanted a reversed list, you'd either name your function appropriately, or call reverse after you run the filter method. It's a little painful to do the first bit.


Yeah, sorry, got my mental model mixed up. I worry about reversing lists a lot since I deal with TCO so much.


> So why have three or more specialized constructs when one will do?

Because having three specialized functions for three specialized operations makes it easier to see what a given piece of code is doing. One function per function.


When I was first introduced to scala my very favorite thing was the existence of map. I'd been using java for a while and missed it dearly


Not allowed to use Java 8?


Didn't exist at the time. While I still prefer scala to java 8, had the latter been available at the time I likely never would have dabbled in scala in the first place, if that makes sense


Sure it makes sense.

I dabble in all sorts of programming languages, but when it comes to work, it is mostly C#, Java, JavaScript and some occasional C++, because that is what customers pay for.

Hence I see big value in mainstream languages slowly evolving into multi-paradigm ones, as many of us don't have the luxury of moving beyond the first-class languages of each platform.


side effects ? testing ? decoupling ? (thus reuse)

If your brain is organized enough to write clean for loops then ... maybe. But it's a big opportunity for problems. At least to me; but then I may not be smart enough.


I think for tasks such as JSON parsing, map and flatmap like functions are really helpful. You write less code and the code is way easier to understand (for those who are familiar with map functions).

That being said, map and flatmap force immutability in some way. So you pay a price for this, either in speed or in memory, even with tail recursion.
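A small Python sketch of the map/flatMap style over parsed JSON-like data (the orders structure is hypothetical; Python has no flatMap builtin, so itertools.chain.from_iterable plays that role):

```python
from itertools import chain

# Hypothetical parsed-JSON data: a list of orders, each with a list of items.
orders = [
    {"customer": "a", "items": ["book", "pen"]},
    {"customer": "b", "items": []},
    {"customer": "c", "items": ["mug"]},
]

# flatMap: map each order to its item list, then flatten one level.
all_items = list(chain.from_iterable(o["items"] for o in orders))
assert all_items == ["book", "pen", "mug"]

# The item names, upper-cased, via a plain map step on the flattened result.
labels = [item.upper() for item in all_items]
assert labels == ["BOOK", "PEN", "MUG"]
```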


I'd much rather deal with concise, clever code that has a sane interface and works, rather than sprawling long winded code where everything is void and the same low level constructs are used everywhere.

And honestly, sometimes I don't know if code is too "clever" or the reader is just too "dumb".


>And honestly, sometimes I don't know if code is too "clever" or the reader is just too "dumb".

IMO, the answer is that people vastly underestimate the cognitive cost of reading code. "Code" rarely exists in a vacuum; the reader has a universe of inputs, outputs, problems, solutions, failures and goals. The cognitive cost of trying to read someone's code is something that must be paid multiple times per day, and at some point you have to wonder: is the gain in expressiveness for a single programmer worth the cognitive cost to the multiple programmers who have to parse that code?

I'd imagine the answer is no - especially for companies that spend more time iterating and patching based on customer feedback than actually designing and architecting systems. My belief is that expressive languages have their place but companies are far more likely to build their software in a "patch-test-iterate" environment which favors less expressive languages.


I'll keep repeating myself: the cognitive cost of "code" is dwarfed by the cost of understanding an application. If a couple esoteric language features take an extra few hours to learn but reduce LOC tenfold, it's a price I'd pay every time.

Not everyone feels this way, the investment to learn these features may not pay off right away, and if you come from a "move fast and break things" language (python/php/js) it can feel like a waste of time.


> If a couple esoteric language features take an extra few hours to learn but reduce LOC tenfold, it's a price I'd pay every time.

The issue is that a beginner preconditioned long enough on "verboser, imperativer" models/languages has, when reading such a codebase, to literally mentally expand every 1 line into 10, for quite a while (some weeks, some months) as a more intuitive grasp slowly sets in. That is probably what happened at the company quoted in the OP, with the different speeds at which its coders comprehended the "clever" (compact) code some of them came up with, what they flippantly called "code that was harder to understand by others".


If a couple of esoteric language features can consistently reduce your codebase tenfold, you might want to rethink how you engineer your apps. Unless you're writing your application in FORTRAN (and I mean the version before 1958 and the introduction of procedural code) or assembly, I doubt there are any two general-purpose languages that show a 10x difference in code size in large applications.


I can't upvote this enough. If the line count is low and the interfaces are great, I'll forgive all manner of one-liners.


I totally agree. Plus, it keeps the job interesting.

A language and its expressiveness are important. Babel-17 is a must-read.


> And honestly, sometimes I don't know if code is too "clever" or the reader is just too "dumb".

There are two problems with this argument. First, brilliance in code is not a property of what linguistic abstractions the code makes use of, but the elegance of the algorithm and engineering. I sincerely doubt that no brilliant code had been written prior to the introduction of your favorite linguistic abstractions. Second, even supposing you're right, what difference does that make? If your code's maintainers are dumb, do you think that you could change this reality by making life harder for them, or is it your job to adapt yourself to the reality of the system you're a part of? When creating something for dumb people to use, is it a sign of good craftsmanship to make it harder for them to work with?


Welcome to Haskell ... where clever people write frameworks that even dumb people can't break ;-)

e.g. https://code.facebook.com/posts/302060973291128/open-sourcin...


And honestly, sometimes I don't know if code is too "clever" or the reader is just too "dumb".

This is not an absolute, and needs to be answered in the context of the question: "Does my team work?"


> And honestly, sometimes I don't know if code is too "clever" or the reader is just too "dumb".

If you ever wonder this, then it's definitely a problem with your code and not the reader. Code that cannot be easily read and understood by others is worthless in any commercial setting.


If you're too numb or stubborn to learn the tools that your programming environment provides for writing higher-quality code, I have little sympathy, and would suggest that such a person find another line of work.

If, for instance, someone were writing C# and insisted on hand-rolling for loops in all cases[1], because LINQ is "too hard", that's them being lazy and unwilling to invest a tiny amount of effort in learning more effective methods.

[1] There are some cases where it's more performant to use a plain foreach or bare for, but unless you're in tight loops or dealing with ginormous collections, it's a premature optimization.


Or working in game development with low latencies to achieve high frames per second.


I don't agree that all languages are created equal in this regard. I think that there are cultural norms and expectations that come with certain languages that make them more or less susceptible to "cleverness."

For instance, python has never had the problems that ruby or perl had.


I don't agree that all languages are created equal in this regard.

I never said that. I'm only asserting that no language is completely immune, because hubristic cleverness is intermediate to stupidity and genius.

I think that there are cultural norms and expectations that come with certain languages that make them more or less susceptible to "cleverness."

Cultural norms can indeed help or hurt.


I've seen Ruby-like Python. Just because Ruby makes it easier to do metaprogramming doesn't mean you can't do it in Python.


Of course it's not impossible.


As the saying goes: it's easier to write code than understand it. Therefore if you write code as cleverly as you can, then by definition you're not smart enough to understand it...


For those that like trivia, the original quote I believe is by Kernighan and it's about debugging code rather than writing it, which I feel makes more sense:

"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"

https://en.wikiquote.org/wiki/Brian_Kernighan


I sometimes optimistically believe that my cleverness will increase over time.


Yes but occasionally there's no other option. I once had to code a parser for a data format with an odd non-BNF grammar with a bunch of special cases. In order to meet the functional requirement for user-friendly error reporting when parsing invalid inputs I was forced to write really clever (in a bad way) code. Fortunately we haven't found any serious defects in it because I don't think I understand it well enough to debug it.


It is okay to write clever code if necessary. Solutions to some problems require it.

Clever code for the sake of being clever is rarely worth it.


This is one of the extremely cool things about Python. There's always one right way to do it, and it's usually pretty obvious. There are also quite clear, standard style guidelines, so code is generally formatted pretty much homogeneously, and tends to be a lot more readable than in many other languages.


Per the Zen of Python, there should always be exactly one sensible way to do something, but unfortunately this is not often the case. List comprehension can be achieved just the same with functools, itertools, explicit for loops, etc.

That said, I do tend to have that mantra a little more present in my head when I'm working with Python.
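As a quick illustration of those equivalent spellings, here is the same even-number filter written four ways in Python (a hypothetical example):

```python
from functools import reduce

# The same filter, four ways: despite "one obvious way to do it",
# Python happily accepts several equivalent spellings.
xs = [1, 2, 3, 4, 5]

a = [x for x in xs if x % 2 == 0]           # list comprehension
b = list(filter(lambda x: x % 2 == 0, xs))  # builtin filter
c = reduce(lambda acc, x: acc + [x] if x % 2 == 0 else acc, xs, [])  # fold
d = []                                      # explicit for loop
for x in xs:
    if x % 2 == 0:
        d.append(x)

assert a == b == c == d == [2, 4]
```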


yep... making code 'understandable' is really important.

But given that discipline, scala would outshine go since scala has much better typechecking than go - e.g. http://getquill.io/ can typecheck against a running DB schema, etc.


It got really bad the past 10 years or so with the "look at what I can do, ma!" blog posts where someone spends 4 pages violating the language in order to get clicks. It's especially endemic in Ruby-land, I find.


I always work with a few developers that complain about my long variable names and aversion to certain shortcuts like ternary operators.

They don't understand that unclear code is probably the number one cause of technical debt. Nobody wants to waste time trying to understand it so they start to attach workarounds and it just keeps getting worse.

Some of their code is so "clever" that I've refactored the line with 7 method calls just to understand what the hell is going on.

Lambdas and fluent syntax make me quiver with fear. In the wrong hands they let you do unspeakable things


> aversion to certain shortcuts like ternary operators.

> They don't understand that unclear code is probably the number one cause of technical debt.

At the same time, verbosity can have an obfuscation quality all of its own. For simple assignment, I find a ternary operator very clear and concise, and much preferable to a 5-9 line (depending on style) if/else for a simple assignment. It also might keep you from using the single statement version of if/else if your language supports it, and that's probably justification in itself given how many problems that's caused in the past.

Specifically, I think:

    usefulMetric = wantComplexCalc ? complexCalc(foo)
                                   : simpleCalc(foo);
is preferable to:

    if ( wantComplexCalc ) {
        usefulMetric = complexCalc(foo)
    } else {
        usefulMetric = simpleCalc(foo)
    }
even if only because it doesn't obscure intent with what is essentially boilerplate.


OP is about Scala, where you write directly what you mean, without special operators:

    val usefulMetric = if (wantComplexCalc) {
      complexCalc(foo)
    } else {
      simpleCalc(foo)
    }


Well, that's only "directly what you mean, without special operators" if you come from a C style procedural background and have already internalized all the special operators you've included there, such as parenthesis and braces. Sure, that's most people, but that doesn't mean they aren't operators.


I'd rather have Python's way of doing it, which plays with the order for the sake of readability:

    useful_metric = complex_calc(foo) if want_complex_calc else simple_calc(foo)


Ruby has something similar, and I can't stand it. I think the conditional is the most important item in the phrase, and it's shoved off to the right. If you lead with the conditional, it becomes immediately apparent that the assignment is predicated on the result of a branch.


Ruby (and Python) likely get that from Perl, which has post conditionals, but with specific qualities to prevent them from too much abuse, and which also prevents them from being used in the way presented here (which is why I didn't trot them out earlier, as much as I was tempted by the "you write what you mean" line). The limitations are that there is no else branch, and it only applies to a single statement, so you can't have a block executed with a post conditional. It leads to usage like so:

    die "Invalid param: please enter a positive number" unless $param1 > 0;
    
    $param2 = 0 unless defined $param2;

    return undef if $param1 and not $param2;
    
    my $foo = 1 if $bar; # This unfortunately creates a closure around $foo and is a big source of bugs.
As much flak as Perl gets, quite a lot of thought went into making it flow similar to how people think and talk (which is no surprise if you know Larry Wall is a linguist by training). There were some missteps, but it was very early in this area, so that's expected.


Perl hung onto too many of its warts for too long to stand a chance of competing with Ruby and Python. Only recently has Perl 5 introduced real function parameters instead of unrolling @_. Flattened lists are another one, but the worst is having to specify "use 5.020;" if I'm using Perl 5.20. They've even carried this "tradition" into Perl 6, where you have to specify "use v6;" at the top of EVERY damned script. That's progress? Prefixing every variable with "my" is another one which found its way into Perl 6. Why can't an advanced language have default lexical scope?


> Only recently has Perl 5 introduced real function parameters instead of unrolling @_.

Yet there have been modules that support it for years, and with much more features than what was recently rolled out (which was meant to be conservative).

Here's[1] what I said about this quite a while ago. Named parameters with type checking (unfortunately at runtime). I've been writing Perl using different modules (Function Parameters) which use the same syntax for about six years now (for functions, not all the sugar on Moose objects).

> Flattened lists are another one

Flattened lists never cause me a problem. If they cause someone problems, I think they've never really learned what context is in Perl. Once you know how context works in Perl and had a chance to use it to good effect, I can't imagine this complaint persisting. Perl is fundamentally different than most languages in this respect, even if it looks superficially similar to more procedural languages. This is actually a cause of a lot of problems for novice users, because they assume their experience in C/Algol derivatives will map exactly, and where it doesn't people get frustrated.

> but the worst is having to specify "use 5.020;" if I'm using Perl 5.20

What? You don't have to do that. If you want to use newer features that utilize keywords which may conflict with whatever you've written or whatever modules you are using, then yes, you need to opt into those. Perhaps you would have preferred it if it silently just broke?

> They've even carried this "tradition" into Perl6 where you have to specify "use v6;" at the top of EVERY damned script.

No, you don't. If you do, and you run it in Perl 5, it will automatically swap out the interpreter for whatever Perl 6 interpreter you have in $PATH though.

> Prefixing every variable with "my" is another one which found its way into Perl6. Why can't an advanced language have default lexical scope?

The requirement to declare your variables is not because it's not lexical by default (it is lexical by default, as you can see with "no strict"). It's strictness which is enforced, which the Perl community has found to be vastly preferable to automatic instantiation of variables, because it prevents bugs and a lot of confusion. You have to declare your variables because the Perl community found that a saner default.

1: https://news.ycombinator.com/item?id=11633961


Agreed, plus obviously your "if" statement doesn't do the assignment to usefulMetric. One more way the ternary wins (along with functional languages that use "if"s as expressions).
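Since the thread is about Scala, here's a hedged sketch of that last point (simpleCalc/complexCalc are invented stand-ins, not from the article): `if` is an expression, so the value flows out of both branches and the "forgot the assignment in one branch" bug can't compile.

```scala
// Hypothetical helpers standing in for the calculations in the example above.
def simpleCalc(foo: Int): Int = foo + 1
def complexCalc(foo: Int): Int = foo * foo

// `if` is an expression: both branches must produce the value,
// so there is no assignment to forget in one of them.
def usefulMetric(wantComplexCalc: Boolean, foo: Int): Int =
  if (wantComplexCalc) complexCalc(foo) else simpleCalc(foo)
```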


a "final" declaration (in Java) would have pointed that out through a compilation error ;)


D'oh! Fixed. Thanks. :)


Unfortunately, ternary operators eventually end up like this due to refactoring blindness:

  usefulMetric = wantComplexCalc ? 
                   (complexity > 40 ?
                     superComplexCalc(foo) :
                     regularComplexCalc(foo)) :
                   simpleCalc(foo);


At some point you have to rely on policy and not language constraints. I submit that no language is constrained enough to protect against refactoring stupidity while also being flexible enough to be useful to the average programmer on the average project. If not ternary if, it will be something else. So, do you throw out every alternative method to accomplish the same thing, or do you put policies in place to keep the code sane, such as "no chained ternary operators are allowed" ?


In all honesty, I prefer having rules that have no special-case 'unless' issues. It's too much effort/trouble to remember all the cases where things don't work. I'm a good engineer but a terrible compiler.

I believe part of learning a new library/framework/language is to limit yourself to a certain subset of the API offered. After working with Ruby (the language) and Javascript (the ecosystem), I feel like that's the only way to preserve your sanity and productivity. I don't need to know 4 different ways of creating a lambda in Ruby, selecting 1 that can express the other 4 is good enough.

---

In this case, the rule would be no ternary operators, since they work well unless you nest them or unless you make them long/complicated.

Other examples -

You don't need to wrap if conditions unless you have a multi-line body:

  if (myCondition)
    x = 42;
    y = 23;
Early returns simplify short circuiting logic unless your function becomes too long:

  if (myVariableAtBeginningOfFunction) {
    return true;
  }
  ...
  // 2 screens later
  ...
  if (x == 42) {
    return false;  // why am I not getting false?!
  }
Using a variable as a conditional in javascript to test against undefined works well unless the value can be falsy:

  if (person.isStudent) {
    showSchool();
  }

  if (person.age) {
    showBirthCertificate(); // what if age is 0?
  }


> the rule would be no ternary operators

Why not "no nested ternary operators"?

> You don't need to wrap if conditions unless you have a multi-line body

Why not "keep unwrapped if conditions on a single line"?

> Early returns simplify short circuiting logic unless your function becomes too long

Why not "keep functions short"?

> Using a variable as a conditional in javascript to test against undefined works well unless the value can be falsy

Why not "only use conditionals on boolean values"?

I'm not saying your rules are right or wrong, I actually follow a couple of them myself, but your wording implies that other people are simply not following rules, or their rules have a lot of nuances and special cases, but the reality is more likely that their rules are different.

Ultimately we all make different connections and form different patterns in our head. As long as a team can agree on a code style, within a few months everyone starts developing the same cognitive patterns.


It's because it's too easy to lose some of these nuances during refactoring/development blindness. I'd go so far as to say it's inevitable.

If you come into a 3 year old codebase and during the first 2 weeks you need to add extra functionality to a 30-line function with an early return, are you going to refactor the early return? Or are you going to extend it into a 32-line function? What about the new hire after you?

Alternatively, your team has decided to embrace the "only use conditionals on boolean values" philosophy. You're working with a section of code that reads `if (myVar)`. It's been 3 hours, and you don't understand why the code's not working. Suddenly, you realize that at some point `myVar` was refactored from a non-nullable boolean to a nullable number, and someone missed changing this.

And the biggest offender yet - code that is grouped within a file into 'logical sections'. I've never seen this work out. What is a logical grouping for you is a confusing pairing for me. Or maybe it's that I can't immediately grok all 2000 lines of a file I've never seen before, and know where to place the method. This madness around code location is one of the quickest ways to code rot.

---

The perplexing thing to me is that these situations are completely preventable.

If you don't use early returns, scenario 1 won't happen.

Scenario 2 won't happen if you use real comparisons e.g. `if (person.age !== undefined)` (Similarly, `if (person.age != null)` breaks when null and undefined start meaning different things...)
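For what it's worth, since the thread is about Scala: the analogous fix there is to model absence explicitly with Option rather than a sentinel, which sidesteps the falsy-zero pitfall entirely. A hypothetical sketch (Person and birthCertificateNeeded are invented names):

```scala
// Age may legitimately be absent, so it is an Option rather than a
// nullable or sentinel Int. Some(0) is a present age; None is missing.
case class Person(name: String, age: Option[Int])

// The emptiness check is explicit, so age == 0 cannot be mistaken for "no age".
def birthCertificateNeeded(p: Person): Boolean = p.age.isDefined

val newborn = Person("Sam", Some(0)) // age 0 is still a present value
val unknown = Person("Alex", None)
```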

And lastly, a canonical alphabetical/visibility ordering for methods in a file of any length is unambiguous. I don't care what the order is, as long as there is a canonical order.

---

I understand that other teams have their own rules. It's no trouble at all to adjust to things that are purely syntactic differences. But when the rules that are chosen hide lurking semantic pitfalls...I don't know why you'd risk shooting yourself in the foot.

A lot of my strong feelings on code style come from the book Code Complete. I highly recommend that to everyone who hasn't read it. It's filled with examples of confusing/broken code you might inherit, and teaches you how to avoid creating it yourself.

Edit: looks like we hit the HN thread depth limit. Happy to continue this over Twitter, check my profile.


That's just laziness on part of the refactorer. At that point, you need to use an outer if-else statement. Ternary operators are confusing when nested.
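A sketch of that outer if/else, in Scala and with assumed stand-ins for the calc functions from the example upthread:

```scala
// Invented stand-ins for the three calculations in the nested ternary above.
def simpleCalc(foo: Int): Int = foo
def regularComplexCalc(foo: Int): Int = foo * 2
def superComplexCalc(foo: Int): Int = foo * 3

// The nested ternary rewritten as a flat if / else if / else chain,
// which stays readable as conditions are added during refactoring.
def usefulMetric(wantComplexCalc: Boolean, complexity: Int, foo: Int): Int =
  if (!wantComplexCalc) simpleCalc(foo)
  else if (complexity > 40) superComplexCalc(foo)
  else regularComplexCalc(foo)
```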


what about:

  usefulMetric = wantComplexCalc == false ? simpleCalc(foo)
               : complexity <= 40         ? regularComplexCalc(foo)
               : /* else */                 superComplexCalc(foo)
               ;


Doesn't look much better.


IMHO it's the wrong approach. Every programming language, just like the spoken ones, has its common shortcuts and idioms. The fact that they're commonly accepted and used is what makes them easy to understand. Your brain learns to recognize them quickly, often much quicker than the long version. With newbies and programmers who switched from other languages, the problem is that their brain is just not yet trained to do that efficiently. Instead of investing some time into getting used to the peculiarities of the language that they use, they try to avoid them as "complicated". By setting the bar too low, and avoiding these patterns altogether, you encourage people to never train their brains to recognize them effortlessly. And by definition of common patterns, they're, well, common, and they'll keep running into them all of the time. Also keep in mind that you're probably bothering others, more skilful ones, with unnecessarily verbose code which is to them harder to quickly scan through.

I'm not saying that one should go crazy with one-liners or uncommon patterns, but things like ternary operators used with reasonably short expressions in a single line of code are totally valid and should be readable to any average dev out there.


Great comment.

Particularly

> Also keep in mind that you're probably bothering others, more skilful ones, with unnecessarily verbose code which is to them harder to quickly scan through.

I stopped contributing to one Powershell repository because author thought that ps is hard and he wanted Get-Process. I put a "i am the greates babysiter meme" in PR and that was considered very disrespectful


> I put a "i am the greates babysiter meme" in PR and that was considered very disrespectful

Not sure I can think of too many situations where it would be otherwise.


In general context yes, but if you tell your top contributor that besides doing full day job work for entire year for free (while main author also has commercial offering) he should also babysit "dumb" users (making entire job not fun) and you get the tip multiple times that such behavior will alienate him from the project, you can be sure there is a way better approach to project management. Since I left it, the PRs and issues that nobody looks at started to pile up (I kept both at almost 0) which is extremely important given that project relies on constant PRs and reports by the community.

Here is the meme:

http://content.randomenthusiasm.com/d4ZEVg4VB.jpg

Not exactly profane I would say but what do I know ...


Not profane, but condescending nonetheless and not appropriate in a professional/code review setting.


It's a FOSS setting, not a professional one. Being a jerk to people who do excellent stuff for your project for free is far from appropriate in any setting, on the other hand.


Professionalism doesn't get left at the door at 5:00.


>I put a "i am the greates babysiter meme" in PR and that was considered very disrespectful

Sounds like it was disrespectful too.


Your underlying assumption that everyone working on the code will be skilled is wrong in any large team. It's not like it takes a lot longer to read a 3 line if statement than a ternary operator.

Terse code isn't much faster to read; the difference between a 300 and a 400 line file isn't significant.

Your attitude of "he isn't 1337 enough to understand my code" is the logic that leads to no comments and horrible-to-maintain code in the first place.


> It's not like it takes a lot longer to read a 3 line if statement than a ternary operator.

It doesn't take a lot longer to sit down and understand how ternary operators work, either. C'mon, it's not differential equations, it's just a notation, and a fairly simple one. It might take a newbie slightly more time at first to understand the logic. We've all been there once: you stop and stare at it for 15 minutes, but after a few times of deciphering it you get used to it. It's not about being 1337 (I surely hope that's not what's considered elite these days), it's about learning new stuff, and any averagely intelligent person can do it. Honestly, would you really hire someone who is not capable of teaching him/herself (in a reasonable amount of time) how to read a ternary operator? What programming would that person be capable of doing in the future?


One person's clever is another person's clear and vice versa. These conversations are pointless as there is no objective truth on code clarity


This is not really as subjective as you think. Research in software engineering shows that certain structures are more prone to errors than others. We know that higher cyclomatic complexity leads to more bugs, more statements lead to more bugs, and certain usage patterns lead to more bugs.

Smart people can disagree, undoubtedly, but there's a reason why GOTO is considered to be brain cancer and pattern matching is generally considered to be great.
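To make the pattern-matching side of that claim concrete, a minimal Scala sketch (invented Shape types): with a sealed hierarchy the compiler checks that the match covers every case, which is part of why it's considered less error-prone than an ad-hoc chain of conditionals.

```scala
// A sealed trait means all subtypes are known to the compiler,
// so non-exhaustive matches are flagged at compile time.
sealed trait Shape
case class Circle(r: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
  // Omitting one of these cases would draw a compiler warning.
}
```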


Sure but I'm talking about things like "break things down into lots of small functions to make things more readable!" vs "keeping code together makes it easier to read!" or "make things verbose so it is easier to read!" vs "conciseness makes code easier to read!"

On a lot of those I know what makes code easier for me to read. It's not the same as some of my coworkers. Based on some of your phrasing I suspect we'd agree on a lot of them, fwiw

When the metric used is "easier to read" it becomes far too subjective IMO, things you mentioned are similar but not quite the same


In my experience, very experienced programmers end up converging towards very similar idioms: terse expressions for common patterns, clarity when the domain is complex through verboseness if necessary, and just keeping things as simple as possible unless there's evidence that complexity will reduce technical debt in the future.

I don't really see highly competent devs doing the whole J2EE architecture astronautics anymore, nor using single-char variable names. There's a tendency to write things concisely when simple, and then moving the complexity away to some other place, stashed in its own function, when it reaches a certain mental threshold of complexity. There's a tendency to use the best features languages have to offer, maximizing simplicity through orthogonal features and repeated idioms, while discarding unnecessary cruft; one of the marks of junior devs is their desire to try to fit problems into new idioms just to test out language features or strange design patterns.

Given that human intelligence is fluid but the variance just isn't that high (after all, we all have a similar amount of working memory), common design practices emerge out of this understanding of our limits for reasoning about problems. Exceptions abound in extremely technical and complicated problems (just check out non-trivial linear algebra code or bit-flipping, low-level device drivers), but for the most part it just sticks out how things are made to look simple within a finite range of tradeoffs. This has been my experience in my domain of expertise; look at most current web development frameworks and they respect common patterns even in radically different languages. They are really about the essence of the request-response cycle, not made-up constructs of additional complexity or a restating of the problem.


> I don't really see highly competent devs doing the whole J2EE architecture astronautics anymore

Yes we do, because the customers and their in-house architects decide how it is going to be, not us.


> nor using single-char variable names

My pet hate.


We had a policy at my last place that any SQL joins alias the table name with a single character alias. The rule resulted in the most insanely confusing stored procedures I've ever seen. Whoever came up with that is a complete idiot


It's a judgment call whether small functions are more readable than cohesive code.

If the 'idea' of the code isn't easily broken down into abstractions, even mentally speaking, then small functions will just obscure what's actually going on by pointing out all the implementation details that are wound together.


I think the point is that there's a wide range of styles that are readable, but the condition for readability is also related to the skills of the coder to clarify. And that range is ample but also finite.


> but there's a reason why GOTO is considered to be brain cancer and pattern matching is generally considered to be great.

Look at any large C codebase and you will see plenty of goto statements to manage resource cleanup. The problem is using goto in place of structured control flow like loops and if/else. Statements like yours make it seem like there is a conceptual problem with a jump.


Sure there are some cases in C where you want a particular control flow that the language doesn't allow, and goto is the best solution in those cases.

But all the examples of this that I've seen are still structured; it's just that the language isn't able to express that structure. The most common examples are jumping out of nested loops (better solved by allowing named loops so you can use break/continue) or jumping immediately to error handling logic (better solved by exceptions, or even just simplistic try/throw/catch).

I do think there's a conceptual problem with arbitrary jumps.
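A hedged illustration of that "still structured" point in Scala (findPair is an invented example): the language has no labeled loops, but scala.util.control.Breaks gives a scoped, named way to exit a nested loop, which keeps the jump structured rather than arbitrary.

```scala
import scala.util.control.Breaks

// Find the first pair summing to `target`, exiting both loops on success.
// `outer.break()` can only jump to the matching `outer.breakable` block,
// so the control flow stays structured, unlike a free-form goto.
def findPair(xs: Seq[Int], target: Int): Option[(Int, Int)] = {
  var result: Option[(Int, Int)] = None
  val outer = new Breaks
  outer.breakable {
    for (a <- xs; b <- xs) {
      if (a + b == target) { result = Some((a, b)); outer.break() }
    }
  }
  result
}
```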


That's why I'm a proponent of code ownership. We should be coding to each other's interfaces instead of constantly poking around in the same shared codebase. It just leads to pointless re-writes and low quality - a tragedy of the commons.

https://www.visualstudio.com/en-us/articles/devopsmsft/code-...


Unfortunately sole code ownership increases the bus factor.


Why? The code is still there even when a person leaves.


Code is really hard to work with without the theory that surrounds it, and is alive in the people who worked on it.

Peter Naur - Programming As Theory Building http://pages.cs.wisc.edu/~remzi/Naur.pdf


Legacy shitcode is legacy shitcode. If it's owned by someone at least we can push for a sane interface, which is better than the abandoned communal messes I encounter in the real world.


?


I'm not sure if this is true.

Scala provides a whole new level of ability for people to write code that can be meaningless to others.

It really is quite different.


And that sort of Scala code is extremely meaningful to other folks, which captures my point. I see Scala code all the time that would give me an instant headache, but there are people who would find it more readable. To each their own; the key is to work with people who are at least somewhat aligned with your sensibilities.


Fair enough - but I'll offer this:

+ Any decent developer can read decent code in Java or whatever normal language and get along just fine.

+ Only a few people can deal with Scala - and even fewer if there's a lot of specific project Scala weirdness used in a particular program.

So sure - among a narrower set of 'Scala friendly' developers, and possibly within that even narrower set of people familiar with the 'Scala weirdness' of a particular project - those people can 'get along fine'.

The problem is that this can be a pretty narrow set of people.

Scala would have to represent a pretty big advantage to justify its general weirdness as something to bother with.

I don't think it does - hence the 'de-adoption' by various entities.

My gut tells me it's past the threshold - the 'extra power' Scala offers just isn't quite worth its weirdness for most things, and so most devs won't learn it ... and so then it becomes less valuable from a business perspective.

It's possible we may have hit peak Scala.

We'll see I guess.


That's not what you said. You said others, not most people. The most important aspect is who you surround yourself with. For instance the scala folks at Verizon basically live in the zone you're talking about and it is fine for them even though half the time it makes no sense to me


> They don't understand that unclear code is probably the number one cause of technical debt. Nobody wants to waste time trying to understand it so they start to attach workarounds and it just keeps getting worse.

True, but once you get into the length of variable names in iOS and Android development, you're in a whole new territory. 38-character variables have no place in life.

And extremes like:

outputImageProviderFromBufferWithPixelFormat:pixelsWide:pixelsHigh:baseAddress:bytesPerRow:releaseCallback:releaseContext:colorSpace:shouldColorMatch

This is 149 characters.


The thing is that shorter names do not help in this case either...


This is a developer problem, not a language issue; I've seen "nifty" code in Go just as well.


The design of a language/environment can sometimes exacerbate this particular developer problem. Because so much of the power of Smalltalk lay in its powerful debugging, weird proxy stuff had a potent impact.


Some languages make it easy to shoot yourself in the foot with nifty. I found Perl and Scala to be in that camp.


Some languages have footguns and constructs that are error prone.


Scala is the latest whipping boy(1). It's a great language with tons of warts, but it actually acknowledges the warts.

Case in point, when (2)Paul Phillips went after Scala (mentioned in the article), Odersky took some of that criticism to heart for the next iteration/rewrite of the Scala compiler. In an industry where everyone doubles down, that's extremely refreshing.

Scala's cognitive footprint can lead to misbehaving programmers, but clean Scala has its own elegance if the cuteness is avoided. The slowness I'll give you though :(.

1) I'm old enough to remember this language CoffeeScript that "really really sucked". And then all of a sudden, people were using ES6/TS with parameter destructuring, classes, lambdas, but yeah, CS was the bad guy.

2) Despite his bellyaching, Paul Phillips never really left Scala the language (check his commit log), just the compiler team & lightbend.


"Old enough to remember CoffeeScript" sounds funny to those old enough to remember UCSD Pascal on Apple ][ or RPG2 on IBM System 34's.


Well Coffeescript was refreshing in being very terse and introducing a lot of niceties that were missing in JS. It was also trivial to introduce footguns given its whitespace rules and some pretty radical syntax rules.

New Javascript has basically taken the best that CS had and wrapped it in the necessary turd that is backwards compatibility in a hastily designed language, but the results speak for themselves and modern JS is just much more pleasurable.


Sorry if my footnote's sarcasm was unclear. I really liked coffeescript, I still do. It never had the support it needed to succeed but was a decent enough improvement to be worthwhile. Plus being included in the rails Gemfile was the shot across the bow transpilers needed.

Typescript/ES6 took a lot of the goodness from CS, but I wish they took more. Hell, I wish the typed coffeescript became a thing.


Yes, the frustrating thing was that people previously praised the language _despite_ its obvious footguns. A language where 'dropping parentheses where you can' is idiomatic is a disaster waiting to happen.

I don't know about you, but I never want to be holding a footgun, ever.


CoffeeScript worked pretty great when using it on solo projects, because I could use the bits I liked (which were very nice) and ignore all the footguns that I didn't like. The problem as such was that any CoffeeScript I wrote turned out not to be all that idiomatic.


Very eloquently put, I wholly agree


CS really botched variable scope IMO, and the whitespace rules made it a bit of an error-prone word-soup language.

The rest was pretty great.


Some experience to share: I studied Scala and FP on the side before jumping to a team that was using it in production. Most of the engineers on the team have an enthusiasm to learn about and use fp.

Bi-weekly we have a book club where we take turns presenting a topic from Functional Programming in Scala, Functional Reactive Domain Modelling, and others.

We program as simply as possible but when a new technique is discovered we go ahead and use it after it's been presented to the team and everyone is comfortable with it.

all code is reviewed and unreadable code does not pass

Compile times haven't been an issue. As an example a 100k line Scala program with around 900 files takes around 2 minutes to rebuild whilst incremental changes are immeasurably fast. Reloading code while a local server is running is easy by default in IntelliJ.

Using worksheets for playing is often useful.

we don't use actors where streams would make more sense and vice versa, know the purpose of your tools

I've had bad experiences with Go. I know that for the application I'm working on it would not scale to 10 programmers working and constant refactoring due to business goals changing.


I haven't tried Go myself and I've been meaning to, but I have spent the last 3 years writing Scala code. Before learning Scala I didn't have any real experience with FP, and I think that was the real learning hurdle for me. Once I learned the FP ideas, Scala became my preferred language. I think it's really funny, but Java and C# have been becoming more Scala-like, and to some extent the latest versions of JavaScript are also starting to become Scala-like. What I have come to accept is that Scala does take people time to learn, but it's not typically Scala the language, it's the FP aspects of it that are the stumbling blocks.

PS. I also work on a 100k Scala program and can go from clean to compiled in less than 2 minutes, and incremental compiles are extremely fast.


Yeah, this is very similar to my experience.

If you don't have code reviews, and don't have any sort of regular teaching sessions that push people towards similar conventions, then I can see how code can become unreadable. But we really haven't had that issue w/Scala at all.


What magical place do you work that allows for this level of engineering quality?

> all code is reviewed and unreadable code does not pass

Sounds quite nice.


Thorough code review is a great process that should be used in every company.


Keyword: Should


I work at a mobile games company called IGG


> I studied Scala and FP

> Bi-weekly we have a book club where we take turns presenting a topic from functional programming in Scala, functional reactive domain modelling and others.

I think this is precisely why a former eng director at Twitter is quoted as saying he'd do Twitter in Java if he started over again. You have to spend a lot of time studying Scala. This is a no-go for any big org that wants to be efficient at horizontally scaling up engineers.

It works great for small teams or orgs that don't need to hire engineers like crazy.


Looking through the really abstract Scala code they linked brings up a problem that really frustrates me in Haskell. Why doesn't anybody document their really abstract code? You know it's going to be confusing, so why not help out? If I have a type like

    def foreignKey[P, PU, TT <: AbstractTable[_], U] ...
It's not sufficient to document the function's arguments. You also need to document the type variables! Likewise in Haskell with code like

    f . g x . (h (i x) $ j y z) . k $ aNamedVariable
It really isn't so hard to refactor that into

    let descriptivelyNamedFunction = (h (i x) $ j y z)
        anotherDescriptivelyNamedVariable = descriptivelyNamedFunction . k $ aNamedVariable
     in f . g x $ anotherDescriptivelyNamedVariable
It's much larger, but in certain places it prevents so many headaches. It's great that you can put stuff inline, but both communities seem super lax about accepting code that is the opposite of self-documenting.

edited for clarity, i hope


I mostly agree with you, but I also think a big part of what Haskell-like FP shows is how often your code is invariant to so many things that the variables no longer have any sense to them whatsoever.

That doesn't really excuse badly named variables when that doesn't hold. It's more that it's something sort of novel to a lot of Haskell-like programmers and so we all get excited about it and probably overdo it somewhat. But on the other hand, I think it's very well-justified often enough.

For instance, with

    class Functor f where
      fmap :: (a -> b) -> (f a -> f b)
there are many words you could give to f, a, and b but they're essentially all misleading.

With another example from the article, Strong Syntax, the "Syntax" bit is basically a convention in scalaz that ought to be immediately obvious if you're familiar with the scalaz library. The "Strong" bit has to do with a subtype of "Profunctor" structures, "Strong Profunctors".

Profunctors are generalizations of functions that show up all over the place. Strong profunctors are profunctors which can "distribute over a tuple".

Giving names to these types is an exercise in futility. At least with a function `a -> b` might be named `in -> out`, but with a Profunctor there isn't even that intuition. It's just too general. Subsequently, it shows up all over the place.
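To make "distribute over a tuple" concrete, here's a paraphrased Scala sketch of the Profunctor/Strong shapes (simplified signatures, not scalaz's exact traits), with the canonical instance for plain functions:

```scala
import scala.language.higherKinds

// Simplified Profunctor: contravariant in the first parameter,
// covariant in the second.
trait Profunctor[P[_, _]] {
  def dimap[A, B, C, D](pab: P[A, B])(f: C => A)(g: B => D): P[C, D]
}

// Strong adds `first`, which threads an untouched extra value through --
// the "distributes over a tuple" property mentioned above.
trait Strong[P[_, _]] extends Profunctor[P] {
  def first[A, B, C](pab: P[A, B]): P[(A, C), (B, C)]
}

// The canonical instance: the function arrow.
val functionStrong: Strong[Function1] = new Strong[Function1] {
  def dimap[A, B, C, D](pab: A => B)(f: C => A)(g: B => D): C => D =
    f andThen pab andThen g
  def first[A, B, C](pab: A => B): ((A, C)) => (B, C) = {
    case (a, c) => (pab(a), c)
  }
}
```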


> At least with a function `a -> b` might be named `in -> out`, but with a Profunctor there isn't even that intuition.

What about:

  dimap :: (newIn -> in) -> (out -> newOut) -> p in out -> p newIn newOut


With profunctors there's not necessarily anything going in or out, and especially not necessarily any notion that the thing going in produces a thing going out.

That's all roughly true with one kind of profunctor, a function arrow, but not true in general.

For instance,

    data Counterexample a b = Cx (Set a) b
is (very, very, very nearly [0]) a profunctor, but a isn't necessarily "going in" and b isn't necessarily "coming out".

[0] It's a profunctor if a is finite. You can handle the infinite form by writing it as `data Cx a b = Cx (a -> Bool) b` which is equivalent to what I wrote when `a` is finite... but it also makes it a little easier to pretend that a is "going in" even if that's a bad intuition.


I'm not sure I see how the intuition is bad here? What would the lmap implementation for your (Set a) example be if not equivalent to the (a -> Bool) case? And how is that not an example of data "going in"?

More generally even if there are Profunctors for which this analogy isn't perfect, I feel like the intuitive type names are still more useful than random letters. Especially for a relatively advanced concept like Profunctors, for which I expect pretty much all users to be comfortable with the idea that a type variable does not necessarily imply that concrete values of that type will exist.


It is equivalent to the (a -> Bool) case. In Haskell at least, all "negative type parameters" are ultimately generated by the left side of a function, but this is more a concern for how ideas are modeled (perhaps partially) in Haskell than a fundamental one. If you're writing code that's

    forall p . Profunctor p => ...
then `in` and `out` aren't generally valid.

I definitely hear the argument that ergonomically it might be a good idea to use a sort-of-appropriate model to drive better terminology... but at the same time I think there are drawbacks. I think it helps the early part of a learning curve but then hinders the latter part. An expert doesn't care what they're called since they're just mentally erasing the names as appropriate anyway. A non-expert will try to carry the metaphor further than it can go and gets stuck.

This is why there's endless debate about calling Monoid "Appendable" or something like that. For the commonest cases that's the right idea... but the first time someone questions why there's an Appendable instance for Bool (two, actually!) you're fighting with your own inappropriate metaphor.

"Oh, Appendable actually means Monoid but we didn't want to say that straight up."


Would descriptive type names help?

    def foreignKey[PNamedType, PUNamedOtherType, TTlongTypeDescription <: AbstractTable[_], U] ...
I know they help a lot in c#.


This is what I really hate about functional programming (especially Haskell): it's almost as if the more obscure and unreadable your code is, the more competent you are deemed to be.


It's like someone's run code through an obfuscator before committing.


What do you mean "document the types". They're type parameters... can you suggest better names for the parameters in StrongSyntax?


As a small example, let's say I have a typed "agg" function with associated typeclass:

    class TypedAgg t where
      agg :: t f r a b -> (r a -> b) -> f (r a) -> f b
      ...
To lots of people, that's going to be really confusing. It might be easier if I document that f is supposed to be the type of the table, r is the type of the row, and a is the type of elements in the rows, if that's the intended usage.


But I'm saying, if you have a Higher Kinded Type like Strong, what do you name the variables? They don't refer to actual nouns. We're abstracted from that level.


In my example, t, f and r are HKTs. Does that clear up what I mean at all? Strictly speaking in my example, f (r a) is the type of the table, but it's still illustrative to say that f is the type of the table, or say that it's probably a functor or at least similar to one.

Specifically with Strong, I'm not sure what I'd comment, as I'm not that comfortable in Scala yet and don't know what Strong is. I'm not going to go digging around and there are basically no comments in that file, which is the problem I'm talking about.


My company also has a lot of problems with Scala, both technical and non-technical ones, but so far we have managed to keep them under control:

- As for slow compilation times, incremental building helps a lot.

- IntelliJ works most of the time, and if it doesn't, a few type annotations will fix it.

- It's very hard to find people with Scala experience. We have given up on finding those and decided to train people from scratch for 2 months. I myself studied mechanical engineering in university and could code just fine after a few months.

- The language itself is very complex, so we let the more experienced programmers write the backbone/framework/library/common parts and the less experienced ones do the glue code. That way we get Scala's type safety and expressiveness without scaring the juniors. IIRC one company using Haskell did the same, and they said it was very hard to introduce runtime bugs because almost everything was caught at compile time: "If it compiles, it works".

You may ask why we go through so much trouble just to use Scala. There are many reasons (speed, type safety, etc.); one of them is that we can be immensely productive when needed: the conciseness of the language combined with a powerful type system lets us implement complicated features rapidly with very few bugs. IMHO Scala strives to combine both FP and OOP on top of the JVM, so a lot of tradeoffs had to be made. The developers have to learn a lot of concepts and have good self-discipline, but in exchange we can write fast, robust systems and even enjoy it.

(Edited for better formatting, this is my first time posting here.)


+1 for "If it compiles, it works" .... been there done that ... multiple times !!! And yes in the beginning the feeling of "it works on the first run" was just strange and it get me some time getting used to it


[Scala team lead at Lightbend here]

I'm always eager to learn how we can improve Scala, especially as we kick off the Scala 2.13 cycle (hard at work on compiler performance and standard library improvements). Email is 'adriaan.at("lightbend.com")

Regarding Scala's growth, I will leave you with https://www.indeed.com/jobtrends/q-scala.html.


I don't think that Go is a good language to compare with Scala. (I do like Go, though.) The two languages could not be more different in philosophy. Scala is maximalist - you can do things many ways, you can call java, you can have tremendously intricate types, etc. Go is minimalist - there are just enough tools to get by, and sometimes it feels like you are missing one. My experience with Scala is that you spend more time telling it what to do but not how to do it (and it is often not obvious exactly how things will be done), while in Go you have to tell it both what and how to do things, which results in longer code, and repetition, but less ambiguity. You can't add to Scala to make it more like Go - the only way to make it more Go-like is to remove from it, which is impossible.

I think a more appropriate comparison for the language would be F#, which is probably not a surprise to you. I have never used Scala professionally, so I can't give any suggestions that would improve the use of Scala for day to day programming. Years ago I was learning 1 language per year, and picked up Scala and F# that way. After completing the Coursera courses on Scala, I put together a few projects on github using it, enough to get some job feelers that ignored my "don't send me job offers." And I realized that while I enjoyed fiddling around with the language on my own, I didn't want to spend my professional time deciphering other people's Scala code, and so I dropped it. Take from that what you will - maybe I am just not cut out for it.

I do use Go professionally, although it is a minority language where I work.


Thanks for your balanced reply. Sadly, some people see it as a badge of honor to write super clever code that's essentially write-only, and Scala somehow triggers this in them :-)

We, as the Scala community, play an important role in shaping the culture of programming in Scala as one that embraces simplicity as the true elegance, maintainability and testability, friendliness and openness to criticism. The language will remain flexible (though we're always looking to remove warts), it's really up to your company culture to decide how to use it (which is different for different teams over time).

Many big players, such as Twitter, have done a great job with that (and continue to do so).


Yes, maybe some of the success stories about changing from language X to Go are actually because Go enforces behavior at the language & tooling level that could potentially have been enforced culturally at the company, but for whatever reason never was. Kind of "if you don't play well with your toys, we take them away," instead of teaching them to play well from the start. When you're just a senior developer, you probably can't change the culture, but you might be able to change the language for some applications.

There is the old jeremiad to not use technology to fix cultural problems, but when you're just a part of a much larger institution that may or may not have the ability to intentionally change its programmer culture, it can make a lot of sense to move to a language that reduces your reliance on those cultural behaviors if they are lacking.

Some of that could be addressed automatically in Scala with code standards enforcement, e.g. Scalastyle. And you can help with good practices like code review or pair programming. But in a lot of places there is no appetite for "wasting time" on stuff like that (I vehemently disagree with that kind of attitude, but changing other people's views is not easy).

The advantage of Go is that, left to their own devices, people will tend to gravitate towards more readable code, in a standard format, using standard tools that are pretty good in most situations. The entropy of having a bunch of people work on a project will then work in favor of a coherent approach and style, instead of tending to diverge into using the tools they like best in the format they prefer.



Well, it would be more accurate to compare Scala to other languages[1] that operate in the same space (OO/FP).

[1] https://www.indeed.com/jobtrends/q-scala-q-haskell-q-ocaml-q...


Or for those who don't call it "golang":

https://www.indeed.com/jobtrends/q-scala-q-golang-q-go.html


"Go" the language is tricky to search for as just the word "go", so I wouldn't read much into that. Many job ads contain the word go, but have nothing to do with go the language. e.g. "...go to our website..." "...go above and beyond..." etc.


The extra keyword 'go' that you added will return all job postings that have any occurrence of the word 'go' in their description. See for another example: https://www.indeed.com/jobtrends/q-scala-q-golang-q-descript...


I just wanted to take this opportunity to thank you and the team at Lightbend. There is a clarity of thinking and expressiveness in Scala that I haven't found in other languages. I have used Scala professionally for the past 3 years and enjoy it immensely as a language.


Some notes:

- the actor model, along with the Akka implementation, has nothing to do with functional programming; and it isn't orthogonal either, since an actor's mailbox interactions are definitely not pure, with actors being stateful and at the same time non-deterministic; in general if you place those in the same article, there's a high probability that you never did functional programming; and if you did actual functional programming (as in programming with mathematical functions), then you wouldn't want to go back to a language that makes that impossible ;-)

- Akka actors are not the only game in town for processing data; you can also use Finagle, FS2, my own Monix, and even Akka Streams. And yes, concurrency often requires multiple solutions because there's no silver bullet, and Go's channels suck compared with what you can do with a well-grown streaming solution

- Scala's Futures are not meant for "hiding threads", and 1:1 multi-threading is actually simpler to reason about, because if that fancy M:N runtime starts being unsuitable (because let's be honest, most M:N platforms are broken in one way or another), then you can't fix it by choosing a more appropriate solution without changing the platform completely

- Your devs are maybe lazy or maybe they don't give a fuck, but given that you're supposedly dealing with concurrency in your software, if those developers struggle with a programming language, then it's time to invest in their education or hire new developers, because the programming language is the least of your problems

- Paul Phillips still works with Scala and he most likely hates languages like Go, so when you mention him or his presentation, it definitely cannot be in support of Go


Paul Phillips here. I searched my tweet archive for my tweets about Go. Draw your own conclusions.

- Kill me before I look at go.

- Nobody shot me first so I looked at go. Sigh. After maybe an hour of go I was ahead of where I was after a week with rust.

- The first audible WTF with go: cuddled else is mandatory. Parse error otherwise.

- go is a terrible, terrible language, yet still a major productivity boost. Amdahl's law in another context.

- You often see languages which are fighting the last war. Go is fighting the War of 1812.

- If someone had made up Go within a work of fiction, they'd have been laughed out of the room.

- I had thought Blub was a rhetorical device, but Go is more Blub than Blub.

- reduce: what you will do with your expectations after you start with Go

- Did they miss any chance to introduce an ad hoc inadequate non-solution? Tool directives in comments! What could go wrong!

- Comical levels of confirmation bias in proglangs. Guy doesn't get it, poor impl, isn't useful, "see?" Examples: scala, python, go.

- Some specifics are enumerations in scala, closures in python, and anything devised since 1980 in go.

- Go: compiler directives in comments is "an elegant design". https://t.co/DjCO1UxgC2 http://t.co/BR3kFxRZvc

- Rust and Scala drown you in complexity. Go drowns you in simplicity.

- This comment sums up Go pretty well. https://t.co/Td9Q3wOW43

- go logging package lets you set the global Writer, but not query it. So you can't save/restore it. Nice job.

- This year's gold medalists in type safety's missing-the-point olympics: Go! https://t.co/ktrlBUEwP9

- Turns out Go is the result of a bar bet between Rob Pike and Robert Heinlein. Everybody lost.

- It’s incredible how far Go has lowered my estimation of the vaunted Google engineer. What an institutional failure.

- Go is what you might come up with if your definition of programming was “shuffling electrons”.

- A Go discussion thread is “how to build a space station with crayons” up to homotopy.

- A supposed selling point of Go is that you can learn it in a day. Think of what that tells you about what it can do.

- The existence of Go leads me to realize the infamous Google technical interview is designed to weed out out-of-the-box thinking.

- I added a second file to https://t.co/ktrlBUEwP9 further explaining how go's casting requirements are broken by design.

- But little did I know, I’m just a “lesser” programmer because I enjoy Go’s simplicity" says a commenter. Well, yeah.

- The Go string type can't be passed as a fmt.Stringer. You apparently have to accept interface{} and type match. Amazing.

- An example of go upside: "go get ...gocov" and code coverage instantly worked. In scala it has always been painful/fragile at best.

- Any go code which has error in parameter position which tests err against nil will silently miss a bunch of errors.

- Hostility to abstraction: the Go story

- Sad that google waited this long to put a deep mind on go.


I missed a couple which didn't mention go:

- Rob Pike is like Henry Ford, except when people said “faster horses” he said “that is the best idea I’ve ever heard.”

- Rob Pike is the Antonin Scalia of programming language design - an Originalist. Except 1970s not 1770s.


Enjoy your horse then; I'll just speed off into the sunset with the latest model Generics Motors has to offer. :)


I don't understand the claim that Akka actors are both impure and non-deterministic.

If your actor is communicating only through its inbox, then it is both pure and deterministic. Given the same set of messages in the same order, you arrive at the same actor state. Sure, you can do wacky things with side-effects, but that's not akka's fault.

> in general if you place those in the same article, there's a high probability that you never did functional programming; and if you did actual functional programming

This sounds a lot like the "no true Scotsman" fallacy. I'm not attacking your argument here, but perhaps you could expound upon that first point and clarify.


> Given the same set of messages in the same order, you arrive at the same actor state

You just described object identity (see [1]): an object whose state is determined by the history of the messages it has received. An object with identity is stateful, side-effectful and impure by definition.

So no, an actor is almost never pure or deterministic. I'd also like to emphasize determinism here, because you can never rely on a message ordering, given their completely asynchronous nature, so you get something much worse than OOP objects with identity.

> This sounds a lot like the "no true Scotsman" fallacy

I'm talking about my experience. Given that I'm currently a consultant / contractor, I have had a lot of experience with commercial Scala projects initiated by other companies. And in general the projects that use Akka actors are projects that have nothing to do with functional programming.

This happens for a lot of reasons, the first reason being that most people are not in any way familiar with functional programming, or what FP even is for that matter. Not really surprising, given that most Scala developers tend to be former Java/Python/Ruby developers that get lured by Akka actors and once an application grows, it's hard to change it later. Evolution to functional programming happens in a web service model where new components get built from scratch.

But the second reason is more subtle. Functional programming is all about pushing the side-effects to the edges of your program. And if you want to combine the actor model with functional programming, you have to model your actors in such a way that they contain no business logic at all (e.g. all business logic is modeled by pure functions, immutable data-structures and FP-ish streaming libraries), evolved only with `context.become` (see [2]). So such actors should be in charge only of communications, preferably only with external systems. This doesn't happen because it's hard to do, because developers don't have the knowledge or the discipline for it, and because it then raises the question: why use actors at all?

Because truth be told, while actors are really good at bi-directional communications, they suck for unidirectional communications, being too low level. And if we're talking about communicating over address spaces, for remoting many end up with other solutions, like Apache Kafka, Zookeeper, etc.

On combining Akka actors with functional programming, I made a presentation about it if interested (see [3]).

[1] https://en.wikipedia.org/wiki/Identity_(object-oriented_prog...

[2] https://github.com/alexandru/scala-best-practices/blob/maste...

[3] https://alexn.org/blog/2016/05/15/monix-observable.html


I have a simple question. In the article they mentioned they had a concurrency issue with a timed buffer that they later neatly solved with go channels and goroutines. They said that they solved the problem in Scala by moving to the actor model, but that required importing Akka into their project and training everyone how to use Akka.

My simple question is: couldn't they have achieved the heart and soul of the actor model by just making an object on its own thread, and talking to that object on a simple synchronized message queue? It's a handful of easy to understand lines of code, and nobody needs to delve into the sea of madness that is learning and configuring Akka and its actor model.

In more general terms, it's possible to use Scala as Java that plays well with immutability and functional programming techniques without turning your codebase into an overly complex difficult-to-understand mess. But for some reason people just can't stop themselves.

For what it's worth, Elixir hits a real sweet spot of functional goodness, combined with awesome concurrency without getting too deep into bizarre complexity.


I think it's becoming a more commonly held opinion in the Scala community that people often tend to go off the deep end with Akka and I tend to agree with that. In particular, I think that most of what people use Actors for can be done with Futures, and what can't be done with Futures can most of the time be done with Akka Agents (http://doc.akka.io/docs/akka/current/scala/agents.html).

And when I use Actors, I tend to want to wall them off in their own place in the codebase, instead of letting the actor-ness touch multiple parts of the code.


(disclaimer: Akka team here).

The post, somewhat surprisingly, omits to mention Akka Streams (which is a perfect fit, since, as the post mentions 'The data came through a stream [...]'), and Reactive Kafka (which is an official Akka project https://github.com/akka/reactive-kafka ) which solve this exact use case.

These projects/modules have been around since 2014, and we've been presenting / sharing information about them a lot since then. Perhaps this post can be interpreted that we need to put even more effort into the discoverability of them (in fact, we are right now reworking documentation for this and more). Using Akka Streams, a complete re-implementation of the example use-case mentioned in the post would look like this:

`Consumer.plainSource(kafkaConsumerSettings, kafkaSubscription).groupedWithin(maxItemsInBatch, 100.milliseconds).runForeach(batch => persist(batch))`

Too bad the team missed these... I would have loved to point the team to the right (existing) abstraction/library in one of our many communities (github, gitter chat and the mailing lists - we're pretty nice people, come around some time), rather than posting like this. What we've learnt here though is that we need to work even harder on the discoverability of those libraries - and indeed it is one of the things we focus on nowadays (with docs re-designs, cross links, better search and more).

Anyway, just wanted to let you all know what Akka has in store for such use cases, Streaming is definitely a first class citizen in the toolkit.


Actors should be in a process, no? If an actor runs in a thread, it'll take the main process down with it.

In Erlang's BEAM VM, every actor is in its own process. So if something goes bad, it can be restarted via a supervisor.

Scala is on the JVM, and the JVM wasn't built with this kind of concurrency in mind; I think Erlang's BEAM is just too good at this. Akka is limited too: IIRC you have to write actors a certain way, otherwise they take over the scheduler. BEAM is preemptive; it doesn't matter if you have an infinite loop, your process/actor can only take so much of the CPU time.

I think Erlang is, hands down, a really beautiful language for concurrency. Its syntax is ugly, but it's such a small language that does everything you need for concurrency. Scala is just big, there are so many ways to shoot yourself in the foot, and tons of compromises were made. I also think implicits are too magical; I've shot myself in the foot many times using libraries that rely on implicits.


The JVM has tons of concurrency tools; it's absolutely designed with concurrency in mind. You can write code in the channels style if you want to.


Erlang's processes are green threads, they're just called "processes". Neither Erlang, nor Akka run 1 thread or process per actor, they just schedule multiple actors over N native threads inside the single process (ignoring multi-node situations).


Two notable differences: Akka Actors' concurrency is done at the library level, and an actor can block a JVM thread if not coded carefully. Erlang process concurrency is supported at the VM level, and there's no way an Erlang process can block a VM scheduler (native code aside, but with native code all bets are off).


Elixir makes Erlang look beautiful! You should check it out!


Yeah I'm actually learning Elixir and eventually Phoenix.

Erlang's syntax took a while to get used to, but the community wasn't for me. There was no momentum really; it was really hard to convince anybody that Erlang needed some killer framework that people could get behind. Or, hell, anything to get excited about other than BEAM, and that's behind the scenes.

Elixir is beautiful but some of the syntax is meh for me.


I found the Go solution for that issue a bit odd - channels aren't for data processing, they're synchronisation primitives. Using them the way they did in the article ruined the piece for me, since it reads like a rather uninformed decision now.


Are you sure they aren't intended for data processing? I've never heard that claim before, and the Go documentation seems to have a lot of examples of using channels in that way. (https://blog.golang.org/pipelines, for example; or https://tour.golang.org/concurrency/2).


Let me rephrase: not suitable for large amounts of data that need high throughput. It's easy to see that the overhead is prohibitive for applications such as in the article, if you just benchmark it.

https://groups.google.com/d/msg/golang-nuts/LM648yrPpck/tv40...


That's just not true. You often use channels for messages of various kinds, or for returning values in async code.


Maintaining a Scala code base for some years, I've learned a lot. I would not go back to a language that does not support Option/Maybe and map/flatMap. These really changed my coding style.

My largest problems [1] are all still there after years; developers only paid lip service, and that killed Scala, I think.

The largest bad design decision was to support inheritance, which leads to its own problems with type inference. Sad that, after Java devs had already recognized how bad inheritance is, Scala got inheritance too.

The most glaring problem is how very, very slowly Scala compiles. This makes web development (even with Play) and unit testing a huge pain (and the complicated syntax + implicits + type inference make IntelliJ as an IDE very slow at detecting problems in your code).

Concerning the article, I do think Futures are a more powerful (and higher-level) concept than coroutines. They are easier to combine IMHO [2]

Now I'm trying Kotlin for the faster IDE and compilation speed; sadly the Kotlin developers think Option is only about nullable types (it's not, it's something different!) and don't embrace it.

[1] http://codemonkeyism.com/scala-unfit-development/

[2] http://codemonkeyism.com/a-little-guide-on-using-futures-for...


The only thing on your list on your blog [1] that's still true is that we care about PL research. Since 2.10, we've worked really hard on improving the migration between major versions, and the feedback has been very positive. We'll keep working on finding the right balance between ease of migration and fixing issues in the libraries. Scala 2.13 will be a library release, with further modularisation of the library (towards a core that we can evolve much more slowly, and modules that can move more quickly, but where you can opt to stay with older versions as you prefer).

We've also invested heavily in incremental compilation in sbt. Sbt is meant for use as a shell, and it's super powerful when used like that. When I'm hacking the compiler in IntelliJ, recompiles of some of the biggest source files in the compiler (Typers.scala, say) take just a few seconds. I rarely have time for office chair sword fights anymore.

With Scala 2.13, half of my team at Lightbend is dedicated to compiler performance. We'll have some graphs to show you soon, but our internal benchmarking shows our performance has steadily improved since 2.10.


I still have the problem of upgrading because not all of the libraries are cross-compiled or still work. At the end of last year we upgraded one library, which cost us many days.

Next is upgrading Lift to 3.0 which will be a nightmare (again).

"We've also invested heavily in incremental compilation in sbt."

Yes, I read this over and over again, and I see micro benchmarks posted.

Using SBT with continuous unit testing I can't feel a difference - or it is so slow with a major code base that it's still much too slow, and I judge it as having made no progress. Either way, after years it is still too slow (newest Scala + newest SBT).

"I rarely have time for office chair sword fights anymore."

Today I expect Kotlin's practically instant compilation. 10 seconds to compile some changed files is already too much for rapid development with TDD/web; it breaks my flow, but YMMV.

"We'll have some graphs to show you soon,"

See above: I've seen dozens of micro benchmarks that claim improvements, but in the end it doesn't show up in my real projects - at least not in mine, nor for the person who migrated to Go in the linked article, nor for all the other blog post authors who moved away from Scala towards something faster (Kotlin, Java 8, Go, ...).

But as I've said, I've moved on to Kotlin for new projects, because for me Scala is a lost cause.


Another side note: I would never argue with my users and tell them how wrong they are about the product, and that their perception of the lack of some feature or quality is wrong.


The Scala compiler is indeed slow, mostly because it has to do a lot more work than the Java or Go compiler. However, in my experience Sbt's incremental compilation works well for small to medium sized projects. Beyond that we need a bigger hammer, and we're working on a parallel (and later, distributed) Scala compiler [1].

Full disclosure: I'm one of the founders.

[1] https://triplequote.com/hydra.html


> Now trying Kotlin for the faster IDE and compilation speed, sadly the Kotlin developers think Option is only about nullable Types and don't embrace it.

Because Kotlin's native support for nullable types makes `Option` unnecessary.


Nope.


If you ever find yourself working in JS and want Options, I've got you covered: https://github.com/jiaweihli/monapt


Is there something that you can express with Option that you cannot express with a nullable type?


There are many things you can express with ADTs that you can't express with nullable types, and once you have those, Option is simpler than extending the type system.

EDIT: Another thing you can do with Option is define generic abstractions that work on it and other types, like map/flatMap. This in turn means you can write generic functions over anything that can be flatMapped which work automatically for Options. (I don't know if there's anything equivalent in Kotlin though?)


For me a nullable type expresses something semantically different than an Option. Option is a higher concept expressing optionality - duh ;-)

- contrived example - it might make semantic sense to express Option[Option[A]] as a type; it does not make sense to have nested nullable types (except as a result of nested function calls).

Nullable types feel like a bugfix for null; Options feel like a concept for modeling business domains. Likewise, None expresses something different (not there) than null (which, e.g. in Java, usually conflates "not there" with "not initialized").

With Option it also makes sense to have flatMap, for-comprehensions, etc.


I think you're only meant to use nullable types in Kotlin for exactly that purpose - expressing a value that may or may not be there (aside from the compatibility with Java libraries of course).

For things that just cannot be initialized directly in a constructor, you have more idiomatic constructs, such as the `lazy` property delegate, or in the worst case, the `lateinit` keyword (though at that point it may be better to rethink the design of your interfaces).

For indicating that an error occurred, you have exceptions.

