Metals with Scala 3 (medium.com/virtuslab)
72 points by tdudzik on Feb 4, 2021 | hide | past | favorite | 76 comments



Scala gets a lot of hate, but honestly it's a great language. As fast as Java with all the great tooling and ecosystem and compiler/IDE help, as concise as Python and just as usable for fast prototyping and rapid development.

How many Python developers would like a 10-20x faster version of the language with better parallelism and concurrency, or Java developers a more concise language better suited for rapid development on top of the huge ecosystem they built their systems on? Scala provides both.

There are a bunch of confusing frameworks from Scala's early days that earned it a bad rap, or niche tools that do not fit everyone's use case, but you don't need to use them. I've personally never used Shapeless/Akka/Scalaz/Cats in my entire career and I've gotten by OK. I haven't used SBT for years now. You can use these things if you want, and some people do, but only if you really want to. The Scala community is a Big Tent.

Really, every language is converging on Scala: a concise, hybrid OO/FP language with a rich, inferred static type system it uses for bug-catching, tooling support, and performance. Python has got static types and case classes, is getting pattern matching and type-driven compilation. Ruby is getting static types. C# is getting pattern matching and case classes. Java got lambdas, is getting type inference and pattern matching and case classes. Haskell is getting OO-style dot-syntax for records. Go is getting generics.
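
The case-class-plus-pattern-matching combination those languages are adopting is just everyday Scala. A minimal sketch (illustrative names):

```scala
// Case classes + exhaustively-checked pattern matching.
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}

println(area(Rect(2, 3))) // prints 6.0
```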

This is a spot in the language design space everyone is trying to get to, and Scala is already there and works great. I use it professionally to implement programming languages, distributed systems, websites, etc. and it really works as well as you'd expect "Superfast Python with static types" or "Java without all the Java-verbosity problems" to work. A true general-purpose high-level language.


I agree, I picked up Scala a bit over a year ago and have loved it. I believe there is a happy medium between the functional and OO aspects. Finding this sweet spot is where the language outshines others. Using classes as decomposition components and then using Scala's functional aspects such as immutability and the large offerings of classic functions (map, reduce, fold) give you a powerful and productive language.


The new libs make Scala way more enjoyable:

* os-lib: allows for nice filesystem operations

* upickle: allows for sane JSON

* utest / munit: sane testing (rather than Scalatest)

* scalafmt: great go-like automatic code formatting
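
For a taste, a sketch of the first two (calls per os-lib/upickle's documented APIs; assumes both libraries are on the classpath, e.g. via Mill or scala-cli):

```scala
// os-lib: filesystem operations without java.io ceremony.
val dir = os.temp.dir()                      // make a fresh temp directory
os.write(dir / "hello.txt", "hello, world")  // write a file
println(os.read(dir / "hello.txt"))          // prints hello, world

// upickle: JSON serialization with sane defaults.
println(upickle.default.write(Seq(1, 2, 3))) // prints [1,2,3]
```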

As long as Scala keeps replicating all the features in other programming languages, it'll have a bright future. It's a really powerful programming language.


Completely agree... Scala is a fantastic language. Scala has gotten a lot of hate, for sure - some for good reasons - but much of it, I think, doesn't really apply anymore. I've been using it daily for about 2 years now and it would be difficult to go back to Java at this point, for all of the reasons you stated. For me, it feels most like Swift: concise but expressive, but with more mature collection and concurrency primitives. I think developers are looking at it more closely now that they're more experienced with functional capabilities in other languages and can understand all of the type constructions. Granted, most of what I've done is in Spark, but I use it now for other mundane things like OS scripting (something I would have used Python for previously). There are some great libraries for it and I can't recommend it enough.


I used Scala and loved it for a period, back in the Scala Lift days. It’s a beautiful language in many ways, that is also just too bloated with complex features that enable a small amount of brevity at the expense of a huge amount of cognitive overhead.

I do kind of miss it though. Right now I’m pretty happy with Typescript...


The community as a whole has definitely dialled back use of complex features. If you're used to Lift-era Scala, that was probably the high water mark in terms of "people having fun with confusing features and syntax", and things have gotten steadily more "normal" in the intervening half decade (perhaps approaching a full decade!)

To be clear, these complex features are still there, it's just that people tend to not use/overuse/abuse them as much. Just like how Python has metaclasses and import-hook metaprogramming, but it's not something that a typical developer has to deal with on a day-to-day or even year-to-year basis


love your book.. and your blog. Thanks!!


To add to lihaoyi's sibling comment, some good examples of things people realized were overly complex pits of pain in Scala but were all the rage in the Lift days are the cake pattern and gratuitous use of implicit conversions.

Those thankfully have largely disappeared. The design space of programs in Scala is still large (detractors might say too large), but it's way better explored now and there's a much clearer idea of the various tradeoffs among the various approaches as well as how to harmonize one with another.


Re: implicits - just try doing anything in cats. The whole thing is hugely based on implicits, which you need to import with magical import invocations (which of course the compiler has no way of suggesting to you).


Thinking of "implicits" as one single feature can be extremely confusing. OP is talking about "implicit conversions", which transform values from one type to another automatically just by being in scope.

Cats imports bring in "implicit classes", which are a way to extend existing types with new methods, in a way that works even with parametric types in a type-safe way. That's the way to get ad-hoc polymorphism (aka typeclasses) in Scala.
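
A minimal sketch of the "implicit class" mechanism being described (Scala 2 syntax; `IntOps`/`squared` are illustrative names, not from any library):

```scala
object Syntax {
  // An implicit class adds `squared` to Int without modifying Int itself;
  // the conversion only fires when Int doesn't already have the method.
  implicit class IntOps(private val n: Int) extends AnyVal {
    def squared: Int = n * n
  }
}

import Syntax._
println(3.squared) // prints 9
```

This is why cats imports are needed: they bring implicit classes like this into scope so the extension methods resolve.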


Here you go:

`import cats.implicits._`

Done.

And once you start understanding the typeclass hierarchy a little bit, finding the specific syntax and instances comes pretty naturally.


> the compiler has no way of suggesting to you

Of course it does. Since Scala 3, it suggests the imports you need for implicits, and IntelliJ recommends imports even for Scala 2.

Regarding cats, the codebase has been restructured so that you won't need the implicit imports (or at least not as much): https://meta.plasm.us/posts/2019/09/30/implicit-scope-and-ca... The trick there is that the typeclass instances were moved to the appropriate companion objects, which are searched by the compiler by default, so the user doesn't need to do any manual imports.


I think Scala will always suffer from its choice to make implicits so fundamental. Whether abused or not they impede compiler efficiency and complicate code. Implicits were the main reason I dropped Scala a few years ago.


I think it's the opposite.

All languages have their own "implicits" - just not as a proper language concept, but for "special cases". The problem is that while this gives developers a more stable/clear framework to work with, it also limits the development of the language itself.

If you look at Scala, implicits could be used to model typeclasses from haskell and method-syntax-extension. Other languages such as Kotlin have a dedicated language feature for that. Scala never needed that.

However, this power also asks for responsible usage, of course. That is a drawback that can't be denied.
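
The typeclass encoding alluded to above, sketched in Scala 2 style (`Show`/`display` are illustrative names):

```scala
// A typeclass is just a trait, implicit instances, and an implicit parameter.
trait Show[A] { def show(a: A): String }

object Show {
  // The instance lives in the companion object, so the compiler
  // finds it without any imports.
  implicit val showInt: Show[Int] = new Show[Int] {
    def show(a: Int): String = s"Int($a)"
  }
}

def display[A](a: A)(implicit ev: Show[A]): String = ev.show(a)

println(display(42)) // prints Int(42)
```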


> implicits could be used to model typeclasses from haskell and method-syntax-extension

To be fair those are powered by two different mechanisms (implicit arguments and implicit conversions) that just so happened to share the same keyword name but are semantically quite different (and have been broken up in Scala 3).

But your overall point is well-taken.


> To be fair those are powered by two different mechanisms

You can call it like that, but in the end it is the same language feature: describing things in a scope as implicit and having the compiler resolve them by certain rules.

What happened in Scala 3 is just that they were named differently to make clear when things get defined and when they get used - one without the other is useless though, hence to me it is the same concept or language feature.


Implicit conversions, I think, are landmines that should be avoided to the greatest extent possible, except in a few very select cases.

Implicit arguments... man I go back and forth on that so much. On the one hand every crazy tower of abstraction that grinds the compiler to a halt in Scala inevitably begins with implicit arguments and you can have a really rough time with them if your IDE doesn't support looking up implicit arguments. On the other hand, if you want ad-hoc polymorphism, implicit arguments are really useful, often far more so than inheritance.

My usual take on this is that you probably don't need ad-hoc polymorphism most of the time, which is my way of wriggling out of the dilemma.


Clojure has ad-hoc polymorphism and it isn't even statically typed so I don't get how implicits are so essential. Implicits are still alive and well in Scala 3 I hear so I guess they blew any chance of going mainstream (again).


We're talking about different notions of ad-hoc polymorphism (as you point out, static vs dynamic).

For statically typed languages, implicit arguments (in some form or another) are really great at getting statically checked ad-hoc polymorphism.

But even in dynamically typed languages such as Clojure, ad-hoc polymorphism mechanisms bottom out at some form of implicitness at runtime. I assume you're referring to multimethods here (which generalize protocols, whose very existence is a performance hack akin to atoms vs volatile, so I'll focus just on multimethods)?

The entire lookup system based off of defmulti is really just the same thing as Scala's implicit system, only at runtime rather than compile time, but even harder to understand IMO because it can change wildly at runtime. The same "spooky action at a distance" problems can result here.

The reason this isn't an issue in Clojure is that multimethods are very rarely used and instead a lot of code just specializes on vectors or maps (again going back to my point that you usually don't need ad-hoc polymorphism).


> I think Scala will always suffer from its choice to make implicits so fundamental.

Thankfully it won't. Scala 3 deconstructs implicits and discourages (or outright discards) the bad parts: http://dotty.epfl.ch/docs/reference/contextual/motivation.ht...

"Implicits" are useful for the typeclass pattern, which is available in Scala 3 with an improved syntax.

"Implicits" for implicit conversions is an ugly hack that should be avoided as much as possible and Scala 3 restricts it.

"Implicits" for extension methods is a clever hack, and Scala 3 provides a dedicated syntax for defining extension methods.
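
For illustration, the same roles in Scala 3's dedicated syntax (script-style sketch; `Monoid`/`shout` are illustrative names):

```scala
// Scala 3: `given`/`using` replaces implicit instances/parameters,
// and `extension` replaces the implicit-class encoding of extension methods.
trait Monoid[A]:
  def empty: A
  def combine(x: A, y: A): A

given Monoid[Int] with
  def empty = 0
  def combine(x: Int, y: Int) = x + y

def combineAll[A](xs: List[A])(using m: Monoid[A]): A =
  xs.foldLeft(m.empty)(m.combine)

extension (s: String) def shout: String = s.toUpperCase + "!"

println(combineAll(List(1, 2, 3))) // prints 6
println("scala".shout)             // prints SCALA!
```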


I think the space you describe is better served by Kotlin now, i.e. better Java compatibility, decent FP (even better with Arrow), first-class Spring integration and dominance of the Android platform. That's a lot of value.


Yeah I don't disagree. Kotlin is great. I personally think Scala is a better Kotlin than Kotlin is for most use cases, with the exception of Android, but there's a big enough market to support multiple such languages


The one place where Scala really doesn't shine is ML. There's nothing even close to equivalent to PyTorch or Tensorflow.

Actually, languages with rich type systems in general seem to really struggle in this space.


I'd argue it's just "languages that are not Python struggle in this space". Python has an undoubtedly unique ecosystem and gravity around ML, but that's more a unique advantage of Python than a weakness of every other language design


Scala does have Spark's ML capabilities. Spark's ML capabilities might not be the best, but they can be run on huge datasets. I'm guessing Dask has better model support, but isn't as scalable as Spark ML.


> Actually, languages with rich type systems in general seem to really struggle in this space.

I suspect there's two factors.

First, I think the current crop of languages with "rich type systems" (I imagine you're thinking of Haskell, Scala, ML and the like?) basically all are in this weird uncanny valley where people get tempted into trying all sorts of cool type tricks to make everything line up, and the type systems are just not _quite_ expressive enough to make it all go well.

Most of the time if you just use arrays/your language's usual collection type of numbers (or arrays of arrays... etc.) in those static languages you'll do great.

To take it to the next level and really model ML faithfully I think you probably will need some version of dependent types, and there just aren't really any good implementations of that at the moment.

Secondly, a lot of ML is powered by data scientists and statisticians whose educational background probably meshes better with dynamic languages (when was the last time you saw types written out in a stats textbook?). It's really kind of jarring and annoying to have to deal with type systems when you've never done any professional programming before.


What I have seen in the wild is that in dynamic languages programs end up being "stringly" typed, where a mistyped string in the program results in an error, and programs needlessly rely on array positioning.

ML languages that support variants eliminate these unnecessary errors at no cost to runtime performance.

What problems have you encountered where an ML system couldn't represent things better than a primitive type system? The two things that come to my mind are when the program has to accept arbitrary user input (strings), and writing an extremely expressive program, i.e. a Lisp interpreter with support for variable-argument-length functions. Even in these cases, I think the guarantees around the messy sections help with refactoring and my general sanity.
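
The "stringly typed" failure mode and its variant-based fix, sketched in Scala (the thread's language) with illustrative names:

```scala
// Stringly typed: a typo like "pendng" compiles fine and fails only at runtime.
def describe(status: String): String = status match {
  case "open"   => "still going"
  case "closed" => "done"
  case _        => "???"
}

// Variant-style (a sealed ADT in Scala): a typo is a compile error,
// and the compiler warns when a match is non-exhaustive.
sealed trait Status
case object Open extends Status
case object Closed extends Status

def describe2(status: Status): String = status match {
  case Open   => "still going"
  case Closed => "done"
}

println(describe("pendng")) // prints ???
println(describe2(Open))    // prints still going
```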


> What problems have you encountered where an ML system couldn't represent things better than a primitive type system?

I assume you mean a primitive type system to be a strict subset of an ML type system? In that case the answer, in theory, is clearly no. The problem is more that ML type systems give you more, and tempt you into using the extra expressiveness when it's not quite enough to carry you all the way.

> ML languages that support variants eliminate these unnecessary errors are no cost to runtime performance.

Closed variants are fantastic. I love them. I miss them every time I use a programming language without them.

What I'm talking about is requirements like making sure that all your matrix dimensions line up (you mention variadic arguments, which is another good one if you're trying to follow a lot of mathematical notation).

A lot of ML-inspired statically typed languages let you come really close to making all that work. But are just unergonomic enough to make things painful if you really try to statically verify those things.


Makes sense, thanks for the explanation.


I don't know if it's the type system...

Julia has a very rich type system. It's not exactly popular (yet?) but it's certainly very usable and good for numerical work and ML. But they started out with the goal of building a numerical language from day one.

I can imagine if you have a completely general-purpose language and want to provide a library that's an interface to low-level numerical libraries, NumPy-style, that this task might be more difficult to get right in a language with more complicated types. There seem to be a lot of half-assed unfinished numerical libraries in both Scala and Rust.


Julia has dynamic types. I think the parent is referring to statically typed languages by "rich type systems" (because dynamic types can often have arbitrary richness since they're basically just another form of runtime errors/exceptions).


This depends a lot on what one means by richness, I suppose. I'd say that Python, Javascript, most Lisps that don't have CLOS, etc. are all dynamic languages with very non-rich type systems.

Julia's type system (and to a lesser extent Common Lisp's) are very different. They can do a lot of extremely powerful things with their type systems, just not (usually) static type checking (though even that's changing in julia land with JET.jl: https://github.com/aviatesk/JET.jl)


Yeah; I think the distinction with runtime types among dynamic languages is that you can always emulate any other system's dynamic type system with some amount of constant syntactic overhead in any other system. E.g. you could write a CLOS system or Julia's multiple dispatch in Javascript if you wanted. It would just be extremely ugly because you would have to wrap every function and every function call (so as to intercept the native function call and substitute your own).

Nonetheless though that's "only" a constant amount of (painful) syntactic noise per call, which means that it is conceivable for a library to provide this functionality. And in fact occasionally libraries will require users to wrap all functions in order to give those functions certain "superpowers."

It is of course nicer if it integrates very nicely with the language rather than being full of wrappers everywhere so there is still a lot of value in these runtime type systems being built from the ground-up with certain features in mind.

But it's a very different world from static types, where usually if one static type system is missing features of another, no amount of emulation will get you there.


I'm not sure that's a particularly convincing argument. Bolting a type system on a dynamic language is incredibly hard to do if you're trying to make a type system that's worth using. It's a fundamental language design problem, and if the type system is doing anything worth while, it'd probably take around a decade to make one that is reasonably sound and robust. At that level of commitment, one can just turn around and look at your static language and say "Well, couldn't you just implement a new language with a new type system in your language?" (assuming your language is turing complete). This really isn't that different.

Meanwhile, Julia's type system is actually a fundamental player in its optimization schemes (the compiler essentially breaks up programs into statically typed chunks and optimizes those separately), so rather than inducing a constant overhead, the type system is responsible for some of the fastest code that's possible to write, including BLAS libraries that are competitive with OpenBLAS, BLIS, and MKL (https://github.com/JuliaLinearAlgebra/Octavian.jl/issues/24)


I think we're starting to talk past each other and this is falling into the well-rehashed pit of "are runtime types really 'types' or just semantically normal runtime assertions."

I'm implicitly talking through the lens of the latter and you're talking through the former.

Regardless I agree with your central point that Julia is doing cool things with runtime types and its design is what is giving it the ability to do such impressive runtime optimizations.


> I think we're starting to talk past each other and this is falling into the well-rehashed pit of "are runtime types really 'types' or just semantically normal runtime assertions."

Sorry, that wasn't the direction I was trying to go. I just meant to say that the work it'd take to make different type system in a dynamic language is approximately the amount of work it'd take to make a whole new language.

So I think from a practical POV, it's fair to think of the type system that comes with a given dynamic language as being more or less intrinsic to it, regardless of what one's philosophical stance is on types in dynamic languages generally.


It's because data scientists don't want to learn Scala if they don't have to. As a Scala developer, I see https://jupyter.org/ and https://almond.sh/ and can't think of a good reason to use the first one. All the OP's comments would apply.


Maybe you can argue that it's been outclassed but Spark's ML toolbox is plenty serviceable for commercial applications and used all over the world.

Now that all the APIs are centered around Dataframes, Scala's rich type system has been relegated to providing implicits for the SQL-like DSL, which is a shame, but it's still an eminently Scala-centric tool.


> I've personally never used Shapeless/Akka/Scalaz/Cats in my entire career and I've gotten by OK.

Everything's great until you join a team which uses it...


Last time I tried scala it took forever to build a relatively small project and the compiler was a heavyweight ram gobbler, which translated to shitty IDE performance as well. Now granted last time I tried Scala was probably 6 years ago, but I honestly doubt they got that language to compile as fast as Go - so I don't really see the Python comparison.


Not as fast as Go, but straight compilation is roughly 3x faster than in 2015, with many improvements around incrementality and IDE speed that make things a lot better. Compilation still isn't "as fast as Go", but has probably approached "acceptable" now in 2021.

As one data point, "slow Scala compiles" was the #1 biggest developer complaint at my company when I joined in 2017, and nowadays that complaint doesn't even break the top 10. Scala compiles are not great, but not terrible.

A lot about Scala sucked in 2015, and I wouldn't recommend 2015 Scala to anyone, but it's not 2015 any more. 2021 Scala is what I'm talking about


This is super interesting. I was a pretty early adopter of Scala, but our org grew fairly frustrated with it in 2013/14 due to the slow compiler. We invested a ton of effort into building tools to mitigate this problem (effort we'd rather have spent on our actual business).

I haven't really touched it beyond tiny toy projects since 2015 though. Really glad to hear that compilation speed has gotten so much better.


Yeah the effort to speed up compilation around Scala 2.12 really took off. You can see it in the compiler benchmarks:

https://scala-ci.typesafe.com/grafana/dashboard/db/scala-ben...

Where the better-files library went from taking ~700ms to compile in 2016 to ~230ms today. Definitely a huge improvement that our developers appreciate every day!


Not to mention how much Mill improved my dev cycle. Thank you for your contribution to the ecosystem!


Yep. I'm just a scala hobbyist but I'd have given up if I had to use SBT.


But is "a bit better than dog-slow" actually that great?


I compile large (~1M loc) Scala projects on a low-end 2016 laptop. It's not fast, but a clean compile run is reasonable and incremental compilation has gotten much better.


To the best of my knowledge, only OCaml compiles as fast as Go. Honestly, I do not understand why compilation speed became such a hot topic in PL design. When you know the language, you can bake a lot of logic into types alone, which saves quite a bit of typing in tests


Let's say you're even with longer compile times vs. time to cover it with tests - I'd still take writing the tests if they run fast enough and can be run asynchronously - a long compile-run cycle breaks my flow and I end up getting distracted.


oh I am a cigarette smoker. I have time. But seriously just take a look at how much testing can be avoided with things like refined types alone


But even then you'd prefer to be able to choose when you break flow.

Also live reload doesn't work if your compiler isn't fast enough for the live part.

This all depends on the kind of work you do, I guess - I've written plenty of C++ in the past where a faster build wouldn't be as critical (but would be appreciated). I could never do frontend this way.


Pascal is still the winner in compile-time benchmarks!


> Haskell is getting OO-style dot-syntax for records. Go is getting generics.

I would dispute the former means Haskell is getting OO features (more the case that it's finally realized that having proper records a la Purescript or Elm is pretty important for day-to-day code; records and objects are quite different beasts) and Go getting generics is Go getting functional (or OO features, I don't find Go particularly OO to begin with).

> Really, every language is converging on Scala

I think this is more symptomatic of Scala trying to do so much to begin with, such that any change any language makes can be tied to some feature Scala has.

That being said, I'm really happy to see Scala 3 reining in a lot of those choices (except a new parallel set of whitespace-based syntax... that seemed like a gratuitous cost to one's novelty budget, but time will tell; the community already seems to be warming to it).

To those starting out with Scala, lihaoyi's book is a great place to begin!

I'll offer one piece of advice (to those beginners, not lihaoyi, who I'm sure needs none of my advice). These days when I talk to people about Scala who've never used it before, I don't like talking about it as a hybrid FP/OO language.

I prefer to describe it as an OO language with a lot of syntax sugar and standard library choices to enable emulation of FP on top when desired. It is useful to think of Scala the base language, i.e. what everything desugars to, as an OO language.

This is neither good, nor bad. However, I think this realization helps illuminate a lot of strategies around coding in Scala. If you're having performance issues, drop down to the desugared OO layer and you'll probably find ways to squeeze out more performance. If you're having a hard time understanding how some of the FP constructs work, step through their desugared OO equivalents (e.g. there aren't really higher-order functions, just an automatic instantiation of FunctionN classes wrapping methods at call sites).

Scala 3 has smoothed out a lot of the mismatch so that its most obvious issues are gone (monomorphic functions vs polymorphic methods, weird eta expansion issues, weird subtyping requirements for implicit prioritization, etc.), but I've still found treating OO as the core of Scala to be a general rule of thumb which is useful for new Scala programmers when they come across something they think should work but doesn't. As you work more and more in Scala and that conversion of FP to OO primitives becomes extremely fluid and natural for you, a lot of choices and quirks that Scala has become far easier to explain and a lot easier to deal with.
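
The FunctionN point can be made concrete. A sketch of roughly what the compiler does with a lambda:

```scala
// What you write:
val inc: Int => Int = x => x + 1

// Roughly what it desugars to: an object implementing Function1.
// (Scala 2.12+ emits this via invokedynamic on the JVM, but the runtime
// value still satisfies the Function1 interface.)
val incDesugared: Function1[Int, Int] = new Function1[Int, Int] {
  def apply(x: Int): Int = x + 1
}

println(inc(41))          // prints 42
println(incDesugared(41)) // prints 42
```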


Yeah the line between FP and OO can be argued over.

At my work it's pretty OO, with FP used where convenient. People join and start reading/writing Scala with zero experience or training, and it works out OK.

At Stripe, I understand their Scala is pure FP, with the first onboarding docs discussing "What is a Monad". I assume they need more training for people who don't have a pure-FP background, but if it works for them who am I to judge.

My own personal style is somewhere in between, certainly more FP than at my work, but nowhere near as FP as those who are really into it.


I think even if you ultimately use Scala as an almost entirely pure FP language (e.g. you live entirely in the Typelevel universe), you really need to understand the "desugaring to OO" well in a way that doesn't happen in the reverse direction (i.e. if you use Scala entirely as an OO language you don't really need to understand equivalent FP constructs).

I think that's where a lot of flames towards Scala from some hard-core FP programmers came from: trying to use Scala as a pure FP language, but having to grapple constantly with the quirks of the underlying OO desugaring.


IMHO, it feels like the language is always getting in the way because much of Scala's FP ecosystem is created by abusing language features in ways the author did not intend. For example, when explaining typeclasses in Scala I found engineers quickly understood the concept of a typeclass but then got tripped up on how to make them with implicits. Scala 3 really cleans a lot of that up.


FWIW I think that typeclasses are actually a prime example of what Scala 2's implicits were meant to enable from very early on, rather than an abuse of them (I know that e.g. the `Ordering` typeclass was present at least by Scala 2.9, maybe earlier, and context bounds to emulate typeclass syntax were in Scala 2.8).

But I do think your example of typeclasses is a great one. It's a good example of an FP concept (typeclasses thought of as logical constraints to a type variable) desugaring into an OO one (additional implicit arguments).
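
That `Ordering` case, sketched with a context bound and its desugaring (`largest` is an illustrative name):

```scala
// The context bound [A: Ordering] is sugar for an extra implicit argument list.
def largest[A: Ordering](xs: Seq[A]): A = xs.max

// Equivalent desugared form: the FP-style "constraint" on the type variable
// becomes an OO-style extra argument.
def largestDesugared[A](xs: Seq[A])(implicit ord: Ordering[A]): A = xs.max(ord)

println(largest(Seq(3, 1, 2)))           // prints 3
println(largestDesugared(Seq("b", "a"))) // prints b
```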


Fair point, although I'm not sure I'd call implicit function parameters an OO concept.

Regardless, Scala 3 will make that conversation so much simpler: http://dotty.epfl.ch/docs/reference/contextual/type-classes....


> implicit function parameters an OO concept.

True enough that implicitness is not an OO concept in and of itself (I think that was the operative word you're focusing on), but I do think it's the first thing that comes to mind if you're thinking in an OO manner. (Ugh... I keep needing to thread this parameter around... oh I know I could just use this DI library to implicitly insert this parameter for me!)


Which framework would you recommend instead of Akka?


For most use cases you probably don't need Actors at all. How many systems do you see using Actors in Java/Python/Ruby/PHP/C#? Not many, and they get by just fine. A plain old web server handling HTTP/Thrift/GRPC works for 90% of people, maybe add Futures for parallelism/concurrency and you satisfy 90% of those remaining
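
A sketch of the plain-Futures approach, using only the standard library:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Fan out two independent tasks and combine the results - no actors needed.
val a = Future { 1 + 1 }
val b = Future { 20 * 2 }
val sum = for (x <- a; y <- b) yield x + y

println(Await.result(sum, 5.seconds)) // prints 42
```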

I use my own Castor actor library for some of my own projects where Actors were really becoming necessary, but at 2 files and 200 lines of code it would be a stretch to call it a framework.

If you really think the "Akka-way" is what fits your thinking, go ahead and use it, but it's by no means necessary, and I think most people can get by just fine without it.


It depends on what you're doing. Do you need a framework at all?


> 10-20x faster version

It’s also at least 10-20x more complex a language. Can’t see the appeal to Python users.


Metals is such an awesome tool. Definitely one of the most polished LSP implementations I have used, and a great demonstration of what the protocol is capable of.

I know it's all old hat to the IntelliJ crowd, but I just don't like using an industrial IDE if I can help it. Metals lets me stick with a lightweight editor like Emacs or VSCode while keeping the great tooling that a powerful type system like Scala's enables.

I still fire up IntelliJ from time to time for certain types of mechanical refactoring, but as Metals has improved I find those cases come up less and less.

Big props to the Metals team, and thanks for all the hard work.


What advantages would switching from IntelliJ to Metals backed editor bring?

I've been using Scala for a few years and IntelliJ with Scala plugin has been pretty amazing for Scala use. 90% of time the suggestions it makes to fix everyday blunders (like missing imports, basic typos) are right on the mark.

Have not hit any real pain points with sbt either. I do not want to spend time messing with configuration.

Compared to, say, using VS Code with the Python plugin, Scala development feels so much nicer that I am considering switching to PyCharm.


Well if you're already happy using IntelliJ, I'm not sure there's much reason to switch. It's more that you're not forced to switch in the other direction.

I'm happy with my Emacs setup, and in particular I spend a lot of time editing code in other languages as well as text docs like markdown or org. In the past, while this workflow was great for most things, I'd usually have to switch over to IntelliJ for serious scala editing. But now I don't have to any more!


So what is "Metals"? Is it the Apple graphics API, or a Scala library, or a CLI tool?


The answer is in the first section, titled "What is Metals" - it is a language server for editors and IDEs to base their Scala support on.


"Scala language server with rich IDE features"

https://scalameta.org/metals/


NO, NO, NO. Scala is NOT a 'great' language. It is implemented in Java, so suffers ALL of the performance issues of Java (and then some). I would be THRILLED to see a new, straightforward language (vs. C++) which is strongly typed. From my personal perspective, GoLang is the best improvement so far, but suffers from a worse case of 'rpm hell' than general Linux rpm packages do. Secondly, "Functional Programming" is best described as "DisFunctional Programming". The primary precept of functional programming is that there are ZERO side effects. Well, welcome to the real world of hardware/network failures. Please stop pitching garbage ideas to the rest of us realists.


Dismissing new ideas out of hand does not make you a realist, it makes you ignorant. You seem to be confusing functional programming with purely functional programming for one, and for two, if you think purely functional programming doesn’t deal with side effects or failures in any way, then you don’t know enough about it to pass judgement.


> The primary precept of functional programming is that there are ZERO side effects. Well, welcome to the real world of hardware/network failures.

You clearly have never actually learned a functional programming language, because functional programming tends to make it easier to handle errors and side effects precisely because they are more contained, instead of allowing any arbitrary code to throw an exception, or having to remember to handle error codes, etc. I strongly suggest that you build a small toy program in a language like Haskell—it really does change how you approach problems, even in more mainstream, imperative languages.


Sounds like you never tried it; just because it runs on the JVM doesn't make it bad - on the contrary, actually. Go is a completely different thing. Scala is one of the most pleasant languages to write.

> Please stop pitching garbage ideas to the rest of us realists.

trolling much!?


What?

> It is implemented in Java, so suffers ALL of the performance issues of Java (and then some).

Uhm no. You can create native binaries or javascript code as well. I think your knowledge is from many years ago.


The common definition of what side effects are would argue otherwise. Functional programs definitely do adhere to maintaining referential transparency. FP is about the world of the program, and the unknown. In the unknown, server failures and network issues happen, absolutely, but for the maintainability and local reasoning of the code, the rules of functional programming are followed so that as many of those situations as possible are handled. No paradigm can account for everything, and functional programming is one of the safest ways of reasoning about the outside world. I think it's totally unfair to consider FP a garbage idea; there are tons of companies that follow these practices and rake in billions of dollars because their software works and is reliable. There are plenty of valid criticisms of FP, but the world outside the JVM failing isn't one of them.


Sorry for the nitpick, but this is one topic I have to link to this great answer, because referential transparency doesn’t mean what you mean by it:

https://stackoverflow.com/questions/210835/what-is-referenti...



