EDIT: If it gets changed, the submitted title is "Why Scala is the best language available today"
Hey man, that's like... your opinion
In my opinion, types are a poor man's schema. My email is not a string, it's an email, and my longitude/latitude pair are not just ints, they have specific ranges; saying that longitude is 500 is as wrong as saying it's "foo". I use schema libraries and asserts both to document the code and to make sure you can't feed it wrong data. I even generate automatic HTML documentation out of my schema so my clients know what to send me, and this only in the places I need it; there's no need to schematize or assert every simple function with obvious parameters.
Maybe type annotation is more useful in OO languages, but as a Clojure programmer, for me, data is just data; there's no behavior associated with it. If I really want type safety at compile time, I can use core.typed only in the places I need it.
As a former long-time Java programmer (who tried Scala for months) I welcome my newfound freedom from type slavery, but that's just... my opinion.
What's the distinction you're drawing here?
> my email is not a string, it's an email, and my longitude/latitude pair are not just ints, they have specific ranges; saying that longitude is 500 is as wrong as saying it's "foo"
I fully agree. Those sound like exactly the kind of thing I would enforce using types.
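For instance (a hypothetical Scala sketch, not anyone's actual code; the type name and range check are my own), a smart constructor can make an out-of-range longitude unrepresentable:

```scala
// A longitude that cannot hold an out-of-range value.
// The constructor is private; the only way in is the validating factory.
final class Longitude private (val value: Double)

object Longitude {
  def fromDouble(d: Double): Option[Longitude] =
    if (d >= -180 && d <= 180) Some(new Longitude(d)) else None
}
```

With this in place, "longitude is 500" is rejected once at the boundary, and every function that accepts a `Longitude` can rely on the invariant.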
> and this only in the places I need, no need to a schematize or assert every simple function with obvious parameters.
How do you enforce that all code paths go via your schema? Just visual inspection?
I really value strong typed languages in such projects.
Let's say I'm refactoring a function: I can run it to see what it actually does, then write a new function and test it immediately on the spot to make sure it has the same behavior.
When you're working in a REPL, you immediately have confidence that whatever change you made does what you want. There's no recompile cycle, and you don't have to restart your application and build up the state to see what effect the change has.
This allows you to refactor things and be very confident that you're not breaking anything. Coupled with high level integration tests you can be confident that all the use cases work correctly without having to have a ton of unit tests.
Having worked with Clojure, it really shocks me that most modern languages do not have any REPL integration in the editor and you still have the code/compile/run cycle.
At least in Clojure, the REPL is integrated in the IDE. I can use the IDE to tell me where the call sites are, what references the function, jump to definition, etc. Then I can send code from the editor directly to the REPL to test it out.
I use Intellij with the Cursive (http://cursiveclojure.com/) plugin, but you can do the same thing with Emacs and Eclipse.
My experience is that unit tests make sense in some scenarios, but not in every situation. On the other hand, high level integration tests act as a contract for your application. You should have these regardless of whether you're working with a dynamic or a static language.
At least the ones you know about. Now, take your codebase to 100k lines, 75% of which you didn't write. Good luck.
With regard to integration tests: they're a pain to set up, and the complexity creeps in really easily. It's straightforward to unit-test edge cases, but the combination of layers makes it much more difficult to rely on integration tests for this kind of thing. Plus, integration tests usually take a long time to run. Rapid feedback is important.
If you're working with a half decent IDE, it can do things like tracking down function usages for you very easily.
>With regard to integration tests: they're a pain to set up, and the complexity creeps in really easily.
That has not been my experience at all. Clojure has very nice testing frameworks that make this trivial.
>It's straightforward to unit-test edge cases, but the combination of layers makes it much more difficult to rely on integration tests for this kind of thing.
The REPL allows doing a lot of what unit tests are normally used for. When you make a change you test it live in the REPL. This is no different from running a unit test.
>Plus, integration tests usually take a long time to run. Rapid feedback is important.
Hence why you don't run them all the time. The REPL provides you with rapid feedback, much more rapid than running unit tests I might add. Meanwhile, the integration tests are used for overall sanity testing.
Now try that on a code base developed by 100 developers in three countries across two years.
This is what most Fortune 500 enterprise projects look like.
You might have an umbrella project that's developed by 100s of developers. If you actually look at how the work is split up, you'll see that individual teams rarely exceed 10 people.
Also, here's a case study of how SISCOG has been developing large Lisp applications since 1986: http://www.siscog.eu/upload/GESTAO-DE-LISTAS/News/PDF/eclm-2...
The problem is when those chunks get integrated.
The joys of spending one to two months doing integrations until everything finally works, for each milestone.
Just because a few companies happen to do it right, doesn't mean the majority does it.
In one such project, the developers were deploying code to a live server after producing the corresponding artifacts on their own computers. Management did not care, because it worked most of the time anyway.
Good. Except that the next time you hire somebody, accidents may happen.
> If you're working with a half decent IDE, it can do things like tracking down function usages for you very easily.
Oh, certainly, but it kind of invalidates your claim that REPL + integration tests is enough to maintain a codebase.
> That has not been my experience at all. Clojure has very nice testing frameworks that makes this trivial.
Hm. I've worked with "very nice testing frameworks", but I'm not aware of any which helps with the two main issues that crop up when you create integration tests:
- integration with external tools (external database, LDAP...)
- amount of setup in your application necessary prior to running scenarios
> The REPL allows doing a lot of what unit tests are normally used for. When you make a change you test it live in the REPL. This is no different from running a unit test.
Well, considering that a well-tested unit of code (i.e., what you put in a single file) is usually dwarfed in size by the test code itself, because you're testing a lot of scenarios, you either spend an unholy amount of time in the REPL or you don't test everything. Which brings me to my second point: unit tests will run the same tests every single time. And even better, everybody on the team will run the same unit tests every single time.
A REPL is no substitute for unit tests. I'm not saying it's not useful, because it allows you to validate an idea fairly quickly, especially in simple cases, but it is absolutely not suited to validate the correctness of a piece of code.
> Hence why you don't run them all the time. The REPL provides you with rapid feedback, much more rapid than running unit tests I might add.
I fully agree with that (though running unit tests on a single unit of code is very quick).
> Meanwhile, the integration tests are used for overall sanity testing.
Sure. But in my experience, most codebases have little to no integration tests, and they cover things like "make sure login is not broken" or "make sure saving an entry works" as opposed to "make sure login does not break with the login 'null'" or "make sure we reject an entry which contains invalid XML", because the cost of setting up scenarios is much higher than with unit tests.
That goes for any project in any language. You simply have to mitigate these things in a way that makes sense for your team.
>Oh, certainly, but it kind of invalidates your claim that REPL + integration tests is enough to maintain a codebase.
My point was that the REPL allows you to test things quickly and can be used for many of the scenarios that unit tests are often used for. For example, when you refactor code you can test it on the spot. The feedback loop is much tighter when you work with a REPL.
>- integration with external tools (external database, LDAP...)
The goal is to test how your code behaves, not what the external tools are doing. In my testing, I capture the expected response from the external system and replay it in the test. My interest is in what the application is doing with a given input.
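A minimal sketch of that capture-and-replay idea in Scala (all names and shapes here are invented for illustration): pass the external call in as a function, so the test can substitute a canned response for the real client:

```scala
// The code under test receives the external lookup as a plain function,
// so a test can replay a previously captured response instead of
// hitting the real external system.
def greeting(fetchName: String => String)(userId: String): String =
  s"Hello, ${fetchName(userId)}!"

// A response captured once from the real system, replayed in tests.
val capturedResponse: String => String = _ => "Ada"

val result = greeting(capturedResponse)("user-42")
```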
>- amount of setup in your application necessary prior to running scenarios
I'm not sure what this setup is exactly.
>Well, considering that a well-tested unit of code (i.e., what you put in a single file) is usually dwarfed in size by the test code itself, because you're testing a lot of scenarios, you either spend an unholy amount of time in the REPL or you don't test everything.
The point is that the integration tests are used to test correctness. As long as they pass your application is doing what it's supposed to be doing. The REPL allows you to make sure that the change you just made does what you think it does.
>Which brings me to my second point: unit tests will run the same tests every single time. And even better, everybody in the team will run the same unit tests every single time.
Unit tests are also a huge overhead in maintenance. Any time you make a change in code, you have to go and update many tests. As you yourself point out, since you're testing many scenarios your tests often have more code than your actual application. With integration tests, you only need to change your tests when the business logic changes.
>A REPL is no substitute for unit tests. I'm not saying it's not useful, because it allows you to validate an idea fairly quickly, especially in simple cases, but it is absolutely not suited to validate the correctness of a piece of code.
I don't think unit tests are a valid way to ensure correctness either. Your individual pieces of code might be correct, but how they interact might be wrong. The only way to capture the actual use cases is by running end-to-end tests.
Conversely, as long as the end-to-end tests are passing, your application's business logic works correctly. This is the only thing that's relevant from the user's perspective.
>I fully agree with that (though running unit tests on a single unit of code is very quick).
This of course requires you to write and maintain the test for each single unit of code.
>Sure. But in my experience, most codebases have little to no integration tests, and they cover things like "make sure login is not broken" or "make sure saving an entry works" as opposed to "make sure login does not break with the login 'null'" or "make sure we reject an entry which contains invalid XML", because the cost of setting up scenarios is much higher than with unit tests.
Your argument here isn't that integration tests don't work, but that the teams you worked with haven't used them correctly.
The same problem often happens with unit testing. I've seen many projects where people start out writing lots of unit tests, and then after a month or so things start sliding; then you have large swaths of code without tests and lots of failing tests, and then you just turn them off altogether. That's a very common scenario in my experience.
Fair enough. But according to your definition, this still does not test correctness, since if you feed your actual DBMS/LDAP/etc. invalid data, your program will still produce an error.
> I'm not sure what this setup is exactly.
Consider that you have, say, a web application. You have a form, and you change the email normalization rules.
With integration tests, you simulate a POST request with the form data (all 56 fields of it), you capture the Location header in the result because you need to know what ID you have generated, and you do a SELECT in your in-memory database to find out if your user's email is correct.
With a unit test, you simply build a user, apply the normalization function, and validate that the email looks as expected. Which one of these scenarios is likely to be less code?
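As a sketch of the unit-test side (the normalization rule here — trim and lowercase — is invented purely for illustration):

```scala
// A hypothetical email normalization rule: trim whitespace, lowercase.
def normalizeEmail(raw: String): String =
  raw.trim.toLowerCase

// The whole "test": apply the function, check the result.
// No POST request, no Location header, no database SELECT.
val normalized = normalizeEmail("  Alice@Example.COM ")
```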
> Unit tests are also a huge overhead in maintenance. Any time you make a change in code, you have to go and update many tests. As you yourself point out, since you're testing many scenarios your tests often have more code than your actual application.
How do integration tests help with that? You're going to either end up with the same number of tests (but with more code) or you won't code the tests, and you may end up with a new issue in the bugtracker next time you release.
> I don't think unit tests are a valid way to ensure correctness either. Your individual pieces of code might be correct, but how they interact might be wrong. The only way to capture the actual use cases is by running end to end tests.
I'm actually not advocating unit tests as an alternative to integration tests, but as something complementary. They are an effective way to test the correctness of a unit of code, though.
I can run the original function and capture its output. Then I can write an updated function, see what its output is, and compare the two visually to ensure the change I made works as intended.
When you don't have a REPL the only way to do this is by writing a test and running it once you've made the change.
In fact, you can use the code from the tests you write in the REPL to create a unit test for later. The difference is that the REPL feedback loop is much tighter.
A REPL helps interactive development a lot, but it is by no means a substitute for testing enterprise class applications.
You cannot cover all use cases of your code, especially when other departments using your libraries make assumptions about your code's behavior.
With dynamic ones, it is always a guessing game.
Been in this game for quite some years now; the corporate world gives zero value to unit tests, regardless of what gets discussed and presented as success stories at conferences.
IME, strong (or, more to the point, static) typing doesn't eliminate the need for unit tests. Correct types don't guarantee correct values.
Unfortunately, dependent types are incredibly difficult to get working: Scala's type system is already Turing-complete (and can handle path-dependent types), and true dependent types would make reasoning about code nigh-on impossible.
I would argue that you're dealing with a subtyping problem here: Email <: String, LongLat <: (Int, Int)*
* I would say in Scala:
case class LongLat(longitude: Int, latitude: Int)
they are kinda cool, but also limited as hell.
presumably it just doesn't have a short syntax for them? if so, given what i know of the rest of scala, i'm surprised one can't be defined...
Here's the documentation for custom types in F#:
Types are a very good way to model your data strictly.
Err, you know that a type system could take care of all of those constraints, right?
Because you come across as though, for you, types are restricted to just the known scalar or bare-bones object types.
Having an email typed just as a string is far from exhausting the specialization you can do with an "email" type.
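To illustrate (a hypothetical Scala sketch; the parsing rule is deliberately naive and invented for this example), an email type can do more than validate — it can carry structure and operations a bare string cannot:

```scala
// An email modelled with structure, not just a validated string.
final case class Email(local: String, domain: String) {
  def render: String = s"$local@$domain"
}

object Email {
  // Deliberately naive parsing, for illustration only.
  def parse(s: String): Option[Email] = s.split('@') match {
    case Array(l, d) if l.nonEmpty && d.contains('.') => Some(Email(l, d))
    case _ => None
  }
}
```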
Dependent types to the rescue!
Dependent types are nice, but much of what the grandparent wants can be achieved fine with normal types. E.g. in Haskell, define a Longitude type and a helper function
newtype Longitude = Longitude Double

fromDouble :: Double -> Maybe Longitude
fromDouble d = if abs d <= 180 then Just (Longitude d) else Nothing
Use the right tool for the job, and there are jobs where Types are the right tool.
I don't think that's true. I think that the ritual incantations to the type system required in, particularly, C++/Java-style languages are a hindrance that outweighs the benefits of static typing on very small projects.
And the limitations of limited static type systems can also be a hindrance.
But when you have a richer type system and/or type inference that reduces the necessary incantations, the size of project at which static typing becomes a net positive drops.
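As a small illustration of inference cutting down the incantations (plain Scala, nothing project-specific):

```scala
// No type annotations anywhere, yet everything below is statically checked.
val xs = List(1, 2, 3)                  // inferred: List[Int]
val doubled = xs.map(_ * 2)             // inferred: List[Int]
val byParity = xs.groupBy(_ % 2 == 0)   // inferred: Map[Boolean, List[Int]]
```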
Because you will screw up.
I'd personally say Python is best.. for general data manipulation. But it's slow.
Go is best.. for network and concurrent programming, but it's rough around the edges and comparably low level.
C is best.. for precise control of the hardware. No hand-holding here. But that's also its weakness.
C++ is best.. has all the bells and whistles you can think of, and more. More of the latter, probably.
PHP is best.. for masochists and people used to it. It's also one of the quickest ways to create a web application by completely abstracting the low level stuff away.
C# is best.. for religious people and people who buy into the Microsoft ecosystem. Strong library selection.
Java is best.. for people who are looking for a job. As boring as enterprise is, they have money and look for Java devs.
VBA is best.. if you are on a locked down corporate environment, just because it's almost the only option.
<insert language> is best.. <advantages>, but <constraint>.
So, you want to tell me that Scala has all advantages and none of the disadvantages? I don't buy it..
Critically, never take a "best" statement at face value since these things can't be measured anyways.
I'm productive with Python, but it lacks many useful things and I'm no longer comfortable with the rampant mutation.
I like Haskell and the type system is extremely useful (typeclasses ftw), but I haven't managed to become productive using it.
Scala scares me like C++ does, through sheer size.
Rust is very interesting, but not yet mature enough. I'm not even certain that level of abstraction would generally be the one I want.
Clojure is perhaps the closest (especially with core.typed and ClojureScript), but the dependency on the JVM is unpleasant for me to say the least.
I also did not say that Clojure and Haskell were vastly superior in every way, but that they individually represented their language niches better.
This is the problem: everything and the kitchen sink is tossed into Scala without any consideration for how well things work together, because they've let it be a PLT grad students' playground.
The Scala community didn't lose one of its best core developers for nothing.
I mention Clojure and Haskell in the same breath because FP and immutability are what I care about and Scala fails to be a nice functional language.
Scala isn't really the best functional language, but it might be the best OO language out there, especially with traits: no other statically typed language has done mixins as well.
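A small sketch of what that mixin style looks like (names invented for illustration):

```scala
// Traits as mixins: small behaviours composed at class definition,
// or even at instantiation time, with linearization ordering the calls.
trait Greeter {
  def name: String
  def greet: String = s"Hello, $name"
}

trait Shouting extends Greeter {
  override def greet: String = super.greet.toUpperCase
}

class Person(val name: String) extends Greeter

val polite = new Person("ada")
val loud   = new Person("ada") with Shouting  // mixed in at instantiation
```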
Also the demigod is paulp who worked on the Scala compiler team for five years. Hard to miss if you watch Scala dev at all.
And why should we be satisfied?
5 years is a long time in terms of Scala, maybe the Scala community has the chance to welcome you back some time in the future and show you how much stuff has been improved since back then? :-)
I'd love to see a concrete example for that!
I'd also use Scala over Haskell in a lot of cases and I don't think Clojure necessarily represents FP better. Either way these arguments are still pointless.
That's cute; he seems to think the reason for managing configuration information in separate files is due to the inflexibility of Java. Let's hope that one day he has to deal with a non-trivial project and is enlightened.
Looks like Groovy, also begun in 2003, went the other way, making its language grammar larger but its core methods small.
Beginning with a clone of Java's grammar, they expanded it with lots of extras, see http://svn.codehaus.org/groovy/trunk/groovy/groovy-core/src/...
And for the core methods http://groovy.codehaus.org/api/org/codehaus/groovy/runtime/D...
(note the methods marked "deprecated" still actually exist but were repackaged into other classes for Groovy 2.x).
EDIT: actors -> the actor model
In the sense in which it is accurate to say that "design patterns are merely patches for missing language features" -- that is, design patterns that are applied as code templates -- anything that can be implemented once as a library is not a design pattern in the language where that is possible.
Perl has an experimental Actor model library threads::lite - https://metacpan.org/pod/threads::lite
For distributed scenarios NServiceBus is a good option although it's a bit different kind of model.
Yes, you can use Akka from Java. Just to show that you don't need something like Scala to implement an actor system.
I confess I haven't used it before though...why use pulsar when I can get away with using akka?
A new programming language that hopes to achieve wide adoption is a big deal. A very big deal. C has been in widespread use for 40 years now, and Java for almost 20. You could switch a language every five years or so, but that means that you're not in "mainstream software development" but in some kind of Silicon Valley startup (a tiny minority among software developers). Mainstream software developers don't switch languages every five years, and in most cases, not even every decade, so they'd better get the language right.
But what makes a programming language right? That is a big question. Obviously, adoption, which is also a result of marketing, is a major factor regardless of the language's intrinsic merits. Also, you must consider your domain. Ruby is great for short-lived projects; Java is great for big, complex projects with dozens or hundreds of developers.
But assuming you've taken all that into account, making a switch had better be really justified. The problem with the article, I think, is that it focuses on specific language features (effects, delegation, etc.), which is what got Scala into trouble (IMO) in the first place.
When thinking about a new programming language, especially one not intended for research but for widespread adoption, this is not the way to go. I believe we should start by thinking about what the big problems with software development are, and how we can address them with a new language – one that does not simply offer this or that convenient feature, but gives a really big advantage, one that merits a switch. Java has done it with a GC for a C++-like language, and to a lesser degree, with threads. Clojure and Erlang address the issue of state - Erlang because of fault-tolerance, and Clojure (mostly) because of concurrency.
On the other hand, some languages, like Ruby, focus on "developer productivity", which often means getting to working, useful code quickly, while maintaining good readability. This is, no doubt, important (for some domains much more so than others), but is this a major problem of software development today?
Also, the article talks a lot about type safety. But type safety is a language feature which is a means to an end. That end can be project maintenance/manageability, and arguably more correct code. But are there other ways of achieving this goal? I'm not saying there are, I'm just saying we should start by studying the problem, not examining particular solutions. Every language feature must be a part of solving a big problem; big enough to justify switching languages. After all, that's a big deal, so we'd better get it right. So, what big problem are you trying to solve?
I also find it a bit irritating that you love vague claims like those in the last article, but don't like articles with a precise and to-the-point description of issues. It's almost as if you want to love all negative articles on Scala, but have to hate even neutral or remotely positive ones, completely regardless of the quality of the article and the issues raised in them.
It leads to this whole train-wreck of reasoning here which you seem to be forced to post in pretty much every post with the word “Scala” in it.
Nobody can prevent you from spending your time on things you hate, but in my experience, life is happier if you spend it on things you love.
I can't say it any better than dancapo said it here: https://news.ycombinator.com/item?id=6844600
Maybe you should really at least try to consider the points he raises.
Also, I try to form my opinions after a lot of consideration, and you can be certain that I read most arguments against them. Sometimes I remain unconvinced, as in the case of Scala. I am not prejudiced against Scala. I think Martin Odersky is a very smart PL researcher, and when Scala first came out many years ago, I actually strongly advocated adopting it in a very, very large organization. My opinion has changed over the years, partly because of the path Scala has taken, partly because of new options that are now available, and partly because I've gained more experience with large projects and large teams.
My own criticism of Scala is, I believe, well informed; the fact that you consider its lack of minute detail a lack of evidence demonstrates the Scala community's obsession with tiny details to the detriment of a grander vision.
The comment you referred me to demonstrates the same. I do not dispute that Scala's features are well integrated in the compiler. It's a very nice compiler. What I meant in my comments on those other threads is that Scala mixes concepts whose mix creates a negative synergy, whose only purpose is that sometimes you can use one concept and sometimes you can use another. It doesn't matter that case classes are implemented as classes. Conceptually, they are meant to be used differently; again, that's getting bogged down in detail. Scala never gives a compelling reason why it is so useful to mix different concepts in a single language and pay such a heavy price in complexity, other than "sometimes it's better to use one and sometimes another". That's not a hallmark of great design. What big software development problem (as opposed to a PL research problem) does Scala solve?
Sorry, but no it isn't and it's painfully clear to a lot of people.
> the fact that you consider its lack of minute detail a lack of evidence demonstrates the Scala community's obsession with tiny details to the detriment of a grander vision.
You have a lot of talent in interpreting things the way you want. If people would have said “there are minor inconsistencies, but Scala has a grand vision it focuses on and that's what counts” you would have turned around and argued that it's the lack of attention to detail which makes everything in Scala horrible.
Sorry, but I have seen how you do this more than I care to count already. It gets tiring and embarrassing to read.
People have tried to tell you that “making up facts to retroactively fit your opinions” is not a good thing and then you just do it again:
> I do not dispute that Scala's features are well integrated in the compiler.
Let's just ignore the way you try to derail the debate here and move the goalposts around for a minute, because probably everyone who has spent 5 minutes looking at the actual source code realizes how mind-bogglingly wrong this claim is.
The compiler goes to great lengths to catch all the weird corner cases and shield its users from inconsistencies which e.g. sometimes arise when running on a JVM or having to deal with other languages running on that platform.
Please have a look at javac, fsharpc and others. None of them comes even close to the effort scalac puts into presenting its users with a consistent, coherent behaviour. They just let the user deal with all those minor annoyances.
> Scala never gives a compelling reason why it is so useful to mix different concepts in a single language and pay such a heavy price in complexity, other than "sometimes it's better to use one and sometimes another". That's not a hallmark of great design.
Sorry, but you mentioned yourself already that you're mainly interested in a language which dresses up Java's minor pain points. I can understand that, coming from that point of view, trying to understand the reason why Scala does things a certain way is not interesting. But you should at least differentiate between “there is no reason” and “I didn't care enough to look for the reason”.
> What big software development problem (as opposed to a PL research problem) does Scala solve?
- Abstraction and code maintenance: In all languages, you can arrive at a certain point where refactoring/abstracting things any further (to increase readability, maintainability, modularity) doesn't add more value than it costs. The difference is that in Scala you are the one who can make this decision, not the compiler. This is a huge advantage, which can't be overstated, but is also hard to describe if you haven't seen it yourself. Dropping into an unknown code base and being able to be positive that changing/fixing/modifying one item involves touching one place, instead of possibly dozens or hundreds of pieces of code, is a considerable benefit.
(Not really related to this, but implicit classes are an interesting example of how lots of code maintenance issues are made impossible by design, when compared to inferior approaches like extension methods.)
- Long-term evolution and extensibility: Scala gives you lots of options and tools to evolve your code in a way which minimizes the impact on the consumers of your code. Have a look at how Scala doesn't expose any public fields in classes, how easy ensuring binary compatibility is with tools like the Migration Manager, how typeclasses enable you to design your code with extension points in mind right from the start, and how @deprecated and friends allow you to have meaningful transitions between old and new code if necessary. This reduces the amount of technical debt accumulated over time and makes it easy to keep existing code bases up to the changing/growing requirements of the real world.
- Closing the decade-old gap between data inside and data outside of a language: Type providers go a huge step toward a state where developers can just stop worrying whether their data is in the source code of their language, inside a database, stored in an XML file, or wherever else. This will be a huge productivity boost to all developers.
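A minimal sketch of the typeclass-as-extension-point idea mentioned above (all names here are invented for illustration):

```scala
// A typeclass is an open extension point: client code can add instances
// for its own types without touching the original library.
trait Show[A] { def show(a: A): String }

def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)

// The "library" ships an instance for Int...
implicit val showInt: Show[Int] =
  new Show[Int] { def show(a: Int) = s"Int($a)" }

// ...and a client later adds one for its own type, with no upstream change.
case class UserId(value: Long)
implicit val showUserId: Show[UserId] =
  new Show[UserId] { def show(u: UserId) = s"user-${u.value}" }
```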
As is the opposite... Language debates are like that.
> -Abstraction and code maintenance...
> -Long-term evolution and extensibility
This is really unsupported by evidence. How many Scala projects over, say, 2MLOC and older than 5 years are there? C++ had the same claim. Some of its abstractions helped maintenance, while some harmed it. But Scala's adoption is too small to tell.
> - Closing the decade-old gap between data inside and data outside of a language
Yeah, they're cool, but Java has had (compile-time) annotation processors (as well as AST generators) for quite some time, and they're well integrated with IDEs, too. Adding macros to the language seems a bit of overkill.
Also, I really doubt that will be a "huge productivity boost". Data formats are pretty standardized. You don't come up with a new one every week, or even every month, and most common data formats already have "type providers" for most languages (say JSON, CSV, XML, SQL, protocol buffers, ASN.1). This is not a big, unaddressed, problem with modern software development.
It would work much better if you didn't make up assumptions and tried to sell them as facts. There is a slight difference between “it's possible to disagree about X” and “your claim about X doesn't survive 2 minutes of fact checking”.
> Scala does not enforce [...]
You can accept the points I made or discuss them, but this is just moving the goalposts AGAIN.
You asked “What big software development problem (as opposed to a PL research problem) does Scala solve?”
And I gave you a few points.
You didn't ask “Is Scala the best thing ever and the last language we need?” and I certainly didn't answer “Yes!”. With the straw man you're trying to set up now, probably only Haskell and a few close friends would suffice, but certainly none of the languages you're trying to sell. Hey, in Scala you could at least use the effects plugin, but I haven't seen such a plugin for e.g. Kotlin yet.
Why should that happen?
> This is really unsupported by evidence.
Don't believe me? Just look at the spec or try the language. Again, I didn't claim that Scala solves all the problems and makes it impossible for malicious people to write code in a way which will break in the future. No language claims that, and you didn't see me claiming that either. As I mentioned, Scala gives you a lot of useful tools which can reduce maintenance issues, and some features are designed around the idea of helping you achieve that.
> Yeah, they're cool, but Java has had (compile-time) annotation processors (as well as AST generators) for quite some time, and they're well integrated with IDEs, too. Adding macros to the language seems a bit of an overkill.
I can't really tell whether this is sarcasm or not.
- There is a reason why basically no language since Java adopted that model.
- It's pretty damning that in the few months macros have existed, more useful functionality has been written with them than was ever written with Java annotation processing.
- Annotation processing integrates very poorly with IDEs, I'd say not at all. Again, there is a reason why pretty much every annotation processing library needs a special IDE plugin to stop the IDEs from being confused. Just try developing with AspectJ in Eclipse without the AspectJ-for-Eclipse plugin. It just doesn't work.
- Annotation processing adds additional hassle to the build script, which needs to be set up, configured and maintained. It's not the first time I have seen multiple annotation processing plugins interfere with each other, leaving behind unusable class files and runtime exceptions in production.
- Again, IDE support is very poor. You can't even jump from an annotation to the corresponding annotation processing code. The IDE will just jump to the annotation declaration, which is completely unhelpful.
Do you want to have a guess which of the issues mentioned above can't exist by design in Scala's macros?
> Also, I really doubt that will be a "huge productivity boost". Data formats are pretty standardized. You don't come up with a new one every week, or even every month, and most common data formats already have "type providers" for most languages (say JSON, CSV, XML, SQL, protocol buffers, ASN.1). This is not a big, unaddressed, problem with modern software development.
I kind of feel like you are missing the context here. I recommend watching one of the F# talks for an introduction. Again, just because you don't see the benefit or the reasoning behind it, doesn't mean that the brightest minds in our profession are all wrong.
I am really not. My claim is a general one about Scala's kitchen-sink design philosophy, and why it's inappropriate for a "big engineering" language, as opposed to fast-development languages like Ruby or Perl; yet it is you who insists on focusing on details, which is part of the problem. So I responded to the details.
> Why should that happen?
Sorry, I made a mistake there. The compilation will break when you try to pass the object to a method that accepts that structural type.
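To make the failure mode concrete, here is a minimal Scala 2 sketch (names are illustrative): a method whose parameter is a structural type accepts any value with the required member, and rejects everything else at compile time.

```scala
import scala.language.reflectiveCalls

// A structural type: any value with a `close(): Unit` member qualifies.
def withResource(r: { def close(): Unit }): Unit =
  try {
    // ... use the resource ...
  } finally r.close()

class LogFile { def close(): Unit = println("closed") }
class Socket  { def shutdown(): Unit = () }

withResource(new LogFile)    // compiles: LogFile has a matching close()
// withResource(new Socket)  // rejected at compile time: no close() member
```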
> It's pretty damning to see that in the few months in which macros existed, more useful functionality was written than for Java annotation processing ever.
You are most certainly wrong here, though I can't prove it. You're forgetting that 99.9% of Java code is not open-source. In the organizations where I previously worked we used annotation processing where needed.
> Again, just because you don't see the benefit or the reasoning behind it, doesn't mean that the brightest minds in our profession are all wrong.
Those brightest minds are usually the brightest minds in PL research. I don't think it's a coincidence that most of the languages designed by them, while brilliant in some respects, are not widely adopted in the industry. You can only attribute some of that to marketing. I also do not think it's a coincidence that all languages developed at Google, a company that knows a thing or two about large scale development, follow the Java philosophy.
I am not a PL researcher and not particularly interested in PL research aside from learning a thing or two about the potential of programming languages, but I am very much interested in moving the software development industry forward. Some of the things PL researchers focus on I find to be genuine problems with software development – like taming concurrency, which Haskell does a good job at though perhaps at an unacceptable cost to the industry, and Clojure and Erlang do an excellent job with despite their lack of PL researchers – while other areas, like developer productivity, I find to be the wrong problems to focus on.
Let me explain the last point because I think it's important. We have some good languages (Ruby, Python) that are excellent for productivity, and are extensively used in some domains. Where those languages fail - concurrency, large-scale engineering - productivity is less of an issue.
But then I found Clojure (and, mind you, I personally prefer static typing and think it's generally better for engineering, though I recognize this is just a means to an end), and saw a language that is probably as "productive" as Ruby, yet actually tackles the big issues (like making sure some careless team member does not introduce a very hard to find race condition).
I think Scala takes powerful though complex features and channels them – unlike, say, Haskell – into all the wrong places (productivity), and at the same time makes it impossible for a mere mortal to understand how a basic collection is written. You can argue details all you want with me, and say that you could do this and could do that, but a language that makes collections the domain of experts without at least fixing the real hard issues, is just plain misguided.
And that claim is just not defensible in any kind of way.
Have a look at C# and C++. Both of them are orders of magnitude more kitchen-sinky than Scala by pretty much every measurable metric. Both are considered to be “big engineering languages”. Don't believe me? Look at the specs.
Scala is certainly not the language which ships features explicitly designed for working with, say, the MS Office API. I can give you dozens more examples if you want.
> You are most certainly wrong here, though I can't prove it. You're forgetting that 99.9% of Java code is not open-source. In the organizations where I previously worked we used annotation processing where needed.
Well, there is certainly Scala code which isn't open-source, too. That doesn't mean that looking at the existing, available software out there is not a reasonable metric.
Anyway, even if that point were wrong, I have mentioned a lot of issues concerning Java's annotation processing, and pretty much every one of them would be a deal-breaker unless the goal was to ship a failed experiment. People use Java annotation processing because the alternatives are even worse, not because Java annotation processing is so great.
> Those brightest minds are usually the brightest minds in PL research. I don't think it's a coincidence that most of the languages designed by them, while brilliant in some respects, are not widely adopted in the industry.
Those are the brightest minds which shipped most of the stuff which made C# great. I wouldn't call C# PL research by any means.
> while other areas, like developer productivity, I find to be the wrong problems to focus on.
Developer productivity is certainly not the sole focus of Scala. But developer productivity means that people have more time to look at the hard problems they are trying to solve instead of fighting the language and shipping the business value they are paid for instead of wasting time on unrelated issues.
A language which has e. g. concurrency done right, but is so painful to deal with that everyone writes single-threaded code to get home early, is completely useless.
> but a language that makes collections the domain of experts without at least fixing the real hard issues, is just plain misguided.
Scala's collections are the most pleasant ones I have worked with, and I think most people will agree with that.
I also think that Scala's trade-off on letting the library authors solve hard problems once instead of letting every user deal with them on their own each and every time they use it is a reasonable trade-off.
In other languages you would probably have to implement every operation yourself if you wanted to define a new collection type. Mostly tedious, error-prone work. In Scala instead, you understand the CanBuildFrom facility once and get almost the complete implementation for free, overriding only the operations you care about.
Note that understanding CanBuildFrom is a one-time effort, while in other languages you will hit those issues every time you want to create a new collection type. Again, seems like a reasonable trade-off to me.
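A rough sketch of what that trade-off looks like against the Scala 2.12 collections API (class and method names for the new collection are illustrative, not from the original discussion): you supply only the core accessors plus a builder, and the inherited `SeqLike` operations come for free.

```scala
import scala.collection.SeqLike
import scala.collection.generic.CanBuildFrom
import scala.collection.mutable.{ArrayBuffer, Builder}

// A hypothetical collection type: only the core operations are
// implemented by hand; map, filter, take, etc. are inherited.
final class MySeq[A](private val elems: Vector[A])
    extends Seq[A] with SeqLike[A, MySeq[A]] {
  def apply(i: Int): A      = elems(i)
  def length: Int           = elems.length
  def iterator: Iterator[A] = elems.iterator
  override protected[this] def newBuilder: Builder[A, MySeq[A]] =
    MySeq.newBuilder
}

object MySeq {
  def newBuilder[A]: Builder[A, MySeq[A]] =
    new ArrayBuffer[A].mapResult(buf => new MySeq(buf.toVector))

  // The CanBuildFrom instance tells the inherited operations how to
  // rebuild a MySeq out of transformed elements, so that
  // mySeq.map(f) returns a MySeq rather than a generic Seq.
  implicit def cbf[A, B]: CanBuildFrom[MySeq[A], B, MySeq[B]] =
    new CanBuildFrom[MySeq[A], B, MySeq[B]] {
      def apply(from: MySeq[A]) = newBuilder[B]
      def apply()               = newBuilder[B]
    }
}
```

Note the asymmetry the comment above is pointing at: the `CanBuildFrom` boilerplate is written once by the collection author, while every call site just uses `map` and `filter` as usual.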
And I guess we'll just agree to disagree, because I think we disagree about what's important even more than we disagree about the merit of each feature in Scala on its own. I believe Scala's tradeoffs are very misguided (and yeah, C++ is certainly similar) while you think they're justified. I believe that even an excellent feature should be left out of a language if it increases its complexity and the benefits do not solve a problem I care about every day, while you might think it's OK to increase a language's complexity even for features that won't be used very often. I think that in modern software development, application developers are better served by an opinionated language (especially when it comes to concurrency), while infrastructure developers (assuming your software environment allows you to easily mix languages) can certainly live with some annoying boilerplate if the language is "closer to the metal", but both value simplicity; you might think that "one language to rule them all" with few constraints is preferable even at some cost to both types of developers.
No, I don't think we disagree here at all and I think it's one of the design principles behind the language.
“do not solve a problem I care about every day” is just another phrase for “anything I'm remotely unfamiliar with”. And again, just because you decided to not invest the time to understand thing X, doesn't mean all daily users of X are idiots.
You couldn't offer much here except structural types, so let me just take that as an example. Structural types were not “added” to the language because they are “so great and useful”, but because they eliminated an inconsistency where the set of types you could express was larger than the set of types you could actually use, e.g. as the type of a method's parameter.
Treating types consistently is a huge reduction in complexity for users, because they don't have to divide types into those which can be used everywhere and those which can't. If you think this is just an academic idea, have a look at the weird corner cases this creates in Java, where you can call methods on an instance directly, but as soon as you assign that instance to a variable of a named supertype, the methods seem to no longer exist.
Ceylon, by the way not only agrees on that with Scala, but the designers marked this as one of their pillars of language philosophy (principal types).
Almost everything you can do in Java is probably easier and more consistent in Scala. Claiming that those things which Java can't do (e. g. having a remotely useful type system (1), typeclasses, ...) make the language simpler is just trying to ignore that problems are not solved by wishing them to go away. Those things just crop up in an ugly way elsewhere, be it in ridiculously complicated frameworks, in bytecode manipulation, etc.
(1) There is a reason proponents of dynamically-typed languages bring up Java when they explain what's wrong with static typing.
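The typeclass pattern mentioned above can be sketched in Scala 2 with implicits (the `Show` trait and instance names are illustrative, not from the original discussion):

```scala
// A typeclass: behavior attached to types via implicit instances
// rather than via inheritance from a common superclass.
trait Show[A] { def show(a: A): String }

object Show {
  implicit val intShow: Show[Int] = (a: Int) => s"Int($a)"

  // Instances compose: a Show for List[A] is derived from a Show for A.
  implicit def listShow[A](implicit s: Show[A]): Show[List[A]] =
    (as: List[A]) => as.map(s.show).mkString("[", ", ", "]")
}

def display[A](a: A)(implicit s: Show[A]): String = s.show(a)

display(List(1, 2, 3))  // "[Int(1), Int(2), Int(3)]"
```

The point being argued is that Java has no equivalent mechanism, so the same need surfaces as reflection-heavy frameworks or bytecode manipulation instead.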
I wasn't calling anyone an idiot, and I don't know who you are, buddy, but I'm starting to get the feeling that I might have some years experience over you; perhaps even a decade, so chill.
Look, you can go on believing Scala is simple all you want for whatever measure of simplicity you define; Martin Odersky even tried using grammar size as a measure (personally, I wouldn't go for consistency as a measure, though, because machine language is also very consistent, but YMMV...)
Then, you can try to get the team working on the next aircraft-carrier control system (or a huge mail routing system or air traffic control software) to adopt Scala, and see how fast they throw you out the window.
> Claiming that those things which Java can't do (e. g. having a remotely useful type system (1), typeclasses, ...)
Again with the features. I don't care about what type system a language has or how mathematically correct it is. I care about writing maintainable, correct and efficient code. Types might help achieve those goals, but it's not as if the more "correct" the type system is, the better those goals will be served. Why? Some of the reasons are cultural, some have to do with the way humans think, and some with the way computers "think".
You think Scala helps achieve those goals, I think Scala is detrimental to them. I abandoned Scala about 5 years ago and I have no intention of using it again, especially now when there are languages which I find superior to Scala in every respect that I think matters. If you feel Scala works for you I'm not going to convince you otherwise. For me and for the organizations I've worked for it didn't work at all.
I think this explains all of it. The thing you are talking about has pretty much no resemblance with Scala today. You can bash Scala-from-5-years-ago as much as you want as long as you stop misleading people that your claims are in any way relevant to what Scala is today.
Really, look at how your extrapolations from prehistoric Scala versions have worked out until now. You only embarrassed yourself. I recommend being honest next time and just sharing your experience with Scala 2.5/2.6/2.7. I think we could all have a good time telling war stories, considering how hilariously unstable the compiler was back then.
Use it in peace. If 5 years from now a significant portion of the software development community is using Scala, then you were right and I was wrong; if not, you'd better recheck your convictions. Maybe the language you like so much doesn't solve problems that hurt badly enough, or doesn't solve them well enough, for people to pay the price of its complexity and lack of coherence.
Either way, Scala has certainly taught us a few things and we can learn from its mistakes. Maybe some of the lessons will be used to create a language that works for me and those who see things my way, too.
"Even F# simply isn't powerful enough (never mind OCaml)"
I think this is a statement which deserves some actual facts.
There's plenty in Scala that other languages do better, the author's claim is that no languages puts all of its features together better than Scala.