Advanced programming languages (2009) (might.net)
219 points by ZeljkoS on June 19, 2016 | 202 comments



This post is very fortuitous for me. I've been looking to learn a functional language and I had more or less narrowed my options to Haskell, OCaml and Scala. I liked the breakdown between these languages and the resources.

Can anyone who programs in Haskell or OCaml regularly tell me the current state of standard and third party library support? I'm very attracted to Scala because it has JVM support, which sounds fantastic for overall ecosystem maturity.

That said, I come from a C++ and Python background, so do I need to know Java well before learning Scala?


I can recommend Scala:

- It is mature and rock-solid, but still manages to evolve, fix issues and simplify/remove features. Most other languages are purely additive, meaning you have to carry all the baggage accumulated since the language's inception.

- It is a language which is interested in identifying the best way to solve common programming issues and spares you all the ideological "OOP/FP is bad" bullshit.

- It has the largest ecosystem (plus it can use all of Java's, too).

- It has the best tooling.

- It has the largest community.

- It has some great learning resources, like the three (or four?) Coursera courses starting in a few weeks.

- Its JavaScript backend is stable and production-ready, Scala-Native is being worked on.

> That said, I come from a C++ and Python background, so do I need to know Java well before learning Scala?

Java knowledge might reduce the amount of learning (as many Java things like the JVM also apply to Scala), but isn't really necessary. You will be fine.


Once you have had real Hindley-Milner Scala feels painfully primitive. Just go straight to OCaml.


I'm afraid I disagree. I went from OCaml to Scala. While full type inference is nice, it's a minor issue in practice; the vastly bigger library ecosystem that Scala has (due to being a JVM language) isn't. Object orientation also works better in Scala than in OCaml.


I'm afraid you have it backwards. Scala's type inference comes at a slight cost (you have to annotate parameters), but it enables a much more powerful type system than even HM allows for.


Or go to F# and enjoy the vast universe of .NET.


To be fair Scala does have access to the Java ecosystem


Can you expand on this for someone who is not familiar with Hindley-Milner typing?


Hindley-Milner means that – unlike in Scala – you don't need to provide any type annotations at all, as they can all be inferred.

This comes at a steep price: HM-inferable type systems are very limited, and most languages in fact have a type system that is based on what HM can infer but adds extensions that require type annotations.

In the end Scala's requirement to annotate input parameters is not a big deal, as this only enforces what Haskell/Ocaml/... consider to be best practice anyway.
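To make the annotation requirement concrete, here is a minimal sketch (names are illustrative, not from the thread): in Scala, method parameter types must be written out, while result types and local values can usually be inferred.

```scala
object InferenceDemo {
  // The parameter type `Int` must be annotated...
  def twice(x: Int) = x * 2            // ...but the result type Int is inferred

  // The compiler infers List[Int] here without any annotation
  val doubled = List(1, 2, 3).map(twice)

  def main(args: Array[String]): Unit = {
    assert(doubled == List(2, 4, 6))
    println(doubled)
  }
}
```

This is exactly the "annotate inputs, infer the rest" discipline that Haskell and OCaml style guides recommend writing by hand anyway.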


> In the end Scala's requirement to annotate input parameters is not a big deal, as this only enforces what Haskell/Ocaml/... consider to be best practice anyway.

It's a huge deal. I always write type signatures once I know what they are, but a lot of the time I get the compiler to infer them for me which saves significant headaches.


I am not sure about tooling. Do you consider SBT part of tooling? In that case I do not understand how you can say best. SBT is, at its finest, good enough.


IDEs, editors, SBT, ScalaFmt, Mima, WartRemover, etc.

Yeah, build tools are usually terrible. Being "good enough" is enough for SBT to beat the other ones though.

(Best incremental compilation, fast artifact resolution, reliable dependency management, versioning support, fastest to-Android compilation, broadest plugin support, interactive REPL and project console ...)


I am so freaking excited about Scala Native. I haven't been so genuinely excited and enthusiastic about something for such a long time, but Scala Native is so awesome.

I feel like turning Scala into a cross-platform language is the best idea possible. I'd love to see Scala someday be able to just dump out WebAssembly, or JVM bytecode, or LLVM IR, or whatever the next "hip" format is, while keeping the language the same.

I also find it shocking how hard people are pushing Typescript when Scala.js exists. Why would you ever subject yourself to TS?!?!


The size of the Javascript generated by Scala.js alone makes it a non starter for serious projects.


Yeah, these tens of kilobytes of JS next to those ten badly compressed 600kB jpgs are really killing it.


This isn't a fair comparison. It fails to account for what happens after the files are downloaded, which is where overall JS code size has an outsized effect.


I think people have been including jQuery just fine for the last decade.


Yes, but jQuery is much smaller than an empty Scala.js app: jQuery is about 25kB gzipped and minified (the other comment used the raw size), whereas Scala.js is 45kB for an empty application.


TBH, you don't use Scala.js to write hello-world apps. I'm much more curious how Scala.js handles a 100 kloc app, but I haven't seen any data on that.


Look at here: https://www.scala-js.org/

"The generated JavaScript is both fast and small, starting from 45kB gzipped for a full application." - 45kB... jQuery is ~80kB.


jQuery gzipped and minified is 25kb. You're comparing raw to compressed.


Cedric, how does a 45KB js file make Scala a non-starter?


What is Scala native? Is there a Scala that doesn't rely on the JVM for hosting?



OCaml has opam, a package manager, with currently 1227 packages in it: https://opam.ocaml.org/ The standard library of OCaml is quite basic, so usually you use either Core (https://github.com/janestreet/core) or Batteries Included (https://github.com/ocaml-batteries-team/batteries-included). I personally prefer Batteries because it tries to stay compatible with the standard library.


I've been writing Haskell professionally full-time since 2010. Before I started pursuing Haskell I looked into Scala, but it seems to be trying to embrace too many paradigms, which has resulted in a gigantic unwieldy language. In the 6 years that I've been using it professionally the Haskell ecosystem has matured tremendously. Obviously the third party library support can't be as good as popular languages like Ruby, Python, etc simply because of manpower, but I find that these days there is an open source package for just about everything I need. Furthermore, due to Haskell's purity and strong type system it is MUCH more likely that these libraries will work. It is also usually quite easy to make improvements when needed.


If you're really interested in learning a language for itself, rather than using it for some practical goal, libraries that are "just" bindings to some other language with radically different semantics don't really count as a positive. They generally end up non-native and a real pain to use — trying to write "language-A in language-B" — unless someone has taken the time to write a native-idiom wrapper around them, which is pretty uncommon. So for educational purposes, I'd only look at the pure-Scala libraries rather than "the JVM ecosystem". Similarly for Haskell: AFAIK Haskell still doesn't have a great binding to native Qt or GTK or any other really good desktop toolkit. There are bindings, but you're writing C in Haskell at that point, which is definitely a pain, and not a great way to learn the language.

If you want to do practical things with the language, by all means count the bindings as an asset. It's still better to have a foreign-library binding to some capability than no capability at all, and it's not hard to imagine that using another language for some reason, even with the cost of the foreign-idiom binding, is still net cost/benefit positive.


I've spent some time with both Haskell and Scala.

Haskell's big pluses are purity and rigor. Most functions can't affect external state; they can only return results. Haskell also has strict typing, which is great for catching mistakes up front, and takes most of the pain out of using it through really great type inference. Unfortunately the community around Haskell is very small, so if you use it for a project larger than one person you could run into hiring problems.

I wrote some more about the pluses and minuses of Haskell here: http://short-sharp.blogspot.ca/2016/06/should-you-use-haskel...

Scala's big advantage is that it works well with legacy code in Java. Unfortunately it straddles the boundary of OO and FP, and making both programming models work requires a large and complicated language. I'd prefer something simpler.

If we were to take another swing at FP, I'd look hard at Scheme and in particular at the Racket system. The big plus of Scheme is a certain elegance, which is very appealing. And lots of people have learned Scheme in school.


> I'd prefer something simpler.

Consider Kotlin: https://kotlinlang.org/docs/reference/comparison-to-scala.ht...


I didn't have large problems with Haskell library availability until lately. I was attempting to see what some code from work would look like. It used:

MSSQL, Cassandra, CSV, and MySQL

MySQL and CSV were no problem. The Cassandra bindings were bitrotted and I had to do a good amount of work fixing the Cassandra dependency and the Thrift dependency it has. The MSSQL bindings through hs-odbc have an allocation error I haven't had the time or energy to figure out.

If I were using Haskell in production and not having MSSQL was a showstopper, I'd need to take the time to fix this library for my immediate case and then continue maintaining it.

I found that Haskell library support seems to be stronger for the technically superior technologies. For instance, Postgres support is quite a bit better than MySQL support.

It makes total sense that the Haskell community would put more work into supporting technically superior solutions using their technically superior language, but I fear that's biting off more than you can chew.

At home, for instance, I've been struggling with Linux issues rather than putting time into my side project. It would probably be a better economic choice to buy something with more seamless hardware support, like a Mac, and then be bleeding edge with my side project using Haskell.

Point being, if every part of your life is bleeding edge you'll just end up bloody and never going anywhere.


See my post history for many of the positives of Haskell. The above post only highlighted the negative.


There's a very good summary of the Haskell ecosystem here: https://github.com/Gabriel439/post-rfc/blob/master/sotu.md


We write OCaml professionally. In case you find it useful to browse real world, battle-tested code, here's our main repo (there are others, but this is the largest): https://github.com/libguestfs/libguestfs There is an introduction/overview of the source here: http://libguestfs.org/guestfs-hacking.1.html


Heh, libguestfs helped me out a couple of years ago.. Never realised it was OCaml. Nice! (Noobing myself through OCaml & F# right now, so this will be an interesting reference to study :) )


It's a mix. The library is C, pure and simple. But we use OCaml to generate a lot of that C - which I think is a very under-used but powerful technique giving you the best of both worlds. We also use OCaml to generate programming language bindings to the C API, so that when a new API is added you simply have to recompile and the API is available in every supported language (about a dozen so far).

However the main use of OCaml is in the higher-level utilities, for example virt-v2v (https://github.com/libguestfs/libguestfs/tree/master/v2v) and virt-builder (https://github.com/libguestfs/libguestfs/tree/master/builder) are substantial OCaml programs.


Indeed, yes, having a quick look at the repo I now see the point you make. Thank you for the pointers to the H/L utils - therein lies more interest :)


Scala's JVM support can get rough. Many a developer has gone down the road of finding a library, like a DB driver, and using it in Scala. This works fine until you deadlock your production system because your DB driver blocks and your execution context runs out of threads.

So yes, while you can use Java libraries you often have to jump through a lot of hoops.


Scala makes it really easy to call a blocking function using a future with a configurable execution context. I don't see why one would run out of threads. Currently we use the Typesafe supported Slick library but the mechanism is the same.

Maybe I'm missing your point but as a professional Scala programmer I've never run into a problem with Jvm or Java integration


I've done it naively with a mongo driver. Just wrapping it in a future doesn't cut it, as you said you have to configure a separate execution context and even that context is finite. Or use a blocking future which has other implications.

It's more a point against the "strictly better than Java, use the bits you want as you go" point in the article. You have to understand Scala's execution model and make sure you are mixing the paradigms properly, because it's full of leaky abstractions.
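The usual remedy for the thread-starvation problem described above can be sketched like this (a minimal illustration; `blockingQuery` is a hypothetical stand-in for a blocking driver call, and the pool size is arbitrary): run blocking calls on a dedicated, bounded execution context and mark them with `blocking`, so the default fork-join pool is never exhausted.

```scala
import java.util.concurrent.Executors
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future, blocking}

object BlockingDemo {
  // Hypothetical blocking driver call (stands in for a JDBC/Mongo query)
  def blockingQuery(): Int = { Thread.sleep(50); 42 }

  // A dedicated, bounded pool for blocking work, kept separate from
  // the default ExecutionContext that runs the async parts of the app
  val blockingEc: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(8))

  def main(args: Array[String]): Unit = {
    // `blocking` hints to the pool that this task will block a thread
    val f = Future(blocking(blockingQuery()))(blockingEc)
    println(Await.result(f, 5.seconds))
  }
}
```

Even this sketch shows the point being made: the abstraction is leaky — you have to know which calls block and route them explicitly.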


It's also a real pain to call Scala code from Java, which makes blending Java and Scala code in a project, or adding Scala code to an existing Java project, much less attractive.


Knowing Java won't help you much in learning Scala. It's the functional programming parts that will trip you up if you haven't done that before, and this is the most valuable part to learn. I use Scala every day, and it was easy to learn; but I suspect that was because I did a lot of functional programming before that.


This is true, I was from an OO background and Scala was my first functional language. I had to put it aside for a while and learn Haskell to grok the functional side of Scala. The ML world can be a little alien if you're new to it, and I found it useful to learn from a purely functional language first and then go back to Scala with a better understanding of how it fits OO and ML together.


But isn't one of the nice parts about Scala that you can use it strictly for OO programming, strictly for FP, or both?


Yes, that's definitely one of the great things about Scala.

If you come from an OO background, I recommend to avoid using Scala's functional features at the start. Just treat Scala as a nice, conventional OO language until you are comfortable with the language. Then, slowly use some of the more advanced features. Case classes and pattern matching are easy and powerful next steps.
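For instance, the "easy and powerful next steps" mentioned above might look like this minimal sketch (the `Shape` types are illustrative, not from the thread):

```scala
object ShapeDemo {
  // `sealed` means all subtypes live in this file, so the compiler
  // can warn about non-exhaustive pattern matches
  sealed trait Shape
  final case class Circle(r: Double) extends Shape
  final case class Rect(w: Double, h: Double) extends Shape

  // Pattern matching destructures each case class by its fields
  def area(s: Shape): Double = s match {
    case Circle(r)  => math.Pi * r * r
    case Rect(w, h) => w * h
  }

  def main(args: Array[String]): Unit = {
    assert(area(Rect(3, 4)) == 12.0)
    println(area(Circle(1.0)))
  }
}
```

Case classes give you equality, copying and destructuring for free, which is a gentle introduction to the algebraic-data-type style of the FP world.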

Enjoy the ride!


I had zero background in Java, and I'm a very happy Scala developer for about two years now (still at the beginner level, though).

I'd say not knowing Java the language isn't a big deal. Also not a big deal, but certainly a point to consider, is Java the ecosystem. I can imagine Java pros bringing their knowledge regarding the JVM, threads, and other stuff, that's certainly a plus.


I would recommend Elixir or Clojure over Scala. In my opinion, Scala and F# suffer from supporting multiple programming paradigms, which means that people tend to fall back on the procedural or object-oriented programming they know, rather than learning how to do things the right way, functionally speaking. That has been my experience with them, anyway.


Answering your second question:

Since you have experience with a statically typed language, you'll be fine with little to no Java knowledge.


The huge difference between Haskell and the other two (that isn't in libraries) is that Haskell has controlled side effects, whereas side effects are uncontrolled in the other two (and virtually all other languages...)

That, at least to me, is way more interesting/important than libraries, which can always be ported or otherwise interfaced with.
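The idea of controlled side effects can be sketched even in Scala with a toy IO type (a hand-rolled illustration, not Haskell's actual IO or any library): a value of type `IO[A]` is a *description* of an effect, and nothing happens until it is explicitly run.

```scala
object IODemo {
  // A minimal IO type: wraps a deferred computation instead of running it
  final case class IO[A](unsafeRun: () => A) {
    def map[B](f: A => B): IO[B]         = IO(() => f(unsafeRun()))
    def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(unsafeRun()).unsafeRun())
  }

  // Building this value performs no effect; it's just a description
  val program: IO[Unit] =
    IO(() => "world").flatMap(w => IO(() => println(s"hello, $w")))

  def main(args: Array[String]): Unit = {
    // Only here does anything actually happen
    program.unsafeRun()
  }
}
```

In Haskell the type system enforces this separation everywhere; in Scala it's merely a convention you can opt into.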


Some would argue that not knowing Java would be a benefit to learning Scala. It depends, though; some Scala codebases are very Java-y and others are very un-Java-y.


What's up with functional languages nowadays? They have existed since the dawn of humanity, but they are getting very trendy lately. Is it just my perception? Is it an HN thing? (I've been a reader for just some months)


A quick idea: as systems have become more complex, needing more threading and such to perform at scale, I think many developers are reaching for functional programming techniques to make the code easier to reason about.

I'm not a pure functionalist, but I recognize some nice things: immutable by default, composition instead of inheritance, chained operations/monads.
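Those "nice things" can be shown in a few lines of Scala (a trivial illustrative sketch):

```scala
object ChainDemo {
  // Immutable by default: `val` plus an immutable List
  val xs = List(1, 2, 3, 4)

  // Chained operations instead of a mutating loop:
  // keep the evens, scale them, then sum
  val result = xs.filter(_ % 2 == 0).map(_ * 10).sum

  def main(args: Array[String]): Unit = {
    assert(result == 60) // (2 + 4) * 10
    println(result)
  }
}
```

No variable is ever reassigned, and each step is a self-contained transformation — the properties that make this style easy to reason about.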

I've personally become a huge fan of Rust; while it isn't a pure functional language, it definitely inherits a lot from them, but it also doesn't force FP on you.


2 things IMO

- There's a long running pendulum that swings back and forth from FP. We're currently in an FP cycle.

- Current mainstream computing puts a heavier emphasis on concurrency and distributed computing. FP's focus on immutability and referential transparency makes it easier to reason about these.


> There's a long running pendulum that swings back and forth from FP. We're currently in an FP cycle.

I think this is also happening because functional programming can really give us advantages, but it is very hard to understand fully (I'm looking at you, monads). We see how it can reduce complexity and concurrency problems, or how we can code faster with it. But when we try to use it in a production environment, we give up because it's too complex. So you have this back and forth in the field, between every wow moment where FP seems all cool, and then it being too hard to "get" or to use in production.

Now I think we are getting there, slowly. Trend after trend, more and more mainstream languages are really incorporating FP principles, and not just as features. Plus many companies are pushing FP languages forward. Swift is a significant push that should democratize FP principles to the "masses" rather than a more "academic" crowd.


They're not, really.

A couple of concepts have become trendy: composition and immutability.

The former is certainly a staple of FP but the latter, not as much (a lot of languages considered functional support both mutability and immutability).


The usual explanation is that they're good for safe parallelism on multicores. Which are everywhere now - even phones.


Pure FP languages are still pretty niche, but what has happened is that big mainstream languages like C# and Java have adopted more functional features: higher-order functions, lambda syntax, an increased focus on immutability, and so on. So functional languages are clearly trendy among language designers, from where the inspiration trickles down to the mainstream.


My guess is that it is a result of processors getting faster, memory getting bigger, and compilers getting better (and GUIs more complex). Functional languages do have some performance penalty compared to highly optimised code directly manipulating memory, but for most usages such performance is not needed. Thus programmers tend to trade some of it for clarity of code and safety.


> since the dawn of humanity

Since near the dawn of computing is more like it :), but I get what you mean. There was this early language by John Backus (yes, of BNF fame) called FP (sic), though Lisp may have been the first one.


Concepts that originated in functional languages have been slowly percolating into everyday use via dynamic languages: things like first-class functions, map/reduce, closures, lambdas, etc. Bouncing these building blocks around in your head for 10 years or so makes the rest of the FP concepts look pretty natural and a logical next step, even if you're just Joe Programmer in the trenches.


It's been a trend for some years now (not just recently), due to reasons others here have said, such as multi-core, immutability, easier to reason about, etc. E.g. a friend of mine was in a startup that was using Clojure in production over 5-6 years ago.


Reasonability. In an age where billion dollar companies are gutted by 16yo hackers on a monthly basis, it's worth the effort to build strong, secure systems that can be formally reasoned about.


It's disingenuous to imply that functional code can't be as insane as procedural. Also, everything is strong and secure until vulnerabilities are found (Heartbleed, for example). It's in the hackers' best interest to never let these vulnerabilities become known. And since there are many fewer eyes on less popular languages, vulnerabilities will take more time to be discovered.


The combination of stronger type systems (especially dependent typing), less error-prone design (no manual memory management, no mutation, no global state, no loops/off-by-one errors), separated side effects (crashing during computation won't break things, less places for outside interference/external failure, etc.), and better error handling (no null, usually no exceptions meaning you have to encode failure into the return value itself without losing type information) solves many common bugs in imperative code.
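The "encode failure into the return value" point can be sketched in a few lines of Scala (the `parsePort` function is a made-up example, assuming Scala 2.13 for `toIntOption`):

```scala
object EitherDemo {
  // Failure is part of the return type: no null, no thrown exception,
  // and the error carries information instead of losing it
  def parsePort(s: String): Either[String, Int] =
    s.toIntOption match {
      case Some(n) if 0 <= n && n <= 65535 => Right(n)
      case Some(n)                         => Left(s"out of range: $n")
      case None                            => Left(s"not a number: $s")
    }

  def main(args: Array[String]): Unit = {
    assert(parsePort("8080") == Right(8080))
    assert(parsePort("banana").isLeft)
    println(parsePort("70000"))
  }
}
```

The caller is forced by the type checker to handle both cases, which is exactly how this style eliminates a whole class of null/exception bugs.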


Type systems - already available in the most popular languages

no memory management - c++, go, java come to mind

No mutation - has a side effect which you probably will never mention. But it will become important as the internet of things gets smaller

No global state - in what environments would that ever be an encouraged paradigm? Javascript?

No loops/off by one - you still need loops, they are just recursive. The difference is you need to jump around to figure out what the hell is going on instead of reading it top to bottom

Better error handling - nothing you just said seems better to me

Sure, there are many things I like about functional languages. I like how you can create a tree to reason about your code to the point it looks like a flowchart. It has forced me to question my own coding style, how I compartmentalize, and where I can run things in parallel. How infinite lists, streams and arrays can all be considered the same interface. But I think there is room for growth.

- forces me to the bottom of a document to find out where it starts

- uses obscure language in order to avoid OOP

- cannot use symbols to represent the start and end of a typed object constructor with a single string argument (regex, JSX, queries, etc.)
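On the loops point above: a tail-recursive function in Scala reads much like a loop, with the accumulator playing the role of the mutable counter (a minimal sketch):

```scala
import scala.annotation.tailrec

object SumDemo {
  // @tailrec makes the compiler verify the call is in tail position,
  // so this compiles down to a loop with constant stack space
  @tailrec
  def sum(xs: List[Int], acc: Int = 0): Int = xs match {
    case Nil    => acc
    case h :: t => sum(t, acc + h)
  }

  def main(args: Array[String]): Unit = {
    assert(sum(List(1, 2, 3, 4)) == 10)
    println(sum((1 to 100).toList)) // 5050
  }
}
```

Whether this reads better top-to-bottom than a `for` loop is, as the comment above suggests, a matter of taste and habit.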


I don't believe GP meant 'type systems' the way you mean them. He's talking about the more powerful concepts such as dependent typing, not just the existence of types.

There are very few popular languages which would fit this category IMO, depending on how one defines powerful type system and popular. Scala is the only candidate that I can think of for my own definitions of those two concepts.


>Type systems - already available in the most popular languages

This implies you haven't actually spent time with a strongly typed functional language like Haskell. Once you have, it's hard to continue thinking of Java or C++ as having "type systems".


I personally would really hesitate before building a business around a language that will be hard to hire for. Yes, you'll be more productive in the short term, but a business is more than just code, and you will need to wear those hats too before you'll have enough understanding and resources to hire them out.

At this point, I'm fairly certain your first hire should be someone to take the engineering load off of you. Engineering will be the only thing you really, truly understand about your business and so it will be the only thing you'll be able to effectively manage others doing.


Jane Street Capital's Yaron Minsky once said that, contrary to popular belief, hiring OCaml developers was easier because the signal-to-noise ratio in the OCaml community is so much better than in other, more approachable languages. He would send a job post to the OCaml mailing list, get 15 responses, interview 10 people, bring five onsite and ultimately hire three new people.

I don't have direct experience with this, but it makes sense in theory. PHP is an easy language to find developers for, but it's incredibly annoying to figure out how competent the developers you're interviewing in that ecosystem will be. But languages like Haskell and Scala have a (perceived or real) barrier to entry, so you typically have a higher median ability among developers in those languages.


I've got managerial experience hiring (on both ends of the spectrum of talent) and engineering experience in computational finance at a prop firm similar to Jane Street. He has a fast-hire ability and high signal-to-noise for sure, but he works at a prop firm. His hiring practices are a market anomaly simply because he can pay effectively whatever a competent developer wants. Jane Street can basically throw money at the problem. What you have at a 'normal' company is a really difficult time finding someone who can tell you when to use a lens and when to use a zipper in Haskell and who is actively looking for employment.

Anecdotal, but I casually attend Boston Haskell and I can't remember the last time someone was out of work other than 'funemployment' (e.g. a startup goes bust and they have accrued more than enough money to just sit around, stretch and work on pet projects). The second they 'need' a job, they'll casually mention it, and I think ~3 out of the ~7 times this has happened someone in the crowd went "oh Bill, yeah, come in to <his office> Monday and meet our CTO". No recruiters, no wait. The CTO trusts Joe has heard Todd ask enough insightful questions consistently enough that he's going to be a good hire. Tuesday: 9AM HR, 10AM pull the repo down, set up stack, start poking around Haddock and start closing the easy tickets. Good talent (10 of the 15 applicants are quality) might respond to Minsky, but the others will be picked up almost invariably in well under two weeks.


The benefit there doesn't seem to be intrinsic qualities of your language, but the benefits of a small community of like-minded individuals.


> he can pay effectively whatever a competent developer wants

I think any niche language person would be delighted to be paid equivalent to Java or C# rates for their city or industry, to hack Haskell or OCaml or Lisp or whatever.


OT, but how useful would the Boston Haskell meetups be for an extreme noob? Will the talks be so far over my head that I should learn me a Haskell a bit prior to attending, or would it make for a good way to start absorbing some knowledge?


I haven't gone to the Boston meetup specifically, but my experience with other Haskell meetups is that the level of talks varies significantly, both in how much Haskell you have to know and how much general math/CS knowledge is expected.

If you're a beginner along both fronts (which is great: Haskell is a wonderful place to start), you might need to look at the talk topic ahead of time and choose ones which seem the most accessible.

If you're just a beginner in Haskell but comfortable with general math and CS topics, chances are that many talks won't be too Haskell-specific and you'll learn something interesting even if you don't get all the Haskell details. I've attended talks about things like succinct data structures, concurrency models, FFT algorithms... etc. All of these used Haskell and Haskell concepts, but weren't just about Haskell; you would have gotten something out of them even if you didn't know much about Haskell specifically.

I don't think most of the talks will be great for learning Haskell (unless that's their explicit aim) so if you actually want to learn the language you'll also need to do some reading on your own, but going to the meetups will still be valuable.


Thanks for adding more color to this!


> Jane Street Capital's Yaron Minsky once said that contrary to popular belief hiring for OCaml developers was easier

No one will ever admit publicly they are having a hard time hiring because of poor technical choices, that would be suicide.


Cedric, do you have inside information of Jane Streets hiring difficulties or is it just speculation?


> you'll be more productive in the short-term

I'm not so sure. One of the advantages of these more mainstream languages is the combination of an incredibly large selection of libraries and many people having already made the mistake you will make. This means when you run into an issue, there are already several posts on stack overflow about it, and you move on without issue.


The really interesting problems have the property that there is no library written for them yet. Granted, you need some kind of interaction with the outside world, but this is easy to handle in another language layer.

The "existing libraries" are only going to be efficient if your entrepreneurship is based around rehashing or recombining existing tooling. But far from all projects have that property.

Also, one of the really intriguing things about splitting your service into smaller micro-service based architectures is that you can mix and match different languages, which gives you the ability to pick a language that suits each solution.

A good example is the TensorFlow system by Google. Model building usually happens in Python, but you then load that model into a thin C++ layer which serves the requests on the network as a standalone application. The same approach can be taken with some of the more advanced languages.

Even better, as a startup, the least of your worries straight away is horizontal scaling. Modern machines are so fast you can run an incredible amount of concurrency on a single large node, so why bother too much with scaling? This helps the micro-service model even more, as there is less reason to worry about the communication overhead.


> The really interesting problems have the property that there is no library written for them yet.

But the reverse of that is precisely the point. If all the uninteresting problems already have nicely packaged solutions, you get to focus on the interesting problem instead. The question then becomes: are you better off solving the interesting problems in a better language at the cost of having to solve more of the uninteresting ones?


> Modern machines are so fast you can run an incredible amount of concurrency on a single large node, so why bother too much with scaling.

Because that node might die any second.


"I personally would really hesitate before picking a language that will be hard to hire for to build a business around"

Unfortunately, this is HN so we know you're wrong:

http://www.paulgraham.com/avg.html


Not everyone here accepts PG articles as infallible scripture, and many people have lots of experience to back up their disagreement.

(I agree with PG on this issue.)


After walking in the land of functions for a long time I wonder if there are advanced advanced languages.


Well, there's:

- Dependently typed languages (ATS, Agda, Idris)¹ should be fairly familiar if you're a Haskell veteran.

- Array languages (APL, J, K, and more obscure ones like Nial) are pretty enlightening if you're a functional programmer (at least they are for me). Most of these trace to Ken Iverson and his Notation as a Tool of Thought. They are a bit brain-bending at first, largely because of the density, but they're fun to use and the density makes comprehension easier after a while.

- Function-level languages (FL, FPr, J) – a somewhat obscure and very advanced sister to functional programming. If you're familiar with Haskell Arrows, there are many parallels. They are, essentially, a more convenient point-free style. Most (all?) of these trace to John Backus (of Fortran and Backus-Naur Form fame) and his 1977 Turing Award lecture Can Programming Be Liberated From the von Neumann Style?

J (http://jsoftware.com/) combines array and function-level programming, and IMO is a very good language to learn to expand your horizons if you're a veteran functional programmer.

--

¹ Coq kinda, but it's more of a theorem prover than a programming language. Agda sort of fits that too.


I had GNU APL installed (along with the emacs mode :). APL and its siblings are amazing: tiny, expressive, light.. lots to love. Just nothing as mind-bending as, say, lambda calculus.

Didn't know FL/FPr were arrow-like. Didn't even know they were implemented .. I thought Backus quit because of IO.

Lots of people are suggesting the Idris/Agda road .. I guess I have my answer.


As far as FL/FP/FPr go, http://www.call-with-current-continuation.org/fp/ is the most mature and complete implementation I'm aware of.

There's #proglangdesign on freenode where a bunch of us have been on an array language/function-level streak lately. Some people there might be aware of more.


Brilliant, I didn't know about the ccc impl nor the IRC chan. I know some folks who might be interested.


There are theorem provers like Coq. I'm pretty comfortable with Haskell, but concepts like the difference between propositions and booleans are enlightening to my Haskell the same way Haskell can be enlightening to my use of other languages.


Cool, that's the kind of derivative I was looking for.


Have you explored languages with dependent type systems such as Agda, Idris, Coq, or Twelf?

You could also explore languages from an academic perspective. Benjamin Pierce has written several high quality texts about topics in programming languages.


Not yet (watching some YouTube talks and introductory classes in college not counting). But even then I'm not sure they're more 'advanced' than Haskell. Do they implement the full lambda cube? Even then it feels like constructive proofs are a 'normal' step after recursive functions. I'm curious about a completely new way of thinking and modeling, something that would allow full graph exploration and multistaged/layered partial evaluation easily... from abstract concepts down to signals and fixed-sized electronic compositions. I don't know what precisely. I don't want to say Categories because it's overloaded these days and I only know it through the 'monad tutorial' end, but I liked the idea of adjoint functors to jump back and forth between concepts.

ps: Esterel/Lustre synchronous programming comes to mind too, or Petri nets ..


You might enjoy attending programming language events such as OPLSS (all of the old lectures are online and free to watch) or ICFP. You get to learn about lots of new research in programming languages, and often, you'll have an opportunity to explore programming languages from different perspectives, including logic and category theory.


Agreed with siblings: Agda, Idris and the like are one option.

You can try Curry or Mercury for Logic/Functional cross.

Shen for something with Turing-complete type-system (by design, as opposed to C++ templates).

Do you know any array language? Like J or K (or APL, though personally I never dared). I can recommend J as a better documented and easier to get into language.

Have you seen Avail (http://availlang.org)? It looks very funny at first, but it is incredibly expressive.

What about Factor and Cat/Kitten? Stack-based languages, the latter additionally statically typed.


The language itself is not, but I would argue that the Wolfram Language has an advanced, advanced standard library.


I'm wary of Wolfram Lang. I'd need to see field reports, because I think what was demoed, as nice as it looks, is only the working subset. I feel a huge ontological mess underneath.


what "advanced" even mean? All those languages, they don't do something that others are not capable of(not mention that all the jvm ones just produce bytecode over the same vm)...different development principles is not a reason to call a language advanced.


Haskell and ML at least have advanced type systems. Since types are erased during compilation, you can't reason about type systems in terms of their impact on the generated code. An example of a simple type system is C with its structs.

Haskell also has lazy evaluation, which is very difficult to do in C. Obviously you could encode your own trampolines, but at that point you might as well be coding in a Turing tar pit language "where anything is possible but nothing of interest is practical".

Pattern-matching and ADTs are also usually difficult to do in other languages.

Most of the benefits of something like Haskell only show up on larger projects, so it's not meaningful to point to specific language features as a point against it. I can do just about anything in C. The type systems you probably can't do; you'd have to embed a language and that doesn't count.
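For readers who haven't seen ADTs and pattern matching, here's a rough sketch of the idea using TypeScript's discriminated unions (an approximation of ML-style sum types; the names here are purely illustrative):

```typescript
// A small algebraic data type: a Shape is exactly one of these variants.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

// "Pattern matching" via a discriminated switch; the compiler knows every
// variant and checks that each one is handled, much like an ML/Haskell match.
function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius * s.radius;
    case "rect":
      return s.width * s.height;
  }
}
```

A Haskell or ML compiler goes much further, but even this approximation shows the key property: the set of cases is closed, and an unhandled variant is a compile-time error rather than a runtime surprise.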


* A lot of mainstream languages are syntactic skins on top of substantially the same semantics, and skill in one is easily transferable. A person skilled in C should be able to understand a Java program, but may find similarly high-quality, idiomatic Haskell totally inscrutable.

* The number and complexity of concepts with which you should be familiar are much higher. To tread water in most imperative languages, you need to know if/else, while, for, assignment, function call, and function definition. To tread water in these languages, you will generally need more.

* The idioms provided by these languages provide for and encourage code that is more expressive (says more with fewer characters) but may be more difficult to think about and write.

* All languages are equivalent in the sense that you can simulate them all by each other, i.e. by writing a compiler from X programs to lambda calculus expressions, and a lambda calculus interpreter in Y. This result does not imply that you can always express the same concepts in a reasonable, space-efficient manner.

I do have some nits to pick with the list, though. I would consider C++ advanced, in that there is a lot you need to know and think about to work with it.

Scheme itself is very simple: the entirety of its syntax and semantics can be learned by a layman in an hour, and by the end of a semester first-year computer science students can write their own Scheme interpreters in Scheme. Its complexity is an emergent property of the kinds of abstractions it supports, not inherent to the size and complexity of its design like Haskell or Scala.


I'm really surprised that "Learn you a Haskell" is not in the Haskell resources list.


Not to knock LYAH, but I personally think it stops at a place that would leave newcomers a little ill-equipped to work on Haskell projects, considering there are concepts (like monad transformers) that nearly every modern Haskell project makes some use of which aren't covered. Pedagogically, I think it's great that it takes some of the intimidation out of learning something that could otherwise seem very alien and academic.

These days, I recommend the book "Haskell Programming From First Principles" [1]. It's still a work in progress, but it's already probably the best Haskell book available right now. Just be aware that it's a bit lengthy (1000+ pages) and that many of the chapters aren't necessary for learning Haskell, but may make the going easier.

I also highly recommend using Stephen Diehl's "What I Wish I Knew When Learning Haskell"[2] which covers a huge amount of pragmatic advice on style, library use, GHC extension use, etc.

[1] http://haskellbook.com

[2] http://dev.stephendiehl.com/hask/


I've seen people recommending this book more recently: http://haskellbook.com/

I'm not sure I like LYAH as a practical resource: It's silly and might put people at ease, but I wouldn't feel comfortable writing a database or parsing tool or compiler after reading it.

Real World Haskell is better for these, but is a little dated by now.


LYAH is really good for getting you over the first big humps of learning Haskell, if you've never learned a language like Haskell before. I still really like its explanation of monadic values. It builds up to them in a nice way. If you can pull open a GHCi session while reading that portion of the book and play around in the interpreter while reading it, you'll learn some good stuff. (As with many programming books, it may not seem logical, but I recommend typing out all the examples yourself. It works.)

It does not itself teach you much practical stuff. But if you learn what is in LYAH, you'll be pretty close to what you need to pick up most of the practical libraries in Haskell and understand them at the API level. Once you get over that particular difference of Haskell's libraries, you can start working with them like you would any other language's libraries, incrementally folding in the "Haskell special sauce" as you go. That way you are at least building things, instead of being stuck adding numbers together, filtering lists, and the handful of other things you can do without files, network, or anything else that needs an IO-based library.


I agree, I was immediately turned off by Learn You a Haskell; it felt way too dense to me. This looks like a promising and more approachable (in my opinion) book:

https://www.manning.com/books/learn-haskell


Steele developed the early Scheme language together with Gerald Jay Sussman. Common Lisp wasn't designed by Steele alone, but by a larger group of people, including Steele, Weinreb, Fahlman, Gabriel and Moon.


Any reason F# is not in the list?


Writing/deploying F# on a non-Windows machine is a huge pain. Been there, won't do that again.


It's not. I actively refuse to use Windows and have never had any trouble installing or using F#. I've only really used it in the last 1.5 years or so, so maybe what you claim was true in the past, but it is not true anymore. There are other reasons to not like it though. For me those would be a lack of HKT and functors, and its primitive compiler optimization due to relying heavily on a VM built for a very different language.


This was > ~3 years ago, so maybe things have gotten better lately.


Though, candidly, OCaml has historically been a pain to deploy on Windows outside of some very Cygwin specific setups. The OCaml group has made a lot of improvement as of late, but so has F# on Linux.


There have been many discussions of this post: https://hn.algolia.com/?query=Advanced%20programming%20langu....

I fear that the current one shows signs of HN's inevitable reversion to the mean.


Help us break away : nothing is inevitable, the future is unwritten!


If you are interested in picking up or dabbling in SML, I recommend checking out the /r/sml wiki [0]. I'll also single out the #sml channel on Freenode as a great place to get help if you get stuck.

[0] https://www.reddit.com/r/sml/wiki/index


> It's untyped, which makes it ideal for web-based programming and rapid prototyping. Given its Lisp heritage, Scheme is a natural fit for artificial intelligence.

Why does being untyped and having a Lisp heritage make Scheme suitable for these three tasks?


Web based is because JSON is always easier with dynamic types.

Rapid prototyping is because it requires less explicit up front design due to dynamic types.

Artificial Intelligence is because metaprogramming is easier in homoiconic languages.


I just don't buy the "dynamic types" lets you iterate faster argument. You are implicitly using types, and how often does that type change such that all the code you've written doesn't need to be updated anyways? And without the benefit of a compiler to tell you what you need to fix. A language like Go is fast to iterate, trivial to parse and serialize JSON, and simple enough that the tools let me make sweeping changes quite easily.

For a while now I have felt that the reason for the creation of languages like Python and Ruby is simply a response to the pain of one such as C++. Now that we have modern languages like Rust and Go, what is there to be gained from sacrificing a type system?


Iterate faster != Rapid prototyping.

The type system definitely helps with long term maintainability (and therefore fast iteration) but when your goal is to whip out a Proof of Concept for a greenfield project Ruby wins for me hands down. (Provided it isn't overly complex, defined as "can be completed in an hour or two").


Can you explain how types slow you down for whipping up a proof of concept?


Hitting an API endpoint, pulling data out of a database. Areas that already have an implicit schema require you to explicitly redefine that schema within your application.

I say this not just from personal preference, but I've proctored many timed coding challenges that were language agnostic. The challenge involved consuming data from a handful of API endpoints and compiling a response. Most statically typed solutions took twice the time of a dynamically typed one. We catered the time limit to the static case and it was a simple pass fail so no bonus points for doing it faster, but there was a strong advantage to the dynamic languages.

Note: This is very much alleviated with typed interchange formats like protobuf.


That coding challenge sounds like a far cry from developing even a prototype application. What was the skill level of the people doing it? Did they have experience in both dynamically and statically typed languages? What were the languages, Java or something modern?

Edit: I don't mean this to be snarky, I just feel that it is not a very good experiment to draw conclusions from.


We prepped them ahead of time telling them that they would need to produce an HTTP API that accepted requests and returned a response compiled from multiple HTTP endpoints. It's basically an orchestration layer, and I've done similar things in production many times. ("If we exposed X to the clients, we could allow action Y. Would action Y contribute to our goal of Z?" Instead of building a highly performant correct solution, we would do an on the fly derivation of the data needed, launch it for 5% and see the experiment results.)

It was for all candidates, meant to test ability to understand a problem and prototype a solution quickly.

Edit: most candidates did Java/Scala/Python/Ruby.


I think it's not so much a fundamental issue with "strongly typed" languages as it is the way "dynamic" languages are typically designed. For instance, Ruby doesn't care if you add an integral type to a floating point type; it'll give you something back. A strongly typed language will typically complain about this (the alternative is e.g. an onerously huge constellation of custom typeclasses/traits/whatever to support the full range of numeric intermingling), and Ruby could too, but its designers have made the decision that arithmetic operations should silently convert one argument to the type of the other, and return the answer as an instance of that type.

If I'm hacking together a quick and dirty prototype, I might not even care what type I get back from that operation as long as it's in the ballpark. Using Ruby means I just don't have to worry about it for longer than it takes me to type `a + b`. If I used Haskell, or Rust, or whatever, I have to be explicit. That's additional work, however minimal, and in my experience it really does add up.

Of course, if I'm writing code that's going to be in production, this sort of thing is highly irresponsible and using a strongly typed language will help me to avoid numerical errors that would not even be runtime errors in e.g. Ruby; they'd be silently swallowed instead. That's extremely valuable. And if development of my prototype is going to span multiple days or more, strong types and compile-time type checking are going to reduce my mental workload since I don't have to remember how everything fits together; it's explicitly annotated throughout the code and checked every time I build.


> because JSON is always easier with dynamic types.

I think you mean JSON where the schema isn't defined. If you have full control over the JSON you're consuming, you control the schema. Typed languages aren't any worse than dynamic languages in this case.

> Rapid prototyping is because it requires less explicit up front design due to dynamic types.

Many types languages have a REPL environment specifically for this reason. In my experience, it's not any slower.

> Artificial Intelligence is because metaprogramming is easier in homoiconic languages.

I have no experience here, thus I will assume you are correct.


Complex, recursive JSON schemas are much more difficult even when the schema is known in most typed languages. Not impossible but I've definitely found myself changing my schema design because of friction in parsing JSON in Java.


   Artificial Intelligence is because 
   metaprogramming is easier in homoiconic 
   languages.
Homoiconicity is something you can add to any language, dynamically or statically typed, simple or complex; it doesn't matter. It used to be believed that only syntactically simple languages like Lisp are suitable for homoiconic extension, but this myth was destroyed by Walid Taha and his development of the MetaML family of languages.

For an overview of how to add homoiconicity to languages, see https://arxiv.org/abs/1602.06568


While I'm sure it's an interesting exercise, using a language that already had this property without any additional effort, still seems like the best path for most people trying to produce practical results.


I'm not sure I can agree. Every language that is being heavily used eventually acquires meta-programming extensions of some sort or other.


I'm really intrigued by your last point that "Artificial Intelligence is because metaprogramming is easier in homoiconic languages."

Can you elaborate on this? What is it about homoiconic languages that makes them more suitable for metaprogramming and AI in general?

Thanks


Homoiconic languages are ones where the program itself is a valid data type. Lisps (like scheme) are great for this because your program is just a list. You can insert and remove code using the same tools you would for manipulating a list generally. It makes metaprogramming much simpler.

The relation to AI is an assumption that AI and metaprogramming are connected. This is probably debatable, but it makes sense to me (and others) that AI development would benefit from being able to easily generate code and modify itself, much like you or I would learn a new skill.
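To make the code-as-data point concrete, here's a small sketch in TypeScript, with nested arrays standing in for Lisp's S-expressions (the representation and names are my own, purely for illustration):

```typescript
// An "S-expression": either a number or an operator applied to two sub-expressions.
type Expr = number | [string, Expr, Expr];

// A tiny evaluator: the program is just a data structure we can walk.
function evaluate(e: Expr): number {
  if (typeof e === "number") return e;
  const [op, a, b] = e;
  if (op === "+") return evaluate(a) + evaluate(b);
  if (op === "*") return evaluate(a) * evaluate(b);
  throw new Error("unknown operator: " + op);
}

// Because code is data, "metaprogramming" is ordinary list manipulation:
// e.g. rewrite every + into * before evaluating.
function rewrite(e: Expr): Expr {
  if (typeof e === "number") return e;
  const [op, a, b] = e;
  return [op === "+" ? "*" : op, rewrite(a), rewrite(b)];
}

const program: Expr = ["+", 2, ["+", 3, 4]];
```

In a real Lisp the reader, quoting, and `eval` are built in, so this kind of program-rewriting is idiomatic rather than a contrived exercise.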


I don't think this is actually true for most of what people do with AI right now. Machine learning these days is something where you want strong, native machine level support for numeric programming on matrices. Possibly with GPU support. So a lisp based language wouldn't really help you much in the current environment.

If what you want is a system that rewrites itself, then you definitely want homoiconicity. It also helps if everything is one big expression.

Let's say you want to write an evolutionary algorithm that rearranges fragments of its code and runs it. More successful functions make it to the new generation. You want LISP for that.

Realistically, you could probably do this on any abstract syntax tree. But the problem is that unless your language is homoiconic, what you get back might not actually be valid in your language. So, homoiconicity gives you bidirectional support for rearranging the AST and still getting something you can read.

It's also just a lot easier to write something that looks at the code and does computation on it when that code is also data.

But, like I said, I'm not sure many people are actually doing this these days. It was popular back in the day. It's not clear that it's the right way to do things.


> [...] you want strong, native machine level support for numeric programming on matrices. Possibly with GPU support.

Scheme is a popular extension language. You could use it the same way Torch uses Lua.


Thanks, sure, this makes sense given the origins of LISP. Cheers.


Why is JSON easier with dynamic types? In my experience, there's always a schema of some kind.


Your first two points are also met by optionally typed languages.


I'd argue that optionally typed languages are dynamically typed. TypeScript at runtime is dynamically typed, same with Erlang. Perhaps not always true, but usually the type system provides static analysis, which is helpful but not sufficient to provide guarantees.


Dynamic typing and static typing can coexist in the same language. C# is generally statically typed, but the more recent versions support the dynamic keyword - at which point you're statically typing things to be dynamically typed. Typescript of course goes the other way - adding static typing features to otherwise dynamically typed Javascript.

...the first thing I do when encountering JSON in TypeScript these days is to add type definitions to describe the schema. If I have a hard time describing the schema to TypeScript, I'm going to have an even worse time trying to keep it straight myself.
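For instance, describing a JSON payload to TypeScript is usually just an interface declaration (the payload shape here is made up):

```typescript
// A hypothetical API payload, described once as an interface.
interface User {
  id: number;
  name: string;
  tags: string[];
}

// JSON.parse returns `any`; the annotation documents (but does not verify)
// the expected schema -- runtime validation of untrusted input is a separate step.
const raw = '{"id": 1, "name": "ada", "tags": ["admin"]}';
const user: User = JSON.parse(raw);
```

If the interface gets hard to write, that's usually a sign the schema itself is hard to keep straight, which is the point above.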


Oh yeah, because those are a dime a dozen and available almost everywhere! /s


> web-based programming and rapid prototyping

Any dynamically-typed scripting language is good for that sort of thing. Scheme in particular is very productive because of its meta-programming capabilities and REPL-based development.

> artificial intelligence

"Scheme was created during the 1970s at the MIT AI Lab [...]"

https://en.wikipedia.org/wiki/Scheme_(programming_language)


And of course, it is not "untyped". It is dynamically typed.


PL theorists generally talk about whether or not syntactic terms (as opposed to runtime values) have types. Scheme is untyped by that definition.


PL theorists - from what I saw, not being one - prefer the term "unityped". It's not true that there are no (static!) types - there is a single, unnamed type which contains all possible values.


PL theorists who know what they are talking about know that this is false, because the actual diverse types of those values can be reasoned about at compile time. Such a "unityped" program is made up of syntax. We can reason about an expression more deeply than just "it returns a value" and we can do so statically. For instance in Lisp we know more about an expression like (cons 1 2) than that it returns a value. We know statically that it returns a cons. From that we can infer that (+ (cons 1 2) 3) is not well-typed.

Type isn't just an implementation system; it's how we can reason about the program, as a mathematical object. The reasoning which informs us that (+ (cons 1 2) 3) is not well-formed without evaluating it, has to do with type.

How we implement values (using a generic "cell" that is type-tagged) is just a convenience. We can get things working without being concerned for up-front type checking. It also represents the philosophy that type is innate to objects (not just a syntactic property of programs).

A piece of compiled code can have a type passed into it which didn't exist when that code was compiled, and sensible things can happen anyway.


It's not false, it's just unspecified. If you hand me Scheme, I can provide it a unitype system without conflict. If you hand me Haskell, I cannot, because the language has explicitly said that uncheckable statements are invalid.

You can of course go further with your static analysis—a language may support many type systems in principle regardless of what the compiler accepts.


It depends on the context. I guess rayiner was referring to the "untyped lambda calculus".

That's not a bad name, as it is self-consistent: there are no type expressions in that language. If you say that it is actually "unityped", that is (just) your (semantic) categorization, looking at it from a higher-level perspective.

Of course, on the other side, whole numbers were just "number"s before fractionals came along, so in the end it may be OK to introduce a higher-level perspective in naming just to differentiate better. :)


PL theorists don't get to own the definition of what "type" means.

Type pervades computing.

A directory is a different type from a file or character device. A JPEG is a different file type from a PNG. You have MIME types in your e-mail. An ICMP packet is a different type from a TCP segment.

Machine language instruction sets have types: pointers, signed and unsigned words of various sizes, floating-point values.

"Type" is a very broadly encompassing word. What all notions of type have in common is that it refers to some representation by whose rules some digital bits are interpreted to have a meaning (in some context which somehow establishes which rules apply).


> PL theorists don't get own the definition of what "type" means.

Every field has its terms of art. "Type" is a term of art in PL theory; PL theorists are entitled to define it how they want. Of course you can use the word however you want, but you can't complain if that non-standard usage results in miscommunication with others.

Besides that, it's very useful to be able to talk about the "types" of syntactic terms as distinct from the "types" of runtime values. You can easily reason about syntactic types. You can, for example, use syntactic types to resolve syntax ambiguities (as in C++) or prove program properties. Runtime "types" are much less useful to PL theorists because it's hard to reason about them.


A PL theorist would note that most of the time when people talk about 'types', they really mean 'classes', and that all of the things you mentioned would be considered the latter. Types, rather, are the elements of your program that correspond directly to logical propositions (which your program proves to be true).


In software engineering, programs are usually not constructed to prove anything; this claim is just the consequence of an equivalence between logic and static typing (Curry-Howard). Programming languages do not make very good logic systems in spite of the isomorphism. They are not designed to be expressive that way. The logic system exhibited by the type system is geared toward proving trivialities about the program: mainly that its run-time translation won't misuse data. Plus it supports certain devices like polymorphism and pattern matching and whatnot.

Now instead of writing a useful program, we can exploit the type system for doing logic in a separate domain, detached from the program. The proof occurs as a byproduct when we compile the program.

Somehow we encode, for instance, the "all men are mortal; Socrates is a man; therefore Socrates is mortal" argument into types and write the corresponding code. Then when the code compiles, it verifies: yes, the argument is valid. At that point, we throw the program away. What (if anything) that program does is irrelevant; its types have proven the modus ponens as it passed through the compiler, thus quod erat demonstrandum.

That is supposedly what type is all about in a very narrow, myopic branch of computer science.


Since Strong A.I hasn't been invented yet, I don't think anyone knows which language is best for A.I. Maybe it'll just be something boring like Java.


Agreed, that was the only sentence in the article that gave me pause -- a language having been used extensively for (failed!) attempts at A.I. does not make it a great A.I. language. Maybe we'd have general purpose A.I. by now if we didn't waste a decade with LISPs; the secret has been there in assembler all along!


He only mentions Clojure in passing. At some point I decided that, were I to learn a functional language, it would be Clojure. How would you say Clojure fits in this comparison? Is it only relevant or interesting for concurrency?


The article's 7 years old. Clojure's first stable release (1.0) was released the same year.


Thanks, I knew it wasn't new but didn't think it was so old.


I don't know much about Scala, but Erlang/Elixir fits the description quite well too:

> Scala is the programming language I use for tasks like writing web servers or IRC clients.


Scala has a very well-regarded implementation of Erlang's actor system. I have personally seen it handle hundreds of thousands of requests spun off as lightweight threads.


The JVM (well, Java really) has been supporting orders of magnitude of that for many years.


I still don't quite understand the point of actor-model concurrency systems running on runtimes that can't perform pre-emptive scheduling, but I guess 80% of a solution is better than none at all.


> In academic research and in entrepreneurship, you need to multiply your effectiveness as a programmer, and since you (probably) won't be working with an entrenched code base, you are free to use whatever language best suits the task at hand.

... with the extreme caveat that your language has bindings to {insert language with libraries that already implement 50-85% of your functionality}.


I need a suitably sized project to implement in a niche language I am learning. I used to use Tetris, but it isn't big enough. It needs to exercise multi-threaded updates of some state consisting of many elements and their relationships, based on user or file inputs. I think. I want to write it in C++, then learn Haskell and use that. Then use C++ again and look at the differences.


multiplayer tetris?


"multiplayer tetris" is the game played by UN Security Council, so why not?


I would argue that a well-known language is fairly expressive in the sense that it is well-known.

Imagine English itself was a programming language - that would be the best. But for now I would recommend using those that are commonly used on the common platforms of interest (Java, JavaScript, C, etc.)


> Imagine English itself was a programming language

That's a terrible idea.

https://en.wikipedia.org/wiki/Buffalo_buffalo_Buffalo_buffal...


Yes, but in any language - syntactically valid code is not necessarily good code


Wait for someone to write it in Perl.


I once watched this YouTube video: https://www.youtube.com/watch?v=02_H3LjqMr8 as my entry to the Haskell world.


Haha here was my personal intro to the world of haskell: https://www.youtube.com/watch?v=RqvCNb7fKsg . Was instantly hooked.


I am a bit surprised Clojure and Clojurescript are not in this list. I guess Clojure still was a bit young at the time this article was written. Within past few years Clojure has become extremely popular.


Yes, its first stable release (1.0) was around the same year this article was written.


For a real world project pick a suitable language that will not only help you to implement whatever you have to implement but also to maintain the project. Also you should be able to find other programmers knowing that language if you need to. Maybe even use the language you already know even if it is not the newest and hottest stuff.

But for private or side projects use whatever enlightens you. Those projects should be fun and it's always good to learn new things on the way. I even think you don't have to learn every new language in depth; just do a small project and reflect on whether it is something that gets you somewhere. You probably won't write a big business application in Smalltalk, but all that message passing, awesome. Seriously, have a look at Smalltalk.


I had a small server written in Haskell but since the other contributor familiar with Java/C/C++ was not able to pick at it effectively, we switched to Rust and we all were happy, and transitioning was fine.

At work I deal with Java code, but we have a framework we wanted to use that's written in Scala, and since Scala can interop with Java, we were all productively able to transition over for the better.

For a hobby project we use Objective-C, but we are starting to write some bits in Swift. Apple is especially effective at trying to convince the developers to change over to this shiny language.

I think "advanced" languages can be used in real world applications, and the transition experience from other well known languages can certainly matter.


Does anyone have experience with "Curry" (functional logic programming)? If it has all the advantages of Haskell plus some of the capabilities of logic languages, it should be really interesting for experimental approaches.

http://www-ps.informatik.uni-kiel.de/currywiki/


GHC is top notch, I wouldn't say any language without a mature compiler "has all the advantages of Haskell".


Yes, that's exactly what I practice. My "breadwinner" language is C/Java/C++ at the moment, because that is what others in the project know and what makes sense in the ecosystem the project is in; my side-project language is Haskell.


Prediction: APL, J, K, or Q will be on this list soon.


idk if you are serious or not but part of me feels like the fact that those languages haven't gained much attention in the last 30 years means that they won't get much attention in the future.


I think it's inevitable we'll use some of their toolset. The problems we solve don't get easier so we will reach for more powerful tools.

As an example, just being able to think in terms of a transpose makes many problems trivial: http://michaelfeathers.silvrback.com/moving-quickly-with-com...
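To make the transpose point concrete, here is a minimal sketch in Scala (the thread's main language; the data and names are made up): two parallel rows become paired columns in a single standard-library call, with no index bookkeeping.

```scala
object TransposeDemo {
  // Two parallel rows: names and their scores
  val table: List[List[String]] = List(
    List("alice", "bob", "carol"),
    List("1", "2", "3")
  )

  // transpose turns the list of rows into a list of columns,
  // pairing each name with its score in one step
  val paired: List[List[String]] = table.transpose

  def main(args: Array[String]): Unit =
    println(paired) // List(List(alice, 1), List(bob, 2), List(carol, 3))
}
```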


What about javascript? Could that be a decent language to get into functional programming? It's definitely mainstream enough.


You're lucky you didn't get voted down -5, with the Type Police out and about.

I like ES5 (would be ES6, but for IE support...) JavaScript, but it is a bit light on the immutability aspect, given the environment and problem set it usually runs in.

Check out the "Ramda.js" library for JS (http://ramdajs.com/0.21.0/index.html). There's some good concepts put to use there. (although I have my own version of "currying" implemented at work, as I like to sometimes have "0 arity" functions for event handlers -- but those types of "functions" aren't particularly "pure")
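As a point of comparison with the Ramda approach, currying is built into Scala's standard library (the thread's other main topic); a minimal sketch with made-up names:

```scala
object CurryDemo {
  // A plain two-argument function...
  val add: (Int, Int) => Int = (a, b) => a + b

  // ...curried into a chain of one-argument functions
  val curriedAdd: Int => Int => Int = add.curried

  def main(args: Array[String]): Unit = {
    val addTen = curriedAdd(10) // partially applied
    println(addTen(5))          // prints 15
  }
}
```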


Can someone comment on what it's like building a webapp with Scala/Play?


It's unlikely you'll be cranking out a Rails-style MVP in Scala + Play, but you will produce a robust, blazing fast, scalable application that's a dream to maintain and build upon.

In short, Play is a really, really good MVC framework. For SPAs you can go lightweight with just Play's router and WS api, or try something like http4s for a more FP-based REST api.

Throw Scala.js into the mix and you've a got a fully typed stack across the board.

and then Scala Native on the horizon...things are looking good in Scala land ;-)


> you will produce a robust, blazing fast, scalable application that's a dream to maintain and build upon.

At work I'm working on a scala/play app. It was 2 years old already when I joined, and the first scala/play project for the team. It's not that fast or scalable. That is to say, scala and/or play are not silver bullets that will make your code hyper optimized and scalable.

ATM I'm working on performance. There are a few blocking calls in the API that are used everywhere and they manage to block the various execution contexts (controller, akka, etc.) It's a mistake to use blocking calls when it can be avoided, but it's an easy mistake to make.

So yeah you can write robust / fast / scalable / maintainable apps in Scala but you need to properly learn it first.
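One common mitigation for the blocking-call problem described above is to isolate blocking work on its own thread pool and mark it with `scala.concurrent.blocking`, so the default (CPU-sized) execution context stays free for non-blocking work. A rough sketch using only the Scala standard library (pool size and names are illustrative):

```scala
import java.util.concurrent.Executors
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future, blocking}

object BlockingDemo {
  // Dedicated pool for blocking I/O; size is illustrative
  private val pool = Executors.newFixedThreadPool(4)
  val blockingEc: ExecutionContext = ExecutionContext.fromExecutorService(pool)

  def slowCall(): Int = blocking {
    Thread.sleep(100) // stand-in for a blocking JDBC/HTTP call
    42
  }

  def main(args: Array[String]): Unit = {
    // Run the blocking call on the dedicated pool, not the default one
    val f = Future(slowCall())(blockingEc)
    println(Await.result(f, 5.seconds)) // prints 42
    pool.shutdown()
  }
}
```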


> Scala is a rugged, expressive, strictly superior replacement for Java.

Scala is not a replacement for Java. It runs on the JVM and can invoke Java code, but it takes a completely different approach than Java. Use Kotlin if you want a Java replacement.

Also I would argue that Scala is much better suited for writing compilers than Haskell.


Scala can be a fine Java replacement, if you drop 20% of its features on the floor. As long as you never see a higher-kinded type, I think you are ahead of Kotlin, and the learning curve is not really any different.

The problem with this approach is that 90% of Scala libraries are built by people that live in higher-kinded land, and believe that since they have reached that level of expertise, so should everyone else, the second they start learning the language. For instance, look at Summingbird: the hello world example includes F-bounded polymorphism and path-dependent types! Anyone new to the language would run away screaming, and I'd not blame them.
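For readers who haven't met one, here is what a higher-kinded type looks like in Scala — a minimal, hypothetical `Functor` sketch (not taken from any particular library): the trait abstracts over a type constructor `F[_]` rather than a concrete type.

```scala
object HktDemo {
  // Higher-kinded: F[_] is a type constructor (List, Option, ...),
  // not an ordinary type
  trait Functor[F[_]] {
    def map[A, B](fa: F[A])(f: A => B): F[B]
  }

  // An instance for List, delegating to the standard map
  val listFunctor: Functor[List] = new Functor[List] {
    def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
  }

  def main(args: Array[String]): Unit =
    println(listFunctor.map(List(1, 2, 3))(_ * 2)) // prints List(2, 4, 6)
}
```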


Scala's Java compatibility is not so great. It has its own collections and other libraries, and it does not even support Java getters and setters.


Why would you create a new language and then keep a huge part of what developers interact with on a daily basis in such a broken shape as in Java? (Or rack up a massive amount of complexity and break your type system by trying to put lipstick on the Java collection pig, as Kotlin does?)


1 billion lines of code, that is why.


> Also I would argue that Scala is much better suited for writing compilers than Haskell.

Any examples of how it can help out?


Languages today are not just about a command-line compiler.

A compiler today must interact with tools such as IDEs. If you follow some basic rules, it is reasonably simple to write a compiler that can interact with IntelliJ IDEA and provide incremental compilation, live edit hints, and error messages.

In Haskell you don't have any of that.


I prefer practical programming language extensions, like the new C++ standard which adds useful things, or Hack which makes PHP useful, or the typed extensions to JavaScript and Python.


Well said!


c++ programmer wants to learn scala now :)


I am still stuck on why this is titled Advanced Programming Languages. Wait, so anything other than the most widely adopted robust language technologies used in enterprise systems is 'advanced'? Language compilers, interpreters, database engines, and runtime implementations are advanced. Who says, "we need an advanced language for this solution", as opposed to "this problem requires an advanced solution"?


OP reeks of subjectivity.

"Scala is a rugged, expressive, strictly superior replacement for Java."

I have nothing for or against Scala, and Java is a popular target. But "strictly superior"? There are critiques of the language and its toolset that contradict the strictness of its superiority. (https://www.quora.com/What-are-some-criticisms-of-Scala)


I think "strictly superior" makes sense in the way that Scala was never designed or intended to be a worse-is-better language like Java and people were willing to make that happen by evolving the language. It was never intended to be a Java-with-more-concise-syntax unlike some other newer languages.

If something was clearly worse, it would have been changed years ago already. That some design is still there might suggest that some people just have a different minority opinion on some aspects. (Assuming that "strictly superior" does not mean "100% of devs agree 100% that this is 100% better than Java".)


[flagged]


You're missing out on a huge amount of great things if you let the parentheses stop your Lisp journey. With editor support for formatting and matching they are easy to deal with. In fact, the main reason closing brackets are so noticeable in Lisp languages is the minimal syntax: in other languages you deal with statements and expressions that end in many ways, such as "fi" in bash, the removal of a level of indentation in Python, or ")};" in JavaScript.


Viaweb used Lisp as their secret weapon, and pg and rtm rolled the money from the Yahoo acquisition into Y Combinator's first classes.

http://www.paulgraham.com/avg.html

Hacker News itself is written in a new Lisp dialect that resembles Scheme.

https://github.com/arclanguage/anarki/blob/master/lib/news.a...


How can you write a complex program with so many semicolons? How can you write a complex program with so many braces?

Your statement sounds more like unfamiliarity with other major PL styles than anything else.


Your argument would make sense if the only punctuation in a language was semicolons or braces.

But when comparing a language with almost nothing but parens to a language with a mix of things like braces, brackets, parentheses, periods, and more, I don't think the comparison is a fair one.

I'm not advocating one way or the other, just pointing out the comparison seems flawed.


What problems do the parentheses cause you? They've always worked just fine for me and I notice no difference between the programs I can write in Scheme and the programs I can write in other languages.


Using a good editor helps a lot. Actually, I usually do not understand how people can write Java. People are different, and different things seem logical or useful to different people. If you are hung up on syntax problems, you probably haven't used many different languages yet. After a while it gets less and less relevant.


Datalog for constraint programming, but not Prolog? Oh come on. If you're going to recommend languages, look some up, already.


Select the best language for the task at hand.

At the end of the day, they all ultimately control the same instruction sets on the chips, either directly or indirectly through a framework, library, and/or operating system.



