Wow. I've been searching for my next language and this could be perfect. I loved Haskell and really miss static typing, type inference, etc., but got tired of living in monads. I've been looking at Clojure a lot anyway, but really dislike dynamic typing. If I could have Clojure with static types, I'd be done searching.
Though BODOL mentions static typing, I didn't see any examples... What did I miss?
Exactly. I have been searching for my functional language and have always felt that neither Haskell nor Clojure quite cut it, so I've been putting it off. The former because, well, it's not Lisp and I'd really like to learn myself a Lisp. The latter because the Java relationship overall ruins it for me, especially as it takes a lot of the functional purity out of it. You always have to be aware not only of your own coding style but also of that of the third-party code you'll read, both to learn from and to use. Also, it's not static, and I have enough dynamic languages already. I considered Roy and may yet begin using it in some places where I currently use CoffeeScript, but then again it's not Lisp. BODOL could indeed be the perfect piece I've been missing.
Kind of all of the above, but I started to realize that I was writing code in a very imperative fashion, since, although I do understand monads and how they keep IO pure, I felt weird writing very imperative-feeling code. E.g., pseudo:
f <- openFile "somefile.txt" ReadMode
r <- hGetContents f
s <- doSomethingToR r
Kinda looked like every other language. Except no one else spoke it, and the libraries (at the time) were of highly varying quality.
Also, while I was comfortable with monads, I never got comfortable with monad transformers. I had one monad that was a stack of something like 4-5 monad transformers and I started to feel as though I was losing touch with the beauty of Haskell.
That's a bit of cheating, as it removes 'lines' or 'line' from code, decreasing readability. So to be fair you want to add it back:
print sum(map(float, open('somefile.txt'))) # over lines
print sum(map(lambda line: float(line), open('somefile.txt')))
Either one is all right, but personally I prefer list comprehension notation.
As per Guido: ... and that a more concise notation for anonymous functions would make map() more attractive. Personally, I disagree—I find the list comprehension notation much easier to read than the functional notation, especially as the complexity of the expression to be mapped increases. ...
In Python it's well known that the default iterator on a file gives you the lines. Arguably a function would've been a clearer design, but this is idiomatic in the language we have.
As far as map vs. list comprehension, yes, I prefer the comprehension in the cases where map would need lambda. Here that's not the case. Per "as the complexity of the expression to be mapped increases", this expression 'float' is the simplest possible.
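To make the comparison concrete, here are both styles on the same computation; an in-memory list of lines stands in for the hypothetical 'somefile.txt':

```python
lines = ["1.5\n", "2.25\n", "0.25\n"]  # stand-in for open('somefile.txt')

print(sum(map(float, lines)))              # map style
print(sum(float(line) for line in lines))  # comprehension/generator style
```

Both print the same sum; the choice is purely stylistic when the mapped expression is a bare function like `float`.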
Python was my favourite language, but after a year of scala I can't go back. Syntax is almost as lightweight as python, sometimes even lighter (_.something rather than lambda x: x.something is a godsend); tuples and case classes mean it's easy to define a new type for things where you'd use a dictionary or object() in python, and the type system is expressive enough that it doesn't get in the way, not to mention the joys of e.g. the typeclass pattern (which allows a level of dynamicity that would be simply impossible to do sanely in python). There's a mature deployment story (because at that stage it's just .jars) and the huge java library ecosystem is available up to a point.
I'm curious, what for? Performance, type safety or compile errors?
If type safety, then in my opinion it is overrated. A side effect of using a dynamically-typed language is that developers keep their naming conventions consistent and in check, because that's the only sane way to avoid stupid typos in a language that doesn't have a compiler. It is not so in the compiled-language world. What I often see in code where type safety is enforced is that developers get sloppy with names! And as a result, there is a significant increase in bugs caused by logic errors. Sad.
> A side effect of using dynamically-typed language is that developers keep their naming conventions consistent and in check.
> What I often see in the code, where type safety is enforced, is that developers get sloppy with names!
I've definitely not seen that. I maintain the same rigor w.r.t. hygiene whether I'm working in statically or dynamically typed languages, but I really miss the compiler when I'm working in dynamic languages.
At the risk of sounding really rude, can I ask whether your anecdotes are informed by work with competent developers?
We left it as he had written it because it would have been a pain to hunt down the dependencies. Were we working in a compiled world, we would have refactored and the compiler would have caught any missed dependencies in a second.
Ideally, unit tests should take on the compiler's role and be able to catch typos and missing dependencies. There is really no excuse for not maintaining unit tests in the dynamic world.
Unless it is research-grade code with a limited lifetime.
As to your question, I have no idea, really. I stopped judging the goodness of other developers (and myself) some years back.
Pardon me for not being clear enough (and for derailing the thread?).
What I was trying to say is that you can catch missing dependencies and misspellings for free while running lightweight unit tests. (Unit tests are there for catching logic errors and allowing debugging of individual modules; catching misspellings is just a side effect.)
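A toy illustration of that side effect (all names here are invented): the misspelled function below is accepted without complaint, and the typo only surfaces when something, even a trivial smoke test, actually runs it:

```python
def total(prices):
    # 'pricse' is a deliberate misspelling of 'prices'; Python happily
    # accepts the definition and only raises NameError when it runs.
    return sum(pricse)

try:
    total([1, 2, 3])
except NameError:
    print("typo caught the moment any test exercised the line")
```

Any test that merely calls `total` once catches this class of bug, without asserting anything about the logic.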
On a side note, if somebody is complaining that "a typo in the python code caused client call at 2AM, this could have never happened in Haskell", what they are really saying is that they haven't bothered to run any test coverage whatsoever before deploying new code.
>What I was trying to say, is that you can catch missing dependencies and misspellings for free, while running lightweight unit tests. (unit tests are there for catching logic errors and allowing debugging of individual modules; catching misspellings is just a side effect).
Sure, but a lot of code doesn't need that. Say you're writing a conversion between two datatypes; in something like Haskell you can be confident that if it compiles then it's correct. There's no logic for there to be errors in, so having to write a unit test would be an additional overhead in a more weakly typed language.
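A rough sketch of that kind of "no logic to get wrong" conversion, transplanted into Python with type hints (the record types are invented); a static checker such as mypy can verify the field shapes, leaving little for a unit test to add:

```python
from dataclasses import dataclass

@dataclass
class ApiUser:
    id: int
    name: str

@dataclass
class DbUser:
    user_id: int
    full_name: str

def to_db(u: ApiUser) -> DbUser:
    # Pure field shuffling: once this type-checks, there is essentially
    # no logic left for a unit test to exercise.
    return DbUser(user_id=u.id, full_name=u.name)

print(to_db(ApiUser(1, "Ada")))
```

In Haskell the compiler enforces this for free; in Python it requires opting in to a checker, which is the overhead being described.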
>On a side note, if somebody is complaining that "a typo in the python code caused client call at 2AM, this could have never happened in Haskell", what they are really saying is that they haven't bothered to run any test coverage whatsoever before deploying new code.
Bollocks. You can have 100% line coverage and still hit a type error. And a language where you have to have 100% coverage is a lot less pleasant to work in than one where you don't.
There are some things you can enforce with a type system that you cannot practically implement with unit tests.
For example in  it is described how a type system can be used to help you keep track of which kind of string contains what data (SQL, XML, user input...) to catch problems with injection attacks at compile time.
A type system like this can create a lot of drag when, say, you need to interface with an external library and suddenly discover that this library, annoyingly, only works with strings, not your data types. In that case you might have been better off just keeping naming conventions straight.
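For what it's worth, the string-kinds idea can be approximated even without a Haskell-grade type system; Python's `typing.NewType` gives a checker like mypy distinct string types at zero runtime cost (all names here are illustrative, and the escaping is a toy):

```python
from typing import NewType

UserInput = NewType("UserInput", str)      # tainted text from the outside
SqlFragment = NewType("SqlFragment", str)  # text already safe to splice

def escape(raw: UserInput) -> SqlFragment:
    # Toy escaping only; real code would use parameterized queries.
    return SqlFragment(raw.replace("'", "''"))

def run_query(q: SqlFragment) -> str:
    return "SELECT * FROM users WHERE name = '" + q + "'"

print(run_query(escape(UserInput("O'Brien"))))
# Passing raw UserInput straight to run_query would be flagged by mypy.
```

Because `NewType` values are plain strings at runtime, the "drag" at library boundaries is just a wrap or unwrap, not a data conversion.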
I'll take a sloppily named Haskell codebase over a well-named Python codebase, especially when reliability is compared.
Type-safety of the Java kind might be overrated. Type-safety of the Haskell kind, though, is hard to overrate. It catches the vast majority of bugs, including ones people would refer to as "logic bugs".
Imperative code is much more intuitive in the small. That's why we lose when we make side-by-side code snippets of small toy programs and call it a case for FP. The 25-line imperative method is often more intuitive than the highly-dense 6-line functional one. The problem is that imperative code composes very poorly.
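A toy sketch of the composition point in Python (entirely illustrative): the imperative version must be edited in place to change behaviour, while the functional pieces recombine freely:

```python
# Imperative: adding a new requirement means editing this loop body.
def total_valid_imperative(rows):
    total = 0
    for row in rows:
        if row >= 0:
            total += row
    return total

# Functional: the same pieces compose with other pipelines unchanged.
def valid(rows):
    return (r for r in rows if r >= 0)

def total_valid(rows):
    return sum(valid(rows))

print(total_valid_imperative([3, -1, 4]))  # 7
print(total_valid([3, -1, 4]))             # 7
```

In the small the two are equivalent; the difference only shows once `valid` gets reused in a second pipeline.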
I wish developers had more sense of marketing in general. With all due respect to its author(s), BODOL (the word) sounds... well, strange, unsexy, uninteresting. It really doesn't do the language itself any justice!
COBOL does have one thing worth stealing, though, which is its data definition language and automatic parser generation facility in the core language.
Essentially, COBOL allows programmers to declare what their input file looks like and the compiler generates code that does all of the work of parsing the input and getting the data into the relevant variables.
Because COBOL comes from the (Big) Iron Age, this is all in terms of record-oriented files with fixed-width fields; a modern re-implementation of the concept would have to operate in terms of nested data structures (XML, JSON, sexps, etc.), which a Lisp-like's syntax is well-equipped to describe anyway.
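A rough sketch of the COBOL idea in Python, with the record layout declared as data and the parser derived from the declaration (field names and widths are invented):

```python
# Declare what the fixed-width input record looks like...
LAYOUT = [("name", 10), ("qty", 4), ("price", 6)]  # (field, width)

# ...and derive the parsing code from the declaration, COBOL-style.
def parse_record(line, layout=LAYOUT):
    record, pos = {}, 0
    for field, width in layout:
        record[field] = line[pos:pos + width].strip()
        pos += width
    return record

print(parse_record("WIDGET    0003001.25"))
# {'name': 'WIDGET', 'qty': '0003', 'price': '001.25'}
```

A modern take would declare nested structures instead of flat fixed-width fields, but the declaration-drives-the-parser shape is the same.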
Features are much more relevant than marketing. Look at Ruby: some years ago it was hyped a lot, but that enthusiasm has faded away. Look at Lisp: it wasn't hyped for about 50 years, but it is still alive today, and many modern language developments still copy features from Lisp.
If the features are good then I don't care about the name. I really like Rust (http://www.rust-lang.org/) although it sounds "rusty".
Obviously you haven't seen the Lisp ads from the 80's :) Lisp was surely hyped then. Not to mention all the hype from people like ESR and PG. Even Clojure owes part of its success to the great presentation skills of Rich Hickey and the other early Clojure adopters. Languages DO succeed mostly because of marketing; if it were features, then Smalltalk would have won, C++ and Java never would have existed, Lisp would have won, and Python, Ruby, and Perl would have been forgotten. Other technologies work that way too: NeXT and the other Unixes would have won, and Windows would never have had a chance, if what you are saying were true.
As for the name, if Clojure had been named anything with the word "lisp" in it, like "foo lisp" or the like, it would have been dead already. Even Racket had to get rid of the word "scheme" from their name for marketing reasons.
> Obviously you haven't seen the lisp ads from the 80's :)
I studied computer science in the 80's. Maybe there were some business ads for Lisp (especially Lisp machines), but I had the strong impression that Ada was much more hyped than Lisp. Where is Ada today?
> if it was features, then smalltalk would have won
It depends on how "features" is defined. C won over Lisp and Smalltalk because of the single feature of performance. At the time when C was invented hardware was very expensive.
> Even racket had to get rid of the word "scheme" from their name for marketing reasons.
Racket is a different language. Scheme (R6RS etc.) is a subset of that.
According to your logic Rust will have no chance at all. We will see.
Trust us, Lisp was hyped (I worked for "the other Lisp Machine company" in the early '80s). Albeit perhaps not as much as Ada.
Whereas C is very much a building-blocks approach, and within the JVM Clojure is as well. E.g. for the latter, the popular Leiningen tool uses dependency directives and known repositories to pick up the corresponding .jar files. Java and any other language that fits into the JVM ecosystem is an equal player on the JVM with Clojure, and Clojure defers to Java for things the latter already does quite well enough.
This sort of approach, which in the case of Java and Clojure is NOT Worse is Better/The New Jersey Way, seems to have survival characteristics.
Hard to determine how much this was a factor vs. performance, C being a general-purpose portable assembler. After UNIX(TM) and C grew up on tiny machines, we then largely squeezed into barely larger ones, the 8086- and 68000-based ones, when DRAM was still pretty dear. Or for "engineering workstations": I'm told a not uncommon, although not all that good, configuration early on was one Sun with a hard drive and 2-3 diskless ones all sharing the drive.
I spent 2-4 weeks really trying hard to learn Haskell and I loved it. But when I tried to make something practical with it, I got so frustrated with its dependency manager (cabal) and its lack of support for Windows that I dropped it for Clojure.
I love Clojure's ecosystem, its ability to use any Java library, and its (lack of) syntax.
But I really miss the language features of Haskell. I think Clojure lets you get away with a lot of imperative-style programming. You can have a function deep down in the bowels of your code perform side effects if you want. Haskell makes you define these in one place and forces you to separate them from the rest of your code (at least that's what the beginners' books led me to believe).
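That separation can at least be imitated by convention in languages that don't enforce it; a minimal Python sketch (file name hypothetical):

```python
def summarize(lines):
    """Pure core: no I/O, trivially unit-testable."""
    return sum(len(line.rstrip("\n")) for line in lines)

def main():
    """Effectful shell: all the I/O is confined to one place."""
    with open("somefile.txt") as f:
        print(summarize(f))

print(summarize(["ab\n", "cde\n"]))  # 5
```

The difference is that Haskell's type system refuses to compile a `summarize` that sneaks in a side effect, whereas here nothing stops a function deep in the bowels from doing so.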
A language that combines the benefits of both sounds like a dream.
First, know that my background is Java, so I'm used to write once, run anywhere. Haskell didn't feel this way at all. Also, I shouldn't have specifically named Cabal; my complaint was about my experience with Haskell's ecosystem more than just its dependency-management app. That said, I find Leiningen really easy to use. That may just be because I understand Maven, though.
But I couldn't install it, because it depends on the curl library, and the curl library depends on Linux binaries. I develop on Windows. The people in the IRC channel kept telling me the path of least resistance is to switch my OS. But I'm not willing to do that, especially just for a practice project.
I was told I should be using http-conduit (or something) instead. But that was lower level than shpider, so not ideal. I tried to install that and got an error, too. Apparently the project owner had an incorrect config file, and it was trying to use a version of the library that it was incompatible with (he fixed that the next day, which was great, but the only reason I could have figured out my problem is with a team of people on IRC holding my hand through it). I can't blame Cabal for that, but for a complete newbie who has no idea what's going on, it's extremely intimidating.
Long story short, I tried installing ~10 libraries with Cabal and only one installed smoothly. All the others had me googling for obscure forum posts. Sometimes I could "trick" cabal into installing the library but when I'd use it I'd get a runtime error related to a bad install. I've never ever ever had a problem like this with leiningen. But again, I'm experienced with maven.
On top of that, Cabal seems to install everything in a global namespace which scares me. If your Project1 needs LibraryX version 1 and Project2 needs LibraryX version 2 and these versions are incompatible, what do you do? Use something like cabal-dev I was told. But I have no idea what I'm doing with Cabal as it is. If something goes wrong with an abstraction on top of it, I'm totally screwed when it comes to debugging my problem.
I think I get why Haskell did this. Time is precious and if you need something like Curl, your OS has Curl and you don't want to make it your business to rewrite Curl in Haskell, you'd just reuse the OS library. But if I use a different OS, you kind of screwed me.
I love the Haskell language and miss all its lovely features. But if the language requires me to change my OS to develop anything useful, that's a nonstarter.
Yeah, I think you're coming from a pretty big write-once, run-anywhere perspective here. This is especially the case with curl. Languages like Ruby would have similar issues if they didn't have the force of their communities to bolster the language. It's an unfortunate truth about Haskell that its development tools aren't as polished as something like RubyGems. That said, I've found that the time I save writing decent Haskell far surpasses the time I spend mucking with its internals, though I've been at it for 5 or so years now. Honestly, though, you're right about the Windows bit. The GHC implementation for Windows relies on tools that I personally would avoid using (MinGW) given the option of using their more mature brethren in their native environments (LLVM/GCC/Linux/OS X). I've found that the best Linux distro for using Haskell with ease has been Arch Linux, hands down.
All in all, no, its package system is no RubyGems or what-have-you, but I think the language (IMO) makes it worth it.
HN may hate me forever for saying this, but after using Haskell, I could never imagine switching to a LISP. There's too many thumbnail clippings involved, and not enough types ;-).
> I love the Haskell language and miss all its lovely features. But if the language requires me to change my OS to develop anything useful, that's a nonstarter.
Perhaps you may want to give https://github.com/organizations/Frege a try. It's a bit more powerful than Haskell 2010, thanks to support for higher-rank types. OTOH, due to a lack of supporters, the libraries available are far from complete.
But it compiles to Java, and so is interoperable with Java or any other JVM language.
The global package index meant (for me) that pretty much everything I installed broke something else I needed. Once they've got the sandbox stuff working (and when you can start a repl in that sandbox, imo it's nearly useless if you can't), it should be much better.
> Shen is a splendid compromise, but I found myself wishing it came with Haskell's uncompromising purity and Standard ML's straightforward type system.
I guess Haskell/ML minded people have a reason to think this way, but I'd be more interested to know whether someone is doing anything interesting with Shen. Have you got any "production" code written in it and running to show off?
I would dismiss Shen just for the weird license. What if I like it and want to use it in an embedded product or somewhere? I'd have to make sure any derivative work adheres to his spec even if it's only intended to run one library.
Bingo; I already have, after trying to convince the author to at minimum make the license short enough to be less ambiguous. He's absolutely sure it's not a problem no matter how many people tell him it is.
This could also be a factor in the BODOL author's dismissal.
Further down he writes that the rationale is to prevent fragmentation of the community; from what I have seen, it instead keeps out developers who might otherwise join that community.
It's not like e.g. Python or Haskell have a fragmentation problem despite permissive licenses. Scheme has fragmentation problems, but that seems to be a symptom of simple implementations and the lack of a common set of libraries.
I think Shen is interesting, but if I used it I'd only be mining it for ideas to apply in other languages. It would make far more sense to me if he allowed derivatives iff they did not use the name.
I don't disagree with the goal of preventing fragmentation, and I'd even be willing to play the flag game (i.e. have my version focus on SMP functionality in both senses of the latter, while observing the official spec with a different flag).
It's the long-winded license, one that says the same things at least 5 (!) different times and ways and is therefore subject to many interpretations, which is then subject to the whims of the author or an external board (compare to Java, where in theory passing a suite of tests allows you to offer a version of Java(TM)).
I'm not going to invest serious effort in something that might get killed simply for political reasons including different interpretations of the license; even if I trusted the good will of everyone involved, sooner or later the players will change.
Focus on "compromise", splendid or not. Shen is less functional than Clojure; the author has absolutely no interest in making SMP systems sing, and last time I checked, he was going down the actor message-passing route.
You could create a more functional version as long as it had a switch or default to the normal behavior and it was blessed by the powers that be (see comments on license), but you'd be "fighting the system" and all that.
I thought the same at first. But as I was checking out the language, I simply created keybindings in my .vimrc for both characters. Now if I press ALT+f, I get the ƒ character. Fewer key presses than any function-definition keyword in any language I've ever used. And I know I could have done the same thing for the other languages, but the fact is that I never did.
> Looks cool but why the λ and ƒ? That's not practical for anyone.
> it seems strange to include characters that aren't on the keyboard as part of the default syntax
You say "not practical for anyone" and "seems strange", but maybe restricting a computer language to ASCII only is strange. With Pinyin IMEs (e.g. Google's or Baidu's), people can easily and quickly enter thousands of characters not on the keyboard. It would be easy for someone to develop an IME to let symbols be entered similarly (I mean a non-IDE-specific one, i.e. without having to go through Eclipse or Emacs).
Great; I think it has a very clean, elegant syntax. I don't write much Lisp, but it seems that every time I look at a piece of Lisp code, I appreciate even more how minimalist Lisp is.
It seems like it would mess with macros, because you'd now need to parse the sexpr to make sure it isn't a ->. And what is the point of homoiconicity if you cannot easily rewrite any given expression? I dunno, it strikes me as odd, but there's also a high chance I'm missing something important about ->.
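For what it's worth, the threading idea itself needs no macro when restricted to unary functions; a Python sketch (illustrative only, and ignoring Clojure's insert-position behaviour):

```python
from functools import reduce

def thread(value, *fns):
    """Pipe value through fns left to right, like -> for unary functions."""
    return reduce(lambda acc, fn: fn(acc), fns, value)

print(thread("  42  ", str.strip, int, lambda n: n + 1))  # 43
```

What the macro adds is deciding *where* in each form the threaded value is inserted at compile time, which is exactly the part that complicates rewriting sexprs.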