I think the first one is a single function that pattern-matches on its parameter, while the second one defines the function twice and relies on the language to call the right one based on the parameter.
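For illustration, here is roughly that distinction sketched in Haskell (just a sketch of the two styles, not the syntax under discussion):

    -- Style 1: one definition that pattern-matches on the parameter in its body.
    fact1 :: Integer -> Integer
    fact1 n = case n of
       0 -> 1
       m -> m * fact1 (m - 1)

    -- Style 2: several clauses; the language picks one based on the argument.
    fact2 :: Integer -> Integer
    fact2 0 = 1
    fact2 n = n * fact2 (n - 1)

In Haskell at least, the clause form desugars into a case expression, so the two end up equivalent.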
I think in Erlang a function is identified by its name and arity, and that's as close as I know of to what you would like to have.
Semicolons separate the partial definitions (clauses) of a single function of a single arity, and a period ends the definition. Removing the punctuation might make parsing a bit more difficult (will there be more partial definitions?), so it's understandable not to offer it.
I'd imagine you could compile them both down to the same thing, as there has to be a conditional at some point to determine what code to run (unless there is some kind of dynamic dispatch). I think it's just a matter of preference.
I guess it depends a lot on your coding style. When I started allowing hyphens in variable names, I already had about 10,000 LOC of Earl Grey, including the whole compiler for it (seriously, it was a pretty late addition). In all that code, I only had to change one or two lines if I allowed hyphens, so I decided to just go ahead and do it.
And not even because of CSS, just because I think "do-thing" looks better than both "doThing" and "do_thing". Of course, I would think differently if I tended to write "a-b" instead of "a - b", but at the moment I quite enjoy this feature.
I'd love to hear why people prefer underscores over hyphens. I think I grew out of syntax fanboy-ism, but C and Python's __private__ and _special_variable are anything but readable to me. They break the visual line too much. Even historically it was a weird symbol: originally a line break that made it into PL/1 as a non-space separator, then almost everywhere. Before that it was pure formatting, a typewriter glyph to be overtyped on words to underline them.
It's funny in the end: parts of a language's syntax reinforce other parts. In lisps you cannot write a-b as an arithmetic expression; sequences of characters are symbols, and that's it. But also, lispers don't really care about infix notation[1]; they have abstract polyadic operations: - + / * all take a variable number of arguments, as in (- x y z ...).
[1] And IMO people shouldn't; basic arithmetic isn't frequent or complex enough to need such dedicated treatment. But that's another lisp thing: you care more about having DSL opportunities for anything than about a fixed set of special cases.
Just wondering though, how often do people write subtraction as "stuff-thing" instead of "stuff - thing"? I find myself typing the latter almost systematically, so hyphens never get in my way, but of course it's all too easy to be blind to the habits of others.
If you have `stuff - thing` it’s no big deal, but once you start having `(a0-b0)/(a1-b1) + (x0-y0)/(x1-y1)` or whatever, then being able to save all the spaces starts to be kind of nice, especially if your math expressions get to be 60 characters long. There are a few times where I’ve definitely ended up with more readable code by using the presence or absence of a space as a way to group expressions. Also, I generally prefer to write things like `array[n+1]` or `array[n-1]` without the extra spaces.
This code example doesn’t actually have any examples of unspaced subtraction in it, but there are a bunch of other binary operators with space removed. In my opinion adding spaces around all the operators in this file would make the code less readable, especially if trying to follow along from the formal published spec describing the algorithm: https://github.com/jrus/chromatist/blob/master/src/ciecam.co...
I think using spacing for this can be a little misleading, since it can obfuscate precedence. To give an extreme example, 1e100 * 1e300/1e300 is infinity, because the multiplication is done before the division, but the formatting suggests it's the other way around. It's not too bad in that case, but if you were to accidentally group an addition instead, the mistake would be harder to spot. I would rather use parentheses all the time to be safe.
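A quick check of that example (sketched in Haskell here, but IEEE doubles behave the same way as JS numbers for this):

    -- (*) and (/) share the same precedence and associate to the left, so the
    -- multiplication overflows to Infinity before the division ever happens.
    main :: IO ()
    main = do
       print (1e100 * 1e300 / 1e300 :: Double)   -- Infinity
       print (1e100 * (1e300 / 1e300) :: Double) -- 1.0e100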
Still, you make a good point with the space savings. I guess I just find hyphens nice enough that I don't mind the tradeoff :)
I gave these kinds of semantic spaces meaning when I wrote another implementation of AsciiMath[1] because I found it so intuitive. So `1-2 / 3-4 = (1-2) / (3-4)` but `1 - 2/3 - 4 = 1 - (2/3) - 4`, like you'd expect.
In long and complicated mathematical expressions I find myself selectively using whitespace to identify logically matching pieces of code, similar to LaTeX's \left( and \right). If I couldn't write (x-y), that wouldn't be possible in the same way.
But see, being used to hyphens in variable names, I must tell you I can't figure out which one I find easier to read. On one hand, the presence of both hyphens and subtraction is a tad confusing, but on the other, I find "balance-sheet" generally nicer to read than "balanceSheet", so it feels like a tie to me. Then again, you have things like "xs.for-each(x -> x + 1)" vs "xs.forEach(x -> x + 1)", where I really like the hyphen.
Anyhow, I feel that this is the kind of feature you just stop noticing after a while. I think we often tend to assume that what we are used to is more readable than what we aren't used to, but the brain will adapt to nearly anything. Doesn't mean we ought to go crazy with changes, but hey, I like hyphens.
> I find "balance-sheet" to be generally nicer to read than "balanceSheet"
Me too, and I've wondered if it's because it's activating the same part of my brain that gets aggravated when people decide to use capitals on seemingly random words within the sentence.
To me it's almost cognitively dissonant to see a capitalized letter in the middle of a word. In written language, we only have capitals at the beginnings of new sentences (new ideas), so my brain sees camelCase terms and wants to split them into separate terms.
Also, it feels morally wrong that the first word typically doesn't get the capitalization (and often languages ask/require programmers to follow some arbitrary first letter capitalization rules...). Yes, that's right, it's about ethics in programming language design.
In a complex expression, I tend to use whitespace, and the absence of it, to make things clearer, as well as parentheses. So sometimes x2-x1 happens.
A search on GitHub etc. might give some idea of how common it really is, though their search wasn't precise enough last time I tried. Google Code had regex search, but I think it was shut down? Or you could just clone a few of your favorite projects and grep them.
On one hand, this is yet another syntactic reskin of ideas from modern imperative-land. On the other hand, it looks like a good one, by a thoughtful author who also cares about integration with existing platforms, and that's not nothing.
EG and Haxe share:
- compiles to JavaScript
- macros
- pattern matching
- async/await (Haxe requires a 3rd-party macro lib, but it integrates well)
EG has:
- cleaner syntax
- document-building DSL
- "One of EG's primary goals is to be as compatible as possible with existing JavaScript libraries and frameworks and the node/iojs ecosystem." (Haxe makes compromises here to support other platforms)
Haxe has:
- static type-checking
- dead-code elimination
- support for more platform targets (PHP, Python, C#, Java and more)
- years of experience
I really like how the tooling to create a full application is already there; it isn't like "here's my new language, but there are no libraries, good luck!".
> Global variables need to be declared to be accessible:
> globals:
>    document, google, React
If I'm getting this correctly, what a brilliant idea! Contain the shittiness by having one single location where all globals are declared. So very helpful.
The "fact(match)" is very strange. In Haskell you have "lambda-case" and in OCaml you have "function" as syntax to define lambdas that pattern-match on their argument.
That said, I really like the inclusion of async and a DSL for documents. These two are things that benefit a lot from having language support.
"fact(match)" is just making use of a generic feature: the "match" keyword in a pattern dictates that the body defines a sequence of sub-patterns to match at that position. An argument uses pattern syntax, so it works there, but it works in other situations, for example this contrived example:
    f(x, y) =
       match x:
          {m, n} ->
             match m:
                <= 0 -> n + y
                else -> m + n + y
          n -> n + y
Can be rewritten:
    f(match x, y) =
       {match m, n} ->
          <= 0 -> n + y
          else -> m + n + y
       n -> n + y
So you can match hierarchically like that (sorry if the example is strange, it's not supposed to mean anything).
I haven't seen this specifically, but I have seen other languages move common top level behaviors within functions into parameter lists.
Ruby, Dart, and CoffeeScript allow setting instance variables from the parameter list:
    class Example
      constructor: (@name) ->
        # do stuff

which is shorthand for:

    class Example
      constructor: (name) ->
        @name = name
        # do stuff
Jonathan Blow's language allows the `using` keyword in the parameter list, which desugars in much the same way.
I'm personally not a big fan of using this technique for pattern matching, but it's better than requiring explicit match blocks everywhere (like Scala and Rust).
I haven't seen the feature elsewhere. The second example looks better to me (syntax highlighting helps) and reduces redundancy and indent, but I guess views may vary on this. Good to know!
If your goal is to reduce redundancy, then I think a "lambda-case" syntax might be a better fit. As a bonus, it also lets you use this feature on anonymous functions!
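For reference, a minimal sketch of what that looks like with Haskell's LambdaCase extension (not Earl Grey syntax):

    {-# LANGUAGE LambdaCase #-}

    -- \case gives the clause-per-pattern style without naming the argument,
    -- and the same form works for anonymous functions.
    describe :: Integer -> String
    describe = \case
      0 -> "zero"
      n | n < 0     -> "negative"
        | otherwise -> "positive"

    main :: IO ()
    main = mapM_ (putStrLn . describe) [-1, 0, 1]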
Yes. The compiler detects whether the return value is used in order to decide whether to accumulate the results in an array. Also, unlike forEach and map, you can use continue or break.
I guess I wanted macros and pattern matching without Lisp syntax, and I wanted to retain and use the JS/node ecosystem.
There are also a few language features I couldn't find anywhere else that I wanted to try out (my % operator, ad hoc exception classes, some pattern matching features like coercion and "match" inside a pattern to define sub-patterns, the each operator, some features of the macro system that I have yet to document, etc.)
Because folks feel there is a need for them, and it's a really good way to get one's feet wet in the compiler pond. Besides, a programming language (dialect) is to a programmer what different-sized wrenches, pipes, or hoses are to a plumber. You need different tools for different jobs, and a lot of it depends on your personal preference and what it will take for you to be efficient. Besides, how we as engineers solve problems is just as much an expression of our personality as it is a statement about the correct operation of systems. But that's just my view, and I'm definitely biased on the subject, because I'm working on a flavor of CoffeeScript that will compile to MSFT's TypeScript as we speak (well, in my spare time).
I love the name, but it will get mixed up a lot in Google searches, just like the TV series "24", which matched every 24 to be found online.
As someone who currently uses LiveScript professionally, I look forward to trying out Earl Grey. At first glance it looks very promising.
I'm aware there are many people who don't see the value in creating "yet another compiles to JS language," or those who view them as simply "syntax sugar," but I for one very much appreciate the work done by yourself and others on similar projects.
This is just extremely exciting. I love the language design and it looks usable as all hell. It's the first compile-to-JS language I've ever seen that got me excited, let alone the first one that I'd actually use outside of the browser. Congrats.
Gives me a good sip of the entirety: the syntax, macros, pattern matching, integrating with React, and even how to gulp it and write tests with it. Yet succinct enough to scroll through in one swipe.
Looks like another CoffeeScript. Braces/parens are not bad; they structure the code and make it easier on the eyes. Lack of punctuation is as bad as too much punctuation.
I've thought milk in tea was so silly for years, and you've changed my mind in one sentence. Thank you, did not realize there was a practical reason for making it... taste worse.
True story: One day I was out of milk, but figured it wouldn't be too bad to go without and have my tea anyway. Dropped the kids off at school, lost my stomach in the parking lot. 8:00am, lookin' classy at the private school.
I googled this because I find it interesting, but black tea does not contain tannic acid. It contains tannins, but of a completely different type (antioxidants, which actually seem to be healthy). I couldn't find anything definitive on whether adding milk is a good idea or not. Maybe it helps for some people.
That's interesting. I wasn't 100% sure when I posted; it was from memory. On a little checking, it seems we're both right -- tannic acid is a subset of tannins. In any case, milk seems to help, whatever the mechanism.
I think it's unfortunate that this is currently the top comment on this post. Someone puts hundreds of commits of work into a nifty language with a nice feature set (including compile-to-js) and awesome integrations, and this is what they get on HN...
I think this looks like a great project, we can never have too many programming languages to play around with.
With what crystal ball? If you pick based on history, since you kinda have to, then you pick a more mature (older) language. If you're not careful with that you pick languages just before they go obsolete.
(Like a company I know which picked VB 6 and still hasn't fully migrated away from it. The same company picked Microsoft's AJAX demo as a basis for a JS framework and is still developing that even though MS long since abandoned it. Would it be better to use jQuery, Angular, Ember, React, Riot, etc? Well, those are too new and untested. Very conservative leadership.)
Anyway, CoffeeScript and JavaScript work well together, and you might want your code to be in CoffeeScript, but you have to use a library, say Ember (also a real project at a different company), which means certain improvements, bug fixes, etc. to Ember are done in JS. That's two languages already. I'm sure you could easily end up with multiple libraries written in multiple compile-to-JS languages.
Long-term code maintainability is a bit of a hard problem. You probably need to be constantly refactoring, rewriting, and re-inventing so you don't have too much old code in play anyway. Maybe. What do I know?
Why not? Maybe most of us are not going to use it, but the free market of new programming languages will have an overall positive effect by inspiring other languages, such as the many features of ES6 that were inspired by CoffeeScript, et cetera.
I like seeing the trends of which features show up frequently in new languages. These features manage to work their way into more popular languages, either as changes to existing ones like Java or C++11, or as new languages like Swift and Go.
More like "write once, run in the browser". I don't think most compile-to-js languages care too much about "anywhere", they are just trying to get nicer alternatives in what is basically a platform (the browser) closed to anything except JS.
It's better than the JVM. It provides an excellent, high-performance runtime for dynamic languages (JS), and an excellent, high-performance runtime for static, memory-unsafe languages (asm.js)!
Does a VM of a PC count as a JIT compiler? Can we compile VirtualBox to JS using one of those C->JS compilers? I'm sure we can come up with some stack of turtles here ...
I'm not a fan of using -> like CoffeeScript does. It doesn't seem to mean function-body / algorithm-to-do-the-thing to me. That operator makes sense for pointers or for function return value type expressions.