This is the spec for the language that Douglas Crockford (author of the book "JavaScript: The Good Parts", the JSON specification[1], and JSLint[2]) described in his famous talk "The Next Programming Language"[3].
The "big things" in the language are the Actor model, a preference for immutability, and capability-based security.
Presumably, there will be a flashy website later that actually motivates why you should use this language and what's cool about it. Notice this looks similar to the JSON spec website[4].
If you choose a language based on syntax instead of semantics, you miss every language with useful semantics that doesn't happen to use syntax you're familiar with. That seems a terrible loss.
E.g. I deeply dislike the syntax of makefiles and xslt, but the declarative model is so good where it fits that it's worth dealing with the visual discomfort.
There are more than enough options that there is rarely a compelling reason to suffer. Useful semantics rarely remains confined.
I spent too much time working with syntax to tolerate one that makes the code more time consuming to read or write (and that includes XSL; I worked a lot with XSL, and I'm not doing it again - if I need its semantics I'll implement something to do it with a less insane syntax).
It turns out XSLT is a trivial programming model (pattern match on the tree) with crazy aesthetics and a near-useless stdlib, which shipped an MVP called 1.0 and then got basically abandoned for JSON. It's a bit of a disaster of history, really. There are newer and saner specifications out there, which I am totally ignoring. A bit too much of the world is written in Java, but you can get xsltproc, a tiny C program built on libxml2, and run it command-line style. As far as esoteric languages go, it's great.
I'm doing dubious things involving representing ASTs in XML, which I'll probably post to GitHub. Taking an example from that in the meantime. The setup is roughly a bunch of calls to xsltproc to turn one XML representation into another:
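Concretely, the driver is just chained command-line invocations, something like this (filenames here are hypothetical, for illustration):

```
# each pass is one small function over documents: XML in, XML (or text) out
xsltproc flatten.xsl tokens.tree.xml > tokens.list.xml
xsltproc to-text.xsl tokens.list.xml > out.c
```

Each .xsl file stays small, and intermediate files can be inspected (and validated against a schema) between passes.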
I'm liking having a schema for the before and after forms, and generally having a much better time when each individual transform doesn't do very much. The general pattern is: recognise the bits you're interested in, copy everything else through unchanged, and view a given .xsl transform as an XML->XML function in weird syntax. Google will find you an "identity transform" which looks like nonsense; a lot of transforms can start by copying that and then adding a match for something you care about.
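For reference, this is the identity transform in question - it matches every node and attribute, copies it, and recurses; on its own it reproduces the input document unchanged, and any extra template you add overrides it for the nodes that template matches:

```
<?xml version="1.0" encoding="UTF-8"?>
<xsl:transform version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- copy every node and attribute through unchanged -->
  <xsl:template match="node()|@*">
    <xsl:copy>
      <xsl:apply-templates select="node()|@*"/>
    </xsl:copy>
  </xsl:template>
</xsl:transform>
```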
One of my data representations is a tree where the information of interest is all in the leaves. That gets turned into a flattened list, then that list gets turned into a text file, then something else goes "oh that text file has C in it, awesome". Tree flattening looks like a reasonable thing to copy&paste in here, thus:
<?xml version="1.0" encoding="UTF-8"?>
<!-- name some extensions xsltproc knows about -->
<xsl:transform version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:str="http://exslt.org/strings"
    xmlns:ext="http://exslt.org/common"
    extension-element-prefixes="str ext">

  <xsl:output method="xml" indent="yes"/>

  <!-- Change the root node from tree to list -->
  <xsl:template match="/TokenTree">
    <TokenList>
      <xsl:apply-templates select="node()|@*"/>
    </TokenList>
  </xsl:template>

  <!-- Copy attributes to the output unchanged -->
  <xsl:template match="@*">
    <xsl:copy>
      <xsl:apply-templates select="@*"/>
    </xsl:copy>
  </xsl:template>

  <!-- Transform elements on the way through -->
  <xsl:template match="node()">
    <!-- If there are no attributes, it's part
         of the tree structure we're flattening:
         throw away the element and keep the contents -->
    <xsl:if test="not(@*)">
      <xsl:apply-templates select="node()|@*"/>
    </xsl:if>
    <!-- If it does have attributes, leave it alone -->
    <xsl:if test="@*">
      <xsl:copy>
        <xsl:apply-templates select="node()|@*"/>
      </xsl:copy>
    </xsl:if>
  </xsl:template>

</xsl:transform>
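For concreteness, here's a made-up input in the spirit described above (information in attribute-bearing leaves, structure in bare elements):

```
<TokenTree>
  <Group>
    <Token kind="word" value="int"/>
    <Group>
      <Token kind="symbol" value="("/>
    </Group>
  </Group>
</TokenTree>
```

Run through the transform, the attribute-less Group elements (and whitespace-only text nodes, which also have no attributes) are discarded while their contents are kept, so the output should flatten to:

```
<TokenList>
  <Token kind="word" value="int"/>
  <Token kind="symbol" value="("/>
</TokenList>
```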
So yeah. The syntax is not wonderful. You write xsl: a lot, or at least your editor does. The documentation on this stuff all seems to be a bit Java-themed, and the prevailing attitude seems to be that XML is an ugly thing from the before times. Though see also https://www.defmacro.org/ramblings/lisp.html. I'm working by trial and error instead of documentation, but that's going well enough. The "oh, that's a tree? Have a declarative DSL for functional transforms to other trees" thing is a really compelling example of wondrous magic hidden behind insane syntax.
(For a nice bonus effect, emacs is really clear on what editing tree-structured documents means, and there's a RELAX NG schema which you can use to find errors in the XML and to have emacs tell you lots of stuff about the document as you type it.)
I see the appeal of that - I used to toy with a language whose default representation was XML (with two-way translation from/to text, as well as a diagram-based editor) - but XSL is way too verbose a syntax for me, as an interface to what is a very simple core you can build as a library to write the same kind of tree rewrites.
Today I'd pick that option over actually using XSL anywhere. To me the only redeeming feature of XSL itself is/was the built-in support for applying XSL to XML in browsers. I worked on a web app ~2006 where XML was translated to HTML using XSL on the frontend; you could turn off the server-side transformation and let the browser do it instead, which meant your source view was the underlying XML - very handy for debugging.
I think the whitespace-strictness nonsense was a great thing to have up front, because it immediately informed me that I would hate this language and never use it. I genuinely appreciate them saving me time.
I think they share my distaste for languages (like Python) that use indentation instead of brackets to signify a code block. People have different preferences, and that's totally valid, whatever the reason for those preferences.
What I don't get is, even if you have brackets, you should be indenting and organizing your code.
It seems like people who fight against indenting must have some bad habits, and are upset that the compiler is enforcing any change.
Like rehab, they have an addiction (sloppy indenting, poor organization), and the compiler is rehab (forcing you to deal with your addiction to give up bad habits).
I do prefer brackets, as they are more 'clear'. That doesn't mean different people's code should have wildly different ways of indenting. We all follow the same traffic rules, or it turns into chaos.
Copy-pasting Python code can easily end up with wrong indentation and broken code. Copy-pasting code in bracketed languages usually just works: the editor pastes and then auto-formats that section.
Sure, but now you're perpetually stuck with the visual noise of unnecessary brackets _all_ the time. Maybe it's because I use vim, but I don't see this use case as terribly important, because I can easily reindent a selection with >> and <<.
If you use vim and hate brackets that much, write a tool that hides the brackets from the working view and applies them according to your whitespace.
It’s strange that the syntax is not afraid of using non-ASCII Unicode characters (e.g. « », ≤, ≈, ƒ) but then uses the ASCII digraphs /\ and \/ for logical AND and OR instead of ∧ and ∨.
Maybe Crockford is a Mac guy. It looks like all the extended characters are in the MacRoman set and -- on a mac keyboard -- typing ƒ is no more difficult than typing F. ∧ and ∨ are not easily available.
Willing to at least look at it, given that it's Crockford.
On first glance, I like the patterns. It's long past time that regex got replaced with something that looks less like line noise from an old dialup modem.
I worked on a Java codebase a while ago that had identifiers which were longer than 80 characters. Perfectly fine in our era of huge display bandwidth, but once upon a time, those identifiers would not have fit on a full-width display without wrapping; at an even earlier time one would have waited a noticeable amount of time for them to print out on a TTY.
(and an earlier HN thread suggested that even in our era, those with vision problems who bump display magnification may not appreciate 80-character idents)
The good:
- network-crossing actor model, but with private addresses and built-in routing and security capabilities
- object-capability security
- null means null
- immutability
- AWK-like pattern DSL
- "functino" is a cool way to have your infix-operator cake and eat it too, as prefix functions
The bad:
- no type checking on variables, parameters, record fields, record shapes, actor messages, etc
- practically need to buy a new keyboard to type all the symbols like '≈', '≠', 'ƒ', etc
- null punning seems great until you're looking at a null three function calls later and have no idea where it came from
Unsure:
- No reserved names means it's very easy to accidentally overwrite a primordial with no warning. I suspect someone will instantly build a linter rule for this, because it just looks like a foot-gun
Ultimately, this looks like JavaScript without all the foot-guns. Add in some modern features like actors, immutability, and a pattern matching DSL. Add in some new foot-guns like primordial renaming and null punning.
If I could snap my fingers and today be able to write Misty in the browser, I'd definitely use it for performance intensive code alongside Typescript until TypedMisty came out, then I'd probably switch for good.
However, I'd be absolutely shocked if any major browsers ever support Misty. So it'll probably remain a server side scripting language, which I definitely do not need. Why would I use this on the server for scripting over F#, Clojure, Elixir, or Go?
> - practically need to buy a new keyboard to type all the symbols like '≈', '≠', 'ƒ', etc
You just need to configure a Compose key. I encourage everyone to learn how to do this; it opens up a huge character repertoire whose sequences are easy to remember how to type.
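For example, on X11 you can add custom sequences in ~/.XCompose; the sequences below are my own made-up choices for the symbols Misty uses, not defaults:

```
include "%L"

<Multi_key> <slash> <equal>      : "≠"  U2260
<Multi_key> <asciitilde> <equal> : "≈"  U2248
<Multi_key> <f> <f>              : "ƒ"  U0192
```

The `include "%L"` line keeps the locale's default sequences (mostly accented letters) alongside your additions.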
This. Although the most ergonomic compose key sequences tend to be assigned to accented characters rather than symbols, so IMO not very convenient for programming.
On Windows I use Capslock (and/or the right Shift key) as a custom-defined modifier to combine with character keys to enter my most frequently used unicode characters for programming personal projects.
Lots of fun characters are available even when you're dealing with a language where identifiers are limited to ID_Start and ID_Continue (e.g. JavaScript).
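For instance (my own illustration), plain JavaScript already accepts identifiers drawn from ID_Start/ID_Continue well beyond ASCII:

```javascript
// Greek letters are in ID_Start, so these are legal JavaScript identifiers
const π = Math.PI;
const Δt = 0.25;        // "Δ" is ID_Start; digits are ID_Continue
const ƒ = (x) => x * x; // U+0192, the same "ƒ" Misty uses for functions

console.log(ƒ(Δt) * π); // area of a circle of radius Δt
```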
Fair enough, but having to always remember to configure that on all my work machines is kind of a pain. Although I do use an ergodox with QMK firmware a lot so I suppose I could support that inside the keyboard.
But then my 5 work laptops all need the same thing configured for working on the train.
Here's hoping someone talks him into just supporting normal ASCII symbols.
I see it as part of the suite of tools that one ends up installing on all client machines anyway. Most people don’t use just a stock OS installation without any additions.
Also, some editors have this built in, like Vim with Ctrl+K [0], and a Misty IDE presumably would have some equivalent.
ASCII is quite limiting, so it would be nice if we could move a little bit out of that lowest common denominator.
Other languages that make heavy usage of non-ASCII Unicode characters (such as Lean) often have tooling support such that one can type '\' along with some combination of ASCII characters to generate characters like '≈', '≠' and 'ƒ'. Along with searchable documentation for the whole mapping of shorthand codes to the mapped Unicode values, of course.
Code is read more than written, so I have grown to appreciate programming languages that lean into non-ASCII characters for semantic clarity :)
Looking forward to checking it out more, but I found the Turkish 'i' functions particularly interesting. Seems oddly specific - are there not other languages with a similar situation? Why isn't this abstracted, I wonder, or why is it included at all? Seems "kitchen sink"-y. Maybe there's some explanation somewhere. Anyway, love Crockford - he helped save a project with his JS deepcopy implementation (in addition to JSON and all the other work he's done for the programming community).
"The language is quite strict in its use of spaces and indentation."
Not this again. Please stop.
Code structure should be explicitly denoted with brackets or whatever. Code formatting is cosmetic, can be applied automatically, and serves as a 'double-entry book-keeping' type check on the structure coded in characters - ie you can easily spot structure errors by pretty-printing.
> Functions can not be sent in messages to other actors
Oh, so not first class. Mutable closures are tricky to implement though, so fair enough.
> Function objects are immutable values.
Huh. That means you can easily send a function to another actor. Can even serialise it and send it over a network. That stuff is a real pain for closures over mutable state but totally straightforward for immutable values.
That seems like an implementation limitation turning up in the language spec, instead of the implementation being fixed.
Overall there's very little to understand from the page in terms of motivation, examples, etc. But one interesting thing: the math module allows choosing between radians, degrees and, more importantly, cycles. I only know of one other project, the Pico-8 fantasy console, which offers this correct "API" for trigonometry.
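To make the "cycles" idea concrete, here is a sketch (mine, not from the Misty spec) of what a cycles-based trig API means: angles are fractions of a full turn, so 0.25 is a quarter turn and no 2π bookkeeping leaks into call sites.

```javascript
// Angle in cycles: 1.0 is one full turn
const TAU = 2 * Math.PI;
const sinCycles = (x) => Math.sin(TAU * x);
const cosCycles = (x) => Math.cos(TAU * x);

console.log(sinCycles(0.25)); // quarter turn: ≈ 1, the peak of the wave
console.log(cosCycles(0.5));  // half turn: ≈ -1
```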
We need better languages than the current ones, in many dimensions. We need lots of new languages, because the design tradeoffs are viciously difficult and the language design field really only progresses by trial and error.
Even if you're hardcore obsessed with AI as the one true path to everything, having it write JavaScript is definitely not a global optimum.
That's very lackluster. Show me a FizzBuzz or an Advent of Code solution. Tell me a story of why it exists.
Now the only discussion I can have is: why are there no reserved words, and can we call our functions and variables "set", "call" and "def" then?