I am begging these language designers to show me what your language looks like right off the bat. It should be the first thing I see on your language's homepage. I shouldn't have to click through vaguely-named pages to see single-line snippets. Show me a full program!
Think of it like how the expectation for apps is that there'll be a screenshot (or video!) on the homepage of the app doing its job — i.e. of the user experience of the app.
The first thing you should shove in my face, when I land on the home page of a programming language, is the user experience of writing code in the language. Which is to say: the syntax.
But also, if your language has a REPL, or an IDE, or a weird graphical virtual machine like Smalltalk, then that's part of the language's UX as well — so show that too! (And if you can, integrate the two together. Show me a cohesive example of what the syntax looks like inside the REPL/IDE.)
Everybody else (Rust, PHP, Java, Lua, etc.) gets an eyebrow raised in suspicion, because their homepages feel like they're ashamed of what their syntax looks like :)
That might be contentious. I don't care at all what the syntax is; I want to know what choices have been made in the language-design space and, if those look reasonable/interesting, how sound the implementation looks. The top-level About link followed by the source was ideal for that. Only after sanity-checking the implementation source would I look for some example of what the language looks like.
On the other hand, "language A but with language B's syntax" is definitely a thing, and I suppose those projects should put syntax front and center, since it's their selling point.
I'm pretty sure that if the user experience of coding in a language is bad enough, you won't use that language, no matter how interesting it is design-wise. There's a threshold below which most programmers would rather reimplement the cool design features in a different language with better syntax than use the original.
Does anyone write in APL? Hardly; it's basically impossible, given that the syntax requires characters that aren't on your keyboard. But maybe you write in J. Or, more likely, you use a vector-operations/dataframe library in some arbitrary language.
Do people write in Erlang? Well, some do; but many others think that connecting expressions with semicolons and commas is too painful to want to deal with (though not me; I enjoy "systems Prolog.") And some of those people got together to design Elixir, so they could use the Erlang runtime without having to write Erlang.
Does anyone voluntarily pick up IBM JCL these days? Or COBOL? RPG? MUMPS? These archaic mainframe-era languages all have interesting design choices (heck, MUMPS assumes a database as part of the language runtime!), but they're just too awful to actually write in. It's far easier to read about these languages as historical artefacts, and then implement their good ideas in a new language, than to actually use them.
Does anyone use FORTRAN? Well, yes, if they have to, to extend OpenBLAS with something. But mostly as little as possible. And there are fewer reasons to do so these days, with e.g. cuBLAS not being written in FORTRAN.
You put a syntax example on the homepage not to impress people with how cool your language looks, but to show that you're not trying to bait-and-switch prospective developers: talking them into a language by describing its cool features, only to reveal at the last moment that the syntax is awful.
I think this is pretty close to a Smalltalk with the classes taken out. Is that basically right? The syntax looks simpler, and there's the same slightly off-putting "macros aren't necessary because our syntax is flexible" sentiment.
Thanks! The inheritance model is definitely from Self. I'm having some trouble finding the compiler/JIT infrastructure in the GitHub repo; I'd be really interested to see whether you've gone down the same dynamic-optimisation path Self did.
I think Ruby [1] does a great job of this, making code snippets front and center.
I think the worst way to do it would be to use 13px Times New Roman text with no syntax highlighting on a secondary page. But that's what Io went with [2].
This guy started furiously implementing it at the beginning of the year but stopped at the end of February. (My guess is he's got a new job.)
There have been many attempts at reimplementing the language on a "more solid" base, i.e. on top of languages/runtimes that take care of garbage collection, like JS, Go, or Java/the JVM.
I suspect a complete, community-supported Graal implementation would be the ticket for Io. It would be an incredible dynamic language paired with an incredibly strong and robust VM and ecosystem (think libraries, tools, manpower, etc.).
I really hoped I had the chops at language implementation to make this a reality, but after watching the development, it's waayy out of my league.
The linked GitHub repo looks to me like a compiler written in C targeting a virtual machine also written in C. Slightly sad it isn't self-hosted; sometimes you get a VM in C with the compiler written in the language itself.
Found some other implementations under https://iolanguage.org/links.html. None claim to be complete or working, and one link was dead :(
Io was one of the inspirations that first got me interested in programming languages. It's a beautiful little language. For me, the syntax is a sweet spot: extremely simple and elegant, but not so simple that it gets awkward and verbose the way Smalltalk and Lisp can.
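For anyone who hasn't seen it, here's a tiny snippet of my own (not from the official docs) showing the flavor: prototypes made by cloning, and everything done with messages:

    // everything is an object made by cloning a prototype
    Account := Object clone
    Account balance := 0
    Account deposit := method(amount,
        self balance := self balance + amount
        self                     // return self so calls can chain
    )

    a := Account clone
    a deposit(10) deposit(5)     // message chaining
    a balance println            // prints 15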
The semantics are equally beautiful and elegant.
It's just a cool language.
The main downside is that the semantics are so dynamic that it's virtually impossible to optimize.
I think the "dynamic so had to be slow" assumption has been quite badly savaged by luajit and the various javascript engines. "Assume stuff doesn't change all that quickly and recover transparently if it did" goes a long way to statically compiling dynamic languages. Worst case the parts that actually do very dynamic things end up interpreted, but that's stuff you otherwise can't do without writing an interpreter.
Io is way more dynamic than almost any other language you're thinking of. Yes, even Lisp and Smalltalk.
Arguments are passed to methods as unevaluated syntax trees, and a method can decide at runtime whether to evaluate an argument expression, introspect over it, or even mutate it.

It's as if every function were a macro, except one that can also choose at runtime which of its arguments to evaluate.
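To make that concrete, here's a sketch of a user-defined control structure written as an ordinary Io method (the example is mine; it relies on Io's documented call object):

    // "unless" as a plain method: arguments arrive as unevaluated
    // Message trees, reachable through the call object
    unless := method(
        cond := call evalArgAt(0)     // evaluate argument 0 in the caller's context
        body := call argAt(1)         // argument 1 stays an unevaluated Message
        if(cond not, body doInContext(call sender))
    )

    unless(1 == 2, "one is not two" println)   // prints: one is not two
    unless(1 == 1, Lobby explode)              // body never evaluated, so no error

Nothing marks unless as special; it's an ordinary method that happened to decide what to evaluate.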
That exact way of being dynamic was identified as a poor idea before Lisp's first decade was out. This is all documented.
Early Lisp was able to bind an operator symbol to four kinds of procedures:
  EXPR    SUBR
  FEXPR   FSUBR
The left column is the exprs; the right, the subrs. The subrs are machine-coded/intrinsic ("subroutine"); the exprs are made of symbolic expressions, not machine code, and run interpreted. The F prefix on the bottom row means the operator receives its arguments unevaluated; the F is of unclear origin. It might refer to "form" or "formula".

The top-row procedures (EXPR and SUBR) are just functions: arguments are evaluated and passed to them.

The F-prefixed ones operate on expressions: they receive their arguments unevaluated, like in Io.

Roughly speaking, FSUBRs implement the special operators that are built into Lisp, written in assembly language.

FEXPRs are defined by the program: they add new kinds of special operators, whose actions are interpreted; they decide at run time what to do with the argument expressions.

Loosely speaking, today's mainstream Lisp dialects still have the equivalent of FSUBRs (special operators) as well as SUBRs (built-in/compiled functions), and possibly EXPRs (uncompiled functions, in dialects that support interpretation). The FEXPR has disappeared; it is not found in e.g. Scheme or Common Lisp.
As for macros: macros can evaluate things in their own run time. In a compiled setting, the run time of a macro is compile time. Macros don't make decisions in the run time of the code they expanded into; the macros are gone at that point. All decisions have to be expressed as generated code which refers to information available at run time in that environment.
Yes, older Lisps messed fexprs up. Kernel fixes this. The vau calculus used by Kernel is simply a lambda calculus that, unlike the call-by-name (CBN) and call-by-value (CBV) calculi, doesn't implicitly evaluate arguments. The rest of the calculus is the same.
What this means is you get powerful hygienic metaprogramming (arguably as powerful as, or even more powerful than, Scheme's most advanced macro systems) at a low, low price and with very elegant theoretical properties. In Kernel, hygiene is achieved simply with the usual lexical scope that's already in lambda calculus.
So vau calculus is simpler than the CBV lambda calculus underlying most Lisps: because it doesn't implicitly evaluate arguments, it does less than those calculi. And by doing less, it gains the great power of being able to do hygienic metaprogramming in the same calculus, without second-class contraptions like macros.
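Io, the language that started this thread, leans on the same idea: argument Messages get evaluated in the sender's context, while a method's locals live in their own activation object, so they can't leak into the argument's code. A quick sketch (slot names mine):

    shadowTest := method(
        x := 42                  // local to this activation only
        call evalArgAt(0)        // the argument runs in the caller's context
    )

    x := 7
    shadowTest(x println)        // prints 7, not 42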
> Things have changed. Fexprs are coming back in a big way.
I agree that Lisp dropped the ball on fexprs. It seems to have been a combination of dynamic scoping, well-founded performance concerns, and some conflation of syntax and semantics. The "special forms must appear under their own names" rule is a related performance hack.
Lisp no longer has performance concerns. SBCL is excellent. Speculative JIT techniques would roll right over the "is this a function or a fexpr?" branch in apply with no trouble. I'm convinced there's no inherent overhead there.
I don't see much evidence that fexprs are coming back, though. Kernel is superb (though I'm unconvinced by its handling of cycles), but it doesn't seem to be in use. Qi/Shen is a similar sort of do-things-right effort with limited uptake. What do you have in mind?
My working theory is that Lisp is niche enough already, and that a better Lisp (which I contend Kernel is: replacing the layers of macros with fexprs guts a lot of complexity from the language, and that sort of simplification was meant to be the driving motive behind Scheme) hits only a subset of the first niche.
I last looked at that maybe eight years ago? The web page has undergone some updates, but the PDF paper is still from 2009.
Older Lisps didn't mess up fexprs. The developers wanted ahead-of-time compiling and moved on.
Using lexical scope in the context of fexprs is only a minor (and obvious) improvement. If fexprs made a comeback in, say, Common Lisp, it is painfully obvious they would be functions, whose parameters and locals are lexical by default.
Under a single dynamic scope, what it means is that when a fexpr is evaluating the argument code, its own local variables are possibly visible to that code. If the fexpr binds (let ((X 42)) ...) and inside there calls EVAL on an argument which contains X, that argument's X resolves to 42.
That could be fixed by using two dynamic scopes: the fexpr gets one implicit dynamic scope for its own execution (perhaps newly created for each call to the fexpr), and uses an explicit dynamic scope for evaluating the argument material (that scope coming in as an argument).
Under a single dynamic scope, if the number of fexprs in the system is small, they can stick to some namespace for their own variables, and all non-fexpr routines stay out of that namespace. Fexprs then have to be careful when they pass pieces of their own code to other fexprs.

In a program with large numbers of fexprs, symbol packages would solve the problem: there would be multiple modules providing fexprs, each using identifiers in its own package. Then only fexprs in the same package could clash when they use each other, which is resolved by inspection locally in the module. (E.g. use unique symbols across all the fexprs.)
I don't suspect hygiene was a problem in practice; during the heyday of fexprs, there wouldn't have been programs with huge numbers of fexprs (let alone programs with churning third-party libraries containing fexprs). Everything would have been done locally at one site, by a small number of authors working in their own fork of Lisp.
Thus, I don't think hygiene was the main problem identified with fexprs; it was really the impediment to compiling. Shutt's paper doesn't seem to attack that problem at all.
Hygiene is a non-problem; we are still cheerfully using Lisps without hygienic macros in 2023, whereas fexprs not being compilable was clearly a pain point in 1960-something already.
A) An exciting research problem! Shutt himself says that he doesn't see any fundamental obstacles to compiling them. It's just that nobody has done it yet.
B) Actually not a big deal for many applications. Take PicoLisp, which has been cheerfully used in customer-facing applications for decades. It's an ultra-simple interpreter (its GC is about 200 LOC: https://github.com/picolisp/picolisp/blob/dev/src/gc.c ). The same architecture can be used for Kernel implementations.
Covers the right sort of thing. It makes a guess at what a function call will be, inlines that guess, and if the guess turns out to be wrong, restarts from the start of the function in the interpreter. It doesn't explicitly call out the function-calling convention, but guarded inlining has much the same effect.
Maybe worth noting that inlining a fexpr is semantically very close to expanding a macro. Identical if symbol renaming is done the same way for each.
Tcl has that. Early Lisps did too, where it was called a fexpr or fsubr. More recently, Kernel has the same. There's another esoteric Lisp that does too, but the name escapes me.
That's a pretty compelling answer to why it doesn't need macros though. It's got the upgraded version built in.
I believe that is amenable to the same style of speculative compilation found in JIT compilers for dynamic languages. The above reference to GraalVM seems like a good idea in that context. I can't point to an existence proof, though.
edit: someone is going to object to "upgraded macro", but nevertheless Lisp macros are exactly the subset of fexprs that can be statically resolved at compile time, i.e. a subset of a more powerful facility.
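To illustrate the "more powerful" part, here's a hypothetical Io method whose evaluation decisions depend on runtime values; an expand-then-disappear macro couldn't make these choices:

    // evaluate arguments left to right, returning the first truthy value;
    // how many get evaluated is only known at run time
    firstTruthy := method(
        call message arguments foreach(m,
            v := call sender doMessage(m)    // evaluate this argument now
            if(v, return v)
        )
        nil
    )

    firstTruthy(nil, false, 2 + 2, Lobby explode) println   // prints 4; "explode" never runs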
"Io is just an incredibly hushed secret. (Perhaps because it is impossible to Google stuff about it.) Did you know that Io’s introspection and meta tricks put Ruby to serious shame? Where Ruby once schooled Java, Io has now pogoed."
Seeing Io lang mentioned was a trip down memory lane, back to 2005/2006, when I started to rely much more on JS for frontend interaction and client-side rendering (AJAX/XMLHttpRequest was the new shit back then, kids!). I remember studying Io as another prototype-based language, since that concept was pretty new to me, and I liked Io quite a lot at the time. Unfortunately there was no use for it whatsoever in the professional world, and I dropped it quite quickly.
I had completely forgotten about it until this popped up on HN so thanks for that.
Same... _why with his blog/book, Ruby, DSLs, and small languages were the cool thing. Steve Dekorte with this project, Slava Pestov & a bunch of excited folks around Factor, Yegge blogging every week, Mike Pall with LuaJIT, Guile development, Ola Bini not famous yet...
A code snippet showing a simple program right on the home page, "selling" whatever features make it special, would go a long way. It's quite off-putting to have to delve deep into a guide just to get a feel for a language.
I used this language in a game jam in 2005. It sparked joy. Here's some sample code I just put up on GH from back then. Enjoy.
https://github.com/fictorial/wordster
Is the right ~70% of the page supposed to be empty, with content only down the left side? Just making sure; I'm wondering if it's supposed to load some web editor with code examples, because without that it's a bit confusing what I'm supposed to do.
It's like the website design was an exercise in being as minimal as possible. Even the content follows this odd philosophy: it doesn't bother explaining things.