Think of it like how the expectation for apps is that there'll be a screenshot (or video!) on the homepage of the app doing its job — i.e. of the user experience of the app.
The first thing you should shove in my face, when I land on the home page of a programming language, is the user experience of writing code in the language. Which is to say: the syntax.
But also, if your language has a REPL, or an IDE, or a weird graphical virtual machine like Smalltalk, then that's part of the language's UX as well — so show that too! (And if you can, integrate the two together. Show me a cohesive example of what the syntax looks like inside the REPL/IDE.)
Language home-pages doing this right: https://www.ruby-lang.org/en/ , https://elixir-lang.org/ , https://www.python.org/ , https://www.haskell.org/ , https://nim-lang.org/
Language home-pages that get a B+, for putting the example on the homepage, but below the fold: https://go.dev/ , https://crystal-lang.org/
Everybody else (Rust, PHP, Java, Lua, etc.) gets an eyebrow raised in suspicion, because their home-pages feel like they're ashamed of what their syntax looks like :)
On the other hand, "language A, but it looks like language B" is definitely a thing, and I suppose those languages should put syntax front and center, since it's their selling point.
I'm pretty sure, if the user experience of coding in a language is bad enough, that you wouldn't use said language, no matter how interesting it is design-wise. There's a threshold below which most programmers would rather reimplement the cool design features in a different language with better syntax than use the original language.
Does anyone write in APL? No; it's basically impossible, given the syntax requires characters that aren't on your keyboard. But maybe you write in J. Or, more likely, you use a vector-operations/dataframe library in some arbitrary language.
Do people write in Erlang? Well, some do; but many others think that connecting expressions with semicolons and commas is too painful to want to deal with (though not me; I enjoy "systems Prolog.") And some of those people got together to design Elixir, so they could use the Erlang runtime without having to write Erlang.
Does anyone voluntarily pick up IBM JCL these days? Or COBOL? RPG? MUMPS? These archaic 1970s languages all have interesting design choices — heck, MUMPS assumes a database as part of the language runtime! — but they're just too awful to actually write in. It's far easier to just read about these languages as historical artefacts, and then implement their good ideas into a new language, than to actually use them.
Does anyone use FORTRAN? Well, yes, if they have to, to extend OpenBLAS with something. But mostly as little as possible. And there are fewer reasons to do so these days, with e.g. cuBLAS not being written in FORTRAN.
You put a syntax example on the home-page not to impress people with how cool your language looks; but rather, to show that you're not trying to bait-and-switch a prospective developer, by talking them into using a language by describing its cool features, only to reveal at the last moment that the syntax is awful.
I think the worst way to do it would be to use 13px Times New Roman text with no syntax highlighting on a secondary page. But that's what Io went with.
This guy started furiously implementing it at the beginning of the year but stopped at the end of February. (My guess is he got a new job.)
There have been many attempts at reimplementing the language on a "more solid" base, i.e. on top of languages/runtimes that take care of garbage collection, like JS, Go, or Java/the JVM.
I suspect a complete and community-supported Graal implementation would be the ticket for Io. It would be an incredible dynamic language paired with an incredibly strong and robust VM and ecosystem (think libraries, tools, manpower, etc.).
I really hoped I had the chops at language implementation to make this a reality, but after watching the development I can tell I'm waayy out of my league.
Found some other implementations under https://iolanguage.org/links.html. None claim to be complete or working, and one link was dead :(
The semantics are equally beautiful and elegant.
It's just a cool language.
The main downside is that the semantics are so dynamic that it's virtually impossible to optimize.
Arguments are passed to methods as unevaluated syntax trees, and a method can decide at runtime whether to evaluate an argument expression, introspect over it, or even mutate it.
It's like every function is a macro but macros can also choose at runtime to selectively evaluate some of their arguments.
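A tiny sketch of what that looks like (hypothetical Python with tuples standing in for syntax trees — an illustrative model, not Io's actual implementation): every operator receives its argument expressions unevaluated, plus the environment, and decides for itself what to do with them.

```python
# Toy evaluator sketching Io/fexpr-style argument passing. Expressions are
# symbols (strings), literals, or tuples of (operator, arg1, arg2, ...).

def evaluate(expr, env):
    if isinstance(expr, str):        # symbol: look it up
        return env[expr]
    if not isinstance(expr, tuple):  # literal: self-evaluating
        return expr
    op = evaluate(expr[0], env)
    return op(env, *expr[1:])        # arguments passed UNevaluated

# 'if' needs no special form: it simply chooses which argument to evaluate.
def my_if(env, cond, then, els):
    return evaluate(then, env) if evaluate(cond, env) else evaluate(els, env)

# 'quote' never evaluates its argument: it just returns the raw syntax tree.
def my_quote(env, arg):
    return arg

env = {"if": my_if, "quote": my_quote, "x": 10}
print(evaluate(("if", True, "x", "boom"), env))     # 10; "boom" is never looked up
print(evaluate(("quote", ("if", "x", 1, 2)), env))  # the unevaluated tree itself
```

Note that the untaken branch ("boom") is never evaluated at all, which is exactly why no special forms are needed in this model.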
Early Lisp was able to bind an operator symbol to four kinds of procedures:

The SUBRs are just functions, built into the system in assembly language: arguments are evaluated and passed to them.

The FSUBRs are built-in special operators, also written in assembly language: they receive their argument expressions unevaluated.

The EXPRs are functions defined by the program and interpreted: their arguments are evaluated, as with SUBRs.

The FEXPRs are also defined by the program: they receive expressions unevaluated, like in Io. They add new kinds of special operators, whose actions are interpreted; they decide at run time what to do with the argument expressions.

Loosely speaking, today's mainstream Lisp dialects still have the equivalent of FSUBRs (special operators) as well as SUBRs (built-in/compiled functions), as well as possibly EXPRs (uncompiled functions, in Lisp dialects that support interpretation). The FEXPR has disappeared; it is not found in e.g. Scheme or Common Lisp.
As for macros: macros can evaluate things in their own run time, but in a compiled setting, the run time of a macro is compile time. Macros don't make decisions in the run time of the code they expanded into; the macros are gone at that point. All decisions have to be expressed as generated code, which refers to information available at run time in that environment.
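To make that concrete, here's a hypothetical sketch (tuple-based syntax, not any particular Lisp's actual expander): the macro runs once, before the program does, and all it can do is emit code.

```python
# Hypothetical sketch: a macro is a source-to-source rewrite that runs at
# expansion time and leaves only ordinary code behind.

def expand_unless(expr):
    """Rewrite (unless cond then else) into (if cond else then)."""
    _, cond, then, els = expr
    return ("if", cond, els, then)

program = ("unless", ("ready?",), ("wait",), ("go",))
expanded = expand_unless(program)
print(expanded)  # ('if', ('ready?',), ('go',), ('wait',))
# By run time, 'unless' is gone: any run-time decision (which branch runs)
# had to be expressed as generated code -- here, the emitted 'if'.
```

A fexpr, by contrast, would still be present at run time and could inspect the live values before deciding anything.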
Things have changed. Fexprs are coming back in a big way.
You need to check out John Shutt's Kernel language https://web.cs.wpi.edu/~jshutt/kernel.html
Yes, older Lisps messed fexprs up. Kernel fixes this. The vau calculus used by Kernel is simply a lambda calculus that, unlike CBN and CBV, doesn't implicitly evaluate arguments. The rest of the calculus is the same.
What this means is you get powerful hygienic metaprogramming (arguably as powerful or even more powerful than Scheme's most advanced macro systems) at a low low price and with very elegant theoretical properties. In Kernel, hygiene is achieved simply with the usual lexical scope that's already in lambda calculus.
So vau calculus is simpler than the CBV lambda calculus used by Lisps: because it doesn't implicitly evaluate arguments, it does less than those calculi. And by doing less it gains the great power of being able to do hygienic metaprogramming in the same calculus, without second-class contraptions like macros.
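A minimal sketch of that idea (hypothetical Python, not real Kernel): an operative receives its operands unevaluated plus the caller's environment as a first-class value, and its body is evaluated in the lexical scope where it was defined — which is all the hygiene mechanism there is.

```python
# Toy vau-style operatives: operands arrive unevaluated, the caller's
# environment arrives as an ordinary value, and the operative's body runs
# in a child of its DEFINITION environment (ordinary lexical scope).

def evaluate(expr, env):
    if isinstance(expr, str):            # symbol lookup
        return env[expr]
    if not isinstance(expr, tuple):      # literal
        return expr
    op = evaluate(expr[0], env)
    return op(expr[1:], env)             # operands unevaluated + caller env

def vau(params, env_param, body, static_env):
    """Make an operative closed over the environment where it was defined."""
    def operative(operands, caller_env):
        local = dict(static_env)              # definition-site scope
        local.update(zip(params, operands))   # operands bound UNevaluated
        local[env_param] = caller_env         # caller env as a first-class value
        return evaluate(body, local)
    return operative

def builtin_eval(operands, env_here):
    """Applicative 'eval': evaluate an expression in a given environment."""
    expr = evaluate(operands[0], env_here)
    env = evaluate(operands[1], env_here)
    return evaluate(expr, env)

defn_env = {"eval": builtin_eval, "secret": "definition-site"}

# ($vau (x) e (eval x e)) -- evaluates its operand in the caller's environment
force = vau(("x",), "e", ("eval", "x", "e"), defn_env)
# ($vau (x) e secret) -- ignores its operand; free 'secret' resolves lexically
hygienic = vau(("x",), "e", "secret", defn_env)

caller = {"force": force, "hygienic": hygienic, "secret": "call-site", "y": 7}
print(evaluate(("force", "y"), caller))    # 7: operand evaluated where written
print(evaluate(("hygienic", "y"), caller)) # "definition-site": hygiene for free
```

The second call is the point: even though the caller binds its own `secret`, the operative's free variable resolves at the definition site, with no renaming machinery involved.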
I agree that Lisp dropped the ball on fexprs. It seems to have been a combination of dynamic scoping, well-founded performance concerns, and some conflation of syntax and semantics. The rule that special forms must appear under their own names is a related performance hack.
Lisp no longer has performance concerns. SBCL is excellent. Speculative JIT techniques would roll right over the "is this a function or a fexpr?" branch in apply with no trouble. I'm convinced there's no inherent overhead there.
I don't see much evidence that fexprs are coming back, though. Kernel is superb (though I'm unconvinced by its handling of cycles), but it doesn't seem to be in use. Qi/Shen is a similar sort of do-things-right effort with limited uptake. What do you have in mind?
My working theory is that Lisp is already niche enough, and a better Lisp (which I contend Kernel is: replacing the layers of macros with fexprs guts a lot of complexity from the language, and that was meant to be the driving motive behind Scheme) hits only a subset of the first niche.
Older Lisps didn't mess up fexprs. The developers wanted (ahead of time) compiling and moved on.
Using lexical scope in the context of fexprs is only a minor (and obvious) improvement. If fexprs made a comeback in, say, Common Lisp, it is painfully obvious they would be functions, whose parameters and locals are lexical by default.
Under a single dynamic scope, what it means is that when a fexpr is evaluating the argument code, its own local variables are possibly visible to that code. If the fexpr binds (let ((X 42)) ...) and inside there it calls EVAL on an argument which contains X, that argument's X resolves to 42.
That could be fixed by using two dynamic scopes: the fexpr having one implicit dynamic scope for its own execution (perhaps newly created for each call to the fexpr), and using an explicit dynamic scope for evaluating the argument material (that scope coming in as an argument).
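A toy illustration of both halves of that (hypothetical Python, dicts standing in for environments, a variable name standing in for the argument code):

```python
# Sketch of the capture problem under one shared dynamic scope, and the
# two-scope fix where the fexpr's locals live apart from the caller's scope.

def evaluate(expr, env):
    # trivially: an expression is either a variable name or a literal
    return env[expr] if isinstance(expr, str) else expr

def fexpr_one_scope(arg_expr, shared_env):
    shared_env["X"] = 42                   # the fexpr's own local lands in
    return evaluate(arg_expr, shared_env)  # the same scope as the argument code

def fexpr_two_scopes(arg_expr, caller_env):
    own_env = {"X": 42}                    # fresh scope for the fexpr's locals...
    return evaluate(arg_expr, caller_env)  # ...argument code sees only the caller's

print(fexpr_one_scope("X", {"X": "caller's X"}))   # 42: captured
print(fexpr_two_scopes("X", {"X": "caller's X"}))  # caller's X: fixed
```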
Under a single dynamic scope, if the number of fexprs in the system is small, they can stick to some namespace for their own variables, and all non-fexpr routines stay out of that namespace. Fexprs then have to be careful when they pass pieces of their own code to other fexprs.

In a program with large numbers of fexprs, symbol packages would solve the problem: there would be multiple modules providing fexprs, which would use identifiers in their own package. Then only fexprs in the same package could clash when they use each other, which is resolved by inspection locally in the module (e.g. by using unique symbols across all the fexprs).

I don't suspect hygiene was a problem in practice; during the heyday of fexprs, there wouldn't have been programs with huge numbers of fexprs (let alone programs with churning third-party libraries containing fexprs). Everything would have been done locally at one site, by a small number of authors working in their own fork of Lisp as such.
Thus, I don't think this was the main problem identified with fexprs; it was really the impediment to compiling. Shutt's paper doesn't seem to attack this problem at all.
Hygiene is a non-problem; we are still cheerfully using Lisps without hygienic macros in 2023, whereas fexprs not being compilable was clearly a pain point in 1960-something already.
A) An exciting research problem! Shutt himself says that he doesn't see any fundamental obstacles to compiling them. It's just that nobody has done it yet.
B) Actually not a big deal for many applications. Take PicoLisp, which has been cheerfully used in customer-facing applications for decades. It's an ultra-simple interpreter (its GC is 200 LOC: https://github.com/picolisp/picolisp/blob/dev/src/gc.c ). The same architecture can be used for Kernel implementations.
This covers the right sort of thing. It makes a guess at what a function call will be, inlines that guess, and if the guess turns out to be wrong, restarts from the start of the function in the interpreter. It doesn't explicitly call out the function calling convention, but guarded inlining has much the same effect.
Maybe worth noting that inlining a fexpr is semantically very close to expanding a macro. Identical if symbol renaming is done the same way for each.
That's a pretty compelling answer to why it doesn't need macros though. It's got the upgraded version built in.
I believe that is amenable to the same style of speculative compilation found in jit compilers for dynamic languages. The above reference to graalvm does seem a good idea in that context. I can't point to an existence proof though.
edit: someone is going to object to "upgraded macro", but nevertheless Lisp macros are exactly the subset of fexprs that can be statically resolved at compile time, i.e. a subset of a more powerful facility.
"Io is just an incredibly hushed secret. (Perhaps because it is impossible to Google stuff about it.) Did you know that Io’s introspection and meta tricks put Ruby to serious shame? Where Ruby once schooled Java, Io has now pogoed."
I once saw _why perform/present at a Ruby event. The coolest and weirdest thing I've seen at a conference in over 25 years of being in this industry.
I had completely forgotten about it until this popped up on HN so thanks for that.
Some examples done right:
The others being Ruby, Erlang, Scala, Clojure, Haskell and Prolog.
So the language has been dead since 2008?
The last commit merged to master was Nov 2022. It's not exactly the most active project in existence, but it's not dead either.