This one is hopelessly outdated. See https://github.com/c3d/xl for the current state of things. A bit of history of the language here: https://github.com/c3d/xl/blob/master/doc/HISTORY.md.
Warning: this is currently under heavy repair / refactoring, so Nothing Works™.
An interesting (if old, 2010) application of the language here: http://tao3d.sourceforge.net.
Contributions and ideas welcome.
Personally, I gave up on creating a new programming language about a decade ago, because I realized that no matter what language I create, people will not use it if it comes from a random GitHub page.
Also, existing languages cover almost all business needs, and while languages like LX provide a lot of improvements, the cost of change seems to be greater than the cost of doing without those improvements.
As for ideas, what I'd like to see in a programming language is signal-based programming: the execution flow of a program is not 'one instruction after the other' but reactions to signals. A program shall be built by wiring output signals to input signals.
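To make the wiring idea concrete, here is a minimal toy sketch in Rust (the `Signal`, `connect`, and `emit` names are my own invention, not any real library): an output signal is wired to input handlers, and emitting a value runs the reactions instead of a linear instruction flow.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A toy signal: a list of handlers that all fire when a value is emitted.
struct Signal<T> {
    handlers: Vec<Box<dyn Fn(&T)>>,
}

impl<T> Signal<T> {
    fn new() -> Self {
        Signal { handlers: Vec::new() }
    }
    // Wire an input handler to this output signal.
    fn connect(&mut self, f: impl Fn(&T) + 'static) {
        self.handlers.push(Box::new(f));
    }
    // Emitting a value runs every connected reaction.
    fn emit(&self, value: &T) {
        for h in &self.handlers {
            h(value);
        }
    }
}

fn main() {
    let log = Rc::new(RefCell::new(Vec::new()));
    let mut clicks = Signal::new();
    let l = log.clone();
    clicks.connect(move |n: &i32| l.borrow_mut().push(*n * 2));
    clicks.emit(&21);
    assert_eq!(*log.borrow(), vec![42]);
}
```

The program's behavior is then defined entirely by which outputs are connected to which inputs, not by statement order.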
I'd also like to see types morph based on run-time conditions.
The most fundamental problem with all current programming languages is that types are too rigid and don't allow for catching all the possible outcomes. For example, a File morphs between a closed file and an opened file, yet in current systems any operation can be applied to a file, irrespective of whether the file is actually opened or closed.
I believe that the combination of signal-based programming and type morphing will increase the reliability of programs, leaving only the bugs that by their very nature cannot be proven absent (akin to the halting problem).
I may not be necessarily correct, but I have a strong belief in the above.
Speaking as a fellow sufferer, how did perl, python, php, lua, ruby start getting used? They were pre-github, but jq really did start that way: https://github.com/stedolan/jq.
The key to being used is a usage: a problem to apply it to. If there's a problem that people need solved, and your language is much better than the alternatives, then people will use it, post about it, rave about it. If JSON wasn't popular, jq wouldn't be popular.
OTOH your actual interest is in more abstract ideals. This is crucially important work, but you may be right that no one will use it... instead, they'll be interested in the ideas you are interested in. Finding a needed application of your ideas will generate use.
In the pre-GitHub days, candidate programming languages were fewer and solved important problems that were still nearly unsolved. The further you go into the past, the easier it was to get your foot in the door.
But you don't find them that way - you look for a solution to your problem, and you come across them. That's why they say "scratch your own itch"; someone else might have that itch too.
You search for JSON transformation, you get jq (that's how I found it, some years ago).
You're right that it was easier earlier. But there are new, unsolved, important problems all the time, and "solutions to problems" radically narrows the search space.
But not in Rust. As a matter of fact anything based on intuitionistic logic could do that for you (e.g. use-only-once, must-use, etc).
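For the file example above, a rough sketch of how Rust can encode the open/closed distinction in the type itself (the so-called typestate pattern; all type and method names here are illustrative, not from the standard library):

```rust
// "Type morphing" at compile time: the open/closed state of a file
// lives in the type, so reading a closed file cannot even compile.
struct ClosedFile { path: String }
struct OpenFile { path: String, contents: String }

impl ClosedFile {
    fn new(path: &str) -> Self {
        ClosedFile { path: path.to_string() }
    }
    // Opening consumes the ClosedFile and yields an OpenFile.
    fn open(self) -> OpenFile {
        // A real implementation would read from self.path; stubbed here.
        OpenFile { path: self.path, contents: String::from("stub contents") }
    }
}

impl OpenFile {
    fn read(&self) -> &str { &self.contents }
    // Closing consumes the OpenFile; further reads are a compile error.
    fn close(self) -> ClosedFile {
        ClosedFile { path: self.path }
    }
}

fn main() {
    let f = ClosedFile::new("data.txt");
    // f.read();          // does not compile: ClosedFile has no `read`
    let f = f.open();
    assert_eq!(f.read(), "stub contents");
    let _f = f.close();   // `f` is moved here; no more reads possible
}
```

The move semantics are what make this work: once `open` or `close` consumes a value, the old state is unreachable, so "operation on a closed file" is ruled out statically.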
These days my thoughts drift towards novelty and niche domains. What would be the most interesting way of describing a subset of things? The most fun? The most intuitive? What's a way to break a specific domain down into a vocabulary that lets you populate it while forgetting you're even using a computer?
I think there's still a lot to explore even if you aren't shooting for maximum productivity or even general-purpose programming.
Signal-based processing is Observables/reactive streams, which provide a sane interface to use.
This is (one of) the problems with new languages: access to a library ecosystem. It's why Scala/Clojure/Kotlin are one camp of languages and Zig/Rust another: both Java and C have a ton of great libraries written in them, including ones that add signals-first approaches as libraries.
Do you have at hand a link where I can read more about this? Thank you in advance!
For Tao3D, you have https://tao3d.sf.net.
I was actually looking for something specific about parse trees vs S-expressions. That would be https://github.com/c3d/xl/blob/master/doc/HANDBOOK_1-syntax....
But I'm reading the whole thing now. Very interesting language! I don't see myself using it in production any time soon, but it looks like a wonderful tool for programming languages research. Thank you again!
Uh, Lisp caught on quite well. Rebol was held back by coming out at a time when being open source (or at least having an open spec) was expected of languages; it wasn't opened up until most of the interest had passed.
Forth, I gather, was pretty popular, but fell off its peak even before Lisp did.
Of course, while not with Lisp style macros, pervasive metaprogramming was a key distinguishing feature of both Python and Ruby, which both reached fairly intense levels of popularity. And of course Perl...
> Me, I find macros and implicits repulsive.
And it's fine that that is your taste, but it's clearly not as universal as you would like to pretend.
Lisps are still here (think Clojure). Rust has macros. New-style C++ is mostly about metaprogramming.
And what's an 'implicit'?
I would trade real macros for almost any other language feature, as having them means being able to patch over whatever is missing.
The more powerful a language is, the less popular it will be. It couldn't be any other way, since only people with enough experience appreciate that power and feel comfortable dealing with it.
My point is, the effort of reading code grows proportionately not just with the complexity and size of the code, but also with the size of the syntax. In a language with no macros, the latter is fixed and you get a linear dependency. In extensible PLs it's a quadratic one with an unknown second factor, jeesh! I think the approach of a more powerful but almost fixed language (e.g. Haskell, where macros are hard and mostly used to generate boilerplate definitions) is better than a small but extensible language where every programmer is in a contest over who makes a wittier and sleeker new control structure that is like the builtin ones but SO much cooler...
Why didn't it happen? Because programmers overall are reasonable folks. So, yes, in theory, you can define a factorial function and call it "Not_a_Cat" (technically, it is true), but nobody does that.
Similarly, giving the ability to define `X in Y..Z` makes code more readable if you use it right, not less.
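For instance, in Rust (purely as an illustration; the `within!` name and its `=>` syntax are made up for this sketch), a user-defined `X in Y..Z` form could look like:

```rust
// A user-defined range-membership form, roughly "X in Y..Z", written
// as a declarative macro. Used well, it reads like a builtin.
macro_rules! within {
    ($x:expr, $lo:expr => $hi:expr) => {
        ($lo..$hi).contains(&$x)
    };
}

fn main() {
    assert!(within!(5, 1 => 10));    // 5 is in 1..10
    assert!(!within!(10, 1 => 10));  // the upper bound is exclusive
}
```

The macro expands to an ordinary `Range::contains` call, so the new "control structure" costs nothing at runtime; the readability question is only about whether the reader already knows the form.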
There's no need to choose between Zig and LX. Use the one that better fits the problem. If for your case that's Zig, good for you.
I'm trying to work on that now, which is why I'm presently spending so much time on documenting where I want to go. The idea is that if I first write a "spec", then a community can build to implement it, and it's not just my own (limited) time.
So if you are interested in helping right now, I'd say: read the doc (in progress) and send comment / doc fixes.
.NET has T4 templates, attributes, reflection, expression trees, and F# adds code quotations to the mix.
This strategy is kind of "punting" a lot of design work onto the user, which has a lot of downsides.
Metaprogramming doesn't make code less readable.
Metaprogramming has a lot of "magic" associated with it which makes code hard to debug and understand sometimes.
You cite the success/popularity of Ruby as an example of why metaprogramming does not make code less readable. But the popularity of a programming language does not mean the language has no significant warts. There are cultural and historical factors that make a language popular. Availability of web frameworks (e.g. Rails) is another powerful influence.
For the large part, Ruby code _is_ understandable/readable if you avoid metaprogramming. But once you have a large codebase and advanced frameworks that make heavy use of metaprogramming, you can quickly get lost. There are a lot of "effects at a distance": one library changes something, and that breaks something far away and potentially not even connected. Would you say your code is readable then? No, the code is not readable, because it runs in a totally unexpected manner.
My theory is that beginners decided to learn Ruby because it was easy to start with and had a lot of powerful frameworks (e.g. Rails). The learning curve is gentle. But pretty soon, when you start writing advanced Ruby or using advanced frameworks, things can get complex.
LX: Langage expérimental ("experimental language", ~1992), Ada-like with "pragmas" extending the language
LX: Langage extensible (1998): Rewrite with documented object-oriented parse tree
LX/XL: Extensible language (2000) based on an object-oriented parse tree, part of a cross-language framework called Mozart, which also had a Java front-end called Moka
XL2: Extensible language (2002) based on simple parse tree, 8 node types, Ada-like in syntax
XL2: Self-compiling compiler (2005?) with the same 8-node parse tree, Ada-like
XLR: LLVM-based functional variant (2008-2009), same parse tree, functional, very simple, library-defined if-then-else
Tao3D: real-time 3D graphics running on XLR (2010-2015)
ELiOT / ELFE: Distributed programming (2015?), based on interpreted version of XLR
XL: Reconvergence of all of the above (WIP), all merged, implemented as a library with a tiny front-end.