Here are some unfinished notes about what the diffs mean: https://gist.github.com/laarc/a07799e89940dc7c31c9 It's interesting how short the diffs are vs how many features and fixes they deliver.
Did I miss it? Are people actively using it? I want to play with it, but I would hesitate to use a language with such a small community for a project that was mission critical.
It turns out that if you use arc for something critical, the language itself won't break. The programs written in Arc might contain bugs, but as far as I can tell, the underlying infrastructure is solid.
This is going to sound like bullshit, but a couple of years ago I wrote a site in Arc and left it running by mistake. It was set up to push hourly backups to Tarsnap, and the site was doing useful things, but I never ended up needing it.
Turns out the site continued running for 1.4 years by accident. There weren't any scripts to restart the server on a reboot; it simply ran for that long. Presumably it halted because Amazon rebooted the micro instance or the account ran out of money. When I checked the logs in Tarsnap, I was pretty amazed to find that it had served 37k requests, almost all of them from automated fuzzers. I hadn't launched the site; it just happened to be sitting at an obscure URL. The site had a bit of dynamic functionality, so it wasn't merely a static site. But I hadn't written it cleverly or carefully. Arc is what carried the site for those 16 months, not me.
As for the community: you're right, but I intend to change that. If the growth exponent flips from negative to positive, it won't be small for very long. In the meantime, though, the community is some of the nicest and most helpful people ever, so feel free to post whenever you get stuck, or shoot me a message.
As a non-Lisp programmer, can you sell the benefits of Arc to me?
(mac awhen (expr . body)
  `(let it ,expr (if it (do ,@body))))
In Python, by contrast, you'd have to introduce the name yourself:

some_value = my_function()
if some_value: ...
It's possible because the code is written in the same data structure that you use to manipulate normal values. I.e.:
(let x 1 (prn x)) is itself just a list: its elements are the symbol let, x, 1, and the sublist (prn x).
So: (some_macro (prn 1)). some_macro would receive a list of two elements (prn and 1), could do some operation on it, and return some new code, such as (prn 2), which would then get evaluated to print 2. Whereas (some_fn (prn 1)) would evaluate (prn 1) first (which returns nil) and pass nil to some_fn.
That's why it's so hard to impress a Lisp programmer with any other language: if a feature doesn't exist, they can add it with a macro. Actors, OO, or whatever really.
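For the non-Lispers in the thread, here's a toy sketch of that idea in Python (all names invented): S-expressions become nested lists, and a "macro" is just an ordinary function from code to code.

```python
# Toy "code as data": the expression (prn 1) is represented as the
# plain list ["prn", 1], and a macro is a function that receives that
# list unevaluated and returns new code.

def some_macro(form):
    # form is unevaluated code, e.g. ["prn", 1];
    # return new code with the argument bumped: ["prn", 2].
    head, arg = form
    return [head, arg + 1]

def evaluate(form, env):
    # A one-line "evaluator": look up the head symbol, apply it.
    head, arg = form
    return env[head](arg)

env = {"prn": print}                # prn just prints its argument
new_code = some_macro(["prn", 1])   # the macro sees code, not a value
print(new_code)                     # ['prn', 2]
evaluate(new_code, env)             # prints 2
```

A function, by contrast, would receive the already-evaluated result of (prn 1), never the code itself.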
Is it possible to use this technique to introduce, say, infix arithmetic operators, or ObjC's dot syntax? Or can you just trade one S-expression for another?
Yes and no. The fact that the code is a list (the same data structure that you use to manipulate values) is what makes it that powerful. Once you start adding more syntax, it's not a list anymore, so you can't just pass that code to your macro, and it becomes trickier.
For instance, you could receive your AST, manipulate it, and return it, but now you're getting into implementation details about how the lexer and parser work. Compare that with Lisp, where you get a list and you return a list. The fact that your code is your AST is what makes it so magical.
One problem that I see with Lisp is that since everything looks the same, it's hard to quickly parse when reading a file. I feel like reading a Python file is really fast because it's separated into classes and methods, and even the methods are clearly separated into "paragraphs". In Lisp, everything has the same syntax, and to know what's happening you need to read almost every line. Basically, I feel like it's lacking visual cues. Clojure fixes that a little bit by adding a few more data structures.
</ end of rant.
All that being said, I think Arc (and other Lisps) are amazing, and I'm sad that they're not more popular.
Sure. That's easy in Lumen: https://github.com/sctb/lumen/commit/6410c8cc2de66d1976515bd...
Is it possible to use this technique to introduce, say, infix arithmetic operators?
Yep! I wrote a macro to swap any instance of `x with the previous expression. Meaning, it allows you to write:
(foo `= (x `+ y))

which expands to:

(= foo (+ x y))
But more generally, it's easy to write a macro to do whatever you want. If you really want infix expressions all of the time, I could imagine writing a macro which matches the pattern (... + ... + ...) and replaces it with (+ ... ... ...)
That's made-up notation, but the idea is just "look for some set of symbols, like + or /, which occur within a list in between all of the other elements, and then move the symbol to the front of the list."
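Here's a rough sketch of that pattern match in Python, using lists as stand-ins for S-expressions (the function name and operator set are made up):

```python
# Match the pattern (a OP b OP c ...) -- the same operator interleaved
# between all the other elements -- and move it to the front: (OP a b c ...).

def infix_to_prefix(form, operators=("+", "-", "*", "/")):
    if len(form) >= 3 and len(form) % 2 == 1:
        op = form[1]
        if op in operators and all(form[i] == op for i in range(1, len(form), 2)):
            return [op] + form[0::2]   # operands sit at the even positions
    return form

print(infix_to_prefix(["x", "+", "y", "+", "z"]))  # ['+', 'x', 'y', 'z']
print(infix_to_prefix(["+", "x", "y"]))            # ['+', 'x', 'y'] -- already prefix
```

In a real Lisp this would be a macro walking the code tree, but the point stands: the transformation is ordinary list surgery.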
The power is available to you, but whether you choose to use it is a separate question. Prefix notation tends to be a win for everything except math. And even for math, it's usually a matter of code formatting. Rather than
(- (* 2 (/ (+ x y) (+ a b c))) 1)

you could write:

(- (* 2
      (/ (+ x y)
         (+ a b c)))
   1)
(let u (+ x y)
  (/= u (+ a b c))
  (*= u 2)
  (- u 1))
Or, without the outer parens:

let u (+ x y)
/= u (+ a b c)
*= u 2
- u 1
1. You might be interested in how my arc-inspired language did indent-sensitivity and infix: http://akkartik.name/post/wart. For example, here's bresenham's line-drawing algorithm in wart: https://gist.github.com/akkartik/4320819. There are more code samples at http://rosettacode.org/wiki/Category:Wart.
2. Do you have a username on arclanguage.org? Perhaps we've met!
Wart is freaking cool! Thanks for showing that. Whoaaa, you use tangle/weave! How do you like it / what do you think about literate programming? I was batting around the idea of writing code that way, but it seemed like tangle might not be the best solution. But I love your 000prefix naming scheme; I instinctively do that for branches. After reading through a few files starting at https://github.com/akkartik/wart/blob/master/literate/000org... it became pretty clear how wart worked, pretty quickly. I'm mainly wary of the "compile" step that using Tangle implies; do you find it slows you down much? Would you choose to do it that way if starting fresh?
What would a language ecosystem be like without any backwards-compatibility guarantees, super easy to change and fork promiscuously?
It's fun to find someone has been mulling over the same problems!
I was considering taking Laarc in a similar direction: you'd write something like (import "akkartik/bar"), and on first run it would fetch the library and resolve the import to a specific commit. It would also rewrite its own source code to look like:
(import "akkartik/bar" "f1d2d2f924e986ac86fdf7b36c94bcdf32beec15")
So it's static linking, basically. It's a guarantee that "This source code will always work, the same way (+ 1 2) will always yield 3. You could copy-paste this source code into a gist, and whoever ran it would end up running the same program."
That removes dependency hell / worrying about versions entirely. But it's not the right solution, and I've been searching for a better one. What do you think?
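For concreteness, the rewriting step might look something like this sketch (Python, with invented names; the real thing would ask git for the hash):

```python
import re

# Rewrite each unpinned (import "user/repo") form so it carries the
# commit hash it resolved to, making the source self-pinning.
def pin_imports(source, resolve):
    # resolve: maps "user/repo" -> a commit hash
    def repl(match):
        name = match.group(1)
        return '(import "%s" "%s")' % (name, resolve(name))
    # Only matches single-argument imports, so already-pinned forms
    # (which carry two strings) are left untouched.
    return re.sub(r'\(import "([^"]+)"\)', repl, source)

src = '(import "akkartik/bar")'
print(pin_imports(src, lambda name: "f1d2d2f924e986ac86fdf7b36c94bcdf32beec15"))
# (import "akkartik/bar" "f1d2d2f924e986ac86fdf7b36c94bcdf32beec15")
```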
The problem is that it throws away the ability for the code to pull updates from library maintainers. If the import is "pinned" at commit f1d2d2f9, then when you push a security-related patch to akkartik/bar, most existing code will stay vulnerable forever, because most code is never maintained after being written.
I was thinking of following that model, but using the convention that libraries are expected to use git tags in order to indicate major.minor.patch, and (import "akkartik/bar" "f1d2d2f924e986ac86fdf7b36c94bcdf32beec15") will bump itself to the next commit if it's tagged as an increment of the patch version.
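The bump rule could be sketched like this (Python; the tag data and function name are invented for illustration):

```python
# Auto-bump a pinned commit, but only to a later *patch* release of the
# same major.minor, following the git-tag convention described above.

def next_patch_pin(pinned, tags):
    # tags: maps commit hash -> (major, minor, patch), derived from git tags
    if pinned not in tags:
        return pinned              # untagged pin: leave it alone
    major, minor, patch = tags[pinned]
    newer = [(ver, commit) for commit, ver in tags.items()
             if ver[:2] == (major, minor) and ver[2] > patch]
    return max(newer)[1] if newer else pinned

tags = {
    "f1d2d2f9": (1, 4, 0),  # currently pinned release
    "aa11bb22": (1, 4, 1),  # patch release: safe to take
    "cc33dd44": (1, 5, 0),  # minor bump: never auto-upgrade
}
print(next_patch_pin("f1d2d2f9", tags))  # aa11bb22
print(next_patch_pin("cc33dd44", tags))  # cc33dd44
```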
What kind of things have surprised you about Wart during development? Any unexpected pitfalls or interesting discoveries along the way? Thanks again for pointing it out!
By the way, were there any ambiguities with indentation-as-parens? I haven't thought about it rigorously, so I was just wondering if there are any corner cases to watch out for when I add it.
"What kind of things have surprised you about Wart during development?"
I stopped working on wart at some point for two reasons. First, I realized that I was falling into a blind spot shared with many contemporary programmers: blindly assuming that the way to improve the state of programming was by creating a new language. But there is more to programming than languages. Languages just happen to be memetically tempting sirens to chase after. Second, I painted myself into a corner with wart because I made it too late-bound, so late-bound that it was incredibly inefficient and so hard to optimize. (I think it might still be possible, but I lost patience partly because of the previous point.)

I still like my infix scheme and have a particularly soft spot for the way wart allows keyword arguments anywhere in any function call. See, for example, the use of :from in a webserver implementation that makes everything so much clearer: https://github.com/akkartik/wart/blob/ce64882c69/071http_ser....

However, I've forced myself to harden my heart and ruthlessly ignore considerations of syntax, and focus on what I think are more impactful directions: conveying global rather than local structure of programs. More details: http://akkartik.name/about. In brief, I'm working on ways to super-charge automatic tests (by making them white-box, and by designing the core OS services to be easy to test) and version control (using the literate layers you noticed). It's a whole new way of programming where a) readers can ask "why not write this like this?", make a change, run all tests and feel confident that the change is ok if all tests pass; and b) readers can play with a simple version of the program running just layer 1, then gradually learn about new features by running layers 1+2, 1+2+3, and so on.
"The problem is that it throws away the ability for the code to pull updates from library maintainers. If the import is "pinned" at commit f1d2d2f9, then when you push a security-related patch to akkartik/bar, most existing code will stay vulnerable forever, because most code is never maintained after being written."
I hadn't considered hard-coding hashes in imports, because it's historically been very hard to truly distinguish compatible changes from incompatible ones without introducing bugs. I think it's more important to give people control over upgrades than to try to improve over the current advisory+pull method of communicating security issues.
You're right that most code is never maintained after being written. But most code is also utterly unimportant to anyone including the author. So that failure of security is ok :) Trying to force concern about security only increases the moral hazard -- people get used to other people thinking about their security for them.
"were there any ambiguities with indentation-as-parens? I haven't thought about it rigorously, so I was just wondering if there are any corner cases to watch out for when I add it."
The big insight, I think, is to not fear parentheses too much. A common failure mode of such approaches (like http://readable.sourceforge.net and http://dustycloud.org/blog/wisp-lisp-alternative) is to create too many new token types and lose the simplicity of lisp syntax. My approach instead is to a) just use parens sooner than look "outright barbarous" (https://www.mtholyoke.edu/acad/intrel/orwell46.htm), and b) take it a step further and disable all indent-sensitivity inside parentheses.
Anyways, that's my two cents. The precise rules I use are at https://github.com/akkartik/wart/blob/master/004optional_par.... (They're really in the first couple of sentences there.)
"Whoaaa, you use tangle/weave! How do you like it / what do you think about literate programming?"
Like I said above, I like it very much, especially in combination with tests and a notion of layers. Here's my take on why literate programming has failed so far: http://akkartik.name/post/literate-programming
You're the only one, then.
> Is it possible to use this technique to introduce, say, infix arithmetic operators?
Take a look: http://srfi.schemers.org/srfi-110/srfi-110.html#examples
Well, certainly, but on what basis? Is a "gut feeling" enough of an argument to go against established definitions and facts?
I didn't want to be rude - I wanted to provoke a bit of research on the part of millstone. I might have utterly failed, but my intentions were good...
No no, if anyone failed, it was me! Don't worry about it. If I'd done a better job showing people, or if I'd given them something to play with, then words would've been unnecessary. We're all in it together. Part of the challenge is making everyone feel like they can do great things.
Is a "gut feeling" enough?
Counter-intuitively, it seems like the answer is yes, for two reasons: (a) It prevents me from fooling myself. If I say "Look how beautiful this design is," people will push back if most of them disagree. (b) If young people don't want to build cool stuff with Lisp, that's alarming to someone who intends to grow Lisp, as I do. And young people turn out to be the most likely to vocalize a negative gut reaction.
The trick is to be aware when (a) happens, but to brush it off if you're right and the world is wrong. It's as hard as it sounds.
I usually run the language past some friends and tell them to be as cynical as possible, then think of ways their critique is justified. The discoveries from that process have been so cool. For example, the outermost parens of a Lisp expression are unnecessary:
(def even (x)
  (is (mod x 2) 0))

could instead be written:

def even (x)
  (is (mod x 2) 0)
Whether it's a good idea to formally drop them is a different question, but now I know to check. It'd break all existing Lisp auto-indent plugins. But that could be worthwhile; it depends.
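Mechanically, restoring the dropped parens is a small transformation. Here's a toy sketch in Python (wart's actual rules are subtler, e.g. indentation is ignored inside parens):

```python
# Re-wrap top-level forms: a non-indented line opens a form, indented
# lines continue it, and the form closes before the next non-indented line.

def rewrap(source):
    out, open_form = [], False
    for line in source.splitlines():
        if line and not line[0].isspace():   # a new top-level form begins
            if open_form:
                out[-1] += ")"
            out.append("(" + line)
            open_form = True
        else:
            out.append(line)                 # continuation line
    if open_form:
        out[-1] += ")"
    return "\n".join(out)

print(rewrap("def even (x)\n  (is (mod x 2) 0)"))
# (def even (x)
#   (is (mod x 2) 0))
```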
User testing has been pretty great.
Here's an example in concrete terms of the power of this technique: https://github.com/laarc/monki/blob/master/sudoarc.l
That's a file which adds many of Arc's constructs to Lumen: printing (pr, prn), list building (accum), structure (w/uniq, aif, awhen), etc.
To put it another way: At the top of the file, the language available to you is Lumen. But at the bottom of the file, the language available to you resembles Arc. (It doesn't redefine anything from Lumen, though, so it's not a destructive change.)
That must sound pretty dubious. It probably sounds like it's useful only for lone-wolf programmers rather than a team. If you give people the power to reshape the language, wouldn't it wreak havoc in a team setting?
The answer turns out to be no, and it's surprising enough to write an essay about. Having macros available to your team ends up making your team more powerful, not less. It's simply a technique to generate patterns of code.
When coupled with a good debugger, there's no mystery when things break. When an error happens, you get a debug breakpoint right where the error occurred. It makes no difference whether the error occurs during a macro expansion or during a function call, because macros are simply functions that answer "What should happen next?"
It's true that there are ways to abuse macros which make them difficult to understand or debug. But this is rare. The most common case is a short, simple transformation like d0m showed: "here is a pattern; please generate it."
def awhen(func, callback):
    val = func()
    if val:             # standard form
        callback(val)   # of if

awhen(func, code)  # not obvious that it does the same thing,
                   # and limits you to using a function defined
                   # elsewhere, or Python's crippled one-line lambda.
;; regular when
(let val (func)
  (when val      ; standard form
    (code val))) ; of when

(awhen (func)  ; standard form of
  (code it)    ; when, with added bound value
  ; possibly other
  ; code goes here
  )
If you don't like magic, you could invent an awhen that took an extra parameter that designates the name to bind the result to, e.g. (awhen val test).
Yes - you can achieve the same thing, as in any programming language that is Turing complete.
No - you cannot get the same syntax. And syntax affects readability and familiarity.
You're right that I could have used a better example, such as `when` or whatnot, but that's the first thing that came to me ;-)
The difference between using a decorator and a lisp macro is that using a decorator is "using the syntax of the language to solve your problem", while a macro is "adding a new syntax".
I do think there are ways in Python (or in C++) to use meta-programming to tweak the language, but it always feels very hacky. E.g. I'd hate to jump into a Python code-base that isn't standard, such as one using an @awhen decorator instead of a normal if ;-) What about you? But in Lisp, there's no difference in the syntax, and adding missing features to the language instead of repeating the same pattern is the standard way to code.
I should start by clarifying that I'm not associated with Arc in any way. Everything that I show should be thought of as an eager derivative, at best. For several months a colleague and I have been working on such a derivative, which we've been calling Laarc, which incorporates a few additions that make it a bit easier to write programs.

Here's a debugger similar to pdb.set_trace(), for example: http://i.imgur.com/WWKa4kw.png Except that when it breaks, it shows a list of the values of all the local variables. It's really useful for stepping through macro expansions and understanding them, because there's no longer any mystery when something breaks.

Laarc isn't anywhere near ready yet, so it's too early to know whether it'll be useful generally, but the goal is to discover a way to make programming feel like a Tesla.
So, what are the benefits? Well, here's an example of a real-world program. It's all pimply and has a few hacks in it, but on the other hand, you can get the gist of how it might look: https://github.com/laarc/monki/blob/master/main.l
Note that this was made two days ago, and I've been working very fast, so it's pretty rough. The program (monki) is Yet Another alternative to git subtrees / submodules; think of it as "cp for git". `monki clone voltagex/foo foo` would create a clone of that repo in the subdir "foo", and that subdir can be within one of your existing repositories. It works, but it's not quite finished.
Here's the scenario it's useful for: https://github.com/tlbtlbtlb/startuptools
This requires an installation of tlbcore (https://github.com/tlbtlbtlb/tlbcore) symlinked to this directory.
With monki, you'd just type `monki clone tlbtlbtlb/tlbcore tlbcore` from within a clone of startuptools, and then you'd commit the result. Monki takes care of pulling updates, so that whenever tlbcore changes, startuptools would also get the new stuff. But you can `monki clone` a specific commit instead, if you want to pin to a "known working" version.
Anyway, that monki business was the result of trying to answer "What's a simple way where I can share code between all my repositories, pull in updates as I make them, and not have to worry too much?" And it was a good chance to exercise... Lumen!
Confused yet? Ok, so Arc runs on Racket. So did Laarc. Then we had the good fortune to hear about Lumen: https://github.com/sctb/lumen
That should catch anyone's attention -- the FFI is literally "copy paste some C declarations and you're done."
I want to take that further. In Lumen, it's already possible to interface with node libraries or with lua libraries as-needed. What if we extend it to work with python and go libs as easily as it works with C? You might end up with a language that could do any given task very easily, because you'd simply use whatever library happens to be convenient regardless of whether it's a C lib, a node package, or a Ruby gem.
So why have I been yakking about Lumen? Because it seems like it offers a bunch of interesting benefits as a runtime. I've been porting Laarc from Racket to it, slowly. It's extremely early, but here's an early version of the compiler: https://github.com/laarc/lumen-arc/blob/master/arc.l ... It's essentially ac.scm. I wanted to make it match as closely as possible, both because ac.scm is a good answer to "What's a good way to write a lisp compiler in a lisp language?" and because it's dangerous to assume you can write from scratch something that a smart team has been thinking about and working on for 13 years.
Sooo... That's a whirlwind tour of stuff that's not even ready to tour. On the plus side, the plan is to get an in-browser repl running soon (arc can already run in the browser, so it's a matter of coaxing CodeMirror into submission). There's a server running Lumen over at http://arc.lol:9999 which does absolutely nothing useful yet. The plan is to get a REPL up and to dub it "Arc dot tiefighter" since "lol" obviously looks like a tie fighter.
Imagine a language whose startup time was faster than python / ruby / node, yet could be bent into whatever shape you wanted, thanks to macros. If you install luajit and run `make; make test`:
$ time make test
525 passed, 0 failed
525 passed, 0 failed
- Arc embraces macros.
- You can redefine anything. (assign + -) (+ 3 4) evaluates to -1. Not only are you an equal partner to the language, but the language is short enough to carry in your head.
- Green threads. Here's a snippet from Laarc which does hot code reloading:
(on-err autoerr (fn ()
But green threads make it so easy to do that in parallel. I never once had to worry about synchronization issues in the case of the autoreloader.
- There's no FFI for Arc. However, I'm in the process of porting Arc from Racket to Lumen, which has a very interesting FFI thanks to LuaJIT: https://github.com/sctb/motor/blob/master/motor.l
It's literally "copy-paste these C declarations. Now you can call those C functions." It quite simply rocks.
If you're interested in Arc, consider Lumen as well. https://github.com/sctb/lumen
It has many similarities to Arc, like a powerful macro system, and has some interesting distinctions, like keyword arguments and the FFI facilities.