Show HN: Bel (paulgraham.com)
1288 points by pg on Oct 12, 2019 | 456 comments



Whenever I see a new programming language, this list of questions by Frank Atanassow comes to mind:

    1. What problem does this language solve? How can I make it precise?

    2. How can I show it solves this problem? How can I make it precise?

    3. Is there another solution? Do other languages solve this problem? How?
       What are the advantages of my solution? of their solution?
       What are the disadvantages of my solution? of their solution?

    4. How can I show that my solution cannot be expressed in some other language?
       That is, what is the unique property of my language which is lacking in
       others which enables a solution?

    5. What parts of my language are essential to that unique property?
Do read the whole post (http://lambda-the-ultimate.org/node/687#comment-18074); it has lots of elaboration on these questions.

From a skim of the Bel materials, I couldn't answer these questions. Maybe PG or someone else can take a stab at the answer?


I think the point of a high-level language is to make your programs shorter. All other things (e.g. libraries) being equal, language A is better than language B if programs are shorter in A. (As measured by the size of the parse tree, obviously, not lines or characters.) The goal of Bel is to be a good language. This can be measured in the length of programs written in it.

Lisp dialects have as a rule been good at making programs short. Bel is meant to do the same sorts of things previous dialects have, but more so.

It's also meant to be simple and clear. If you want to understand Bel's semantics, you can read the source.


Thanks for the response! Which features of Bel do you think will contribute the most to making programs shorter or clearer, compared to other Lisp dialects?


It's so broadly the goal of the language that many things do, often in small ways. For example, it turns out to be really convenient that strings are simply lists of characters. It means all the list manipulation functions just work on them. And the where special form and zap macro make it much easier to define operators that modify things. The advantage of Bel is a lot of small things like that rather than a single killer feature.
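
For instance, a rough sketch of the sort of thing this enables at the REPL (where a string literal is just a list of characters, and a list of characters prints as a string):

  > (rev "abc")
  "cba"
  > (cons \h "ello")
  "hello"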

Making bel.bel shorter was one of my main goals during this project. It has a double benefit. Since the Bel source is a Bel program, the shorter I can make it, the more powerful Bel must be. Plus a shorter source is (pathological coding tricks excepted) easier to understand, which means Bel is better in that way too. There were many days when I'd make bel.bel 5 lines shorter and consider it a successful day's work.

One of the things I found helped most in making programs shorter was higher order functions. These let you get rid of variables, which are a particularly good thing to eliminate from code when you can. I found that higher order functions combined with intrasymbol syntax could often collapse something that had been a 4 line def into a 1 line set.
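
A made-up example of that kind of collapse, using map, the composition syntax (car:cdr), and the bracket-function shorthand (the function itself is just an illustration):

  ; spelled out as a def
  (def seconds (xs)
    (map (fn (x)
           (car (cdr x)))
         xs))

  ; collapsed to a one-line set using a higher-order function and intrasymbol syntax
  (set seconds [map car:cdr _])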


> For example, it turns out to be really convenient that strings are simply lists of characters.

Statements such as these are very academic and concerning - to me - when it comes to new languages. And they make me wary of whether or not the language will ever move out of the land of theory and into practice.

Continuing with the "strings are cons lists" theme... Other, very notable languages have tried this in the past: Erlang and Haskell immediately come to mind. And - without exception - they all end up regretting that decision once the language begins being used for real-world applications that require even a moderate level of performance.

Lisp programmers (among which I count myself) are very fond of pointing to new languages and identifying all their "features" that were implemented in Lisp decades ago (so not "new"). And they also bemoan when a language design doesn't take a moment to learn from the mistakes of those that came before them. Strings as lists is very much in the latter case.

The above said, the idea of streams as a fundamental type of the language (as opposed to a base class, type class, or what-have-you) is quite intriguing. Here's hoping they are more than just byte streams.


As I read about strings-as-lists, I tried to maintain sight of one of the premises of PG's exercise -- that the language is climbing a ladder of abstractions in the non-machine realm.

The reality of 2019 is that strings are not random access objects -- they are either empty or composed of the first char and the rest of the chars. A list is the proper primary abstraction for strings.

That is, if "list" is not a data structure but a mathematical concept -- based on the concept of "pair." If I were a Clojure or Swift programmer -- and I'm both, I'd say there are protocols that embody Listness and Pairness, and an implementation would have multiple dispatch that allows an object to deliver on the properties of those protocols. There are other fundamental concepts, though, that deserve inclusion in a language.

Is something suitable as the first argument of `apply`? Functions obviously are, but to Clojure programmers, hash tables and arrays are (and continuations are to Schemers) just as function or relation-like as a procedure. This is so obviously a good idea to me that I am irritated when a language doesn't support the use of random-access or dictionary-like collections as function-like objects.

Which brings us to random-access and dictionary-like objects. Those should have protocols. And given that sets are 1) not relations and 2) incredibly important, set-like and perhaps bag-like protocols themselves deserve support.

At minimum, a designer should think through the protocols they want to support in a deep way and integrate those into the language. Maximally, protocols would be first-class features of a language, which is what I'd prefer, because a protocol-driven design is so often so much more productive than an OO one.

PG's plans with respect to all of the above are what really interest me. IIRC Arc embodied some of these concepts (arrays-as-functions), so at the moment I'm content to watch him climb this ladder that he's building as he goes and see what comes of it.

[Edit: various typos and grammatical errors.]


I don't think they're claiming that strings should be random access. Rather, I think they're objecting to the notion that strings should be sequences at all, rather than scalars (a la perl).


Speaking of performance and streams, it seems streams are not even byte streams: they are bit streams (rdb and wrb).


> For example, it turns out to be really convenient that strings are simply lists of characters. It means all the list manipulation functions just work on them.

Why is it preferable to couple the internal implementation of strings to the interface “a list of characters”? Also, since “character” can be an imprecise term, what is a character in Bel?


> Also, since “character” can be an imprecise term, what is a character in Bel?

Exactly. How are Unicode and UTF-8 treated by this language?


While we're down here in the weeds[0]...

A good way to do it, IMO, is for each character to be a Unicode grapheme cluster[2].

0. "This is not a language you can use to program computers, just as the Lisp in the 1960 paper wasn't."[1]

1. https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=157...

2. http://www.unicode.org/reports/tr29/#Grapheme_Cluster_Bounda...


> A good way to do it, IMO, is for each character to be a Unicode grapheme cluster[2].

I would agree. However, for the sake of argument,

The width of a grapheme cluster depends on whether something is an emoji or not, which, for flags, depends on the current state of the world. The regional indicator pair (g)(b) is valid as one single grapheme cluster (a depiction of the UK's flag), which is not valid if the UK splits up, for example. This is only one example among many that demonstrate that there is basically no 'good' representation of 'a character' in a post-UTF-8 world.


At this point Bel doesn't even have numbers. A character is axiomatically a character, not some other thing (like a number in a character set encoding). Isn't it premature to talk about character set encoding or even representation yet? A character is a character because it's a character. But I haven't nearly read through these documents so perhaps I'm not understanding something.


As far as I know a "character" is a number in a character set. "A character is a character because it's a character" doesn't make any sense, at least not in the context of a programming language. A character is a character because it maps to a letter or glyph, and that requires a specific encoding.

The Bel spec contains the following:

    2. chars

    A list of all characters. Its elements are of the form (c . b), where
    c is a character and b is its binary representation in the form of a 
    string of \1 and \0 characters. 
In other words, even in Bel a character is still a "number" in a character set encoding. The "binary representation" isn't mentioned as being a specific encoding, which means it's assumed to be UTF-8 or ASCII.

Also, Bel does have numbers, apparently implemented as pairs. Go figure.


You're right that I need to review more of the Bel spec (which as you point out does in fact implement numbers).

But I am trying to read this in the spirit in which it's intended, as irreducible axioms. And so I don't think it's true that a character needs a representation from the perspective of Bel.

Consider if pg had approached this more like I would have, and started with integers and not characters as a built-in. Does an integer need a "representation"? At least, in a language that is not designed to execute programs in, like Bel? An implementation (with the additional constraints like width, BCD vs two's complement vs whatever, radix, etc) would need one, but the formal description would not.

Thanks for exploring the subject.


> You're right that I need to review more of the Bel spec (which as you point out does in fact implement numbers).

Can I ask why you claimed that it didn't have numbers or characters? It seems an odd thing to claim without actually having that knowledge in the first place.


To be fair, numbers aren't listed in the spec initially as one of its basic types (and they aren't, they're implemented as pairs, which are a basic type) and no one is disputing that Bel has characters, just whether it's appropriate to mention character encodings for what seems to be intended to be mostly an academic exercise.


That's actually an interesting point for a Unicode mailing list, but as long as I can farm grapheme-splitting out somewhere, I'm happy.


> the shorter I can make it, the more powerful Bel must be

Yes, well, up to some limit, when the code becomes harder and harder, and eventually too dense and concise, for humans to easily understand? :- )

I wonder if humans sometimes understand better, and read faster, with a little bit of verbosity. I sometimes expand a function-style one-liner I wrote into, say, three imperative lines, because it seems simpler to read the next time I'm at that place again. — I suppose, though, this is a bit different from the cases you have in mind.

Best wishes with Bel anyway :- )


> One of the things I found helped most in making programs shorter was higher order functions. These let you get rid of variables, which are a particularly good thing to eliminate from code when you can. I found that higher order functions combined with intrasymbol syntax could often collapse something that had been a 4 line def into a 1 line set.

How do Bel's higher order facilities compare to Haskell's? E.g. https://wiki.haskell.org/Pointfree


I have to disagree; this is very clearly too simplistic. There are many dimensions in which a language can be better or worse. Things like:

* How debuggable is it?

* Do most errors get caught at compile time, or do they require that code path to be exercised?

* How understandable are programs to new people who come along? To yourself, N years later?

* How error-prone are the syntax and semantics (i.e. how close is the thing you intended, to something discontinuous that is wrong, that won't be detected until much later, and that doesn't look much different, so you won't spot the bug)?

* How much development friction does it bring (in terms of steps required to develop, run, and debug your program) ... this sounds like a tools issue that is orthogonal to language design, but in reality it is not.

* What are the mood effects of programming in the language? Do you feel like your effort is resulting in productive things all the time, or do you feel like you are doing useless busywork very often? (I am looking at you, C++.) You can argue this is the same thing as programs being shorter, but I don't believe it is. (It is not orthogonal though).

* What is your overall morale about the code's correctness over time? Does the language allow you to have high confidence that what you mean to happen is what is really happening, or are you in a perpetual semi-confused state?

I would weigh concision as a lower priority than all of these, and probably several others I haven't listed.


One answer to this question (and an exciting idea in itself) is that the difference between conciseness and many of these apparently unrelated matters approaches zero. E.g. that all other things being equal, the debuggability of a language, and the pleasure one feels in using it, will be inversely proportional to the length of programs written in it.

I'm not sure how true that statement is, but my experience so far suggests that it is not only true a lot of the time, but that its truth is part of a more general pattern extending even to writing, engineering, architecture, and design.

As for the question of catching errors at compile time, it may be that there are multiple styles of programming, perhaps suited to different types of applications. But at least some programming is "exploratory programming" where initially it's not defined whether code is correct because you don't even know what you're trying to do yet. You're like an architect sketching possible building designs. Most programming I do seems to be of this type, and I find that what I want most of all is a flexible language in which I can sketch ideas fast. The constraints that make it possible to catch lots of errors at compile time (e.g. having to declare the type of everything) tend to get in the way when doing this.

Lisp turned out to be good for exploratory programming, and in Bel I've tried to stick close to Lisp's roots in this respect. I wasn't even tempted by schemes (no pun intended) for hygienic macros, for example. Better to own the fact that you're generating code in its full, dangerous glory.

More generally, I've tried to stick close to the Lisp custom of doing everything with lists, at least initially, without thinking or even knowing what types of things you're using lists to represent.


Most programming I do is exploratory as well. Sometimes it takes a couple of years of exploring to find the thing. Sometimes I have to rewrite pretty hefty subsystems 5-7 times before I know how they should really work. I find that this kind of programming works much better in a statically-typechecked language than it ever did in a Lisp-like language, for all the usually-given reasons.

I agree that there is such a thing as writing a program that is only intended to be kind of close to correct, and that this is actually a very powerful real-world technique when problems get complicated. But the assertion that type annotations hinder this process seems dubious to me. In fact they help me a great deal in this process, because I don’t have to think about what I am doing very much, and I will bang into the guardrails if I make a mistake. The fact that the guardrails are there gives me a great deal of confidence, and I can drive much less carefully.

People have expressed to me that functions without declared types are more powerful and more leveragable, but I have never experienced this to be true, and I don’t really understand how it could be true (especially in 2019 when there are lots of static languages with generics).


> Most programming I do seems to be of this type

> language in which I can sketch ideas fast

Can I ask, what are your programs about?

And sketching ideas? Is it ... maybe creating software models for how the world works, then inputting the right data, and proceeding to simulate the future?


I'm still trying to sort this out here, not sure if I will manage, so accept my apologies if it's a little confused.

I agree with this idea to a degree. However, there are limits to this "relation". Imagine a sophisticated piece of software that compresses program sources. Let's say it operates on the AST level, and not on the byte level, since the former is a little bit closer to capturing software complexity, as you mentioned somewhere else. Now, I know that that's almost the definition of a LISP program, but maybe we can agree that 1) macros can easily become hard to understand as their size grows, and 2) there are many rituals programmers have in code that shouldn't get abstracted out of the local code flow (i.e. compressed) because they give the necessary context to aid the programmer's understanding, and the programmer would never be able (I assume) to mechanically apply the transformations in their head if there are literally hundreds of these macros, most of them weird, subtle, and/or unintuitive. Think of how gzip, for example, finds many surprising ways to cut out a few bytes by collapsing multiple completely unrelated things that only share a few characters.

In other words, I think we should abstract things that are intuitively understandable to the programmer. Let's call this property "to have meaning". What carries meaning varies from one programmer to the next, but I'm sure for most it's not "gzip compressions".

One important criterion for a useful measure of "meaning" is likelihood of change. If two pieces of code that could be folded by a compressor are likely to change and diverge into distinct pieces of code, that is a hint that they carry different meanings, i.e. they are not really the same to the programmer. How do we decide if they are likely to diverge? There is a simple test: "could we write any of these pieces in a way that makes it very distinct from the other piece, and the program would still make sense?". As an aspiring programmer trying to apply the rule of DRY (don't repeat yourself), at some point I noticed that this measure is the best way to decide whether two superficially identical pieces of code should be folded.

I noticed that this approach defines a good compressor that doesn't require large parts of the source to be recompressed as soon as one unimportant detail changes. Folding meaning in this sense, and folding only that, leads to maintainable software.

A little further on this line we can see that as we strip away meaning as a means of distinction, we can compress programs more. The same can be done with performance concerns (instead of "meaning"). If you start by ignoring runtime efficiency, you will end up writing a program that is shorter since its parts have fewer distinctive features, so they can be better compressed. And if the compressed form is what the programmer wrote down, the program will be almost impossible to optimize after the fact, because large parts essentially have to be uncompressed first.

One last thought that I have about this is that maybe you have a genetic, bottom-up approach to software, and I've taken a top-down standpoint.


> Imagine a sophisticated software that compresses program sources...

Isn't that the definition of a compiler?


https://en.m.wikipedia.org/wiki/Partial_evaluation#Futamura_... are an interesting point of view on the definition of a compiler


> that all other things being equal, the debuggability of a language, and the pleasure one feels in using it, will be inversely proportional to the length of programs written in it.

Busting a gigantic nut when I see a blank file


I have a feeling this post got way more upvotes than it really deserves, partly because, duh, it's pg we're talking about.

Many other languages have been introduced here (e.g. Hy) that actually solve new problems or have deeper existential reasons, not just to "shorten stuff".


IDK, this is exactly the kind of thing I enjoy reading while sitting down with a cup of coffee on Sunday morning.

Sure, I'm not going to start programming in Bel but the thought processes in the design document are definitely something I could learn a thing or two from.


"Many other languages have been introduced here (e.g. Hy) that actually solve new problems"

Adding s-expression syntax to Python solves an important problem?


Don't take my sentence out of context. My sentence continues to state "or ..." which you clearly don't want to understand.


Ya, here's why I think "make your programs shorter" is a good criterion.

If a language compresses conceivably-desirable programs to a smaller AST size, then for all N, a higher fraction of size-N ASTs (compared to other languages) represent conceivably-desirable programs.

So "make your programs shorter" also implies "make bad behaviors impossible or extra-verbose to write". For example, failing to free your memory is bad code, and it's impossible(ish) to write in managed-memory languages.


> So "make your programs shorter" also implies "make bad behaviors impossible or extra-verbose to write".

It reminds me "The Zen of Python" "There should be one — and preferably only one — obvious way to do it"

https://jeffknupp.com/blog/2018/10/11/write-better-python-fu...

https://towardsdatascience.com/how-to-be-pythonic-and-why-yo...


Make your programs as short as possible, but no shorter. Failing to free your memory makes for a shorter program. Using GC makes for a shorter program, but there is a whole class of problems that just can't be solved with GC.


I think deciding to not free your memory or use GC (instead of not) would count as having a different program.


"language A is better than language B if programs are shorter in A", so something like APL (not the weird characters, but rather the way each operator operates on entire arrays, and naturally chain together) or maybe Forth (or Factor, for a more modern Forth), since they are a bit more general (I've never heard of a website being built in an array-oriented language, for example) would be the holy grail?


Hello,

do you think you could find time to make a table of contents? I like .txt files, but a language spec is just a tad too long.

Or maybe, for the Lispers around, a short summary of what inspired you to make Bel after Arc? What were your ideas and problems (besides making programs more expressive and shorter)?

Thanks


I disagree that measuring program length by the size of the parse tree is a good measure of conciseness. To an author and reader, it is the number of words you read that matters, so I use word count as the benchmark value. People who write newspaper articles are given a word count to hit, you type a certain number of words per minute, and people read at a certain speed. So words are the units of measurement that are easily counted, and there will be no dispute.

A parse tree is not particularly comparable between languages; most modern languages make extensive use of powerful runtimes to handle complex things that are part of the OS. There is typically over a million lines of code behind a simple text entry field.

In my Beads language, for example, I go to great lengths to allow declarations to have a great deal of power, but they are not executable code, and thus have hardly any errors. Lisp is not a particularly declarative type of language and is thus more error prone. The more code you don't execute, the more reliable the software will be, so declarative programming is even better than Lisp.

Lisp derivative languages are notorious for their poor readability; hence the avoidance of Lisp by companies who fear "read-only" code bases that cannot be transferred to a new person. Lisp, Forth, APL, all win contests for fewest characters, but lose when it comes to the transfer phase. But like a bad penny, Lisp keeps coming back over and over, and it always will, because 2nd level programming, where you modify the program (creating your own domain specific language basically for each app), is the standard operating methodology of Lisp programmers. That one cannot understand this domain specific language without executing the code makes Lisp very unsuitable for commercial use.

Yes, there are a few notable successful commercial products (AutoCAD) that used Lisp to great effect, but for the next 5-10 million programmers coming on board in the next few years, I laugh at anyone with the audacity to imagine that Lisp would be remotely suitable. Lisp is an archaic language in so many ways. It cannot run backwards. It has no concept of drawing; it is firmly rooted in the terminal/console era from which it sprang. With no database or graphics or event model, you have to use APIs that are not standardized to make any graphical interactive products, which is what the majority of programmers are making.


>> Lisp derivative languages are notorious for their poor readability

Is it though?

I find at least 2 other languages more unreadable.

- Scala, with operators everywhere, you need to have a cheat sheet

- Perl, you know that joke that this is the only language that looks the same before and after RSA is applied to it?

Lisp, on the other hand, is pretty readable, at least to me. I only used Clojure from the Lisp family, and it had a great impact on how I think and write code in other languages. The result is more readability. For a long time MIT taught CS courses using Lisp, and SICP is also a pretty amazing read.


Scala and Perl are even worse, I agree with you there. Some people love Haskell too, but I think it's awful.

MIT has replaced Lisp with Python, because even though they forced it upon their students for decades, they had to admit Lisp was archaic and not particularly readable. The Pharo IDE is arguably the most sophisticated IDE around today, but the fact remains that Lisp doesn't permit easy interchangeable parts, which is a major goal of the new programming languages being developed. Although very bright people can get very good at Lisp, the average person finds it extremely hard. Remember you are reading it from the inside out, which is highly unnatural to someone who reads books which read left-to-right.


93% of Paint Splatters are Valid Perl Programs

See: https://famicol.in/sigbovik/

--

So, ignoring AST/character measures of shortness: is Perl reaching optimality on the noise-to-meaning ratio?

Am I allowed to say this makes perl even more meaningful than other languages?

I'm not especially serious, but in the spirit of my own stupidity, let me suggest that this is how, finally, painters can truly be hackers


> i use word count as the benchmark value

Words are called 'symbols' in Lisp. Making extremely expressive domain-level constructs reduces the code size a lot in larger Lisp programs.

> Lisp is not a particularly declarative type of language

Just the opposite. Lisp is one of the major tools to write declarative code. The declarations are a part of the running system and can be queried and changed while the program is running.

> Lisp ... all win contests for fewest characters,

Most Lisps since the late 70s don't care about character count. Lisp usually uses very descriptive symbols as operator names. Lisp users invented machines with larger memory sizes in the late 70s, to get rid of limitations in code and data size.

Paul Graham is favoring low-character-count code, but this is not representative of general Lisp code, which favors low-symbol-count code through expressive constructs.

> understand this domain specific language without executing the code

The languages will be documented, and they will be operators in the language. Other programming languages also create large vocabularies, but in different ways. Lisp makes it easy to integrate syntactic abstractions in the software, without the need to write external languages, which many other systems need to do.

For example, one can see the Common Lisp Object System as a domain-specific extension to write object-oriented code in Lisp. Its constructs are well documented and widely used in Lisp.

> it is firmly rooted in the terminal/console era...

Use of a Lisp Machine in the 80s:

https://youtu.be/gV5obrYaogU

Interactive graphical systems have been written in Lisp since as early as the 70s. There is a long tradition of graphical systems written in Lisp.

For example PTC sells a 3d design system written with a C++ kernel and a few million lines of Lisp code:

https://www.ptc.com/-/media/Files/PDFs/CAD/Creo/creo-element...


Also, don’t brains pursue an analogous, maximally compressed encoding of information via the use of abstractions/ideals and heuristics of probably many, many kinds? (Of course, lossy compression would evolve over a “lossless” compression, which would grant minimal gains over lossy while being much more sophisticated/energy-expensive.)

I wonder if psychology in this topic could inform programming language design as to a “natural” step-size of abstraction, or an optimal parse-tree measure which would correspond to an equivalent mental model.


Well, for high-level languages, like you say. I guess you could say it holds for two languages of the same “abstraction level class”, as terseness would be valued between two low-level languages as well - from the human's POV, though! Since this is all related to easier conceptual expression.


I think you're unable to answer those questions because the language doesn't seem to aim to answer them. Bel seems to be an experiment, and maybe once the "specification" phase has been done, some of those questions could be answered.

Here are the relevant parts from the language document:

> Bel is an attempt to answer the question: what happens if, instead of switching from the formal to the implementation phase as soon as possible, you try to delay that switch for as long as possible? If you keep using the axiomatic approach till you have something close to a complete programming language, what axioms do you need, and what does the resulting language look like?

> I want to be clear about what Bel is and isn't. Although it has a lot more features than McCarthy's 1960 Lisp, it's still only the product of the formal phase. This is not a language you can use to program computers, just as the Lisp in the 1960 paper wasn't. Mainly because, like McCarthy's Lisp, it is not at all concerned with efficiency. When I define append in Bel, I'm saying what append means, not trying to provide an efficient implementation of it.


These questions aren't useful for evaluating pg's work (or frankly that of most PL implementors) because it concerns things, like syntax, libraries, user culture, etc., that are outside of the rather narrow domain Atanassow cares about.

He believes languages are utterly defined by their type systems, saying in the LtU comment you linked: "Perl, Python, Ruby, PHP, Tcl and Lisp are all the same language". I'd assume that he would say the same about js, lua, etc.. AFAICT, he's quite knowledgeable about PLT, formal methods, static typing, category theory, etc., but he disregards everything else.

It's worth considering whether he is right to do so. A waspish answer is to observe that companies built on languages he deems to be "in the corner" (c, java, dynamic languages) constitute the entirety of companies with significant market capitalization, and ask (apologies to Aaron Sorkin) "If your type system is so smart, why do you lose so always". A better answer is to note that a bunch of stuff that he deems trivial (generally anything outside of the type system, but specifically libraries, syntax, bindings to existing systems, etc.) matters, so javascript is different than python despite being identical in his eyes. Specifically, js runs in the browser and has a pretty good runtime for building web services, while python has exceptional ecosystem support for data science and machine learning.


If it's a language for general programming, I immediately ask if it has sum/product types, pattern matching, and static typing.

I'm not really interested in learning yet another dynlang that is missing all those.


What are sum/product types?



In terms of C, a sum is a (tagged) union and a product is a struct.


Can this language render block quotes correctly on mobile in HN? Then I think it's a real win.


> How can I show that my solution cannot be expressed in some other language?

This is not easy. The solution is certainly computable in other languages, and expressivity is subjective.

We can quantify it like this: a solution is highly expressive if it is given in terms of elements mostly from the problem domain.

For instance, if we have to "malloc" some memory and bind it to a "pointer", but the problem domain is finance, those things don't have anything to do with the problem domain and are therefore inexpressive.

Even if we quantify expressivity, the proposition that "this has better expressivity for a given problem domain than all other tools, known and unknown" is mired in intractability.


Maybe Bel is Blub?


That's fair, but I love PG so I'm biased lol


> 5. (where x)

> Evaluates x. If its value comes from a pair, returns a list of that pair and either a or d depending on whether the value is stored in the car or cdr. Signals an error if the value of x doesn't come from a pair.

> For example, if x is (a b c),
>
>     > (where (cdr x))
>     ((a b c) d)

That is one zany form.

1. How is this implemented?

2. What is the use of this?

3. What does (where x) do if x is both the car of one pair and the cdr of another, e.g. let a be 'foo, define x to be (join a 'bar), let y be (join 'baz a), and run (where a).


It's used to implement a generalization of assignment. If you have a special form that can tell you where something is stored, you can make macros to set it. E.g.

  > (set x '(a b c))
  (a b c)
  > (set (2 x) 'z)
  z
  > x
  (a z c)
which you can do in Common Lisp, and

  > (set ((if (coin) 1 3) x) 'y)
  y
  > x
  (y z c)
which you can't.


I am deliriously happy to see you here commenting about lisp. I grew up with your writing and your work and... I don’t know, I just wanted to express excitement.

Thanks for the new dialect!


Thanks for sharing, Paul.

Still knee-deep in the source. Gonna steal some of this.

What made you settle on (coin)? Is that a LISP trope? I flopped back and forth between naming it (coin) and (flip) in my own LISP before finally settling on (flip). I'd honestly like to divorce the name entirely from its physical counterpart.


I originally did call it flip, but then I took that name for the current flip:

  (def flip (f)
    (fn args (apply f (rev args))))
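so that, for example (sketching a REPL exchange):

  > ((flip cons) 'a 'b)
  (b . a)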


> I'd honestly like to divorce the name entirely from its physical counterpart.

How about (bit) or (randbit)?


(randbit) isn't bad. My implementation also takes a float 0 <= x <= 1 to determine the probability of the outcome, so (bit) would probably be too ambiguous. I do like the brevity of a 4-letter function, though. A lot of my lisp coding is genetic and probabilistic so it gets used a lot.


dice?


Would the result of (where (cadr x)) still be ((a b c) d)? Is where basically tracking the most recent traversal?


(where (cadr x)) would be ((b c) a).

where tells you what to set, and setting the cadr of (a b c) means setting the car of (b c).


Ah, now it's crystal clear.


Reproduced from feedback that I gave pg on an earlier draft (omitting things he seems to have addressed):

When you say,

> But I also believe it will be possible to write efficient implementations based on Bel, by adding restrictions.

I'm having trouble picturing what such restrictions would look like. The difficulty here is that, although you speak of axioms, this is not really an axiomatic specification; it's an operational one, and you've provided primitives that permit a great deal of introspection into that operation. For example, you've defined closures as lists with a particular form, and from your definition of the basic operations on lists it follows that the programmer can introspect into them as such, even at runtime. You can't provide any implementation of closures more efficient than the one you've given without violating your spec, because doing so would change the result of calling car and cdr on closure objects. To change this would not be a mere matter of "adding restrictions"; it would be taking a sledgehammer to a substantial piece of your edifice and replacing it with something new. If closures were their own kind of object and had their own functions for introspection, then a restriction could be that those functions are unavailable at runtime and can only be used from macros. But there's no sane way to restrict cdr.

A true axiomatic specification would deliberately leave such internals undefined. Closures aren't necessarily lists, they're just values that can be applied to other values and behave the same as any other closure that's equivalent up to alpha, beta, and eta conversion. Natural numbers aren't necessarily lists, they're just values that obey the Peano axioms. The axioms are silent on what happens if you try to take the cdr of one, so that's left to the implementation to pick something that can be implemented efficiently.

Another benefit of specifying things in this style is that you get much greater concision than any executable specification can possibly give you, without any loss of rigor. Suppose you want to include matrix operations in your standard library. Instead of having to put an implementation of matrix inversion into your spec, you could just write that for all x,

    (or
     (not (is-square-matrix x))
     (singular x)
     (= (* x (inv x))
        (id-matrix (dim x))))
Which, presuming you've already specified the constituent functions, is every bit as rigorous as giving an implementation. And although you can't automate turning this into something executable (you can straightforwardly specify a halting oracle this way), you can automate turning this into an executable fuzz test that generates a bunch of random matrices and ensures that the specification holds.
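
For instance, the property above could be wrapped up as a predicate (a sketch; is-square-matrix, singular, inv, and the rest are the same hypothetical constituent functions as above) which a fuzz harness could simply apply to a pile of randomly generated matrices:

    (def inv-spec (x)
      (or (not (is-square-matrix x))
          (singular x)
          (= (* x (inv x))
             (id-matrix (dim x)))))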

If you do stick with an operational spec, it would help to actually give a formal small-step semantics, because without a running implementation to try, some of the prose concerning the primitives and special forms leaves your intent unclear. I'm specifically puzzling over the `where` form, because you haven't explained what you mean by what pair a value comes from or why that pair or its location within it should be unique. What should

   (where '#1(#1 . #1))
evaluate to? Without understanding this I don't really understand the macro system.


This is similar to the feedback Dave Moon gave to PG's previous language, Arc, more than a decade ago. http://www.archub.org/arcsug.txt

> Representing code as linked lists of conses and symbols does not lead to the fastest compilation speed. More generally, why should the language specification dictate the internal representation to be used by the compiler? That's just crazy! When S-expressions were invented in the 1950s the idea of separating interface from implementation was not yet understood. The representation used by macros (and by anyone else who wants to bypass the surface syntax) should be defined as just an interface, and the implementation underlying it should be up to the compiler. The interface includes constructing expressions, extracting parts of expressions, and testing expressions against patterns. The challenge is to keep the interface as simple as the interface of S-expressions; I think that is doable, for example you could have backquote that looks exactly as in Common Lisp, but returns an <expression> rather than a <cons>. Once the interface is separated from the implementation, the interface and implementation both become extensible, which solves the problem of adding annotations.

This paragraph contributed a lot to my understanding of what "separating interface from implementation" means. Basically your comment is spot on. Instead of an executable spec, there should be a spec that defines as much as users need, and leaves undefined as much as implementors need.


In Clojure, some data structures are implemented as "seqable", which means that they are not implemented as sequences, but they can be converted to sequences if needed. Functions for head and tail are also defined for these structures, for example by internally converting them to sequences first. That means that any function that works with sequences can work with these structures, too.

This seems to me like the proper way to have "separation of interface from implementation" and "everything is a list" at the same time. Yeah, everything is an (interface) list, but not necessarily an (implementation) list.


Python and Rust both have similar "duck"y features where you can impl iteration (Rust) or add the necessary __special__ methods (Python) to get iteration features.


An object type can implement interface types, as you say.

Object type, and identity, can also be decoupled from implementation type. Dynamically, statically, or with advice. V8 javascript numbers and arrays can have several different underlying implementations, which get swapped depending on how the thing is used (and with extensions, you can even introspect and dispatch on them).

Typing can also be fine grained. So things can narrowly describe what they provide, and algorithms what they require. "I require a readable sequence, with an element type that has a partial order, and with a cursor supporting depth-one backtracking".

Object identity can also be decoupled from type. Ruby's 'refinements' permits lexically scoped alternative dispatch, so an object can locally look like something else. That, and just a few other bits, might have made the Python 2 to 3 transition vastly less painful - 'from past import python2'.

We are regrettably far from having a single language which combines the many valuable things we've had experience with.


Re: operational semantics, you can in fact "cheat" and lots of languages do. It's just difficult. LuaJIT and other tracing JITs optimize away implementation details or use specialized data structures in the cases where no code depends on introspection behind the curtain, with trace poisoning and falling back to "proper" evaluation in cases where it's needed.


Right - a Lisp list is conceptually a linked-list. How the compiler chooses to implement the list is up to it - it can use an array as long as it can meet the specification and give useful performance.


> I'm having trouble picturing what such restrictions would look like. [...] there's no sane way to restrict cdr.

I guess you could question its sanity, but the obvious way would be to say that any object (cons cell) produced by make-closure or (successfully) passed to apply is/becomes a "closure object", and car/cdr will either fail at runtime or deoptimise it into its 'official' representation.


A couple of things on my checklist for mathematical purity are (a) first-class macros and (b) no hardcoded builtins. It looks like Bel does have first-class macros. As for (b)...

> Some atoms evaluate to themselves. All characters and streams do, along with the symbols nil, t, o, and apply. All other symbols are variable names

The definitions of "ev" and "literal" establish that nil, t, o, and apply are in fact hardcoded and unchangeable. Did you consider having them be variables too, which just happen to be self-bound (or bound to distinctive objects)? nil is a bit of a special case because it's also the end of a list, and "(let nil 3 5)" implicitly ends with " . nil"; o might be an issue too (said Tom arguably); but apply and t seem like they could be plain variables.

P.S. It looks like you did in fact implement a full numerical tower—complex numbers, made of two signed rational numbers, each made of a sign and a nonnegative rational number, each made of two nonnegative integers, each of which is a list of zero or more t's. Nicely done.


Welcome back PG!

HN: How do you get an intuitive understanding of computation itself? While Turing machines kind of make sense in the context of algorithms, can I really intuitively understand how lambda calculus is equivalent to Turing machines? Or how lambda calculus can express algorithms? What resources helped you understand these concepts?

I'm currently following http://index-of.co.uk/Theory-of-Computation/Charles_Petzold-... and a bunch of other resources in the hope I'll "get" them eventually.


I can share my experience, because I was asking myself the same question 6 years ago...

My approach was to try and build a Lisp -> Brainfuck compiler. My reasoning was: Brainfuck is pretty close to a Turing machine, so if I can see how code that I understand gets translated to movement on a tape, I'll understand the fundamentals of computation.

It became an obsession of mine for 2 years, and I managed to develop a stack based virtual machine, which executed the stack instructions on a Brainfuck interpreter. It was implemented in Python. You could do basic calculations with positive numbers, define variables, arrays, work with pointers...

On one hand, it was very satisfying to see familiar code get translated to a large string of pluses and minuses; on the other, even though I built that contraption, I still didn't feel like I "got" computation in the fundamental sense. But it was a very fun project, a deep dive in computing!

My conclusion was that even though you can understand each individual layer (eventually), for a sufficiently large program, it's impossible to intuitively understand everything about it, even if you built the machine that executes that program. Your mind gets stuck in the abstractions. :)

So... good luck! I'm very interested to hear more about your past and future experiences of exploring this topic.


(Binary) Lambda Calculus can interpret brainfuck in under 104 bytes:

http://tromp.github.io/cl/Binary_lambda_calculus.html#Brainf...


Are you aware that GNU Guile, which is self hosted (written mainly in scheme), can interpret brainfuck?

https://www.gnu.org/software/guile/manual/guile.html#Support...


I wasn't aware, no. However, interpreting Brainfuck code is the easy part, as I've learned. The hard part was creating a "runtime" that understands memory locations aka. variables.

See this question that I asked (and later answered myself) around that time: https://softwareengineering.stackexchange.com/questions/2847...

Most of the project was figuring out things similar to that. You get to appreciate how high-level machine code on our processors really is! When things like "set the stack pointer to 1234" are one instruction, instead of 10k.


Quite intrigued with your approach; will look into trying something similar for visualising an embedded system.


I wrote a book to help answer these questions: https://computationbook.com/. It won’t necessarily be right for you (e.g. the code examples are in Ruby) but the sample chapter may help you to decide that for yourself.


I had a small homework assignment on lambda-calculus and Turing machine equivalence when I was an undergrad (during my third year of uni, iirc). It's in French, but you may be able to follow by looking at the "code" parts. In it I use pure lambda-calculus to simulate a universal Turing machine. It's probably not done in the best nor the most canonical way, but it's what I got while discovering lambda-calculus at the same time, so it might be good as a starting point for where you're at now. You can find the paper here: https://pablo.rauzy.name/files/lambdacalcul.pdf. Hope it helps!


I really like how type checking is implemented for parameter lists. I think there's a more generalized extension of this.

Specifically, I think that there exists a lisp with a set of axioms that split program execution into "compile-time" execution (facts known about the program that are invariant to input) and a second "runtime" execution pass (facts that depend on dynamic input).

For example, multiplying a 2d array that's defined to be MxN by an array that's defined to be NxO should yield a type that's known to be MxO (even if the values of the array are not yet known). Or if the first parameter is known to be an upper-triangular matrix, then we can optimize the multiplication operation by culling the multiplication AST at "compile-time". This compile-time optimized AST could then be lowered to machine code and executed by inputting "runtime" known facts.

I think that this is what's needed to create the most optimally efficient "compiled" language. Type systems in e.g. Haskell and Rust help with optimization when spitting out machine code, but they're often incomplete (e.g., we know more at compile time than what's often captured in the type system).

I've put "compilation" in quotes, because compilation here just means program execution with run-time invariant values in order to build an AST that can then be executed with run-time dependent values. Is anyone aware of a language that takes this approach?


https://github.com/idris-lang/Idris-dev/blob/5965fb16210b184...

Idris is not a Lisp and I've never used it, but it has dependent types (types incorporating values) and encodes matrix dimensions into the type system (I think only matrix multiplications which can be proven to have matching dimensions can compile). I think the dimension parameters are erased, and generic at runtime (whereas C++ template int parameters are hard-coded at compile time). IDK if it uses dependent types for optimization.


I'm not sure, but this seems a bit like how Julia specializes functions based on the types of arguments? Or maybe it's the inverse - as Julia creates specialized functions for you (e.g. add can take numbers, but will be specialized for both Int32 and Int64 and execute via appropriate machine instructions).

In fact, I think Julia is a great example of taking some good parts of scheme and building a more conventional (in terms of syntax anyway) language on top.


Yes, Julia has some of it. But you're still required to specify the template parameters of a type (unless I'm mistaken). Whereas what I'm talking about is that any value of a data type could be compile time known. For example, some or all of the dimensions of an nd-array, as well some or all values of said nd-array.


Julia has explicit parameterization, but will also interprocedurally propagate known field values at compile time if known (which happens a lot more because our compile time is later), even if they weren't explicitly parameterized. Since this is so useful (e.g. as you say for dimensions of nd arrays - particularly in machine learning), there's been some talk of adding explicit mechanisms to control the implicit specialization also.


Ah, that's good to know. This sounds exactly like what I'm looking for. Thanks will read up on this in the docs!


"I think that there exists a lisp with a set of axioms that split program execution into "compile-time" execution"

Common lisp macros? Pre-hygienic macros in scheme? Did I get you wrong?

The question is what is missing to implement _static_ type checking as macros and to allow the compiler to leverage the generated information.


I'm not sure, but I think it's different. Specifically, I think you would do macro evaluation first, then fully evaluate the resulting program on run-time independent values, and only then evaluate the resulting program on run-time dependent values.

Edit: Also, run-time independent evaluation would need to handle branching differently. For example, in this expression: (if a b c). If `a` is not known at "compile time" then this expression remains in the AST, and run-time independent value propagation continues into `b` and `c`. If `a` is known at "compile time" then only `b` or `c` remain in the AST depending on whether `a` is true or false.
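
A very rough Lisp-style sketch of that rule for if (pe, known, and ct-val are hypothetical names for the partial evaluator, the "known at compile time" test, and the compile-time value):

  (def pe-if (a b c)
    (if (known a)
        (if (ct-val a) (pe b) (pe c))   ; condition known: keep only the taken branch
        (list 'if a (pe b) (pe c))))    ; unknown: keep the if, recurse into both branches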


A short, direct intro, a link to a guide for the language, and a link to code examples. These, for me, are the things I look for when reading about some new computer language released on here or anywhere, and this one just wins on that first impression. It's kinda like the nostalgia of quality Usenet posts from the early days, in the direct and to-the-point aspect, and I love that.


>Bel is an attempt to answer the question: what happens if, instead of switching from the formal to the implementation phase as soon as possible, you try to delay that switch for as long as possible? If you keep using the axiomatic approach till you have something close to a complete programming language, what axioms do you need, and what does the resulting language look like?

I really like this approach and have wondered in recent years what a programming language designed with this approach would look like. There are a few that come close, probably including Haskell and some of the more obscure functional languages and theorem prover languages. Will be really interesting to see a Lisp developed under this objective.

>But I also believe that it will be possible to write efficient implementations based on Bel, by adding restrictions. If you want a language with expressive power, clarity, and efficiency, it may work better to start with expressive power and clarity, and then add restrictions, than to approach from another direction.

I also think this notion of restrictions, or constraint-driven development (CDD), is an important concept. PG outlines two types of restrictions above. The first is simply choosing power and clarity over efficiency in the formative stages of the language and all the tradeoffs that go with that. The second is adding additional restrictions later once it's more clear how the language should be structured and should function, and then restricting some of that functionality in order to achieve efficiency.

Reminds me of the essay "Out of the Tar Pit" [1] and controlling complexity in software systems. I believe a constraints-based approach at the language level is one of the most effective ways of managing software complexity.

[1]:https://github.com/papers-we-love/papers-we-love/blob/master...


    Bel has four fundamental data types:
    symbols, pairs, characters, and streams.
No numbers?

Then a bit further down it says:

    (+ 1 2) returns 3
Now suddenly there are numbers.

What am I missing?


Numbers are represented using pairs. Specifically

  (lit num (sign n d) (sign n d))
where the first (sign n d) is the real component and the second the imaginary component. A sign is either + or -, and n and d are unary integers (i.e. lists of t) representing a numerator and denominator.
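
So, reading that description literally, the integer 2 would presumably look something like this (taking zero to be the empty list of t's, and the imaginary part of a real number to be +0/1):

  (lit num (+ (t t) (t)) (+ () (t)))   ; real part +2/1, imaginary part +0/1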


Is an implementation compliant if it ignores this definition for the sake of performance?

One nit with this definition is that it implies the car of a number would be a well-defined operation. For complex numbers, it would be natural for car to return the real component.

I admit, it was surprising you are defining numbers at all. It’s tricky to pin down a good definition that isn’t limiting (either formally or for implementations).

I once got most of Arc running in JavaScript, almost identical to your original 3.1 code, and FWIW it was very fast. Even car and cdr, which were implemented in terms of plain JavaScript object literals, didn’t slow down the algorithms much.

But I suspect that requiring that (car x) always be valid for all numbers might be much more tricky, in terms of performance.

I apologize if you have already explained that implementation isn’t a concern at all. I was just wondering if you had any thoughts for someone who is anxious to actually implement it.

EDIT: Is pi written as 31415926 over 10000000?


I don't expect implementations to be compliant. Starting with an initial phase where you care just about the concepts and not at all about efficient implementation almost guarantees you're going to have to discard some things from that phase when you make a version to run on the computers of your time. But I think it's still a good exercise to start out by asking "what would I do if I didn't count the cost?" before switching to counting the cost, instead of doing everything in one phase and having your thinking constrained by worries about efficiency.

So cleverness in implementation won't translate into compliance, but rather into inventing declarations that programmers can use that will make their programs dramatically faster. E.g. if programmers are willing to declare that they're not going to look inside or modify literals, you don't have to actually represent functions as lists. And maybe into making really good programming tools.

As I said elsewhere, half jokingly but also seriously, this language is going to give implementors lots of opportunities for discovering new optimization techniques.


Something like 5419351 / 1725033 requires fewer digits (or bits) and gives a much better approximation, off by e-14 instead of e-8.

http://mathworld.wolfram.com/PiContinuedFraction.html
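For anyone who wants to check, a Lisp with exact rationals makes the comparison a one-liner (Common Lisp shown here; pi there is a float constant, so the figures are only approximate):

  (abs (- (/ 5419351 1725033) pi))   ; roughly 2e-14
  (abs (- (/ 31415926 10000000) pi)) ; roughly 5e-8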


Doesn't this allow four distinct representations of zero?


An infinite number, in theory. Even (sign n d) does, since you can have anything in d.


It’s best not to get hung up on formality. Lisp (and especially arc) are powerful due to what they can do, not due to what they define. Racket might disagree with that though.

From a prototyping perspective it would be a waste of time and overly constraining to define what numbers are or what you can do with them. That’s best left to the implementation.

I can hear everyone groaning in unison with that, but trust me. Nowadays every implementation will choose some sensible default for numbers. If you transpile bel to JS, you get JS’s defaults. Ditto for Lua. But crucially, algorithms written for one generally work for the other.


But Bel obviously has a numeric type if arithmetic works...


Sure, but all languages do.

In any case, I was mistaken. Numbers are defined.


Numbers are defined further down, in terms of those primitives. Search for the phrase "Now we come to the code that implements numbers."


I've only read approximately the first twenty pages. I'd like to see a little more disentangling of characters and strings as inputs to automata from text as something people read. The narrow technical meaning is implied in the text, but the use of "string" as a synonym for "text" is common enough that it might be worth being a little more explicit or didactic or pedantic or whatever.

The second thought is that lists smell like a type of stream. Successive calls to `next` on `rest` don't necessarily discern between lists and streams. The difference seems to be a compile time assertion that a list is a finite stream. Or in other words, a sufficiently long list is indistinguishable from an infinite stream (or generator).

I'm not sure you can have a lisp without lists, but they seem more like objects of type stream that are particularly useful when representing computer programs than a fundamental type. Whether there are really two fundamental types of sequences depends on how platonic "really" really is.

All with the caveat, that I'm not smart enough to know if the halting problem makes a terminating type fundamental.


It's strange that the description of the language [0] starts using numbers without introducing them at first and only far later in the file says that they are implemented as literals. I haven't gotten to the source code yet (I'm on mobile and have to randomly tap the text of the linked article to find links…), but I don't understand the point of this, nor whether it's just a semantic choice or actually implemented that way (I don't see how).

[0] https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=157...


I like the (English) syntax of the article. It seems to have the same concise feeling as the language itself. I like to think that's why pg chooses falsity over falsehood.

It's a little strange to have (id 'a) return nil. Also having car and cdr as historical holdouts when most of the naming seems to aim at an ahistorical style.

Not very deep remarks, since I would need more time to digest.


Genuine questions that probably most people here want to ask but seem afraid to:

Why Bel? What are the problems that Bel wants to solve? Is this a hobby project or something more serious?


https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=157... explains the purpose of the project. It's an attempt to axiomatize a complete programming language the way that McCarthy's Lisp did for computability.

Sounds serious to me. Why can't it be both?


I can't answer the main question, but my guess from pg's other work is that there is no dichotomy between hobby and serious. Frivolous seeming starting points can be powerful in unexplored ways. The www, most everything in it, and the stuff it's built out of are all rich with examples.


This seems relevant here, in his own words:

> Don't be discouraged if what you produce initially is something other people dismiss as a toy. In fact, that's a good sign. That's probably why everyone else has been overlooking the idea. The first microcomputers were dismissed as toys. And the first planes, and the first cars. At this point, when someone comes to us with something that users like but that we could envision forum trolls dismissing as a toy, it makes us especially likely to invest.

---

Related: https://blog.ycombinator.com/why-toys/


answer is in the guide and is quite interesting.

so, lisp development came in two phases: the formal phase -- that's where the original 1960 paper described lisp from simple axioms and built on them -- and the implementation phase -- the formal version alone is of no use to us because it didn't have numbers, error handling, i/o, etc.

the argument here is that the formal phase might be the most important phase, but that the second phase usually takes longer and is more practical. so what if one delays the second phase for as long as possible? what could be discovered and be useful for the implementation phase? that's the raison d'être of bel, i think.

i very much like it :)


The kind of questions that don't make sense in a forum where 'Hacker' is in the title.


I believe that when somebody makes a "Show HN" post, this person wants feedback. You can't give it without understanding the project.


Critical questions have a tendency to be loaded, one way or another. Yours is (unintentionally, I assume) loaded with the assumption that projects are either hobby/toy projects or "serious" ones. The "ethos" OP alludes to is that this is a severely limiting fallacy and a criticism you should actively ignore.


Welcome back to Hacker News


It's been a long time!


"Its you... Its been a long time. How have you been?"

- GLaDOS (Portal 2)


I think at the end of the day, the question is whether a compiler for this type of language could efficiently handle a function like distinct-sorted:

   > (distinct-sorted '(foo bar foo baz))
   (bar baz foo)
This is a function that usually requires efficient hash tables and arrays to be performant: a hash table for detecting the duplicates, an array for efficient sorting. However, both the hash map and array could theoretically be "optimized away", since they are not exposed as part of the output.

A language like Bel that does not have native hash maps or arrays and instead uses association lists would have to rely entirely on the compiler to find and perform these optimizations to be considered a usable tool.
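For context, the naive list-only version of the dedup half looks something like this (a Bel-ish sketch, not from the spec; I'm assuming no, mem, and def behave as the guide describes, and I've called it dedup2 to avoid clashing with whatever the source defines). The membership scan per element is what makes it O(n^2), which is exactly the cost a hash table would remove:

  (def dedup2 (xs)
    (if (no xs)                   ; empty list: done
        nil
        (mem (car xs) (cdr xs))   ; this element appears again later,
        (dedup2 (cdr xs))         ;   so drop this copy and keep the later one
        (cons (car xs) (dedup2 (cdr xs)))))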


Interesting example of a unit test of sorts for language usability. Got any others?


I'm no language expert, but the three things I can think of that make bel impractical without major compiler trickery are (1) lack of primitive hash tables, (2) lack of primitive arrays, and (3) no support for tail call optimization (though that third thing is probably fixable with the right compiler tricks).

The other concern I have is the lack of a literal associative data structure syntax (like curly braces in Clojure). It seems that would negatively impact pg's goal of "code simplicity" quite a bit.


> (1) lack of primitive hash tables (2) lack of primitive arrays

I'll note that if there are primitive arrays and the compiler optimizes arithmetic, the rest of the hash table can be implemented in Bel.

Also, maybe a Sufficiently Smart Compiler could prove that a list's cdrs will never change, store it in cdr-coded form, and treat it like an array (with the ability to zoom right to, say, element 74087 without chasing a bunch of pointers).


Is there a version of Bel written in some other language? If not, how do you get started without a bootstrap?


As mentioned in the user guide, just like Lisp, Bel is still in its formal phase, and not usable as a programming language yet. The idea is to extend the formal phase for a longer time, before diving into the implementation.


He uses an unreleased interpreter he wrote for it in arc.


It amuses and pleases me to see that pg will continue to play around with lisp presumably forever, regardless of how wealthy he becomes.

I hope I'll never stop coding passion projects myself.

John Carmack talked about this on Joe Rogan's show recently, about how he still codes and how Elon Musk would like to do more engineering but hasn't much time.

I wonder if Bill Gates ever codes anything anymore. I emailed him to ask once but never got a reply.

Tim Sweeney is a billionaire and still knee deep in code.

It's Saturday night here, and I'm going to go write some code. Unproductive, unprofitable, beautiful game engine code.

Hope all you other hackers get up to something interesting this weekend.


My father, who is about to turn 80, is still knee deep in code every day -- having retired (many years ago) after 45 years in the industry. The guy remembers paper tape, and is still going strong.

And often, he's way into whatever the latest language is. He builds games, software for organizing pills and bills, and has designed and rebuilt his own editor several times.

Honestly, the guy's my hero.


Does your dad have a website or github with his work? Would be cool to check it out.


No, honestly. He contributes little bugfix patches here and there to various projects, but he doesn't want to maintain software for anyone else (fair, I think -- he did spend an entire career doing that).

He makes (pretty cool) side-scrollers for his grandchildren, tools for himself. He sometimes complains that he's not sure what to build, but he figures it out. He also has a Mac and a Linux box (running Gentoo, I think). That keeps him busy!


This is encouraging for me to hear. I hope to still be hacking away on my projects when I reach that age. (and beyond ideally).


That’s awesome. I’m curious, what are his languages of choice?


You probably can't name one he hasn't played around with (and I'll admit -- I'm an enabler with this). He certainly has his black belt in C++, JavaScript, and Ruby, but he's crazy about Lisp and has recently gone on a tear with Nim.

He's written his own editor ("I want it the way I want it!") maybe four times in different languages, just because. He's always teasing me for being a vi user ("You're stuck in the 70s!")


> has recently gone on a tear with Nim.

That's awesome! I'd love to hear his thoughts :)


He really loved it (and I think plans to write some more stuff with it).

I'm coming up on 50, so all of my friends' parents are roughly the same age. The big crackup for me is that they're all worried about their parents falling for a phishing attack or having to update operating systems that haven't seen an update in ten years.

When I call my dad, he wants to get deep into compiler theory. Or talk about Rust. Or moon about how the world would have been so much better if Brendan Eich had just implemented Scheme. Or he'll talk about how much he loves TeX.

You know, he complains that he's not as sharp as he used to be. He's always warning me to take care of my body, because "mens sana in corpore sano!" But really, I just don't worry about him at all.

He's also one of the happiest people I know, and absolutely one of the most well read (I'm an English professor; I'm surrounded by well-read people. It's kind of amazing how many books he's read in his life).


damn, that's amazing. I don't think my hands or eyes will keep up with me at that age. Hell, the carpal tunnel is already getting to my wrists. At least I'll still have metalworking!


Maybe you won't (alas)! He is also a master-level woodworker, but had to stop due to spinal stenosis.

Being a geek, he has his Apple watch set up to remind him to go walk around every hour, and that tends to keep it at bay. But he really can't work in the shop any more.

And honestly (just to brag on my dad a bit more) the way he said goodbye to all that struck me as really impressive. He loved woodworking, but he didn't weep and mourn or go into a depression when he couldn't do it anymore. He just shrugged and said, "Well, I guess it's on to my other hobbies!"

That's how I want to be.


> regardless of how wealthy he becomes

It may be the reverse that happens here: he is wealthy enough to have time to play with Lisp.


There are a LOT of things like that. Travel is a good example.

For people making a living wage, some think “I don’t have the money to travel.” Then when they have the money, they think “I don’t have the time to travel.” They don’t really travel until they retire, and then they have trouble going anywhere that isn’t friendly to their knees and faltering eyesight, so they buy an RV.

Others travel in college from hostel to hostel. When they get work, they negotiate longer vacations to travel, or they deliberately become giggers so they can travel.

I met a couple in Thailand who climbed, they worked five months a year around the clock as nurses, and climbed seven months a year literally everywhere. They lived on the cheap so they would have more climbing days.

I have similar stories about people who ride bicycles and dive. If you are passionate, don’t wait for some magical day when you can fly the Concorde to go climbing in Europe with your friends (true story about some Yosemite dirtbags who came into illicit cash).

If you’re passionate about a thing, go do that thing. Now.


> they negotiate longer vacations to travel

Does that work? I've tried to negotiate more vacation days with every job offer I've received, and it's never worked.


Coming up on my 4-year anniversary at a company, having just returned from 2 years in a foreign office, I told my Director I'd like a 3rd week of vacation and no more money. Just treat me like I'd been working here an extra year.

He got me a $5,000 raise, no extra vacation time. Said that he did his best and he was the type of person I'd absolutely believe.

Ever since then, in my mind, a vacation day has to be worth more than $1,000. Simple economics, right? Somehow giving me $5,000 maximizes shareholder value more than giving me 5 extra days off.

My next job was more generous with vacation. They used the accrual method with a maximum, and there was an extended period where it had become difficult to take time off and they'd stopped putting our PTO balance on our paystubs... so at some point I tracked down where to find the info and realized I'd lost 10 days of vacation accrual.

I about lost my mind and HR couldn't understand why I felt like they'd cheated me out of more than $10,000.

Felt trapped in the job forever 'cause eventually I'm earning I think 27 vacation days per year, and 10 to start is pretty typical, maybe 15 if you're lucky... And as much as we might like to think everything is negotiable, reality on the ground is that very little besides salary can be successfully negotiated.


I've negotiated with dozens of people who favor vacation over salary increase and I'm glad to give the extra time off to them. It's definitely negotiable.


Are you hiring?


A friend of mine told his boss “I’m going traveling for three months starting in January. Am I quitting or just taking a leave of absence?”. It worked.

To negotiate more vacation days at hiring, you need multiple offers. Then you can tell them “I will accept your offer if it includes six weeks of vacation. Otherwise I’m going to Facebook”. It probably means negotiating a little less hard on pay since you need to focus to get it. But that’s probably fair.


> Then you can tell them “I will accept your offer if it includes six weeks of vacation. Otherwise I’m going to Facebook”

I've used that exact line, and they didn't budge, so I went to Facebook. I guess everywhere I've applied is large tech companies. It might work better at smaller firms.


Negotiation is never a sure thing. Don’t say it unless you mean it, it can always go either way.


How much vacation time did you get at Facebook?


All US employees get 21 days. Not amazing, but it's better than the starting level for many companies.


Facebook budged, and Facebook is a big tech company.


Really? I thought they were 21 days for literally everybody. What did you get?


They won’t ever “budge” all the way up to your actual worth though, you just went from 10% of your worth to 12%.


Nothing is certain, but remember that there's a difference between negotiating "unpaid time off" and "vacation days."

A vacation day is a paid thing. Unpaid time off is not. It's all money and productivity in the end, but in many companies it's easier to negotiate unpaid time off, than to negotiate extra vacation days. Even if the money works out the same in the end.


I’m not sure these are the universal definitions you think they are. I refer to paid vacation as ‘time off’.


And then there’s PTO, which is sometimes “paid time off,” and sometimes “personal time off,” which is also paid:

https://en.wikipedia.org/wiki/Paid_time_off

Perhaps it’s best, as another comment suggests, to always include the adjectives “paid” or “unpaid.”


The modifier matters

Paid time off

Unpaid time off

Leave of absence

Quit and reapply when you get back.


First rule of negotiation is you have to be willing to walk away if you don't get what you ask for. Could be you weren't willing to say "no thanks" if they didn't budge on vacation time.


> First rule of negotiation is you have to be willing to walk away if you don't get what you ask for. Could be you weren't willing to say "no thanks" if they didn't budge on vacation time.

Second would be to maybe start asking about that when the offer letter comes in, at which point they are invested in you.


Fwiw, Google allows for unpaid time off of less than 30 days. Your salary stops during that time, but benefits and so on stay on, which is nice rather than going into "Ugh, need to figure out COBRA".

Google also has up to 5 weeks paid vacation in the US, starting at 3 weeks when you join. After 3 years, you move to 4 weeks, and after 5 years you max out at 5. Other countries basically have their government mandated time off rules, but 5 weeks plus unpaid plus holidays is pretty good.

Each week of unpaid time is thus about 2% of salary. That said, I'm definitely in the minority in using unpaid time off (though, even more strange for me is how many colleagues let their vacation accrual reach the maximum and even forfeit days...).


You do have to leave occasionally to have believable leverage. I’ve found that if you announce you’re leaving and will quit if you have to they’re much more amenable.


Link for the lazy: https://www.thecannachronicles.com/the-dirtbags-of-dope-lake...

This is discussed in the Valley Uprising documentary as well (the whole doc is amazing and definitely worth your time).


I haven't checked your link, but John Long told the story in one of his many autobiographical books. He was also credited with the story idea that became the Sylvester Stallone climbing action movie "Cliffhanger," which is loosely based on the incident.


Oh yeah, I'm definitely not providing the definitive or original account, just throwing out a decent overview for anyone interested in more details.


Your link provides a number of details I do not recall from John Long’s account. Thank you.

On the other hand, John Long is a raconteur par excellence, and I recommend all of his books as entertaining reading about an important time in climbing history.

——

One thing Long wrote/claimed that is not mentioned in the linked post is that two of the dirtbags involved leveraged their haul into an ongoing drug-dealing business. Until they were found, shot dead, in their home.


We want our employees to live first, work second. So we have set it up so people can work from anywhere, and they do. A lot of employees go and work from interesting travel places for a few weeks or months.


He was playing with Lisp before he was wealthy and also while becoming wealthy. HN's software, which runs on Arc, used to run YC too. It was all one codebase. Only after pg retired did YC move the business parts of the software out of Lisp, and it got a lot bigger and more complicated in the process—as any Lisper would expect.


At first it was not just all one codebase, but all one thread. If HN was busy, all our internal software would run a little slower.


have you thought about publishing the full source from then? The HN source was dense but very educational, and it would be really interesting to see internal tools written in arc.


You wouldn't learn much more from reading it, because (to an almost comical extent in retrospect) the internal code was just like HN. In fact, not just like; it was mostly the same code.


Interesting. Am I reading this right, that the larger YC organization's codebase is an outgrowth of the message board tech? You guys post updates on investments and ideas for what startups should be doing just like news links are shared on HN, but in a part of HN not visible to normal HN users?


That was once true, but it's all been rewritten.


Bigger and more complicated and handled many more users.


I was in the room when the decision was made, and it wasn't about more users.

That's the kind of thing people say because they assume it must be so. If you do that about unconventional things, which this software was/is, you'll simply reproduce the conventional view.


A rule devised during college summers at a state school. Students had money or time, depending on whether they had a summer job. Seems to hold true for most people's existence.

until you have enough money to make more money without adding much time...


Completely agree. I have no idea what I'd actually do if I were this wealthy (maybe just enjoy life and give in to time-consuming addictions), but I have numerous projects that are "on hold" from lack of time.


> I wonder if Bill Gates ever codes anything anymore, I emailed to him ask once but never got a reply.

I don't have any inside information, but looking at this old post of Joel Spolsky https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev... I guess that Bill Gates is still coding.


I love this. I'm definitely appropriating the phrase "MBA-type." Seems to have fallen out of fashion as MBA-types become increasingly common in tech.


My dad isn't a software engineer, but more of a combination mechanical/aerospace engineer (MA in mechanical, PhD in aerospace). His official job is tech lead right now, which he likes, but he misses "real" engineering.

Pretty much every evening and weekend, he's mucking around with something in FreeCAD and knee-deep in equations that I don't fully understand, because he genuinely loves this stuff, and doesn't want to ever become rusty.

He's 58 now, but I get the impression that he'll be doing this until his deathbed.


That's when you know you've found a passion. A lot of guys work on cars on weekends and it's not a job. My dad was into it; I always found it a pain in the ass when something went wrong with a vehicle. He didn't, it always got him going full bore with equipment, wrenches, lifts, etc. He was an accountant by trade, but cars brought him to life.


Stand-up comedian Jerry Seinfeld mentioned in an interview that he still goes to comedy clubs to perform every week. Obviously he isn't doing it for the money. It's the process of testing out material before big shows that makes him good at his acts.

My takeaway from such stories is that passion for the work, and keeping in touch with the path that led them to success, helps them stay relevant and successful.


BTW, comedians literally call this "working out"—it's understood that if you don't use it, even for a week, you start to lose it. So along with a need to do it, part of it is that it's more difficult to get back into comedic shape than it is to stay in shape.


It's very similar to programming, right? I guess we can generalize it to any specialized skills/expertise. Use it or you start losing it.


It's fortunate that taking only a week off programming doesn't dull your edge. That sounds nerve wracking!


I bet it does. It's just that most programmers are used to working at 25% or less of their actual capacity due to constant interruptions, so it's less apparent.


He talks about this in the Comedians in Cars Getting Coffee Obama episode (worth a watch!). He basically says he fell in love with the work and that’s what keeps him grounded.


Seinfeld also blames his audience when they don’t like his jokes. He’s a really, really bad example of that (common) practice.


Oh?


Yup.


I always argue that software is a new form of literacy - and if you think of it like that, it is not at all unusual that someone will keep reading and writing well into retirement.

Have fun with your unproductive beautiful coding :-)


Man it’s much easier to code if you don’t do it at work.


>I wonder if Bill Gates ever codes anything anymore, I emailed to him ask once but never got a reply.

He does (at least as of 2013): https://www.reddit.com/r/IAmA/comments/18bhme/im_bill_gates_...


I'm spending the weekend making a set of custom sleeved PSU cables - I'm in the middle of a crunch at work and writing code is the last thing I want to do ... so instead I'm doing something that's monotonous, but relaxing because of it. Don't need to engage higher brain functions, just cut, crimp, cut, melt, crimp, into the connector, next.

Whilst I do extoll the virtues of working on "passion projects" with programming, sometimes you need to do something else so that you can go back recharged, and that's OK too.


If you follow John on Twitter you'll occasionally see him mention C++ quirks he ran into.


I'll bet Bill Gates can still code some mean, lean C.


Amen


How does this relate to Arc?


The code I wrote to generate and test the Bel source is written in Arc, and Bel copies some things from Arc. Otherwise they're separate.


How is it an improvement over Arc? What issues does Arc have that Bel solves/addresses?


In the same way it's an improvement over other Lisp dialects. There's no huge hole in Arc that Bel fixes. Just a lot of things that are weaker or more awkward or more complicated than they should be.


> The code I wrote to generate and test the Bel source is written in Arc

Was this during bootstrapping, or is this still the case? Or in other words, do you now edit bel.bel directly or are there arc files you edit that compile into bel.bel?


It's still the case. bel.bel is generated by software. Most of the actual code is Arc that I turn into Bel in generating it. E.g. the interpreter and anything called by it, and the reader. But code that's only used in Bel programs, rather than to interpret or read Bel programs, can be and is written in Bel.

I had to change Arc a fair amount to make this work.

Curiously enough, though, I found doing development in Bel was sufficiently better that I'd often edit code in bel.bel, then paste a translated version into the file of Arc code, rather than doing development in the latter. This seemed a good sign.


I remember reading pg's article on Lisp and startups, and at the time questioned if it was just mere luck. Then having a cursory look at Lisp, I questioned its relevance to the modern world.

... fast forward a decade later and I'm reading books on functional and logic languages for work. After the first chapter of The Little Schemer, I was at first blown away by the content, but then sad that I had put off reading it until so late in my life.

If you're reading this comment and thinking "Lisp? What's the point?", take a deep dive. If you're still questioning why, I highly encourage you to read The Little Schemer (and then all the others in the series). Scheme, Lisp, and now Bel, are a superpower... pg's article was spot on.


You and I both like Rust. Concrete performance is one key reason why I choose to use Rust for various things, and all Lisps I know of are just generally slower and use more resources; I’m not aware of any Lisp attempting to address this problem, and from what I do understand of Lisps (though I’ve never seriously coded in one) they seem at least somewhat incompatible with such performance. I’m interested in whether you have any remarks on this apparent conflict of goals.

(I’d love to be credibly told I’m completely wrong, because syntax aside, which I could get used to eventually, I rather like the model of Lisps and some of the features that supports.)


You can build a Lisp to stay close to the metal with zero-cost abstractions. Most people drawn to Lisp want productivity rather than maximum performance. Besides, the commercial Lisps and the fastest FOSS ones are really fast.

There was a systems-oriented Lisp called PreScheme that was closer to what you're envisioning. Carp also aims at no-GC, real-time use. Finally, ZL was C/C++ implemented in Scheme, with the advantages of both, compiled to C. Although it was done for ABI research, I've always encouraged something like it to be built for production use, with tools to automatically make C and C++ library bindings.

https://en.wikipedia.org/wiki/Scheme_48

https://github.com/carp-lang/Carp

http://zl-lang.org/


There's a post about the three tribes of programming. https://josephg.com/blog/3-tribes/

    Tribe 1: You are a poet and a mathematician. Programming is your poetry
    Tribe 2: You are a hacker. You make hardware dance to your tune
    Tribe 3: You are a maker. You build things for people to use
alfiedotwtf falls in the first tribe. You fall in the 2nd. Different tribes have different values, and hence all the back and forth.


What if I'm kind of all of them? Eternal damnation?

I've always strived to write beautiful code - for things people find very useful - in ways that push boundaries for what people think is possible with the machines we use.

I have succeeded in balancing any two of the above, to terrible detriment of the third. Never been able to juggle the 3 at the same time and this bothers me a lot. The upside is this pushes me to learn a lot, downside is I'm never content with my craft.

I'd have loved to solve domain specific important problems with a domain specific language I craft with a LISP and have state of the art performance while doing so.


Love this distinction. Makes a lot of sense, and is probably a key to happiness, to both realize which tribe you're in, and whether you're trying to argue with the tribe with another set of principles.


I like this :)

Programming is my poetry. Damn... that's a nice take


I always thought that SBCL was considered high performance and for example Chicken Scheme compiles to C. Both might be slower than Rust but I always had the impression both of these were more performant than say Java, but I haven't used either in a meaningful way to really know how it would rate for your performance needs.


Chicken's author also created Bones, a Scheme that outputs x86-64 assembly language.

http://www.call-with-current-continuation.org/bones/

README: http://www.call-with-current-continuation.org/bones/bones.ht...


SBCL is quite fast for a lot of things. I used to use it frequently for research projects that required performant code, and it generally wasn't too difficult to get within a factor of 2 of C code, though getting much closer could be hard. I haven't really kept up with it recently, but it regularly impressed me at the time.


There is no conflict of goals. Languages offer certain features; some offer only "zero-cost" abstractions (C++ and Rust) and others offer more. You get what you pay for in the trade-off between runtime speed and features. The other factor is the compiler toolchain - see below.

If Rust had the same features as most Lisps have, then using those features would make Rust programs as "slow" as Lisp programs. That is generally true for most programming language comparisons. What also matters for final executable speed is the toolchain, of course. Any language that compiles directly using GCC or LLVM and allows for compile-time typing will be roughly in the same ballpark. If a language uses its own compiler, such as Chez or Racket, then it usually doesn't match that performance, because there is not enough manpower in those teams to implement all the nifty optimizations GCC and LLVM have. SBCL is probably the fastest Lisp with its own compiler. (Or Allegro?)

To give an example, here are some Common Lisp features that Rust lacks: a full object system with inheritance and multiple dynamic dispatch, dynamic typing, hot runtime code reloading/recompilation, and garbage collection.
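To make one of those concrete, here's roughly what multiple dynamic dispatch looks like in CLOS (a toy sketch; the classes and methods are invented for illustration). The method is selected from the runtime classes of both arguments, which single-receiver dispatch (as in Rust traits) can't express directly:

  (defclass ship () ())
  (defclass asteroid () ())

  ;; The applicable method depends on the classes of *both* a and b.
  (defgeneric collide (a b))

  (defmethod collide ((a asteroid) (b ship))
    'ship-destroyed)

  (defmethod collide ((a asteroid) (b asteroid))
    'asteroids-merge)

  ;; (collide (make-instance 'asteroid) (make-instance 'ship)) => SHIP-DESTROYED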


You seem to be describing the effects of conflicting goals.


You mean trade-offs? The point is that every language and every implementation has them.


I don't think "Lisps are just generally slower and use more resources" is true.

According to "TechEmpower Web Framework Benchmarks" (which might not be perfect but at least give some indication), Clojure is one of the fastest language (in terms of handling responses per second) for building a JSON API with. Take a look at https://metosin.github.io/reitit/performance.html


I don’t think those results are actually particularly meaningful or that they scale up very well, but I’m going to skip that and address other relevant matters.

I mentioned resource usage, and that includes memory usage. The JVM can be surprisingly fast for some types of code, but it’s never a memory lightweight.

I suppose I might as well mention another feature I value: ease of deployment. In Rust, I can produce a single statically-linked binary which I can drop on a server, or on users’ machines, or whatever as appropriate, and run it. For anything on the JVM… yeah, without extremely good cause I will consider dependency on the JVM to be a blocker, especially for a consumer product.


True, but also Clojure is not the only Lisp, just one I remembered was mentioned in an actual benchmark as being fast at some things.

Nowadays you can either choose a Lisp that does compile to a single binary and doesn't require the JVM (like SBCL or something like that), or you could use Clojure but use GraalVM to get the single binary.

Anyways, I agree with your general point, Rust being in general faster. But I don't agree with the whole "Lisps are just generally slower and use more resources" thing, while it's certainly true in some conditions.


TechEmpower is (historically) a JVM shop. No surprise they do not bench the aspects JVM stinks at (startup/warmup, mem usage, total installed size).


It’s rather hard to meaningfully measure any of these figures.

Startup you can measure, but the optimal tuning for faster general operation may start up slower, so now you might want both figures. And then you’re getting perilously close to development considerations, so once you’re doing that surely you should benchmark any required compilation, and so on.

Warmup is even harder to handle well, because it introduces another dimension; even if you were presenting just a single-dimensional figure (which you’re not quite), you’re now wanting to consider how that varies over time while warming up, so now it’s at least two-dimensional. But unless warmup takes ages, you’ll find it hard to get statistically significant figures, because you have so many fewer samples in each time slice.

Total installed size? Problematic because the number isn’t meaningfully comparable, as this is a minimal test, and a number in that scope gives no indication of how much is due to the environment (e.g. JRE, CPython, &c.), and how much further growth there will be as you add more features. People will also start arguing about what should count; if for example you count the delta from a given base OS installation, then you’re providing an advantage to something that uses the system version of, say, Python, and penalising something that requires a different and extra version of Python.

Memory usage? Suppose you pick two figures: peak memory usage, and idle memory usage after the tests. Both are easy to measure, but neither is particularly useful. Idle memory usage, similar problem to disk usage, and so the numbers aren’t usefully comparable. Peak memory usage is perhaps surprisingly the more useless of the two, because the increase in memory usage is strongly correlated with how many requests are being served at once—and so a slower contestant might artificially use less memory than a faster one; and so if you wanted those numbers to be comparable, you’d need to throttle requests to the lowest common denominator, which could then be argued as penalising light memory usage patterns.


You can have a single binary with the JVM and .NET Core, so it's a non-issue.


The fastest for these sorts of things are usually C++, Rust, and C projects[1]. The fastest Java projects are Rapidoid, wizzardo-http, and proteus. The link you provided does not have a comprehensive comparison of different HTTP JSON frameworks.

1. https://www.techempower.com/benchmarks/#section=data-r18&hw=...


and here the author ports a Clojure program to Common Lisp with a 10x speed gain: http://johnj.com/from-elegance-to-speed.html. Sorry, 300x.


The issue is memory management. Henry Baker helped popularize linear types in the early 1990s (http://home.pipeline.com/~hbaker1/) in his proposals for Lisp, from which work Rust's affine types derive, but no one wants to write in a dialect with those restrictions. Garbage collection seems like an algorithmically insurmountable problem. The other overheads are constant time, and Common Lisp already includes mechanisms to optimize them away (declarations and efficient arrays).


I'm currently using a CPU-slow machine, and I've discovered everything is slow among the popular scripting languages, except Lua. I'd thought JS had good startup time, but some of my Lua scripts that do file I/O finish faster than node.js does a ‘hello world.’ Lua is also very lightweight on memory, to my knowledge. I think it can even do true multithreading, via libraries.

So, there's a Lisp on top of Lua (compiled, not interpreted): https://fennel-lang.org

It's deliberately feature-poor, just like Lua: you use libraries for everything aside from simple functions and loops. And it suffers somewhat from the double unpopularity of Lua and Lisp. But it works fine.


I am using Fennel for this: https://itch.io/jam/autumn-lisp-game-jam-2019

I will not finish ... I suck at lisp. But it works.


Have you looked at Julia?


Julia is compiled to machine code, not interpreted


Many lisps compile to machine code too. Compilation vs. interpretation is a feature of an implementation, not a language.


Scripting languages are by definition interpreted. The question was about fast scripting languages. I'm skeptical that Lisp compiled to Lua would count as being in that category, but Julia unequivocally doesn't.


"Scripting languages are by definition interpreted" [[Citation Needed]].

There are Common Lisp implementations of both Python and Javascript (ES3) that compile to Common Lisp and then the Common Lisp compiles to machine code: as I said before, whether or not a language is interpreted or compiled is an implementation detail, not part of its spec.


https://www.webopedia.com/TERM/S/scripting_language.html

I chose this based off a random search on google of "scripting language." You can do the same and would get the same result.

It's not a rigorous term, because (you are right) it's a property of the implementation, not the language itself, despite the name.

But I don't think Julia has relevance to discussion of the speeds of interpreted Lisp.


That term was already seen as problematic in the Lisp community a long time ago, back when Ousterhout used it for marketing Tcl.

Basically a scripting language interpreter runs source code files, called scripts.

Lisp does that, too. But many Lisp implementations compile the code and often to machine code, since the runtime includes an incremental compiler. The incremental compiler can compile individual expressions to memory.

Thus when you use a Lisp system, running a script just looks like running an interpreter, even though the code gets compiled incrementally.

Lisp uses the function LOAD, which can load a textual source file and which can then read, compile, execute expressions from the file. Typically this is also available via command line arguments.

Like in SBCL one can do:

  sbcl --script foo.lisp
and it will read/compile/execute expression by expression from a textual source code file, actually incrementally compiling each expression to machine code.


> Scripting languages are by definition interpreted.

I don't think this is correct. Scripting languages aren't rigorously defined, of course, but it would be reasonable to say they are defined by practice (i.e. a scripting language is practical for scripting).

Most compiled languages knock themselves out of contention under this definition because the startup and/or compile times are too long to be practical. But I've used LuaJIT in scripting contexts before, so that at least muddies the waters. Also Common Lisp (in an implementation that was always compiled).


It's a term for a simpler era where these divides made sense.

I've always understood it to mean interpreted and from a quick google search there are at least a sizeable number of people who agree.

I think now that there are a lot of high level compiled languages, it makes less sense.


That has nothing to do with how "lispy" it might be. Part of the compiler is implemented in femtolisp, no?


I suspect this does have to do with whether or not it is a fast scripting language, considering a language compiled into machine code is by definition not a scripting language.

Sure, flag me, but my comment was relevant to the discussion.


Julia is currently (only) JIT. AOT is still work in progress AFAIK.

It's about as "compiled" as Luajit is.


Hey Chris :)

I totally get you with Lisps and performance, but I was more talking about the beauty and elegance of code written in them. Algorithms written in Lisps seem to be pieces of art.

And that's just Lisps... having a look at logic programming languages like Prolog and Mercury, it's amazing to see just how compact yet understandable an algorithm can be. They're beautiful!

But as for performance, take a look at Mercury - it's a declarative logic programming language with built-in constraint solving, which compiles down to C, and its performance is great. We've rewritten some Mercury in Rust, and yes, our Rust is faster, but not by the orders of magnitude you'd think.


How's the editor support with Mercury?


Not sure about other editors, but there's syntax highlighting for Vim in the repo, and mtags (ctags for Mercury).


I use SBCL to build command line applications. Reasonable enough memory use and very fast startup times. For a while I used Gambit Scheme for this, but decided I like Common Lisp better. We no longer need hardware Lisp machines for good performance. That said, for some applications Rust is clearly better.


Do you have some examples you could share?


There is one in the new edition of my Common Lisp book https://leanpub.com/lovinglisp

You can read it online for free, and clone the github examples repo.


Why do you like Common Lisp better?


I have so much experience with Common Lisp, starting around 1984 on my 1108 Lisp Machine. I have used various Scheme implementations a lot, but I just have more history and experience with Common Lisp. Both are great choices, and there are many high quality Schemes, plus Racket. Experiment, and then choose your own favorite Lisp.


Not exactly lisps, but Carp (https://github.com/carp-lang/Carp) and Scopes (https://bitbucket.org/duangle/scopes/wiki/Home) might be worth a look. I haven't tried the former, and couldn't get the latter to do anything useful, though. (it looks like the author stopped responding to issues, sadly).


"they seem at least somewhat incompatible with such performance"

Lisp is for the other 99.5% of programming that doesn't require you to wring every last drop of performance out of your CPU.


After JavaScript’s success, I’m convinced any language can be made fast given enough time and money.

Some languages are easier to make fast, but I think JS almost proves there’s no such thing as a language incompatible with high performance by design.

You’d be hard pressed to conceive a more dynamic language and here we are, with crazy fast performance through many JIT tiers, interpreters, compilers, each of which would take years for a bright engineer to understand.


JavaScript is an extremely small, well-defined language with relatively simple semantics and very limited metaprogramming. It’s not a challenge compared to something like Ruby.


When was the last time you looked at it? JavaScript is not small anymore. The ECMAScript specification is over 800 pages, only a few hundred pages smaller than Java's.


> When was the last time you looked at it?

I’ve read the latest version of the specs of all these languages in detail.

I’ve worked professionally on a team implementing Ruby, the latest version of JavaScript, and Java at the same time, using the same language implementation system, so I can directly compare.

JavaScript is simplest by far. There’s quite a lot of sugar now, but the core semantics are small and simple. The core semantics of a language like Ruby are an order of magnitude more complicated.


You may be wrong in thinking all programs require maximum performance?

I just find your question a little confusing. My house is only two floors, yet meets all my needs. A bigger house would just waste my time in cleaning, cost me more in maintaining, in heating, etc.

It's the same for programs. Target performance at all costs, and for a lot of programs you get a hard-to-maintain, non-scalable, bug-ridden, featureless program that took way too long to build.

That's why for example I use Clojure and Rust. Rust gets the least use, because the programs I tend to write can manage fine with Clojure's performance. In fact, I mostly toy with Rust, because I really never needed to write anything higher performance.

So I'm just not sure what you mean. The world is full of useful programs that meets the needs of their users which aren't high performance. For all these programs, Lisp is a superpower.


Some pointers:

- how to make lisp go faster than C: https://www.reddit.com/r/lisp/comments/1udu69/how_to_make_li...

- https://github.com/marcoheisig/Petalisp "Petalisp is an attempt to generate high performance code for parallel computers by JIT-compiling array definitions. It is not a full blown programming language, but rather a carefully crafted extension of Common Lisp that allows for extreme optimization and parallelization."


I don’t know if you are wrong, but I would give fennel lang a look. It’s a lisp that compiles to Lua, so you can access LuaJIT performance while lisping away.


I don't think OP considers LuaJIT high performance, considering lots of Lisps are just as performant as LuaJIT.


What if there was hardware accelerated Lisp?: https://en.wikipedia.org/wiki/Lisp_machine


a forth array cpu based lisp machine would be a fun toy


>> they seem ... incompatible with such performance

Might answer your question [0]

[0] https://news.ycombinator.com/item?id=2192629


I don’t think so. My simple reading of things there is that that was for one particular micro-benchmark (and the Benchmarks Game is acknowledged to be unrealistic and not actually suitable for comparing language performance), that the results were not actually conclusive, and that it had required telling SBCL to do some stupidly dangerous things that you should probably never turn on on real software, because they may well make it behave catastrophically badly rather than just crashing when you have a bug.

On https://benchmarksgame-team.pages.debian.net/benchmarksgame/... at present, the fastest SBCL implementation, which seems to include these optimisations, is at 2.0, with other SBCL implementations being slower, the first one (whatever that means) being 8.0; meanwhile, the Rust implementations range between 1.0 and 1.2. The SBCL implementations all use at least eight times as much memory as the Rust implementations, too.

I think this demonstrates my point pretty well, actually.


> … required telling SBCL to do some stupidly dangerous things …

No, that isn't required.

On the contrary SBCL is quite insistent that the arithmetic be made safe.

I'm not even a Lisp newbie, but with help from SO I've been able to tweak those spectral-norm programs to make the SBCL compiler happy without destroying the performance:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


The (micro)benchmarks game is a gimmick and should not be taken seriously. You forgot to mention that the SBCL #3 implementation also sits at 2.0 and is fully memory safe.

I fail to see the reason for presenting the facts in this manner, especially the "stupidly dangerous things" part, which is of course totally inaccurate. (safety 0) has its uses (e.g. like Rust's unsafe).

Your conclusions are not indicative of real world performance.


> … benchmarks game is a gimmick …

What exactly is it designed to attract attention to?

> … should not be taken seriously.

Because?

> … fully memory safe.

Here's the problem, the programs are explicitly required to use function calls but:

;; * redefine eval-A as a macro


> … acknowledged to be unrealistic and not actually suitable for comparing language performance …

By whom?

Maybe there are situations for which it is "realistic" — the point is that we don't know.

What would actually be suitable for comparing language performance?


I just read Fibonacci's Liber Abaci (from 1202). I think I'm not too far off the mark if I say Lisp in our day is what the Indian numeral system was in Fibonacci's day. A new writing system that has significant practical advantages.


10 years after college I finally understood the lisp, ml, prolog ladder

mainstream world is damaging to my soul


I know right :O


A major mistake is thinking that Lisp itself is magic.

The magic is actually in how Lisp changes the way you think.


No, no... that's exactly what I meant :)


> fast forward a decade later and I'm reading books on functional and logical languages for work

What work has you doing that?


I work at a wonderful company called YesLogic, and we try our best to do PDF generation. Mercury works wonders against all the constraints that CSS rendering can throw at us.


if you have any blog posts about that up i'd love to read them!


Can't say when, but we're planning on blogging about it in the near future. You'll most likely see it on "This Week in Rust" :)


If you want to, you can email a draft to hn@ycombinator.com and we might be able to give some tips on what the HN community tends to respond better to.

Same offer goes for anyone who's working on something they hope will interest HN. Just don't be disappointed if you don't hear back for a long time—and as a corollary, don't send it just a couple days before you plan to publish. We have terrible worst-case latency!


Awesome, thanks for that!


Guile + guix.

I work in devops, and that replaced a horrible mess of god knows how many containers with a straightforward, boring, and reproducible single large instance of Guix SD.


Very interesting. I played with Guile a while ago and came back impressed. I'd love to hear more about your work.


I'd be pretty interested in reading about this work. Do you have a blog post or any code you can share around this case study?


This one seems backwards to me:

    (2 '(a b c))
In addition to being data structures, I like to think of lists/arrays as functions which map integers to the contents. This nicely generalizes to hash tables / associative arrays, and then further to actual functions. If that's all reasonable, then

    ('(a b c) 2)
is the right order for application.

However, maybe pg is just thinking of 2 as a shorthand for cadr or similar.


Initially I would have preferred that. I did it that way in Arc. But since functions are lists in Bel, I couldn't do that, or you wouldn't be able to call a function on a number.

As often happened with things I was forced into, though, I not only got used to putting numbers first but started to prefer it. It means for example you can compose them with other callable things.
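For instance (if I have the semantics right; this particular example isn't from the guide), composing a number with cdr gives you an accessor for a later element:

  > ((compose 2 cdr) '(a b c d))
  c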


Am I reading that right, that:

  (2 '(a b c))
Is equivalent to:

  (second '(a b c))
And that would work for strings as well:

  (5 "Hello, world!")
  > "o"
Which in turn means 'car and '1 are equivalent? (Probably means 'car should be thrown out, because surely '1 is clearer?)

Edit: and with some notation to differentiate "element at N" and "tail behind N" you could get even more mileage out of integers? And then generalize to lists of lists to reference elements and sub-sections (sub-dimensions, like cubes) of matrices?

Not sure what would be nice, perhaps star or ellipsis?

  (1... '(a b c))
  >'(b c)

  ('(1..) '(0..2)
    '(
      (a b c)
      (d e f)))
  >'(
    (b c)
    (d f))
Or something?


It would not be clearer to use 1 instead of car when you were using a pair to represent a tree, rather than a list, and you were traversing the left and right branches using car and cdr.


Yes, I suppose that's something I've never grown quite used to with lisp - that it's lists (as a special form of "leaning" trees) and trees - not arrays and matrices.

I suppose even proper lists are incidental - it's really all about trees (and also parse trees).


It seems to be borrowing from clojure, in which both (field container) and (container field) do the same thing.


If Bel doesn't have irrational numbers, how do I use pi? Define a function that returns an approximation to a given precision?


I'm curious how you use irrational numbers in any language. I'm not aware of any mainstream languages that support them.


In Python there's SymPy. It simplifies expressions and then you can ask for the answer as a number to N decimal places.

    >>> from sympy import N, sqrt, pi
    >>> N(sqrt(2)*pi, 50) 
    4.4428829381583662470158809900606936986146216893757
https://docs.sympy.org/latest/modules/evalf.html

https://en.m.wikipedia.org/wiki/Computer_algebra


That's a crude approximation of what Mathematica can do. Not all (in fact, very few) irrational numbers are computable. Yet among the computable numbers, what our computers can represent in any language is but a small subset of them. (Although computable numbers are equinumerous with the naturals.)


> Yet among the computable numbers, what our computers can represent in any language is but a small subset of them.

This is only true because of physical limitations of the machine (say, it has only finite memory). In the same way, not all Turing machines can be implemented on actual computers. This is not a restriction of the languages we use. There is nothing stopping an actual programming language from representing all computable real numbers.

> (Although computable numbers are equinumerous with the naturals.)

This is only true in classical mathematics. In constructive mathematics we are free to assume that all real numbers are computable (and hence not equinumerous with the naturals, per Cantor’s argument).


There are many possible representations of real numbers which could in theory be used in programming languages.

For instance, you could represent them as functions which, given a natural number, produce a rational approximation – say f(n) should be a rational number closer than 1/2ⁿ to the number you represent (Cauchy sequences). Then addition of two numbers f and g would be[0] the function (f+g)(n) ≔ f(n) + g(n), where + denotes rational summation on the right-hand side.
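As a rough sketch of that first representation (Common Lisp, with helper names invented for illustration), a "real" is just a closure from precision to rational, and addition shifts the precision as in footnote [0]:

  ;; A real is a function: given n, it returns a rational within 1/2^n
  ;; of the number it represents.
  (defun real-const (q)                  ; embed a rational as a constant sequence
    (lambda (n) (declare (ignore n)) q))

  (defun real-add (f g)                  ; (f+g)(n) := f(n+1) + g(n+1)
    (lambda (n) (+ (funcall f (1+ n)) (funcall g (1+ n)))))

  ;; (funcall (real-add (real-const 1/3) (real-const 1/6)) 10) => 1/2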

Or you could have a function which given a rational number input produces a boolean which tells you if the real number is less than the input. (Dedekind cuts)

No mainstream language has real numbers as a primitive[2]. In fact, not many languages have even arbitrary integers as a primitive type. But many mainstream languages have functions as first class objects, and therefore there is nothing stopping you from using them to represent real numbers.

Not all of these representations are created equal. For instance, if you wanted to represent real numbers as a function which gives you the n-th digit of the decimal expansion, you could not implement addition. This is because you would have to look arbitrarily far into the digits of each number to decide even the first digit.[1]

Someone is bound to say "Oh, but that is just computable numbers and there are only countably many of those". This is true if you accept classical logic. In constructive mathematics (which is most relevant to computer science), however, one can only prove that there is a surjection from the natural numbers (but there is no constructive bijection). In fact it is consistent to assume all real numbers are computable.

[0]: Most likely, you want (f+g)(n) ≔ f(n+1) + g(n+1) to get a close enough approximation, but this is a technicality.

[1]: Imagine 0.00000⋯ + 0.99999⋯. At any point later the first number could become non-zero and the first digit would be 1. Or the second number could become smaller than 9 and the result would have to have first digit 0. No way to tell what the result would be without looking at infinitely many digits.

[2]: I know of at least one, non-mainstream, language which had built-in support for real numbers. It is called RealPCF – and I am not sure it was even implemented on a computer, or if it was just a theoretical construct.


Mathematica / Wolfram language. But I'm stretching it to say that it's a mainstream language, I guess.


Go does


You probably confused irrational numbers with complex numbers?


That's correct


No. It doesn't.


There is by definition no irrational number in any language implementation that runs on a finite machine unless it is either a stream (which is still finite when realized) or an approximation.



I like the syntax for function chaining: (dedup:sort < "abracadabra")

I assume that dedup and sort are separate functions.
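
For comparison, the uncomposed equivalent in Common Lisp would be roughly:

    (remove-duplicates (sort (copy-seq "abracadabra") #'char<))  ; => "abcdr"

(copy-seq because CL's sort is destructive.)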


I am unable to understand the difference between (a b c) and (a b . c)


(a b c) is (a b c . nil). That is, (a . (b . (c . nil))). On the other hand, (a b . c) is (a . (b . c)). (Edit: rewrote post)


() is equivalent to an empty list and nil according to this notation, in which case both would be proper lists. But according to pg, in bel, (a b . c) is not a proper list.

Edit: I think I understand now, hmm.


Think of a proper list like a degenerate tree with all the values on the left and structure on the right. A dotted list puts the last value on the right, where nil would usually be.

Dotted lists are pretty rare. They mostly show up in association lists, where key/value pairs are stored as

  ((name . dave) (type . user))
and adding a pair to the front of the list shadows any other pairs with the same key.
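
A quick Common Lisp sketch of both points (*rec* is just a made-up name):

    (cons 'a (cons 'b (cons 'c nil)))  ; => (A B C), a proper list
    (cons 'a (cons 'b 'c))             ; => (A B . C), a dotted list

    (defvar *rec* '((name . dave) (type . user)))
    (cdr (assoc 'name *rec*))                       ; => DAVE
    (cdr (assoc 'name (cons '(name . sue) *rec*)))  ; => SUE, the new pair shadows the old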


While I think this is the correct interpretation, I do not think it's actually implied by the language specification. In fact, I think the notation (a b . c) is simply undefined -- there's no way to formally deduce what it means from what is defined.


> 2. When the second half of a pair is a list, you can omit the dot before it and the parentheses around it. So (a . (b ...)) can be written as (a b ...).

That defines (a b . c). Edit: Note that (b . c) is a list.


It doesn't. That definition applies only to expressions like (a b c) which do not contain the dot operator at all. (a b . c) is not of the form specified. Moreover, every object of the sort defined via (a b c ...) is a proper list, and (a b . c) is, as described in the specification, not a proper list, so it can't be covered by that definition. It's true that (b . c) is a list, but the form (b . c) does not occur in the expression (a b . c) so this isn't relevant. (You'd need the expression under analysis to instead be (a (b . c)).)


The rule I quoted states that (a . (b . c)) can be written as (a b . c).


That would only be true if (b . c) were a proper list. But it’s not. Strictly speaking, it’s not even a list if you go by the definition of list provided before your quoted rule. It’s later called a “dotted list” to distinguish it from a list, which must be nil-terminated.

Your position boils down to claiming that the rule you quoted is intended to cover lists as well as dotted lists. As stated, it only covers the former, which leaves (a b . c) undefined.


In the usage seen later throughout the document, dotted lists are a kind of list. It doesn't say its examples of how to build a list are exhaustive.


So you agree it’s ambiguous and one cannot formally deduce the meaning of the expression without making assumptions. One must assume that objects that don’t fit the definition of list can be substituted in for lists within some contexts (but not all).

Beyond that, dotted lists are introduced via the undefined example (a b . c), which requires one to go even further and make a second assumption (namely, assume the intention was to refer to (a . (b . c)), then assume the quoted transformation rule applies to certain non-lists allowing that to be rewritten as (a b . c)).


It is not the case that dotted lists don't fit the definition of list. What you point to as a definition wasn't a definition. Just examples. A definition would describe it as the minimal set of objects meeting those criteria. (A non-minimal set could include circular lists and/or pairs whose cdr is not a list.)

Since the document does use list in ways that encompass dotted lists and goes out of its way to define proper lists, we can infer that list includes dotted lists. Also, it's a long-running convention that the term "proper X" implies there are other kinds of X's.


I'd disagree with you on that. If the document can't clearly define the terms it's using, it cannot function as a formal specification of a language. The writing as it stands now is far too ambiguous to serve such a role. You have no reasonable retort to me except to say that the definition wasn't even a definition, and after that there is indeed no more abstract definition you can even point to. You're left using undefined terms to defend your interpretation as the "correct" one. That's not sufficient to defend the document as a specification. In fact, that's just further criticism of it.

Like I said, I'm confident your interpretation is the intended one. But you cannot prove it as such, and that's where we disagree.


I never claimed it was a formal specification. The question was, is the meaning of (a b . c) defined, and the answer is, yes, because (b . c) is a list.


And I say the answer is no, because (b . c) is an arbitrary pair, not a list. It would appear that we’re at an impasse.


Lists are made of cons cells, and a cons cell is simply a two-element array. In a list, the first element of a cons cell contains data and the second element contains a pointer to the next cons cell in the list.

How then should we represent the last cons cell in a list? We have two choices for what goes in the second element of the last cons cell:

1. The last piece of data, or

2. Nothing; i.e. a special non-pointer pointer. In Lisp this is called nil.

If we choose option 1 it's harder for code that traverses the list to be sure it has reached the end. It's also harder to splice a new element (a new cons cell) onto the end of the list dynamically.

Option 1 is the "dotted" form of ending a list and option 2 is the "proper" form.

Option 2 is more common in Lisp. From a type theory point of view, Option 2 restricts the second element of a cons cell to contain either a pointer to a cons cell or nil. This makes reasoning about code, compiling code, and optimizing code easier.


"Also by convention, the cdr of the last cons cell in a list is nil. We call such a nil-terminated structure a proper list. In Emacs Lisp, the symbol nil is both a symbol and a list with no elements. For convenience, the symbol nil is considered to have nil as its cdr (and also as its car)."

https://www.gnu.org/software/emacs/manual/html_node/elisp/Co...


I read the guide from the beginning and (a b . c) stood out to me too.

It's not a form allowed by all the preceding rules, and I wasn't sure what it meant either.


It definitely needs to be defined. Using only the paper, I assumed from the previous definitions that (a b . c) should be parsed as (x y) where x = a and y = b . c which can be re-written using only pair notation as (x . (y . nil) ) or (a . ((b . c) . nil)) and this made it very confusing to try to figure out what is meant by "improper."

Even if a definition for the notation (a b . c) is added, an example of such an improper list fully broken down into pairs would certainly still be appreciated by us readers who don't already think in Lisp :)


Think of the period symbol as the root of a branch in a tree. A true list has all left branches empty. A "dotted list" has items on both branches at the very tip.


I can't say I'm familiar with Lisp (or its dialects).

How does one get started learning a Lisp variant (in terms of learning resources/guides), and why use Lisp over other languages?


Get a copy of The Little Schemer and follow it step by step in Racket, which is freely downloadable. It is better when learning to experience Lisp in this way than to read an explanation of why Lisp is good. It activates a different part of your brain. For many people, it feels like a dormant part of your brain has just awakened for the first time.


For me, the biggest advantage of Lisp dialects is that they make it easy to introduce very powerful and far-reaching abstractions - and idiomatic good code encourages these. For some tasks that's pretty awesome, for example anything that has to do with classic symbolic A.I. or expert systems. You abstract every data type your program uses and write small functions that operate on these types. It's similar to classical OOP, except that you're not forced to use OOP for everything but can instead use a more functional style.

Another big advantage is the syntax, because it almost doesn't exist. The language allows you to focus on the semantics whereas the syntax of expressions becomes negligible. S-expressions are also among the main reasons why abstractions work so well in Lisp.

Finally, and this only applies to Common Lisp: Common Lisp is very complete in terms of language capabilities. It has static and dynamic types, lexical and dynamic scoping, has OOP and allows for functional programming, and so on. It has fewer artificial limitations than any of the more recent languages. It doesn't attempt to "hold your hand" by e.g. disallowing the use of OOP with multiple inheritance because "it's bad for you", or any such nonsense. Most Lisp dialects also have garbage collection, which saves you from crashes and saves a lot of development time.


As I understand it, Scheme was intended to be a dialect suitable for teaching programming. The usual first texts are The Little Schemer, How to Design Programs (https://htdp.org/ but apparently a third edition is forthcoming: https://felleisen.org/matthias/HtDP3e/index.html) and the inestimable SICP: http://sarabander.github.io/sicp/ (HN discussion: https://news.ycombinator.com/item?id=13918465)

"A bad day writing code in Scheme is better than a good day writing code in C." —David Stigant

In another man's opinion, "Common Lisp is the best language to learn programming":

https://oneofus.la/have-emacs-will-hack/2011-10-30-common-li...

Another resource for learning Common Lisp is Stuart C. Shapiro's Common Lisp: An Interactive Approach: https://cse.buffalo.edu/~shapiro/Commonlisp/


Harvey and Wright's _Simply Scheme_ is another excellent book that does not get enough mention.


Scheme is the teaching version of Lisp that gets used in a lot of first year university CS classes, you can probably work through an online syllabus pretty quickly if you have experience in another language.


In my experience: just pick one that you'll be using, e.g. Emacs Lisp or something for scripting, and dive into practice. Most of a lisp is little different from other imperative-functional languages, except for a taste of weirdness from the '60s and more functional freedom.

You could pick something like ClojureScript if you're using JS elsewhere―in the Lumo incarnation to avoid JVM's compilation and startup time. Though ClojureScript does add a level of complication. Other transcompiling variants like Fennel or Hy are also feasible but are probably poor on documentation and tools for a beginner.


I found Hy quite an easy start, especially if you are familiar with Python. It's kind of Python in lisp format https://github.com/hylang/hy


Welcome back Paul :) Great to hear from you again. I almost thought you were too busy for HN since you became less active here, it's really nice to see you again!


It’s great to see a technical post from pg after a while!


There's a typecheck function in the standard library, but with no documentation in the Bel source anywhere it's hard to know what it does.

For people like me who want static typing and a powerful type system to catch errors, does lisp have anything to offer? My understanding is that it predates decent type systems and lisps will always be genealogically much closer to Python than, say, a Haskell or even a Kotlin.


What this

  (def typecheck ((var f) arg env s r m)
    (mev (cons (list (list f (list 'quote arg)) env)
               (fu (s r m)
                 (if (car r)
                     (pass var arg env s (cdr r) m)
                     (sigerr 'mistype s r m)))
               s)
         r
         m))
says is, first create a function call

  (list f (list 'quote arg))
in which the function describing the type (e.g. int) is called on the argument that came in for that parameter. Its value will end up on the return value stack, r. So in the next step you look at the first thing on the return value stack

  (car r)
If it's true, you keep going as if the parameter had been a naked one, with no type restriction

  (pass var arg env s (cdr r) m)
and if it's false, you signal an error

  (sigerr 'mistype s r m)


Yeah, you can do optional static typing in Common Lisp: https://medium.com/@MartinCracauer/static-type-checking-in-t...

The typing discipline is strong. I think it also has type inference but I'm not 100% sure
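
For instance, a small sketch of what optional declarations look like in Common Lisp (the function name is made up; SBCL, for one, uses such declarations for compile-time type warnings):

    (declaim (ftype (function (fixnum fixnum) fixnum) add-counts))
    (defun add-counts (x y)
      (declare (type fixnum x y))
      (the fixnum (+ x y)))

    ;; SBCL will typically warn at compile time about a call like (add-counts 1 "two")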


Open to sharing a bit about the name? No mention of it in the guide.


"Characters that aren't letters may have longer names. For example the bell character, after which Bel is named, is

\bel"


There it is! Thanks.


Ah - brings back memories of the old teletype I first used that had an actual mechanical bell that rang if it got that. Similar to https://trmm.net/Model_ASR33_Teletype


My guess would have been that it's short for BEtter Lisp ;-)


I’ll wait for lim.


Just guessing. His previous language is called Arc; it has three letters and starts with an \a.


I am not convinced - ‘\a’ had already been taken by C.


At the risk of some bikeshedding:

    The name "car" is McCarthy's. It's a reference to the architecture of 
    the first computer Lisp ran on. But though the name is a historical 
    accident, it works so well in practice that there's no reason to 
    change it.
While I understand the rationale here, would this not have been a good opportunity to encourage something a bit less vestigial than "car" and "cdr"? Say, "head" and "tail"? Or "left" and "right"? A lot of things come to mind when seeing a bunch of cars and cdrs and such everywhere in Lisp code, and "works so well in practice" ain't exactly one of them, IMO.


As names, car and cdr are great: short, and just the right visual distance apart. The only argument against them is that they're not mnemonic. But (a) more mnemonic names tend to be over-specific (not all cdrs are tails), and (b) after a week of using Lisp, car and cdr mean the two halves of a cons cell, and languages should be designed for people who've used them for more than a week.


> more mnemonic names tend to be over-specific

I feel like this applies just as much (if not more so) to calling something the "contents of the address part of the register" or "contents of the decrement part of the register", especially when in actuality the "car" and "cdr" of a cons cell are implemented in a way that has nothing to do with an IBM 704.

> short, and just the right visual distance apart

Even shorter (and similar visual distance apart) would be cl and cr (for "cons left" and "cons right", i.e. the car and cdr, respectively). Or pl and pr if we swap "cons"/"cell" for "pair". Like car and cdr, these can be combined into other operations, like (cllllr foo) -> (cl (cl (cl (cl (cr foo))))). Heck, we could go even shorter with just "l" and "r". They're even kinda pronounceable ("cull", "curr", "cullullullullurr"). Literally all the upsides of "car" and "cdr" without any historical baggage.

Point being, if Bel is supposed to be a reconceptualization of Lisp, it feels really weird to not reconceptualize how we talk about cons cells and the contents thereof.

> after a week of using Lisp, car and cdr mean the two halves of a cons cell,

This could be true for any chosen terminology here. Unless you meant Lisp in general and not this particular dialect, in which case there are counterexamples to that (namely: Clojure, last I checked).


For decades Lisp has provided FIRST and REST in addition to CAR and CDR. It's considered good style to use FIRST / REST in list operations and CAR / CDR in general cons cell operations.

CAR/CDR work well in practice, because there are also functions like (CDADR ...), which is (CDR (CAR (CDR ...))) abbreviated.
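
For example:

    (first '(a b c))              ; => A
    (rest  '(a b c))              ; => (B C)
    (cdadr '((a b) (c d e) (f)))  ; => (D E), i.e. (cdr (car (cdr ...)))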


Will someone please teach pg some APL? It seems to be the missing piece of what he is working towards.


Please say more. What would you bring from APL into Lisp?


If Lisp is meant to be a formal model of computation (a quote from the Bel article by pg), then other formal models of computation should also be examined; perhaps 'compare and contrast' of two different perspectives will lead to a new thought or even a sort of amalgamation.

Lisp and Bel's basic data structure is the list. APL's is the array, which can be seen as a list or in multidimensional form as a list of lists.

Lisp uses a form of notation that is quite different from the APL notation.

If there are advantages to both notations then it makes sense to study both of them.

Bel's description of functions might benefit from looking at APL's niladic, monadic and dyadic functions, as another example.


In another comment above, pg wrote: "language A is better than language B if programs are shorter in A"

APL's terseness due to weird characters aside, since he did say that's not what he meant, APL can do A LOT by chaining a small count of operators together, which, to me, does seem to fit with the above quote rather well.

I would personally also suggest a Forth-derivative like Factor as another language that would meet the criteria of being small in parse-tree terms (and incidentally also in tokens).

As for what it would bring into Lisp.. I dunno. Maybe nothing :)


I love the idea of trying to build a mathematically pure language. I wonder how far it is possible to use optimisation techniques to make Bel efficient. For example, can the representation of strings as pairs be automatically optimised to a sequence of characters?


One thing you can say for Bel for sure is that it will give implementors lots of opportunities to develop new optimization techniques. That's partly humorously euphemistic, but also true.


Yes, I believe lisps virtually always do that in practice, as does Haskell if you use certain string types (fp languages have their roots in lisp even though they have different syntax).

But lisps are dynamically typed, so to make them run fast you'd need to use a speculating JIT compiler like GraalVM. They're more like Python, performance-wise, without it.


Very nice for the depth and quality. This could pass for a very good grad school project. Pg, how much effort did it take to figure out the axiomatic approach, compared to previous formal methods? How long and how much effort did it take to produce this?


There are two things I found confusing in the first part of the guide (ie the part up to "Reading the Source"). The first is 'where', which is addressed in another thread on this page. The second is this:

  This is a proper list:
  
  (a b c)
  
  and this is not:
  
  (a b . c)
Could someone clarify how to interpret '(a b . c)'? How would it be represented in the non-abbreviated dot notation for pairs? It's not '(a . (b . (c . nil)))' -- is it '((a . b) . c)'? The only Lisp I'm fluent in is Clojure, so I'm not used to the dot notation; otherwise this might be obvious.


(a b . c) is (a b) with the final nil replaced with c.

In other words, it is (a . (b . c)).


Thanks, that clarifies it completely!


O GLORY DAY OUR PAUL HATH RETURNED! Also, is Lisp just syntactic sugar over lambda calculus? All the Lisp people (i.e., the authors of books I've skimmed on lisp) worship it, but it's simply an encoding of an n-ary tree, no? I know you can do elegant things like define lisp in lisp and the whole homoiconicity[?] thing - but don't these endless opportunities stem from n-ary trees? My point is that of course Lisp can do this and that if it's simply a representation of a tree - trees, as models, capture essentially all structures of information.


Why does apply evaluate to itself instead of (lit prim apply)?


Because it's not a primitive. It's handled by a special case in the interpreter,

  (= f apply)    (applyf (car args) (reduce join (cdr args)) a s r m)
rather than being something whose behavior you have to assume.


PG is still alive and coding! :)


Paul, this is really nice - I’m glad to see you doing some research about stuff you clearly love.

Other than because you want to, do you have any sort of longer form apology for bel worked out that you want to share?

I ask because I’m curious how you’re thinking about play, work and legacy at this point in your career.


I talk about this in the first section of The Bel Language: http://paulgraham.com/lib/paulgraham/bellanguage.txt


I did read that before I asked, I promise :)

I was curious more about how you think about your own time at this phase in your career -- are you mostly 'playing'? Is this "serious play"? Is it motivated by anything beyond personal interest?

I ask because I'm monitoring my own projects and time commitments more seriously as I hit my mid 40s and trying to make sense of how and when I've had the best impact, globally, personally, to my own happiness, etc.

One of the keys for me seems to have been to pay attention to what interests/intrigues me, and I'm curious what your experience is on that front as well -- you have a pretty unique amount of experience assessing and watching companies that have in some cases made a major amount of change in the world.

Anyway, I guess I'm just asking for a sort of 'mid career thoughts' essay from you, or at least wondering if you're thinking much about it.


This was something I'd meant to do for a long time, and wanting to work on it was one of the reasons I retired from YC. Being overtly ambitious would have provoked haters, but few will see this thread now, so I'll tell you: the goal was to discover the Platonic form of Lisp, which is something I could always sense lurking beneath the surface of the many dialects I've used, but hidden by mistaken design choices. (T was probably the best in this respect.)

I don't know how much my experience translates to other people, because my "career" has been unusually random, but when I retired from YC what I was thinking was that at 49, if there was something I'd been meaning to do, I'd better do it.


I haven't had a chance to dig in yet, just wanted to say congrats on getting it out into the world!


Seems a bit closer to the lambda calculus than most lisps. Building stuff up from such primitive types.


Aside: Nice to see you here again, pg.


As for the name, I figured it has a couple connotations.

AT&T's Bell Labs.

Belle is French for beautiful.

Bel is ASCII character 7, and it used to ring an electromechanical bell in computers, before they had speakers or sound cards.

B is the letter after A, with which Arc, his first Lisp variant, was named.

And both have three letters.


> For example the bell character, after which Bel is named, is \bel


This seems like a DSL masking itself as a programming language. Will this axiomatic approach allow declaration of domain specific operators like credit/debit? Is this is an attempt to make application development axiomatic?


The source is hosted on the yahoo CDN. Interesting.


I always wondered about the back story.

I guess Yahoo Store & migrating away from Yahoo infrastructure?

Who owns the domain now? pg?


It was built using Yahoo Stores, which is what Viaweb became when Yahoo purchased it. He just continues to use it.


So, how does one bootstrap the interpreter?

Is it available as a Racket "language" or is it compatible with most/some lisps/schemes?


Have you considered posting this on HN under some pseudonym (including an original website etc.) to make the experiment more... interesting?


Hi Paul - looks interesting!

What is the license? My presumption is that you intentionally did not embed a license within either of the documents.


> (dedup:sort < "abracadabra") "abcdr"

I really like this format. I think Clojure has something similar?


Hyped to have you back in here pg :)


There is no mention of licensing anywhere. Is it freely distributable/modifiable?


How'd you pick the name?


I can't find the definition of id in the bel.bel file. Did I miss it? @pg


Wow! So cool to see a new post from PG!

Really curious to see what this project will become.


Is 'sys' really necessary, when you already have streams?


Just when I was curious about learning some Lisp. I was also saddened by how much it's dropping on https://www.tiobe.com/tiobe-index/


“Lisp doesn't look any deader than usual to me.”—David Thornley via http://www.paulgraham.com/quotes.html


The actual graphs on the Tiobe index empirically show it's getting deader.


Lisp was around decades before "Tiobe", and will be for decades after.


That wasn’t the point. It’s losing popularity. It’s been in a downtrend that has never recovered. It’s only getting more and more esoteric.

Maybe it’ll come back in fashion one day.


Like BASIC, COBOL or FORTRAN (except less useful).


Exciting! This has a lot in common with Nock/Hoon


Being so simple reminds me of RISC


Wonder why `no` instead of `not`.


Because falsity is also the empty list.
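
The same identification holds in Common Lisp, for comparison:

    (eq 'nil '())  ; => T, nil and the empty list are the same object
    (not '())      ; => T, so the "not" of an empty list is true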


Welcome back pg.

I hope it was a pleasant return.


I just don't think yet another Lisp is going to do it, and I've written a few [0].

Lisp is fine for what it is, one extreme of the spectrum and a child of its time; but far from the final answer to anything.

We would be better off triangulating new ideas than polishing our crufty icons. Lisp was a giant leap, hopefully not the last.

[0] https://github.com/codr7/g-fu


> I just don't think yet another Lisp is going to do it

Going to do what? I wasn't aware there was a specific goal that had to be met here?


Hell, if we're talking about the next emergent layer of software, autonomous code / artificial intelligence, I'd say LISP has a better shot than most languages. Performance-critical code can still be refactored into C but LISP has much more built-in capability for advanced genetic coding and reflection.


Lisp had a shot since 1958, somewhere along the way it even made sense.


Change anything, I thought that was obvious.


If someone picks up Bel and is changed by it, then something has been changed.

YOUR problem is that you have an all-or-nothing perspective that change only matters if it is massive, global, change-at-scale.

Change matters in the small. If building Bel changed nobody except PG for the better, then it is still change and it still matters. Just not to you.

As a bonus—and this is a happy accident, not core to my point—sometimes change in the small unexpectedly leads to change in the large. See microcomputers, web browsers, and so many other things scoffed at as things that simply could never displace the incumbent technologies.


Wish I could have Lisp without all the parentheses. Parentheses just sour the experience for me.


I wished that, too, when I was first learning Lisp. Now, thirty years or so later, I like the parentheses. Lisp did something to me along the way.

Syntax has some quality that affects what it feels like to write code. Call it flavor, maybe; or feeling. Lisp feels good to me. It didn't feel good at first, but it gradually won me over.

I'm not the only one. Your perspective is valid, but most every Lisp programmer I've met shared it at some point, before Lisp changed their minds.

Part of it is that lisp's syntax is simple and regular enough for editors to automate a bunch of simple editing tasks. They indent things for you in a standard way. They can grab whole subexpressions and move them around and transpose them whole and so forth. Most other languages are so syntactically complicated that it's hard to make an editor do that very well.

Part of it is that the same simplicity and regularity make it tractable, even easy, to walk, transform, and generate code from Lisp expressions. That characteristic leads to things like this essay linked in another comment: http://lists.warhead.org.uk/pipermail/iwe/2005-July/000130.h...

Nowadays Lisp expressions feel to me like strings of pearls, each one containing meaning, and each one composed fractally of other pearls of meaning. They are simultaneously structurally sound and malleable. I long ago stopped wanting to get rid of the parentheses.

Some time around 1992 or so, Apple acquired a Lisp company named Coral Software and formed a team intended to invent a programming language for its research and development teams. The Advanced Technology Group and other research teams liked Lisp and Smalltalk and similar languages because of how quickly they could build working prototypes with them, but they didn't like the process of converting their successful experiments into Pascal, C, or C++ code. It was too hard, took too long, and lost too many features along the way. They wanted a new language; one that would be as flexible and congenial for experimentation as Lisp and Smalltalk, but that could also ship production code suitable for the more constrained systems that Apple customers mostly had.

The Coral group, renamed Apple Cambridge, designed a language called Ralph. It was a Lisp. More specifically, it was Scheme, but built on a runtime and type system based on a subset of CLOS, with some influence from Smalltalk and functional languages, and with system features for cleanly separating the development environment from delivered programs.

It was my favorite programming language ever. Eventually, amid some amusing shenanigans, Apple officially renamed it Dylan.

Along the way, the Dylan team heard over and over and over from non-Lispers that they wanted a non-Lispy syntax. Initially, the compiler people sort of shrugged and said, "sure, that's easy enough to do." And it is. There's been no shortage of unparenthesized syntaxes designed for Lisp in the roughly sixty years since it was introduced. None of them has ever really caught on (and that should probably tell us something about what a good idea it is, but never mind).

Dylan design meetings discussed creating a parser for an "infix syntax". The general idea was to create such a thing to mollify the non-Lispers. I even supported it. My argument was that it was a superficial matter, and if it attracted more users, it would be all to the good.

I was wrong.

I didn't appreciate just how helpful Lisp syntax is until I got hold of a Lisp that had an infix syntax. It was bulkier and more cumbersome. It was less pleasant to work with. It wasn't as easy to recognize the boundaries of expressions. Editors had a harder time working with it, so they weren't as nimble.

Of course, I could always switch back to the parenthesized syntax...until that was removed.

Okay, the surface syntax was clunkier, but it was still a Lisp underneath...until it wasn't.

The separation between development environment and delivered program got harder and faster. That evolution increasingly restricted what you could accomplish in the repl, until, finally, the repl disappeared altogether. At that point, Dylan really wasn't a Lisp anymore. My favorite Lisp had evolved into just another batch application compiler.

Moreover, it didn't work. The stated goal was to make the language more appealing to more working programmers, so that it would gain broader acceptance.

It didn't. It lost the things that made Lisp and Smalltalk programmers happy, and it didn't gain broader acceptance in the process.

I still like Ralph better than anything else, including Common Lisp, Scheme, Clojure, or Arc. But Ralph is extinct now, so I'll stick with those other Lisps until someone invents a better one.

My new favorite will probably have the parentheses.


Same here. I liked Dylan in the early days when it had parentheses, but when they went away I completely lost interest in the language. Shortly afterwards, Apple and everybody else did too.


An old argument, as you say, but as tech changes, the constraints sustaining it may fall away.

One trend in javascript is using a style checker to enforce a uniquely-defined textual representation for any given ast, to relieve increasingly novice developers of code layout worries. Which makes that textual javascript simply a printed representation of an ast, an un/dump format. And editing it is semantic representational editing of an ast.

That's more purely an ast, than even a file of s-expressions would be. Where for instance, rearranging which expressions are on a line ending with a comment can be problematic. Tasteful manual textual layout of code is sacrificed for tooling and humans not having to preserve and care about layout.

Representational editing of an ast means you can use any surface syntax you'd like. Parentheses, infix, operator precedence, concatenative, COBOLish verbosity, or whatever. Any losslessly reversible transformation is fair game. Including conversion between statement-based and expression languages. Layout becomes merely an editor customization, like colorization.

Which isn't to say it's entirely isolated from culture, style, and tooling. And thus that syntax multiplicity could have community costs. But perhaps it makes the choice a less binary thing, more amenable to tuning. The constraint that a language necessarily has exactly one syntax, modulo colors and fonts, seems to be relaxing.


Here is old reply from pg to this very complaint.

>More than anything else, I think it is the ability of Lisp programs to manipulate Lisp expressions that sets Lisp apart. And so no one who has not written a lot of macros is really in a position to compare Lisp to other languages. When I hear people complain about Lisp's parentheses, it sounds to my ears like someone saying: "I tried one of those bananas, which you say are so delicious. The white part was ok, but the yellow part was very tough and tasted awful."

To make this clearer, let's look at one example of pg's code. Verbatim, it looks like this:

    (mac case (expr . args)
      (if (no (cdr args))
          (car args)
          (let v (uvar)
            `(let ,v ,expr
               (if (= ,v ',(car args))
                   ,(cadr args)
                  (case ,v ,@(cddr args)))))))
For a Lisp programmer it looks like this:

    mac case (expr . args)
      if no (cdr args)
         car args
         let v (uvar)
            `let ,v ,expr
               if = ,v ',(car args)
                   ,cadr args
                  case ,v ,@(cddr args)
Lisp code relies on indentation for readability. A Lisp-aware editor knows how to automatically indent the code because it has been explicitly told how. Only a few of the inner parentheses convey semantic information to the programmer.


The parens are the shadow cast by Lisp's consistency, which is the reason to wish for Lisp in the first place.

The solution is not to look at the parens. It's a bit like not listening to tinnitus, which is harder for some than for others. Tools help.


> The parens are the shadow cast by Lisp's consistency, which is the reason to wish for Lisp in the first place.

Beautiful. I'm stealing that, with your permission?


Sure. But you should steal it without permission.


<beginExasperation> I don't understand how people who everywhere else love minimalism, refuse to give up the unnecessary parens (whitespace suffices). If the advice is not to look at them, and they are not needed, why have them at all! Every lisp program begins with a syntax error! That doesn't cause alarm bells to go off in people's heads? The commenter above says because parens-less lisp has been tried for 60 years and hasn't caught on it's evidence that it won't work. Sixty years is a blink! Binary notation is 330 years old. Indian numerals about 1,000 years old. Lisp is still in its infancy and things can be improved. </endExasperation>

- cranky new lisper


Who said they aren't needed? They are the structure that supports Lisp's simplicity and consistency, which is what the good things come from. If you take them out, it gets complicated and isn't worth it.


> Who said they aren't needed?

The first good implementation I'm aware of is Egil Möller's from 2003 (https://srfi.schemers.org/srfi-49/srfi-49.html). My independently researched prediction in 2017 was that no one will ever discover a place where parens or other visible syntax is needed. That sets up an easy proof by contradiction (and I've built a database of over 10k languages so far, looking for a single scenario where you would need them, and haven't found one yet). Here are 19 demo languages covering lots of scenarios demonstrating utility without parens (https://github.com/treenotation/jtree/tree/master/langs). A glaring hole though, now that I think about it, is that I don't have a good Lisp-like Tree Language. I think a Bel Tree Language would be an excellent example project (and I'm happy to assist anyone who wants to make an attempt, and if no one has the bandwidth and it's not done by next year perhaps I'll find the time to give it a go).

> They are the structure that supports Lisp's simplicity and consistency, which is what all the good things come from.

I agree with this, but would say they are "one" structure, not "the", and that there are other options (I have one idea in tree notation, but not ruling out that there are more, perhaps better ones).

> If you take them out, it gets complicated and isn't worth it.

I agree with this, but think there is a future tipping point related to tooling. Tree Notation was quite terrible 2 years ago, but now, with type checking, syntax highlighting, and autocomplete, I hate using Lisp and other languages (everything feels unfinished, because I know the syntax characters can be removed).

I think we'll hit a tipping point where the tooling for Lisp without parens is good enough that the benefits of dropping them make dropping them a no-brainer. Could I be wrong? Sure, it's a forecast/prediction. But I want to encourage people to invent new alternatives, to try everything under the sun, and not to settle that they are somehow needed. We can make it better.


> "I think we'll hit a tipping point where the tooling for Lisp without parens is good enough that the benefits of dropping them make dropping them a no-brainer."

Any such tooling tipping point would likely make the syntax of the underlying representation irrelevant. Want to view the logic in tree notation? Or how about with the parens? Or the parens, but so faint you can't see them? Or collapsing aspects of the code you're not currently concerned with? What eventually gets transformed into the 1s and 0s executed by the machine, and its various representations along the way, are really not the concern of the person writing at the top of that stack, at least not most of the time.

Personally I find the vertical space added by tree notation very distracting, and it breaks the scan of my reading. But that's my personal preference, and something that could be addressed with tooling. And that's the point: the same thing can be said of the parens of lisp or other editing/IDE tooling.

I happen to like fine-tipped pens, while others enjoy the smoothness of thicker rollerballs. Is one better than another? Do the benefits of one make using it to the exclusion of others a no-brainer? Does my enjoyment of fine-tipped pens preclude others from developing new pen types or new writing implementations all together? Might my evangelism for fine-tipped pens turn people off from exploring the perfection that is the Uni-ball Signo UM-151 0.28mm (in blue-black ink, of course)?

To each their own. Find the medium that best helps you express your message. And that may not be the same for others. And that's okay.


This is a great analogy, thanks (and as a novice to pen culture, but an avid pen and paper user, should I get myself some Uni-ball Signo UM-151 0.28mm's?).

> Is one better than another?

Rollerballs and fine-tipped pens are probably equivalent, on an order of magnitude scale. However, surely we can agree that both are far superior to feather or dipped pens?

I am forecasting the difference between parens free Lisp and status quo Lisp will resemble the chasm between dip pens and fountain pens.

Lots of Lispers say, "dip pens are fine! After a while you don't even notice you are dipping them." Whereas I see a future where a lot more can get done easier because we don't have to waste time dipping our pens.


I personally find significant whitespace to be even more annoying than superfluous parens, so no thank you.


That’s a good point, without proper tooling it’s a toss up which is better (as in, they are both bad without proper tools).


I get that, but I can't help but view them as visual noise.


On the other hand, other languages use bunch of different characters and they all have different meanings and have to be in different places depending on lots of things.

In Lisp, you just have () and they are always in the same place, meaning the same thing.


This is a good point, in that Lisp is better than the other languages. But why not go all the way, and drop the parens? If the goal of Bel is to be a "good language", with shorter programs, why oh why not also strive for shorter syntax?

I love Lisp and have gotten more good constructive feedback from the Lisp community than anywhere else (with the Haskell community being a close second), but I don't understand why there's only been a dozen or so serious attempts to do Lisp without parens (I-Expressions being the best so far, better than the subsequent Wisp, etc). There should be 1,000x attempts.

This should be the top priority of most Lisp researchers. I of course think my Tree Notation is the solution, but I could easily be wrong, and there could be a better way, but the () need to go!


> But why not go all the way, and drop the parens?

Because Lisp is not about 'parens'; it's about code being written in a serialized form of Lisp data, which can be both written as externalized textual data and built as actual data (by calling functions which work directly over this data).

> dozen or so serious attempts to do Lisp without parens

MLISP, Lisp2, Logo, ML, Clisp, Dylan, RLISP, SKILL, CGOL, Julia, ...

> This should be the top priority of most Lisp researchers.

There are many more interesting areas for progress. A Lisp without s-expression-based syntax is not a main Lisp and will form a new language group - which has happened multiple times in the past - but it won't replace the main core Lisp languages.


Good references, thanks. I took a fresh look at CGOL and was again impressed.

> Because Lisp is not about 'parens',

Exactly, so why keep them around?

When you ditch the parens, things become easier. You no longer start your programs with a syntax error. You no longer even have syntax errors. Program concatenation is easier. Dictation of programs is easier: you speak "add 2 3" instead of "open parens add 2 3 close parens". Program synthesis is easier....

> A Lisp without s-expression-based syntax is not a main Lisp and will form a new language group

All I'm saying is put S-Expressions into normalized form (indented cleanly), drop the parens, and voila, everything can still work, and you've reduced things to their simplest form.


> Exactly, so why keep them around?

Because Lisp source code is written in a data-structure using nested lists and nested lists are written as s-expressions with parentheses. The same Lisp code can be executed by an s-expression-based interpreter.

> When you ditch the parens

Again, the parens are not what Lisp is about, it is the nested lists as source code: internally and externally. It just happens that the lists are written with ( and ).

What you propose is essentially not just getting rid of 'parens', but dropping the source code idea based on explicit list representation.

> , things become easier.

Code generation, manipulation and transformation becomes harder.

> You no longer even have syntax errors.

How so?

> Program concatenation is easier.

How so? Lisp actually makes many forms of source code transformations easy.

> All I'm saying is put S-Expressions into normalized form (indented cleanly), drop the parens, and voila, everything can still work, and you've reduced things to their simplest form.

But then we no longer have the 'Lisp code is explicitly written in a list-based data structure' idea, which is very powerful and still relatively simple.

Many languages with syntax based on textual representations already exist, and there is a place for a programming language which works slightly differently.

It's possible to drop the explicit s-expression syntax, as has been demonstrated many times over history, but then the language has a different look and feel. It becomes something different and loses basic Lisp features, or makes them considerably harder to use.

One can say 'why keep the wheels around'? We can do that, but either the car won't drive very well or one would transform it into something else: a boat, a plane, a sled, .. It would lose its 'car nature'.


> What you propose is essentially not just getting rid of 'parens', but dropping the source code idea based on explicit list representation.

No, I keep the nested list representation of data (as does I-Expressions: https://srfi.schemers.org/srfi-49/srfi-49.html), you just ditch the enclosing parens in favor of whitespace (or, to be more precise, in favor of 3 syntactic tokens: atomBreakSymbol, nodeBreakSymbol and edgeSymbol; by convention space/" ", newline/"\n", and space/" ").

>> You no longer even have syntax errors.

>How so?

See for yourself: https://jtree.treenotation.org/designer/. Try to generate a syntax error; it's impossible, for the same reason you don't have syntax errors in binary notation. I think this is a very, very important hint that there is something important going on here that ties into something in nature (I don't know what that is, but it seems like there is a fancy prize for the person who can explain it in mathy terms). However, using the openParenSymbol and closeParenSymbol style of delimiters, you do have syntax errors--unbalanced parens.

>> Program concatenation is easier.

> How so?

This one is mildly easier, but comes up all the time. We have a Tree Language called Grammar for building other Tree Languages (and yes, Grammar itself has an implementation in Grammar). We have a Tree Language called Hakon that compiles to CSS. And we have one called Stump that compiles to HTML. Want to build a new language called "TreeML" that has both? Just `cat hakon.grammar > treeml.grammar; cat stump.grammar >> treeml.grammar` and you are just about done (just concat your new root node). You don't have to worry about adjusting any parens at the tails. A minor improvement, but lots of little things like that add up.

> but then the has a different look and feel. It becomes something different

This might be true. It seems you are taking the stand that Tree Notation is something different than Lisp (which I actually lean to agreeing with), while most lispers have given me the flippant response "congratulations! you've reinvented lisp/s-expressions". I think both arguments have merit. We will see where it goes and perhaps although the differences with parens S-Expressions are slight, perhaps there will always be a separate niche for parens lisp.


> For the same reason you don't have syntax errors at the binary notation.

Really? So if we dd some block of bytes from /dev/random, that is valid?

If so, that's not a very good requirement to have, I'm afraid.


> I-Expressions

That's not an EXPLICIT list representation, where the nesting is notated by explicit characters.

It's also harder to use in interactive interfaces like REPLs, debuggers, etc., where Lisp lists don't need to have vertical layout.

> you do have syntax errors--unbalanced parens.

If you use a structure editor for Lisp you don't have unbalanced parentheses. In editors with support for s-expressions, unbalanced parentheses are basically not an issue.

> cat hakon.grammar

Yeah, Lisp works very differently. It does not use grammars like that. Lisp uses procedures in readtables to read s-expressions. Parsing the code is then done by the evaluator traversing the source or by the compiler traversing the source, with an integrated source code transformation phase (macro expansion).

We load a bunch of macros into Lisp and they then are available incrementally.

There are syntax-based structure editors (for example they were in Interlisp-D), but they always have limits, since macros can do arbitrary source transformations on s-expressions and those are not based on grammars.

https://www.youtube.com/watch?v=2qsmF8HHskg

You are really looking for a very different language experience. Lisp with s-expressions works very different from what you propose.

That's why I say: it's not about dropping parentheses, you propose to get rid of some of the core parts of Lisp (how programs are represented, parsed, etc.) and give it a different user interface. That's fine, but it is no longer Lisp like we know it.

Other syntaxes and IDEs for those have been done several times in Lisp-based language implementation. For example the Sk8 multimedia development tool from Apple had implemented something like AppleScript on top of Lisp. The multimedia applications were implemented in an AppleScript like language (actually it was kind of the first AppleScript implementation) with an IDE for that - implemented in Lisp.

https://opendylan.org/_static/images/sk8.jpg

Apple Dylan had an IDE written in Lisp for infix Dylan.

https://www.macintoshrepository.org/_resize.php?w=640&h=480&...

https://www.flickr.com/photos/nda/4738803235

http://www.dylanpro.com/picts/dylanProjectBrowser.gif

https://pbs.twimg.com/media/D0KY_rGV4AAJfxH.png

The original dream of Lisp was to have algol-like M-Expressions and use s-expressions only for data inside M-expressions. But the internals of Lisp were implemented with s-expressions and M-expressions were manually translated into s-expressions. Breaking into a Lisp execution and looking into the interpreter state then revealed the s-expression representation of code to the developer.

Then the cat was out of the bag...


> That's fine, but it is no longer Lisp like we know it.

I think this is fine, and now I can direct future commenters who tell me we've just "reinvented lisp" to your thread.

Really helpful links, and I am looking forward to watching that youtube video later this week when I have a moment. Thanks so much for all the information.


"why not go all the way, and drop the parens?" because no one found something that is simpler and shorter than just using parens. I-expressions are made up of invisible characters and my feeling is that not a lot of people who write lisp likes any sort of invisibility in their software.

For me, using parens is not a deal-breaker, as it's a relatively simple to use for dividing up things. Beats using invisible characters or many different characters.


At bottom something is necessary to delimit expressions. Going from () to semantic indentation is trading one delimiter for another, isn't it? Not exactly dropping the parens for free.


Break or fixed width delimiters are different than enclosing delimiters.

There's no longer any parens matching.

And there aren't 2 things to keep track of (notice how Bel both uses parens and follows a whitespace convention for readability--whitespace that is then stripped and ignored by the compiler).

And also, your program is never in a syntactic error state--such things no longer exist. Semantic errors are still there, of course, but a whole class of errors goes puff.


With familiarity that goes away. :-)


Ignore the downvoters, you are 100% correct. They are visual noise. Completely superfluous. The biggest Lisp of all will have no parens.


Using a Lisp with Parinfer makes the parens automagic without taking away the benefits they provide.

https://shaunlebron.github.io/parinfer/ https://www.youtube.com/watch?v=K0Tsa3smr1w


Here's a Lisp that's whitespace sensitive to avoid parens: http://dustycloud.org/blog/wisp-lisp-alternative/

Other techniques to make them less prominent are to use a text face or syntax highlighting, e.g.: https://github.com/tarsius/paren-face/

It's not that Lisp has many more parens, so much as that its simplicity mandates a single block structure for defining s-expressions (lisp's AST), whereas other languages adopt multiple syntaxes for their different block structures, e.g. () {} [] <>; lisp only uses ().


The Julia language is very lispy at its heart, giving you access to the AST and having "true" macros. The surface syntax however is akin to Python or Matlab. Granted, doing AST manipulations in Julia is more cumbersome than in Lisp, but AFAIK the language creators actually discourage the (over)use of macros, as the resulting mini-languages can make it difficult for outsiders/newcomers to understand or maintain the code.


You can! Our work is related: https://treenotation.org/

It would be straightforward to create a version of Bel that is a Tree Language without parens. I'd be happy to assist.


This probably means your "experience" of lisp was very short.


It admittedly is, just as it is for most other people: there are very few "serious" programs I can name that were written in Lisp, in spite of its venerated heritage.

There's no shortage of options in the programming language field, so one would be a fool to stick with a language they don't enjoy working in.


Would you prefer indentations and newlines?

https://chrisdone.com/posts/z/


I'm going to be honest, as a newbie to Lisp, I get it, but you can get used to them in time.


How do I install this?


Seems goofy, but ok. I'll just use Scheme.


Bit Edgy Language? Nice work!


Some users are reporting that the links don't show up on some mobile browsers. Pending a fix, here they are. (Edit: fixed now.)

The way Lisp began http://paulgraham.com/rootsoflisp.html

A guide to the Bel language https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=157...

The Bel source https://sep.yimg.com/ty/cdn/paulgraham/bel.bel?t=1570864329&

Some code examples https://sep.yimg.com/ty/cdn/paulgraham/belexamples.txt?t=157...


There’s no link styling, at least on mobile Safari; the links are kind of essential to understanding the article.

...sorry to be that guy.



Same on Android Chrome.


On mobile Firefox links are indistinguishable from regular text.

Workaround is to view full site (link at the bottom)


Hopefully there will be a fix soon. In the meantime see https://news.ycombinator.com/item?id=21231416.


I do not see any text file...

Must be hidden somewhere or not showing in my browser (firefox android with ad blockers)



I think the mobile site doesn't show the links for some reason. If you click on the "View full site" link at the bottom, the links show up.


There are links. They are hard to get to because of lack of link styling.


I'm shocked this is Paul's first post since 2014

I feel lucky now to be online ;)


@pg: hyperlinks aren't visible on mobile web


Almost every language has macros now. Lisp is pointless.


Don't let the name "macro" fool you: macros in Lisp are completely different from the macros in other languages.

In other languages, you're basically just outputting source code.

In Lisp, since the source code is actually the program, you can program the program.

There is a good perspective from a Perl developer about Lisp macros here: http://lists.warhead.org.uk/pipermail/iwe/2005-July/000130.h...
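
A tiny Common Lisp sketch of that idea (the swap! macro is made up): the macro receives its argument forms as ordinary list structure, computes over them, and returns a new list that becomes the code.

    (defmacro swap! (a b)
      (let ((tmp (gensym)))        ; build code as data, using a fresh symbol
        `(let ((,tmp ,a))
           (setf ,a ,b)
           (setf ,b ,tmp))))

    (let ((x 1) (y 2))
      (swap! x y)
      (list x y))                  ; => (2 1)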


In some languages, like C, you don't have the full power of main language inside the macros.

In other languages, you have the whole language inside the macros, but the community and the docs discourage you about writing macros. As a related example, in rust the macros have a ! so it's easy to distinguish macros from function calls.

In the lisp family of languages, the idea is that everyone can write macros. Macros are dangerous because they can do weird things, but functions can do weird things too. Functions are more restricted, but until you look at the source code you are not sure that they will not ignore all the arguments, or format your hard disk, or something weird in between.

I know more about the internal workings of Racket. One of the open secrets is that it has a lot of macros that pretend to be functions. They behave nicely like functions, but under the hood they are macros, mostly to generate more efficient code in the common case. For example functions with keywords, or functions with contracts, or functions like `in-list` that can be used in a `for` to iterate a list. (Other macros are weird and do very complex things; in particular `for` and `match` are implemented as macros inside the language.)


Welcome back pg. You should check out Jetbrains MPS, they are doing cool stuff with projectional editing. Which could mean no more brackets. I read via Twitter that they recently announced MPS for Web in Amsterdam yesterday, unfortunately I missed the conference because of my home situation :). Luckily they filmed the presentations so I will be waiting for them on YouTube.


I have a hunch that this is going to become a big deal in the future. Not sure why.


Instead of writing code on the weekend, I have been thinking about and visualizing the code that I would like to write. That way I can use my hands for other activities, like playing in the dirt.


paulgraham.com isn't secured by HTTPS. The reason why sites like this should be secured is content manipulation attacks. Now I can't trust that the Bel source code that I see is actually written by pg.


HTTPS doesn’t guarantee that. Someone could manipulate the content on the host.



