Show HN: Bel (paulgraham.com)
1288 points by pg on Oct 12, 2019 | hide | past | favorite | 456 comments



Whenever I see a new programming language, this list of questions by Frank Atanassow comes to mind:

    1. What problem does this language solve? How can I make it precise?

    2. How can I show it solves this problem? How can I make it precise?

    3. Is there another solution? Do other languages solve this problem? How?
       What are the advantages of my solution? of their solution?
       What are the disadvantages of my solution? of their solution?

    4. How can I show that my solution cannot be expressed in some other language?
       That is, what is the unique property of my language which is lacking in
       others which enables a solution?

    5. What parts of my language are essential to that unique property?
Do read the whole post (http://lambda-the-ultimate.org/node/687#comment-18074); it has lots of elaboration on these questions.

From a skim of the Bel materials, I couldn't answer these questions. Maybe PG or someone else can take a stab at the answer?


I think the point of a high-level language is to make your programs shorter. All other things (e.g. libraries) being equal, language A is better than language B if programs are shorter in A. (As measured by the size of the parse tree, obviously, not lines or characters.) The goal of Bel is to be a good language. This can be measured in the length of programs written in it.
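For concreteness, parse-tree size can be compared directly in any language with a parser library. A rough sketch in Python, using its ast module as a stand-in (the two example programs and the node-count metric are mine, not from the thread):

```python
# Rough sketch: "program length" measured as parse-tree size, using
# Python's ast module as a stand-in for any language's parser.
import ast

def tree_size(source):
    """Count the nodes in the parse tree of a source string."""
    return sum(1 for _ in ast.walk(ast.parse(source)))

verbose = (
    "result = []\n"
    "for x in items:\n"
    "    if x > 0:\n"
    "        result.append(x * 2)\n"
)
concise = "result = [x * 2 for x in items if x > 0]"

# The comprehension expresses the same program with a smaller parse
# tree, even though it is a single long line of characters.
assert tree_size(concise) < tree_size(verbose)
```

Note that by this measure the one-liner wins even if it had more characters than the loop, which is the point of counting tree nodes instead of characters.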

Lisp dialects have as a rule been good at making programs short. Bel is meant to do the same sorts of things previous dialects have, but more so.

It's also meant to be simple and clear. If you want to understand Bel's semantics, you can read the source.


Thanks for the response! Which features of Bel do you think will contribute the most to making programs shorter or clearer, compared to other Lisp dialects?


It's so broadly the goal of the language that many things do, often in small ways. For example, it turns out to be really convenient that strings are simply lists of characters. It means all the list manipulation functions just work on them. And the where special form and zap macro make it much easier to define operators that modify things. The advantage of Bel is a lot of small things like that rather than a single killer feature.
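The convenience being described can be sketched outside Bel. In Python terms (my own toy functions, not Bel code), a string represented as a list of characters needs no string-specific library:

```python
# Toy sketch (not Bel): if a string is just a list of characters, every
# generic list function works on it unchanged.
def rev(xs):
    """Generic list reversal."""
    return xs[::-1]

def count(x, xs):
    """Generic count of occurrences in any list."""
    return sum(1 for y in xs if y == x)

s = list("hello")                    # a "string" as a list of characters
assert rev(s) == list("olleh")
assert count("l", s) == 2
assert rev([1, 2, 3]) == [3, 2, 1]   # same function, ordinary list
```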

Making bel.bel shorter was one of my main goals during this project. It has a double benefit. Since the Bel source is a Bel program, the shorter I can make it, the more powerful Bel must be. Plus a shorter source is (pathological coding tricks excepted) easier to understand, which means Bel is better in that way too. There were many days when I'd make bel.bel 5 lines shorter and consider it a successful day's work.

One of the things I found helped most in making programs shorter was higher order functions. These let you get rid of variables, which are a particularly good thing to eliminate from code when you can. I found that higher order functions combined with intrasymbol syntax could often collapse something that had been a 4 line def into a 1 line set.
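A hedged illustration of the kind of collapse being described, in Python rather than Bel (the def/set terminology is Bel's; the code is mine):

```python
# A 4-line definition with an explicit loop variable and accumulator...
def doubles(xs):
    out = []
    for x in xs:
        out.append(2 * x)
    return out

# ...collapses to a 1-line binding once a higher-order function
# eliminates both variables.
doubles2 = lambda xs: list(map((2).__mul__, xs))

assert doubles([1, 2, 3]) == doubles2([1, 2, 3]) == [2, 4, 6]
```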


> For example, it turns out to be really convenient that strings are simply lists of characters.

Statements such as these are very academic and concerning - to me - when it comes to new languages. And they make me wary of whether or not the language will ever move out of the land of theory and into practice.

Continuing with the "strings are cons lists" theme... Other, very notable languages have tried this in the past: Erlang and Haskell immediately come to mind. And - without exception - they all end up regretting that decision once the language begins being used for real-world applications that require even a moderate level of performance.

Lisp programmers (among which I count myself) are very fond of pointing to new languages and identifying all their "features" that were implemented in Lisp decades ago (so not "new"). And they also bemoan when a language design doesn't take a moment to learn from the mistakes of those that came before them. Strings as lists is very much in the latter case.

The above said, the idea of streams as a fundamental type of the language (as opposed to a base class, type class, or what-have-you) is quite intriguing. Here's hoping they are more than just byte streams.


As I read about strings-as-lists, I tried to maintain sight of one of the premises of PG's exercise -- that the language is climbing a ladder of abstractions in the non-machine realm.

The reality of 2019 is that strings are not random access objects -- they are either empty or composed of the first char and the rest of the chars. A list is the proper primary abstraction for strings.

That is, if "list" is not a data structure but a mathematical concept -- based on the concept of "pair." If I were a Clojure or Swift programmer -- and I'm both -- I'd say there are protocols that embody Listness and Pairness, and an implementation would have multiple dispatch that allows an object to deliver on the properties of those protocols. There are other fundamental concepts, though, that deserve inclusion in a language.

Is something suitable as the first argument of `apply`? Functions obviously are, but to Clojure programmers, hash tables and arrays (and, to Schemers, continuations) are just as function- or relation-like as a procedure. This is so obviously a good idea to me that I am irritated when a language doesn't support the use of random-access or dictionary-like collections as function-like objects.
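In Clojure the collection itself is callable; the closest stdlib Python analogue (a sketch, not a proposal) is to pass a collection's indexing method where a function is expected:

```python
# Sketch: treating dictionary-like and random-access collections as
# function-like objects, as Clojure does with ({...} key) and ([...] i).
color_of = {"sky": "blue", "grass": "green"}
words = ["zero", "one", "two"]

# A dict used as a function from keys to values:
assert list(map(color_of.__getitem__, ["sky", "grass"])) == ["blue", "green"]
# A list used as a function from indices to elements:
assert list(map(words.__getitem__, [2, 0])) == ["two", "zero"]
```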

Which brings us to random-access and dictionary-like objects. Those should have protocols. And given that sets are 1) not relations and 2) incredibly important, set-like and perhaps bag-like protocols themselves deserve support.

At minimum, a designer should think through the protocols they want to support in a deep way and integrate those into the language. Maximally, protocols would be first-class features of a language, which is what I'd prefer, because a protocol-driven design is so often so much more productive than an OO one.

PG's plans with respect to all of the above are what really interest me. IIRC Arc embodied some of these concepts (arrays-as-functions), so at the moment I'm content to watch him climb this ladder that he's building as he goes and see what comes of it.

[Edit: various typos and grammatical errors.]


I don't think they're claiming that strings should be random access. Rather, I think they're objecting to the notion that strings should be sequences at all, rather than scalars (a la perl).


Speaking of performance and streams, it seems streams are not even byte streams: they are bit streams (rdb and wrb).


> For example, it turns out to be really convenient that strings are simply lists of characters. It means all the list manipulation functions just work on them.

Why is it preferable to couple the internal implementation of strings to the interface “a list of characters”? Also, since “character” can be an imprecise term, what is a character in Bel?


> Also, since “character” can be an imprecise term, what is a character in Bel?

Exactly. How are Unicode and UTF-8 treated by this language?


While we're down here in the weeds[0]...

A good way to do it, IMO, is for each character to be a Unicode grapheme cluster[2].

0. "This is not a language you can use to program computers, just as the Lisp in the 1960 paper wasn't."[1]

1. https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=157...

2. http://www.unicode.org/reports/tr29/#Grapheme_Cluster_Bounda...


> A good way to do it, IMO, is for each character to be a Unicode grapheme cluster[2].

I would agree. However, for the sake of argument,

The width of a grapheme cluster depends on whether something is an emoji or not, which for flags depends on the current state of the world. The regional indicator symbols (g)(b) are valid as one single grapheme cluster (a depiction of the UK's flag), which would stop being valid if the UK split up, for example. This is only a single example; there are many others that demonstrate there is basically no 'good' representation of 'a character' in a post-UTF-8 world.
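To see what grapheme clustering even means mechanically, here is a deliberately naive sketch using only the Python stdlib. This is NOT full UAX #29 segmentation -- it handles combining marks but not flags, emoji ZWJ sequences, or Hangul jamo; a real implementation would use a library such as `regex` with `\X`:

```python
# Naive sketch: group a base character with its trailing combining marks.
# Not full UAX #29 grapheme segmentation (no flags, no ZWJ emoji).
import unicodedata

def naive_graphemes(s):
    clusters = []
    for ch in s:
        if clusters and unicodedata.combining(ch):
            clusters[-1] += ch      # attach combining mark to its base
        else:
            clusters.append(ch)
    return clusters

# "e" + COMBINING ACUTE ACCENT is one user-perceived character:
assert naive_graphemes("e\u0301a") == ["e\u0301", "a"]
```

The gap between this sketch and the real rules (flags, emoji, scripts) is exactly the problem being pointed out above.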


At this point Bel doesn't even have numbers. A character is axiomatically a character, not some other thing (like a number in a character set encoding). Isn't it premature to talk about character set encoding or even representation yet? A character is a character because it's a character. But I haven't nearly read through these documents so perhaps I'm not understanding something.


As far as I know a "character" is a number in a character set. "A character is a character because it's a character" doesn't make any sense, at least not in the context of a programming language. A character is a character because it maps to a letter or glyph, and that requires a specific encoding.

The Bel spec contains the following:

    2. chars

    A list of all characters. Its elements are of the form (c . b), where
    c is a character and b is its binary representation in the form of a 
    string of \1 and \0 characters. 
In other words, even in Bel a character is still a "number" in a character set encoding. The "binary representation" isn't stated to be a specific encoding, which means it's presumably assumed to be ASCII or UTF-8.

Also, Bel does have numbers, apparently implemented as pairs. Go figure.
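The spec's `chars` structure can be sketched in Python, with tuples standing in for Bel pairs (the 8-bit ASCII width is an assumption; as noted above, the spec doesn't name an encoding):

```python
# Sketch of the spec's chars list: elements of the form (c . b), where b
# is the character's binary representation as a string of 1s and 0s.
# Assumes 8-bit ASCII code points; Bel itself doesn't fix an encoding.
def chars_list(alphabet):
    return [(c, format(ord(c), "08b")) for c in alphabet]

assert chars_list("ab") == [("a", "01100001"), ("b", "01100010")]
```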


You're right that I need to review more of the Bel spec (which as you point out does in fact implement numbers).

But I am trying to read this in the spirit in which it's intended, as irreducible axioms. And so I don't think it's true that a character needs a representation from the perspective of Bel.

Consider if pg had approached this more like I would have, and started with integers and not characters as a built-in. Does an integer need a "representation"? At least, in a language that is not designed to execute programs in, like Bel? An implementation (with the additional constraints like width, BCD vs two's complement vs whatever, radix, etc) would need one, but the formal description would not.

Thanks for exploring the subject.


> You're right that I need to review more of the Bel spec (which as you point out does in fact implement numbers).

Can I ask why you claimed that it didn't have numbers or characters? It seems an odd thing to claim without actually having that knowledge in the first place.


To be fair, numbers aren't listed in the spec initially as one of its basic types (and they aren't, they're implemented as pairs, which are a basic type) and no one is disputing that Bel has characters, just whether it's appropriate to mention character encodings for what seems to be intended to be mostly an academic exercise.


That's actually an interesting point for a Unicode mailing list, but as long as I can farm grapheme-splitting out, I'm happy.


> One of the things I found helped most in making programs shorter was higher order functions. These let you get rid of variables, which are a particularly good thing to eliminate from code when you can. I found that higher order functions combined with intrasymbol syntax could often collapse something that had been a 4 line def into a 1 line set.

How do Bel's higher order facilities compare to Haskell's? E.g. https://wiki.haskell.org/Pointfree


> the shorter I can make it, the more powerful Bel must be

Yes, well, up to some limit, where the code becomes so concise that it's too hard for humans to easily understand? :- )

I wonder if humans sometimes understand better, and read faster, with a little bit of verbosity. I sometimes expand a function-style one-liner I wrote into, say, three imperative lines, because that seems simpler to read the next time I'm at that place again. — I suppose, though, this is a bit different from the cases you have in mind.

Best wishes with Bel anyway :- )


I have to disagree; this is very clearly too simplistic. There are many dimensions in which a language can be better or worse. Things like:

* How debuggable is it?

* Do most errors get caught at compile time, or do they require that code path to be exercised?

* How understandable are programs to new people who come along? To yourself, N years later?

* How error-prone are the syntax and semantics (i.e. how close is the thing you intended, to something discontinuous that is wrong, that won't be detected until much later, and that doesn't look much different, so you won't spot the bug)?

* How much development friction does it bring (in terms of steps required to develop, run, and debug your program) ... this sounds like a tools issue that is orthogonal to language design, but in reality it is not.

* What are the mood effects of programming in the language? Do you feel like your effort is resulting in productive things all the time, or do you feel like you are doing useless busywork very often? (I am looking at you, C++.) You can argue this is the same thing as programs being shorter, but I don't believe it is. (It is not orthogonal though).

* What is your overall confidence in the code's correctness over time? Does the language allow you to have high confidence that what you mean to happen is what is really happening, or are you in a perpetual semi-confused state?

I would weigh concision as a lower priority than all of these, and probably several others I haven't listed.


One answer to this question (and an exciting idea in itself) is that the difference between conciseness and many of these apparently unrelated matters approaches zero. E.g. that all other things being equal, the debuggability of a language, and the pleasure one feels in using it, will be inversely proportional to the length of programs written in it.

I'm not sure how true that statement is, but my experience so far suggests that it is not only true a lot of the time, but that its truth is part of a more general pattern extending even to writing, engineering, architecture, and design.

As for the question of catching errors at compile time, it may be that there are multiple styles of programming, perhaps suited to different types of applications. But at least some programming is "exploratory programming" where initially it's not defined whether code is correct because you don't even know what you're trying to do yet. You're like an architect sketching possible building designs. Most programming I do seems to be of this type, and I find that what I want most of all is a flexible language in which I can sketch ideas fast. The constraints that make it possible to catch lots of errors at compile time (e.g. having to declare the type of everything) tend to get in the way when doing this.

Lisp turned out to be good for exploratory programming, and in Bel I've tried to stick close to Lisp's roots in this respect. I wasn't even tempted by schemes (no pun intended) for hygienic macros, for example. Better to own the fact that you're generating code in its full, dangerous glory.

More generally, I've tried to stick close to the Lisp custom of doing everything with lists, at least initially, without thinking or even knowing what types of things you're using lists to represent.


Most programming I do is exploratory as well. Sometimes it takes a couple of years of exploring to find the thing. Sometimes I have to rewrite pretty hefty subsystems 5-7 times before I know how they should really work. I find that this kind of programming works much better in a statically-typechecked language than it ever did in a Lisp-like language, for all the usually-given reasons.

I agree that there is such a thing as writing a program that is only intended to be kind of close to correct, and that this is actually a very powerful real-world technique when problems get complicated. But the assertion that type annotations hinder this process seems dubious to me. In fact they help me a great deal in this process, because I don't have to think about what I am doing very much, and I will bang into the guardrails if I make a mistake. The fact that the guardrails are there gives me a great deal of confidence, and I can drive much less carefully.

People have expressed to me that functions without declared types are more powerful and more leveragable, but I have never experienced this to be true, and I don’t really understand how it could be true (especially in 2019 when there are lots of static languages with generics).


> Most programming I do seems to be of this type

> language in which I can sketch ideas fast

Can I ask, what are your programs about?

And sketching ideas? Is it ... maybe creating software models for how the world works, then input the right data, and proceed with simulating the future?


I'm still trying to sort this out here, not sure if I will manage, so take my apologies if it's a little confused.

I agree with this idea to a degree. However, there are limits to this "relation". Imagine a sophisticated piece of software that compresses program sources. Let's say it operates on the AST level, not on the byte level, since the former is a little closer to capturing software complexity, as you mentioned somewhere else. Now, I know that that's almost the definition of a Lisp program, but maybe we can agree that 1) macros can easily become hard to understand as their size grows, and 2) there are many rituals programmers have in code that shouldn't get abstracted out of the local code flow (i.e. compressed), because they give the necessary context to aid the programmer's understanding; the programmer would never be able (I assume) to mechanically apply the transformations in their head if there are literally hundreds of these macros, most of them weird, subtle, and/or unintuitive. Think how gzip, for example, finds many surprising ways to cut out a few bytes by collapsing multiple completely unrelated things that only share a few characters.

In other words, I think we should abstract things that are intuitively understandable to the programmer. Let's call this property "to have meaning". What carries meaning varies from one programmer to the next, but I'm sure for most it's not "gzip compressions".

One important ingredient in a useful measure of "meaning" is likelihood of change. If two pieces of code that could be folded by a compressor are likely to change and diverge into distinct pieces of code, that is a hint that they carry different meanings, i.e. they are not really the same to the programmer. How do we decide if they are likely to diverge? There is a simple test: "could we write either of these pieces in a way that makes it very distinct from the other piece, and the program would still make sense?" As an aspiring programmer trying to apply the rule of DRY (don't repeat yourself), at some point I noticed that this test is the best way to decide whether two superficially identical pieces of code should be folded.

I noticed that this approach defines a good compressor that doesn't require large parts of the source to be recompressed as soon as one unimportant detail changes. Folding meaning in this sense, and folding only that, leads to maintainable software.

A little further on this line we can see that as we strip away meaning as a means of distinction, we can compress programs more. The same can be done with performance concerns (instead of "meaning"). If you start by ignoring runtime efficiency, you will end up writing a program that is shorter since its parts have fewer distinctive features, so they can be better compressed. And if the compressed form is what the programmer wrote down, the program will be almost impossible to optimize after the fact, because large parts essentially have to be uncompressed first.

One last thought that I have about this is that maybe you have a genetic, bottom-up approach to software, and I've taken a top-down standpoint.


> Imagine a sophisticated software that compresses program sources...

Isn't that the definition of a compiler?


https://en.m.wikipedia.org/wiki/Partial_evaluation#Futamura_... are an interesting point of view for the definition of a compiler


> that all other things being equal, the debuggability of a language, and the pleasure one feels in using it, will be inversely proportional to the length of programs written in it.

Busting a gigantic nut when I see a blank file


I have a feeling this post got way more upvotes than it really deserves, partly because, duh, it's pg we're talking about.

Many other languages have been introduced here (e.g. Hy) that actually solve new problems or have deeper existential reasons, not just to "shorten stuff".


IDK, this is exactly the kind of thing I enjoy reading while sitting down with a cup of coffee on a Sunday morning.

Sure, I'm not going to start programming in Bel but the thought processes in the design document are definitely something I could learn a thing or two from.


"Many other languages have been introduced here (e.g. Hy) that actually solve new problems"

Adding s-expression syntax to Python solves an important problem?


Don't take my sentence out of context. My sentence continues to state "or ..." which you clearly don't want to understand.


Ya, here's why I think "make your programs shorter" is a good criterion.

If a language compresses conceivably-desirable programs to a smaller AST size, then for all N, a higher fraction of size-N ASTs (compared to other languages) represent conceivably-desirable programs.

So "make your programs shorter" also implies "make bad behaviors impossible or extra-verbose to write". For example, failing to free your memory is bad code, and it's impossible(ish) to write in managed-memory languages.


> So "make your programs shorter" also implies "make bad behaviors impossible or extra-verbose to write".

It reminds me of "The Zen of Python": "There should be one — and preferably only one — obvious way to do it"

https://jeffknupp.com/blog/2018/10/11/write-better-python-fu...

https://towardsdatascience.com/how-to-be-pythonic-and-why-yo...


Make your programs as short as possible, but no shorter. Failing to free your memory makes for a shorter program. Using GC makes for a shorter program, but there is a whole class of problems that just can't be solved with GC.


I think deciding to not free your memory or use GC (instead of not) would count as having a different program.


If "language A is better than language B if programs are shorter in A", would something like APL (not the weird characters, but rather the way each operator operates on entire arrays, and operators naturally chain together), or maybe Forth (or Factor, for a more modern Forth), since they are a bit more general (I've never heard of a website being built in an array-oriented language, for example), be the holy grail?


Hello,

do you think you could find time to make a table of contents? I like .txt files, but a language spec is just a tad too long.

Or maybe, for the Lispers around, a short summary of what inspired you to make Bel after Arc? What were your ideas and problems (besides making programs shorter and more expressive)?

Thanks


I disagree that measuring program length by the size of the parse tree is a good measure of conciseness. To an author and reader, it is the number of words you read that matters, so I use word count as the benchmark. People who write newspaper articles are given a word count to hit, you type a certain number of words per minute, and people read at a certain speed. So words are a unit of measurement that is easily counted, and there will be no dispute.

A parse tree is not particularly comparable between languages; most modern languages make extensive use of powerful runtimes to handle complex things that are part of the OS. There is typically over a million lines of code behind a simple text entry field.

In my Beads language, for example, I go to great lengths to allow declarations to have a great deal of power, but they are not executable code and thus have hardly any errors. Lisp is not a particularly declarative language and is thus more error-prone. The more code you don't execute, the more reliable the software will be, so declarative programming is even better than Lisp.

Lisp-derivative languages are notorious for their poor readability; hence the avoidance of Lisp by companies who fear "read-only" code bases that cannot be transferred to a new person. Lisp, Forth, and APL all win contests for fewest characters, but lose when it comes to the transfer phase. But like a bad penny, Lisp keeps coming back over and over, and it always will, because second-level programming, where you modify the program (basically creating your own domain-specific language for each app), is the standard operating methodology of Lisp programmers. That one cannot understand this domain-specific language without executing the code makes Lisp very unsuitable for commercial use.

Yes, there are a few notable successful commercial products (AutoCAD) that used Lisp to great results, but for the next 5-10 million programmers coming on board in the next few years, I laugh at anyone with the audacity to imagine that Lisp would be remotely suitable. Lisp is an archaic language in so many ways. It cannot run backwards. It has no concept of drawing; it is firmly rooted in the terminal/console era from which it sprang. With no database, graphics, or event model, you have to use APIs that are not standardized to make any graphical interactive products, which is what the majority of programmers are making.


>> Lisp derivative languages are notorious for their poor readability

Are they, though?

I find at least two other languages less readable:

- Scala, with operators everywhere; you need to have a cheat sheet

- Perl; you know that joke that it's the only language that looks the same before and after RSA is applied to it?

Lisp, on the other hand, is pretty readable, at least to me. I only used Clojure from the Lisp family, and it had a great impact on how I think and write code in other languages. The result is more readability. For a long time MIT taught CS courses using Lisp, and SICP is also a pretty amazing read.


Scala and Perl are even worse, i agree with you there. Some people love Haskell too, but i think it's awful.

MIT has replaced Lisp with Python, because even though they forced it upon their students for decades, they had to admit Lisp was archaic and not particularly readable. The Pharo IDE is arguably the most sophisticated IDE around today, but the fact remains that Lisp doesn't permit easy interchangeable parts, which is a major goal of the new programming languages being developed. Although very bright people can get very good at Lisp, the average person finds it extremely hard. Remember, you are reading it from the inside out, which is highly unnatural to someone who reads books left to right.


93% of Paint Splatters are Valid Perl Programs

See: https://famicol.in/sigbovik/

--

So ignoring AST/character measures of shortness; is perl reaching optimality on the noise-to-meaning ratio?

Am I allowed to say this makes perl even more meaningful than other languages?

I'm not especially serious, but in the spirit of my own stupidity, let me suggest that this is how, finally, painters can truly be hackers


> i use word count as the benchmark value

Words are called 'symbols' in Lisp. Making extremely expressive domain-level constructs reduces the code size a lot in larger Lisp programs.

> Lisp is not a particularly declarative type of language

Just the opposite: Lisp is one of the major tools for writing declarative code. The declarations are part of the running system and can be queried and changed while the program is running.

> Lisp ... all win contests for fewest characters,

Most Lisps since the late 70s don't care about character count. Lisp usually uses very descriptive symbols as operator names. Lisp users invented machines with larger memory sizes in the late 70s to get rid of limitations in code and data size.

Paul Graham favors low-character-count code, but this is not representative of general Lisp code, which favors low-symbol-count code through expressive constructs.

> understand this domain specific language without executing the code

The languages will be documented, and their constructs will be operators in the language. Other programming languages also create large vocabularies, but in different ways. Lisp makes it easy to integrate syntactic abstractions in the software, without the need to write external languages, which many other systems need to do.

For example, one can see the Common Lisp Object System as a domain-specific extension for writing object-oriented code in Lisp. Its constructs are well documented and widely used in Lisp.

> it is firmly rooted in the terminal/console era...

Use of a Lisp Machine in the 80s:

https://youtu.be/gV5obrYaogU

Interactive graphical systems were used in Lisp as early as the 70s. There is a long tradition of graphical systems written in Lisp.

For example PTC sells a 3d design system written with a C++ kernel and a few million lines of Lisp code:

https://www.ptc.com/-/media/Files/PDFs/CAD/Creo/creo-element...


Also, don't brains pursue an analogous, maximally compressed encoding of information via the use of abstractions/ideals and heuristics of probably many, many kinds? (Of course, lossy compression would evolve over "lossless" compression, which would grant minimal gains over lossy while being much more sophisticated and energy-expensive.)

I wonder if psychology in this topic could inform programming language design as to a “natural” step-size of abstraction, or an optimal parse-tree measure which would correspond to an equivalent mental model.


Well, for high-level languages, like you say. I guess you could say it holds for two languages of the same "abstraction level class", as terseness would be valued between two low-level languages as well - from the human's POV, though! Since this is all related to easier conceptual expression.


I think you're unable to answer those questions because the language doesn't seem to aim to answer them. Bel seems to be an experiment, and maybe once the "specification" phase is done, some of those questions could be answered.

Here are the relevant parts from the language document:

> Bel is an attempt to answer the question: what happens if, instead of switching from the formal to the implementation phase as soon as possible, you try to delay that switch for as long as possible? If you keep using the axiomatic approach till you have something close to a complete programming language, what axioms do you need, and what does the resulting language look like?

> I want to be clear about what Bel is and isn't. Although it has a lot more features than McCarthy's 1960 Lisp, it's still only the product of the formal phase. This is not a language you can use to program computers, just as the Lisp in the 1960 paper wasn't. Mainly because, like McCarthy's Lisp, it is not at all concerned with efficiency. When I define append in Bel, I'm saying what append means, not trying to provide an efficient implementation of it.


These questions aren't useful for evaluating pg's work (or frankly that of most PL implementors) because it concerns things, like syntax, libraries, and user culture, that are outside the rather narrow domain Atanassow cares about.

He believes languages are utterly defined by their type systems, saying in the LtU comment you linked: "Perl, Python, Ruby, PHP, Tcl and Lisp are all the same language". I'd assume that he would say the same about js, lua, etc.. AFAICT, he's quite knowledgeable about PLT, formal methods, static typing, category theory, etc., but he disregards everything else.

It's worth considering whether he is right to do so. A waspish answer is to observe that companies built on languages he deems to be "in the corner" (c, java, dynamic languages) constitute the entirety of companies with significant market capitalization, and ask (apologies to Aaron Sorkin) "If your type system is so smart, why do you lose so always". A better answer is to note that a bunch of stuff that he deems trivial (generally anything outside of the type system, but specifically libraries, syntax, bindings to existing systems, etc.) matters, so javascript is different than python despite being identical in his eyes. Specifically, js runs in the browser and has a pretty good runtime for building web services, while python has exceptional ecosystem support for data science and machine learning.


If it's a language for general programming, I immediately ask if it has sum/product types, pattern matching, and static typing.

I'm not really interested in learning yet another dynlang that is missing all those.


What are sum/product types?



In terms of C, a sum is a union and a product is a struct.
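A small Python sketch of the distinction, with the caveat that a sum type is really a *tagged* union (C's union leaves tracking the tag to you). The class names here are just illustrative:

```python
from dataclasses import dataclass
from typing import Union

# Product type: a value holds BOTH fields at once (like a C struct).
@dataclass
class Point:
    x: float
    y: float

# Sum type: a value is EITHER one variant OR the other
# (a tagged union; the "tag" is the variant's class).
@dataclass
class Circle:
    radius: float

@dataclass
class Rect:
    w: float
    h: float

Shape = Union[Circle, Rect]

def area(s: Shape) -> float:
    # Pattern matching over the variants of the sum type.
    if isinstance(s, Circle):
        return 3.141592653589793 * s.radius ** 2
    return s.w * s.h
```

Static checkers can then warn when a match over the variants is non-exhaustive, which is most of the practical payoff.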


Can this language render block quotes correctly on mobile in HN? Then I think it's a real win.


> How can I show that my solution cannot be expressed in some other language?

This is not easy. The solution is certainly computable in other languages, and expressivity is subjective.

We can quantify it like this: a solution is highly expressive if it is given in terms of elements mostly from the problem domain.

For instance, if we have to "malloc" some memory and bind it to a "pointer", but the problem domain is finance, those things don't have anything to do with the problem domain and are therefore inexpressive.

Even if we quantify expressivity, the proposition that "this has better expressivity for a given problem domain than all other tools, known and unknown" is mired in intractability.


Maybe Bel is Blub?


That's fair, but I love PG so I'm biased lol


Reproduced from feedback that I gave pg on an earlier draft (omitting things he seems to have addressed):

When you say,

> But I also believe it will be possible to write efficient implementations based on Bel, by adding restrictions.

I'm having trouble picturing what such restrictions would look like. The difficulty here is that, although you speak of axioms, this is not really an axiomatic specification; it's an operational one, and you've provided primitives that permit a great deal of introspection into that operation. For example, you've defined closures as lists with a particular form, and from your definition of the basic operations on lists it follows that the programmer can introspect into them as such, even at runtime. You can't provide any implementation of closures more efficient than the one you've given without violating your spec, because doing so would change the result of calling car and cdr on closure objects. To change this would not be a mere matter of "adding restrictions"; it would be taking a sledgehammer to a substantial piece of your edifice and replacing it with something new. If closures were their own kind of object and had their own functions for introspection, then a restriction could be that those functions are unavailable at runtime and can only be used from macros. But there's no sane way to restrict cdr.

A true axiomatic specification would deliberately leave such internals undefined. Closures aren't necessarily lists, they're just values that can be applied to other values and behave the same as any other closure that's equivalent up to alpha, beta, and eta conversion. Natural numbers aren't necessarily lists, they're just values that obey the Peano axioms. The axioms are silent on what happens if you try to take the cdr of one, so that's left to the implementation to pick something that can be implemented efficiently.

Another benefit of specifying things in this style is that you get much greater concision than any executable specification can possibly give you, without any loss of rigor. Suppose you want to include matrix operations in your standard library. Instead of having to put an implementation of matrix inversion into your spec, you could just write that for all x,

    (or
     (not (is-square-matrix x))
     (singular x)
     (= (* x (inv x))
        (id-matrix (dim x))))
Which, presuming you've already specified the constituent functions, is every bit as rigorous as giving an implementation. And although you can't automate turning this into something executable (you can straightforwardly specify a halting oracle this way), you can automate turning it into an executable fuzz test that generates a bunch of random matrices and ensures that the specification holds.
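To make the fuzz-test idea concrete, here's a rough Python sketch, restricted to 2x2 matrices and using exact rationals so equality is meaningful. `inv2` and `mul2` are my stand-ins for the spec's `inv` and `*`, not anything real:

```python
import random
from fractions import Fraction

def inv2(m):
    """Exact inverse of a 2x2 matrix via the adjugate, or None if singular."""
    (a, b), (c, d) = m
    det = a * d - b * c
    if det == 0:
        return None  # singular: the spec's (or ...) clause lets us skip it
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def mul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

IDENTITY = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]

# Fuzz the property: for every non-singular square matrix x, x * inv(x) = I.
random.seed(0)
for _ in range(1000):
    m = [[Fraction(random.randint(-9, 9)) for _ in range(2)] for _ in range(2)]
    mi = inv2(m)
    if mi is not None:
        assert mul2(m, mi) == IDENTITY
```

The spec stays declarative; only the test harness is executable, and it never has to say how inversion is done.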

If you do stick with an operational spec, it would help to actually give a formal small-step semantics, because without a running implementation to try, some of the prose concerning the primitives and special forms leaves your intent unclear. I'm specifically puzzling over the `where` form, because you haven't explained what you mean by what pair a value comes from or why that pair or its location within it should be unique. What should

   (where '#1(#1 . #1))
evaluate to? Without understanding this I don't really understand the macro system.


This is similar to the feedback Dave Moon gave to PG's previous language, Arc, more than a decade ago. http://www.archub.org/arcsug.txt

Representing code as linked lists of conses and symbols does not lead to the fastest compilation speed. More generally, why should the language specification dictate the internal representation to be used by the compiler? That's just crazy! When S-expressions were invented in the 1950s the idea of separating interface from implementation was not yet understood. The representation used by macros (and by anyone else who wants to bypass the surface syntax) should be defined as just an interface, and the implementation underlying it should be up to the compiler. The interface includes constructing expressions, extracting parts of expressions, and testing expressions against patterns. The challenge is to keep the interface as simple as the interface of S-expressions; I think that is doable, for example you could have backquote that looks exactly as in Common Lisp, but returns an <expression> rather than a <cons>. Once the interface is separated from the implementation, the interface and implementation both become extensible, which solves the problem of adding annotations.

This paragraph contributed a lot to my understanding of what "separating interface from implementation" means. Basically your comment is spot on. Instead of an executable spec, there should be a spec that defines as much as users need, and leaves undefined as much as implementors need.


In Clojure, some data structures are "seqable", which means that they are not implemented as sequences, but they can be converted to sequences if needed. Functions for head and tail are also defined for these structures, for example by internally converting them to sequences first. That means that any function that works with sequences can work with these structures, too.

This seems to me like the proper way to have "separation of interface from implementation" and "everything is a list" at the same time. Yeah, everything is an (interface) list, but not necessarily an (implementation) list.


Python and Rust both have similar "duck"y features where you can Impl iteration or add the necessary __special__ methods to get iteration features.
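For instance, in Python any object whose `__iter__` yields elements works with everything that consumes sequences, without being a list. A toy example (the class name is mine):

```python
class Countdown:
    """Not a list, but 'seqable': implementing __iter__ is enough for
    any sequence-consuming code (for-loops, list(), sum(), ...)."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        k = self.n
        while k > 0:
            yield k
            k -= 1

print(list(Countdown(3)))  # [3, 2, 1]
print(sum(Countdown(3)))   # 6
```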


An object type can implement interface types, as you say.

Object type, and identity, can also be decoupled from implementation type. Dynamically, statically, or with advice. V8 javascript numbers and arrays can have several different underlying implementations, which get swapped depending on how the thing is used (and with extensions, you can even introspect and dispatch on them).

Typing can also be fine grained. So things can narrowly describe what they provide, and algorithms what they require. "I require a readable sequence, with an element type that has a partial order, and with a cursor supporting depth-one backtracking".

Object identity can also be decoupled from type. Ruby's 'refinements' permit lexically scoped alternative dispatch, so an object can locally look like something else. That, and just a few other bits, might have made the Python 2 to 3 transition vastly less painful - 'from past import python2'.

We are regrettably far from having a single language which combines the many valuable things we've had experience with.


Re: operational semantics, you can in fact "cheat" and lots of languages do. It's just difficult. Luajit and other tracing JITs optimize away implementation details or use specialized datastructures in the cases where no code depends on introspection behind the curtain, with trace poisoning and falling back to "proper" evaluation in cases where it's needed.


Right - a Lisp list is conceptually a linked-list. How the compiler chooses to implement the list is up to it - it can use an array as long as it can meet the specification and give useful performance.


> I'm having trouble picturing what such restrictions would look like. [...] there's no sane way to restrict cdr.

I guess you could question its sanity, but the obvious way would be to say that any object (cons cell) produced by make-closure or (successfully) passed to apply is/becomes a "closure object", and car/cdr will either fail at runtime or deoptimise it into its 'official' representation.


> 5. (where x)

> Evaluates x. If its value comes from a pair, returns a list of that pair and either a or d depending on whether the value is stored in the car or cdr. Signals an error if the value of x doesn't come from a pair.

> For example, if x is (a b c),

    > (where (cdr x))
    ((a b c) d)

That is one zany form.

1. How is this implemented?

2. What is the use of this?

3. What does (where x) do if x is both the car of one pair and the cdr of another, eg. let a be 'foo, define x to be (join a 'bar), let y be (join 'baz a), and run (where a).


It's used to implement a generalization of assignment. If you have a special form that can tell you where something is stored, you can make macros to set it. E.g.

  > (set x '(a b c))
  (a b c)
  > (set (2 x) 'z)
  z
  > x
  (a z c)
which you can do in Common Lisp, and

  > (set ((if (coin) 1 3) x) 'y)
  y
  > x
  (y z c)
which you can't.
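Here's a rough Python model of the mechanism (all names mine, with pairs simplified to mutable two-field cells): a where-style lookup returns the pair plus the slot the value sits in, and the setter writes through that.

```python
class Pair:
    """A mutable cons cell: a value lives either in the car or the cdr."""
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def from_list(xs):
    p = None
    for x in reversed(xs):
        p = Pair(x, p)
    return p

def to_list(p):
    out = []
    while p is not None:
        out.append(p.car)
        p = p.cdr
    return out

def where_nth(n, p):
    """Like (where (n x)): return the pair and slot holding element n."""
    for _ in range(n - 1):
        p = p.cdr
    return (p, 'car')        # the nth element is the car of this pair

def set_place(place, value):
    pair, slot = place
    setattr(pair, slot, value)

x = from_list(['a', 'b', 'c'])
set_place(where_nth(2, x), 'z')   # like (set (2 x) 'z)
print(to_list(x))                 # ['a', 'z', 'c']
```

Because the index argument is just a value, a computed index like the (if (coin) 1 3) example falls out for free; the only thing the language has to supply is the "pair plus slot" return value.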


I am deliriously happy to see you here commenting about lisp. I grew up with your writing and your work and... I don’t know, I just wanted to express excitement.

Thanks for the new dialect!


Thanks for sharing, Paul.

Still knee-deep in the source. Gonna steal some of this.

What made you settle on (coin), is that a LISP trope? I flopped back and forth between naming it (coin) and (flip) in my own LISP before finally settling on (flip). I'd honestly like to divorce the name entirely from its physical counterpart.


I originally did call it flip, but then I took that name for the current flip:

  (def flip (f)
    (fn args (apply f (rev args))))
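That flip translates to Python almost verbatim (just a sketch for readers who don't speak Bel):

```python
def flip(f):
    """Return a function that applies f to its arguments reversed."""
    return lambda *args: f(*reversed(args))

sub = lambda a, b: a - b
print(flip(sub)(2, 10))  # 8, i.e. 10 - 2
```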


> I'd honestly like to divorce the name entirely from its physical counterpart.

How about (bit) or (randbit)


(randbit) isn't bad. My implementation also takes a float 0 <= x <= 1 to determine the probability of the outcome, so (bit) would probably be too ambiguous. I do like the brevity of a 4-letter function, though. A lot of my lisp coding is genetic and probabilistic so it gets used a lot.


dice?


Would the result of (where (cadr x)) still be ((a b c) d)? Is where basically tracking the most recent traversal?


(where (cadr x)) would be ((b c) a).

where tells you what to set, and setting the cadr of (a b c) means setting the car of (b c).


Ah, now it's crystal clear.


A couple of things on my checklist for mathematical purity are (a) first-class macros and (b) no hardcoded builtins. It looks like Bel does have first-class macros. As for (b)...

> Some atoms evaluate to themselves. All characters and streams do, along with the symbols nil, t, o, and apply. All other symbols are variable names

The definitions of "ev" and "literal" establish that nil, t, o, and apply are in fact hardcoded and unchangeable. Did you consider having them be variables too, which just happen to be self-bound (or bound to distinctive objects)? nil is a bit of a special case because it's also the end of a list, and "(let nil 3 5)" implicitly ends with " . nil"; o might be an issue too (said Tom arguably); but apply and t seem like they could be plain variables.

P.S. It looks like you did in fact implement a full numerical tower—complex numbers, made of two signed rational numbers, each made of a sign and a nonnegative rational number, each made of two nonnegative integers, each of which is a list of zero or more t's. Nicely done.


Welcome back PG!

HN: How do you get an intuitive understanding of computation itself? While Turing Machines kind of make sense in the context of algorithms, can I really intuitively understand how lambda calculus is equivalent to Turing machines? Or how lambda calculus can express algorithms? What resources helped you understand these concepts?

I'm currently following http://index-of.co.uk/Theory-of-Computation/Charles_Petzold-... and a bunch of other resources in the hope I'll "get" them eventually.


I can share my experience, because I was asking myself the same question 6 years ago...

My approach was to try and build a Lisp -> Brainfuck compiler. My reasoning was: Brainfuck is pretty close to a Turing machine, so if I can see how code that I understand gets translated to movement on a tape, I'll understand the fundamentals of computation.

It became an obsession of mine for 2 years, and I managed to develop a stack based virtual machine, which executed the stack instructions on a Brainfuck interpreter. It was implemented in Python. You could do basic calculations with positive numbers, define variables, arrays, work with pointers...

On one hand, it was very satisfying to see familiar code get translated to a large string of pluses and minuses; on the other, even though I built that contraption, I still didn't feel like I "got" computation in the fundamental sense. But it was a very fun project, a deep dive in computing!

My conclusion was that even though you can understand each individual layer (eventually), for a sufficiently large program, it's impossible to intuitively understand everything about it, even if you built the machine that executes that program. Your mind gets stuck in the abstractions. :)

So... good luck! I'm very interested to hear more about your past and future experiences of exploring this topic.


(Binary) Lambda Calculus can interpret brainfuck in under 104 bytes:

http://tromp.github.io/cl/Binary_lambda_calculus.html#Brainf...


Are you aware that GNU Guile, which is self hosted (written mainly in scheme), can interpret brainfuck?

https://www.gnu.org/software/guile/manual/guile.html#Support...


I wasn't aware, no. However, interpreting Brainfuck code is the easy part, as I've learned. The hard part was creating a "runtime" that understands memory locations, a.k.a. variables.

See this question that I asked (and later answered myself) around that time: https://softwareengineering.stackexchange.com/questions/2847...

Most of the project was figuring out things similar to that. You get to appreciate how high-level machine code on our processors really is! When things like "set the stack pointer to 1234" are one instruction, instead of 10k.


Quite intrigued with your approach; will look into trying something similar for visualising an embedded system.


I wrote a book to help answer these questions: https://computationbook.com/. It won’t necessarily be right for you (e.g. the code examples are in Ruby) but the sample chapter may help you to decide that for yourself.


I had a small homework assignment on the equivalence of lambda-calculus and Turing machines when I was an undergrad (during my third year of uni iirc). It's in French but you may be able to follow by looking at the "code" parts. In it I use pure lambda-calculus to simulate a universal Turing machine. It's probably not done in the best or most canonical way, but it's what I got while discovering lambda-calculus at the same time, so it might be good as a starting point for where you're at now. You can find the paper here: https://pablo.rauzy.name/files/lambdacalcul.pdf. Hope it helps!


I really like how type checking is implemented for parameter lists. I think there's a more generalized extension of this.

Specifically, I think that there exists a lisp with a set of axioms that split program execution into "compile-time" execution (facts known about the program that are invariant to input) and a second "runtime" execution pass (facts that depend on dynamic input).

For example, multiplying a 2d array that's defined to be MxN by an array that's defined to be NxO should yield a type that's known to be MxO (even if the values of the array are not yet known). Or if the first parameter is known to be an upper-triangular matrix, then we can optimize the multiplication operation by culling the multiplication AST at "compile-time". This compile-time optimized AST could then be lowered to machine code and executed by inputting "runtime" known facts.

I think that this is what's needed to create the most optimally efficient "compiled" language. Type systems in e.g. Haskell and Rust help with optimization when spitting out machine code, but they're often incomplete (e.g., we know more at compile time than what's often captured in the type system).

I've put "compilation" in quotes, because compilation here just means program execution with run-time invariant values in order to build an AST that can then be executed with run-time dependent values. Is anyone aware of a language that takes this approach?


https://github.com/idris-lang/Idris-dev/blob/5965fb16210b184...

Idris is not a Lisp and I've never used it, but it has dependent types (types incorporating values) and encodes matrix dimensions into the type system (I think only matrix multiplications which can be proven to have matching dimensions can compile). I think the dimension parameters are erased, and generic at runtime (whereas C++ template int parameters are hard-coded at compile time). IDK if it uses dependent types for optimization.


I'm not sure, but this seems a bit like how Julia specializes functions based on the types of their arguments? Or maybe it's the inverse, as Julia creates specialized functions for you (e.g. add can take numbers, but will be specialized for both int32 and int64 and execute via appropriate machine instructions).

In fact, I think Julia is a great example of taking some good parts of scheme and building a more conventional (in terms of syntax anyway) language on top.


Yes, Julia has some of it. But you're still required to specify the template parameters of a type (unless I'm mistaken). Whereas what I'm talking about is that any value of a data type could be compile time known. For example, some or all of the dimensions of an nd-array, as well some or all values of said nd-array.


Julia has explicit parameterization, but will also interprocedurally propagate known field values at compile time if known (which happens a lot more because our compile time is later), even if they weren't explicitly parameterized. Since this is so useful (e.g. as you say for dimensions of nd arrays - particularly in machine learning), there's been some talk of adding explicit mechanisms to control the implicit specialization also.


Ah, that's good to know. This sounds exactly like what I'm looking for. Thanks will read up on this in the docs!


"I think that there exists a lisp with a set of axioms that split program execution into "compile-time" execution"

Common lisp macros? Pre-hygienic macros in scheme? Did I get you wrong?

The question is what is missing to implement _static_ type checking as macros and to allow the compiler to leverage the generated information.


I'm not sure, but I think it's different. Specifically, I think you would do macro evaluation first, then fully evaluate the resulting program on run-time independent values, and only then evaluate the resulting program on run-time dependent values.

Edit: Also, run-time independent evaluation would need to handle branching differently. For example, in this expression: (if a b c). If `a` is not known at "compile time" then this expression remains in the AST, and run-time independent value propagation continues into `b` and `c`. If `a` is known at "compile time" then only `b` or `c` remain in the AST depending on whether `a` is true or false.
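A toy Python sketch of that branching rule, over a minimal AST (the `UNKNOWN` sentinel and the tuple encoding are my own illustrative choices, not a real system):

```python
UNKNOWN = object()  # sentinel: value not known until run time

def peval(expr, env):
    """Partially evaluate a tiny AST.  An expression is a literal,
    a variable name (str), or an ('if', test, then, else) tuple.
    Known subexpressions fold away; unknown ones stay in the residual AST."""
    if isinstance(expr, str):                      # variable reference
        v = env.get(expr, UNKNOWN)
        return expr if v is UNKNOWN else v         # unknown vars stay symbolic
    if isinstance(expr, tuple) and expr[0] == 'if':
        _, test, then, els = expr
        t = peval(test, env)
        if isinstance(t, (str, tuple)):
            # test unknown: keep the if, but propagate into both branches
            return ('if', t, peval(then, env), peval(els, env))
        # test known at "compile time": only one branch survives
        return peval(then if t else els, env)
    return expr                                    # literal: already a value

# a known: the branch is chosen at "compile time"
print(peval(('if', 'a', 1, 2), {'a': True}))                   # 1
# a unknown: the if remains, but both branches are simplified
print(peval(('if', 'a', ('if', 'b', 1, 2), 3), {'b': False}))  # ('if', 'a', 2, 3)
```

The second pass would then evaluate the residual AST with the run-time-dependent bindings filled in.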


A short direct intro, a link to the language guide, a link to code examples: these are the things I look for when reading about a new language release here or anywhere, and this one just wins on first impression. It's kinda like the early days of quality usenet posts, direct and to the point, and I love that nostalgia.


>Bel is an attempt to answer the question: what happens if, instead of switching from the formal to the implementation phase as soon as possible, you try to delay that switch for as long as possible? If you keep using the axiomatic approach till you have something close to a complete programming language, what axioms do you need, and what does the resulting language look like?

I really like this approach and have wondered in recent years what a programming language designed with this approach would look like. There are a few that come close, probably including Haskell and some of the more obscure functional languages and theorem prover languages. Will be really interesting to see a Lisp developed under this objective.

>But I also believe that it will be possible to write efficient implementations based on Bel, by adding restrictions. If you want a language with expressive power, clarity, and efficiency, it may work better to start with expressive power and clarity, and then add restrictions, than to approach from another direction.

I also think this notion of restrictions, or constraint-driven development (CDD), is an important concept. PG outlines two types of restrictions above. The first is simply choosing power and clarity over efficiency in the formative stages of the language and all the tradeoffs that go with that. The second is adding additional restrictions later once it's more clear how the language should be structured and should function, and then restricting some of that functionality in order to achieve efficiency.

Reminds of the essay "Out of the Tarpit" [1] and controlling complexity in software systems. I believe a constraints-based approach at the language level is one of the most effective ways of managing software complexity.

[1]:https://github.com/papers-we-love/papers-we-love/blob/master...


    Bel has four fundamental data types:
    symbols, pairs, characters, and streams.
No numbers?

Then a bit further down it says:

    (+ 1 2) returns 3
Now suddenly there are numbers.

What am I missing?


Numbers are represented using pairs. Specifically

  (lit num (sign n d) (sign n d))
where the first (sign n d) is the real component and the second the imaginary component. A sign is either + or -, and n and d are unary integers (i.e. lists of t) representing a numerator and denominator.
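A rough Python decoding of that representation, using strings and lists to stand in for Bel symbols and unary integers (this is my own modeling of the description above, not anything from the Bel source):

```python
from fractions import Fraction

def unary(ts):
    """A Bel unary integer is a list of t's; its value is the length."""
    return len(ts)

def rational(part):
    """Decode (sign n d) into a Fraction."""
    sign, n, d = part
    f = Fraction(unary(n), unary(d))
    return -f if sign == '-' else f

def bel_number(lit):
    """Decode (lit num (sign n d) (sign n d)) -> (real, imag) Fractions."""
    _lit, _num, real, imag = lit
    return rational(real), rational(imag)

# 3/2 + 0i, i.e. (lit num (+ (t t t) (t t)) (+ () (t)))
three_halves = ('lit', 'num',
                ('+', ['t', 't', 't'], ['t', 't']),
                ('+', [], ['t']))
print(bel_number(three_halves))  # (Fraction(3, 2), Fraction(0, 1))
```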


Is an implementation compliant if it ignores this definition for the sake of performance?

One nit with this definition is that it implies the car of a number would be a well-defined operation. For complex numbers, it would be natural for car to return the real component.

I admit, it was surprising you are defining numbers at all. It’s tricky to pin down a good definition that isn’t limiting (either formally or for implementations).

I once got most of Arc running in JavaScript, almost identical to your original 3.1 code, and FWIW it was very fast. Even car and cdr, which were implemented in terms of plain JavaScript object literals, didn’t slow down the algorithms much.

But I suspect that requiring that (car x) always be valid for all numbers might be much more tricky, in terms of performance.

I apologize if you have already explained that implementation isn’t a concern at all. I was just wondering if you had any thoughts for someone who is anxious to actually implement it.

EDIT: Is pi written as 31415926 over 10000000?


I don't expect implementations to be compliant. Starting with an initial phase where you care just about the concepts and not at all about efficient implementation almost guarantees you're going to have to discard some things from that phase when you make a version to run on the computers of your time. But I think it's still a good exercise to start out by asking "what would I do if I didn't count the cost?" before switching to counting the cost, instead of doing everything in one phase and having your thinking constrained by worries about efficiency.

So cleverness in implementation won't translate into compliance, but rather into inventing declarations that programmers can use that will make their programs dramatically faster. E.g. if programmers are willing to declare that they're not going to look inside or modify literals, you don't have to actually represent functions as lists. And maybe into making really good programming tools.

As I said elsewhere, half jokingly but also seriously, this language is going to give implementors lots of opportunities for discovering new optimization techniques.


Something like 5419351 / 1725033 requires fewer digits (or bits) and gives a much better approximation, off by e-14 instead of e-8.

http://mathworld.wolfram.com/PiContinuedFraction.html


Doesn't this allow four distinct representations of zero?


An infinite number, in theory. Even (sign n d) does, since you can have anything in d.


It’s best not to get hung up on formality. Lisp (and especially arc) are powerful due to what they can do, not due to what they define. Racket might disagree with that though.

From a prototyping perspective it would be a waste of time and overly constraining to define what numbers are or what you can do with them. That’s best left to the implementation.

I can hear everyone groaning in unison with that, but trust me. Nowadays every implementation will choose some sensible default for numbers. If you transpile bel to JS, you get JS’s defaults. Ditto for Lua. But crucially, algorithms written for one generally work for the other.


But Bel obviously has a numeric type if arithmetic works...


Sure, but all languages do.

In any case, I was mistaken. Numbers are defined.


Numbers are defined further down, in terms of those primitives. Search for the phrase "Now we come to the code that implements numbers."


I've only read the first twenty pages approximately. I'd like to see a little more disentangling of characters and strings as inputs to automata from text as something people read. The narrow technical meaning is implied in the text, but the use of "string" as a synonym for "text" is common enough that it might be worth being a little more explicit or didactic or pedantic or whatever.

The second thought is that lists smell like a type of stream. Successive calls to `next` on `rest` don't necessarily discern between lists and streams. The difference seems to be a compile time assertion that a list is a finite stream. Or in other words, a sufficiently long list is indistinguishable from an infinite stream (or generator).

I'm not sure you can have a lisp without lists, but they seem more like objects of type stream that are particularly useful when representing computer programs than a fundamental type. Whether there are really two fundamental types of sequences, depends on how platonic really really is.

All with the caveat, that I'm not smart enough to know if the halting problem makes a terminating type fundamental.
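The Python version of that observation: a consumer that only ever calls next can't distinguish a finite list from an infinite stream (a sketch; `take` is my own helper):

```python
from itertools import count, islice

def take(n, seq):
    """Consume a sequence via its iterator only; never asks for a length,
    so it works on lists and infinite streams alike."""
    return list(islice(iter(seq), n))

print(take(3, [1, 2, 3, 4, 5]))  # finite list:      [1, 2, 3]
print(take(3, count(1)))         # infinite stream:  [1, 2, 3]
```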


It's strange that the description of the language [0] starts using numbers without introducing them, and only much later in the file says that they are implemented as literals. I haven't seen the source code yet (I'm on mobile and have to randomly tap the text of the linked article to find links…), but I don't understand the point of this, nor whether it's just a semantic choice or actually implemented that way (I don't see how).

[0] https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=157...


I like the (English) syntax of the article. It seems to have the same concise feeling as the language itself. I like to think that's why pg chooses falsity over falsehood.

It's a little strange to have (id 'a) return nil. Also having car and cdr as historical holdouts, when most of the naming seems to aim at an ahistorical style.

Not very deep remarks, since I would need more time to digest.


Genuine questions that probably most here want to put but seem to be afraid of:

Why Bel? What are the problems that Bel wants to solve? Is this a hobby project or something more serious?


https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=157... explains the purpose of the project. It's an attempt to axiomatize a complete programming language the way that McCarthy's Lisp did for computability.

Sounds serious to me. Why can't it be both?


I can't answer the main question, but my guess from pg's other work is that there is no dichotomy between hobby and serious. Frivolous seeming starting points can be powerful in unexplored ways. The www, most everything in it, and the stuff it's built out of are all rich with examples.


This seems relevant here, in his own words:

> Don't be discouraged if what you produce initially is something other people dismiss as a toy. In fact, that's a good sign. That's probably why everyone else has been overlooking the idea. The first microcomputers were dismissed as toys. And the first planes, and the first cars. At this point, when someone comes to us with something that users like but that we could envision forum trolls dismissing as a toy, it makes us especially likely to invest.

---

Related: https://blog.ycombinator.com/why-toys/


answer is in the guide and is quite interesting.

so, lisp development came in two phases: formal phase -- that's where original 1960 paper described lisp from simple axioms and built on them -- and implementation phase -- formal step is of no use for us because it didn't have numbers, error handling, i/o, etc.

argument here is that the formal phase might be the most important phase, but that the second phase usually takes longer and is more practical. so what if one delays the second phase for as long as possible? what could be discovered and be useful for the implementation phase? that's the raison d'etre of bel i think.

i very much like it :)


The kind of questions that don't make sense in a forum where 'Hacker' is in the title.


I believe that when somebody makes a "Show HN" post, this person wants feedback. You can't give it without understanding the project.


Critical questions have a tendency to be loaded, one way or another. Yours is (unintentionally, I assume) loaded with the assumption that projects are either hobby/toy projects or "serious" ones. The "ethos" OP alludes to is that this is a severely limiting fallacy and a criticism you should actively ignore.


Welcome back to Hacker News


It's been a long time!


"Its you... Its been a long time. How have you been?"

- GLaDOS (Portal 2)


I think at the end of the day, the question is whether a compiler for this type of language could efficiently handle a function like distinct-sorted:

   > (distinct-sorted '(foo bar foo baz))
   (bar baz foo)
This is a function that usually requires efficient hash tables and arrays to be performant, a hash table for detecting the duplicates, an array for efficient sorting. However, both the hash map and array could theoretically be "optimized away", since they are not exposed as part of the output.

A language like Bel that does not have native hash maps or arrays and instead uses association lists would have to rely entirely on the compiler to find and perform these optimizations to be considered a usable tool.
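For concreteness, a Python sketch of both strategies (function names are mine): the hash-and-array version, and a pure-list version a naive implementation would effectively be stuck with. In both, the intermediate structures are invisible in the output, which is exactly what a compiler would have to notice to rewrite one into the other.

```python
def distinct_sorted_fast(xs):
    """Hash table (set) for dedup + array sort: O(n log n)."""
    return sorted(set(xs))

def distinct_sorted_lists(xs):
    """Pure-list version in the spirit of association lists:
    linear-scan membership tests and list-based insertion sort, O(n^2)."""
    seen = []
    for x in xs:
        if x not in seen:          # linear scan, no hashing
            seen.append(x)
    out = []
    for x in seen:                 # insertion sort using only lists
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

print(distinct_sorted_fast(['foo', 'bar', 'foo', 'baz']))   # ['bar', 'baz', 'foo']
print(distinct_sorted_lists(['foo', 'bar', 'foo', 'baz']))  # ['bar', 'baz', 'foo']
```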


Interesting example of a unit test of sorts for language usability. Got any others?


I'm no language expert, but the three things I can think of that make bel impractical without major compiler trickery are (1) lack of primitive hash tables, (2) lack of primitive arrays, and (3) no support for tail call optimization (though that third one is probably fixable with the right compiler tricks).

The other concern I have is the lack of a literal associative data structure syntax (like curly braces in clojure). It seems that would negatively impact pg's goal of "code simplicity" quite a bit.


> (1) lack of primitive hash tables (2) lack of primitive arrays

I'll note that if there are primitive arrays and the compiler optimizes arithmetic, the rest of the hash table can be implemented in Bel.

Also, maybe a Sufficiently Smart Compiler could prove that a list's cdrs will never change, store it in cdr-coded form, and treat it like an array (with the ability to zoom right to, say, element 74087 without chasing a bunch of pointers).
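A minimal sketch of that first point, in Python standing in for a Lisp with primitive arrays: an open-addressing hash table built from nothing but a fixed-size array and integer arithmetic. It omits resizing and deletion, assumes the table never fills, and the class name is invented:

```python
class ArrayTable:
    def __init__(self, capacity=16):
        # The one primitive this relies on: a fixed-size array.
        self.slots = [None] * capacity
        self.capacity = capacity

    def _probe(self, key):
        # Linear probing: step forward until we find the key or an
        # empty slot. Assumes the table is never completely full.
        i = hash(key) % self.capacity
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % self.capacity
        return i

    def put(self, key, val):
        self.slots[self._probe(key)] = (key, val)

    def get(self, key, default=None):
        entry = self.slots[self._probe(key)]
        return entry[1] if entry else default
```

Everything beyond the array itself is plain arithmetic and comparisons, which is the parent's point: with fast primitive arrays, the rest of a hash table can live in the library.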


Is there a version of Bel written in some other language? If not, how do you get started without a bootstrap?


As mentioned in the user guide, just like early Lisp, Bel is still in its formal phase and not usable as a programming language yet. The idea is to extend the formal phase for a longer time before diving into the implementation.


He uses an unreleased interpreter he wrote for it in arc.


It amuses and pleases me to see that pg will continue to play around with lisp presumably forever, regardless of how wealthy he becomes.

I hope I'll never stop coding passion projects myself.

John Carmack talked about this on Joe Rogan's show recently, about how he still codes and how Elon Musk would like to do more engineering but hasn't much time.

I wonder if Bill Gates ever codes anything anymore. I emailed to ask him once but never got a reply.

Tim Sweeney is a billionaire and still knee deep in code.

It's Saturday night here, and I'm going to go write some code. Unproductive, unprofitable, beautiful game engine code.

Hope all you other hackers get up to something interesting this weekend.


My father, who is about to turn 80, is still knee deep in code every day -- having retired (many years ago) after 45 years in the industry. The guy remembers paper tape, and is still going strong.

And often, he's way into whatever the latest language is. He builds games, software for organizing pills and bills, and has designed and rebuilt his own editor several times.

Honestly, the guy's my hero.


Does your dad have a website or github with his work? Would be cool to check it out.


No, honestly. He contributes little bugfix patches here and there to various projects, but he doesn't want to maintain software for anyone else (fair, I think -- he did spend an entire career doing that).

He makes (pretty cool) side-scrollers for his grandchildren, tools for himself. He sometimes complains that he's not sure what to build, but he figures it out. He also has a Mac and a Linux box (running Gentoo, I think). That keeps him busy!


This is encouraging for me to hear. I hope to still be hacking away on my projects when I reach that age (and beyond, ideally).


That’s awesome. I’m curious, what are his languages of choice?


You probably can't name one he hasn't played around with (and I'll admit -- I'm an enabler with this). He certainly has his black belt in C++, JavaScript, and Ruby, but he's crazy about Lisp and has recently gone on a tear with Nim.

He's written his own editor ("I want it the way I want it!") maybe four times in different languages, just because. He's always teasing me for being a vi user ("You're stuck in the 70s!")


> has recently gone on a tear with Nim.

That's awesome! I'd love to hear his thoughts :)


He really loved it (and I think plans to write some more stuff with it).

I'm coming up on 50, so all of my friends' parents are roughly the same age. The big crackup for me is that they're all worried about their parents falling for a phishing attack or having to update operating systems that haven't seen an update in ten years.

When I call my dad, he wants to get deep into compiler theory. Or talk about Rust. Or moon about how the world would have been so much better if Brendan Eich had just implemented Scheme. Or he'll talk about how much he loves TeX.

You know, he complains that he's not as sharp as he used to be. He's always warning me to take care of my body, because "mens sana in corpore sano!" But really, I just don't worry about him at all.

He's also one of the happiest people I know, and absolutely one of the most well read (I'm an English professor; I'm surrounded by well-read people. It's kind of amazing how many books he's read in his life).


damn, that's amazing. I don't think my hands or eyes will keep up with me at that age. Hell, the carpal tunnel is already getting my wrists. At least I'll still have metalworking!


Maybe you won't (alas)! He is also a master-level woodworker, but had to stop due to spinal stenosis.

Being a geek, he has his Apple watch set up to remind him to go walk around every hour, and that tends to keep it at bay. But he really can't work in the shop any more.

And honestly (just to brag on my dad a bit more) the way he said goodbye to all that struck me as really impressive. He loved woodworking, but he didn't weep and mourn or go into a depression when he couldn't do it anymore. He just shrugged and said, "Well, I guess it's on to my other hobbies!"

That's how I want to be.


> regardless of how wealthy he becomes

It may be the reverse that happens here: he is wealthy enough to have time to play with Lisp.


There are a LOT of things like that. Travel is a good example.

For people making a living wage, some think “I don’t have the money to travel.” Then when they have the money, they think “I don’t have the time to travel.” They don’t really travel until they retire, and then they have trouble going anywhere that isn’t friendly to their knees and faltering eyesight, so they buy an RV.

Others travel in college from hostel to hostel. When they get work, they negotiate longer vacations to travel, or they deliberately become giggers so they can travel.

I met a couple in Thailand who climbed: they worked five months a year around the clock as nurses, and climbed seven months a year, literally everywhere. They lived on the cheap so they would have more climbing days.

I have similar stories about people who ride bicycles and dive. If you are passionate, don’t wait for some magical day when you can fly the Concorde to go climbing in Europe with your friends (true story about some Yosemite dirtbags who came into illicit cash).

If you’re passionate about a thing, go do that thing. Now.


> they negotiate longer vacations to travel

Does that work? I've tried to negotiate more vacation days with every job offer I've received, and it's never worked.


Coming up on my 4-year anniversary at a company, having just returned from 2 years in a foreign office, I told my Director I'd like a 3rd week of vacation and no more money. Just treat me like I'd been working here an extra year.

He got me a $5,000 raise, no extra vacation time. Said that he did his best and he was the type of person I'd absolutely believe.

Ever since then, in my mind, a vacation day has to be worth more than $1,000. Simple economics, right? Somehow giving me $5,000 maximizes shareholder value more than giving me 5 extra days off.

My next job was more generous with vacation. They used the accrual method with a maximum, and there was an extended period where it had become difficult to take time off and they'd stopped putting our PTO balance on our paystubs... so at some point I tracked down where to find the info and realized I'd lost 10 days of vacation accrual.

I about lost my mind and HR couldn't understand why I felt like they'd cheated me out of more than $10,000.

Felt trapped in the job forever 'cause eventually I was earning, I think, 27 vacation days per year, and 10 to start is pretty typical, maybe 15 if you're lucky... And as much as we might like to think everything is negotiable, reality on the ground is that very little besides salary can be successfully negotiated.


I've negotiated with dozens of people who favor vacation over salary increase and I'm glad to give the extra time off to them. It's definitely negotiable.


Are you hiring?


A friend of mine told his boss “I’m going traveling for three months starting in January. Am I quitting or just taking a leave of absence?”. It worked.

To negotiate more vacation days at hiring, you need multiple offers. Then you can tell them “I will accept your offer if it includes six weeks of vacation. Otherwise I’m going to Facebook”. It probably means negotiating a little less hard on pay since you need to focus to get it. But that’s probably fair.


> Then you can tell them “I will accept your offer if it includes six weeks of vacation. Otherwise I’m going to Facebook”

I've used that exact line, and they didn't budge, so I went to Facebook. I guess everywhere I've applied is large tech companies. It might work better at smaller firms.


Negotiation is never a sure thing. Don’t say it unless you mean it, it can always go either way.


How much vacation time did you get at Facebook?


All US employees get 21 days. Not amazing, but it's better than the starting level for many companies.


Facebook budged, and Facebook is a big tech company.


Really? I thought they were 21 days for literally everybody. What did you get?


They won’t ever “budge” all the way up to your actual worth though, you just went from 10% of your worth to 12%.


Nothing is certain, but remember that there's a difference between negotiating "unpaid time off" and "vacation days."

A vacation day is a paid thing. Unpaid time off is not. It's all money and productivity in the end, but in many companies it's easier to negotiate unpaid time off, than to negotiate extra vacation days. Even if the money works out the same in the end.


I’m not sure these are the universal definitions you think they are. I refer to paid vacation as ‘time off’.


And then there’s PTO, which is sometimes “paid time off,” and sometimes “personal time off,” which is also paid:

https://en.wikipedia.org/wiki/Paid_time_off

Perhaps it’s best, as another comment suggests, to always include the adjectives “paid” or “unpaid.”


The modifier matters

Paid time off

Unpaid time off

Leave of absence

Quit and reapply when you get back.


First rule of negotiation is you have to be willing to walk away if you don't get what you ask for. Could be you weren't willing to say "no thanks" if they didn't budge on vacation time.


> First rule of negotiation is you have to be willing to walk away if you don't get what you ask for. Could be you weren't willing to say "no thanks" if they didn't budge on vacation time.

Second would be maybe start asking about that when the offer letter comes in. At which point they are invested in you.


Fwiw, Google allows for unpaid time off of less than 30 days. Your salary stops during that time, but benefits and so on stay on, which is nice rather than going into "Ugh, need to figure out COBRA".

Google also has up to 5 weeks paid vacation in the US, starting at 3 weeks when you join. After 3 years, you move to 4 weeks, and after 5 years you max out at 5. Other countries basically have their government mandated time off rules, but 5 weeks plus unpaid plus holidays is pretty good.

Each week of unpaid time is thus about 2% of salary. That said, I'm definitely in the minority in using unpaid time off (though, even more strange for me is how many colleagues let their vacation accrual reach the maximum and even forfeit days...).


You do have to leave occasionally to have believable leverage. I’ve found that if you announce you’re leaving and will quit if you have to they’re much more amenable.


Link for the lazy: https://www.thecannachronicles.com/the-dirtbags-of-dope-lake...

This is discussed in the Valley Uprising documentary as well (the whole doc is amazing and definitely worth your time).


I haven't checked your link, but John Long told the story in one of his many autobiographical books. He also was credited with the story idea that became the Sylvester Stallone climbing action movie "Cliffhanger," which is loosely based on the incident.


Oh yeah, I'm definitely not providing the definitive or original account, just throwing out a decent overview for anyone interested in more details.


Your link provides a number of details I do not recall from John Long’s account. Thank you.

On the other hand, John Long is a raconteur par excellence, and I recommend all of his books as entertaining reading about an important time in climbing history.

——

One thing Long wrote/claimed that is not mentioned in the linked post is that two of the dirtbags involved leveraged their haul into an ongoing drug-dealing business. Until they were found, shot dead, in their home.


We want our employees to live first, work second. So we have set it up so people can work from anywhere, and they do. A lot of employees go and work from interesting travel places for a few weeks or months.


He was playing with Lisp before he was wealthy and also while becoming wealthy. HN's software, which runs on Arc, used to run YC too. It was all one codebase. Only after pg retired did YC move the business parts of the software out of Lisp, and it got a lot bigger and more complicated in the process—as any Lisper would expect.


At first it was not just all one codebase, but all one thread. If HN was busy, all our internal software would run a little slower.


have you thought about publishing the full source from then? The HN source was dense but very educational, and it would be really interesting to see internal tools written in arc.


You wouldn't learn much more from reading it, because (to an almost comical extent in retrospect) the internal code was just like HN. In fact, not just like; it was mostly the same code.


Interesting. Am I reading this right that the larger YC organization's codebase is an outgrowth of the message-board tech? You guys post updates on investments/ideas for what startups should be doing just like news links are shared on HN, but internally, not visible to normal HN users?


That was once true, but it's all been rewritten.


Bigger and more complicated and handled many more users.


I was in the room when the decision was made, and it wasn't about more users.

That's the kind of thing people say because they assume it must be so. If you do that about unconventional things, which this software was/is, you'll simply reproduce the conventional view.


A rule devised during college summers at a state school: students had money or time, depending on whether they had a summer job. It seems to hold true for most people's existence.

Until you have enough money to make more money without adding much time...


Completely agree. While I have no idea what I'd actually do if I were this wealthy (maybe just enjoy life and give in to time-consuming addictions), I have numerous projects that are "on hold" from lack of time.


> I wonder if Bill Gates ever codes anything anymore, I emailed to him ask once but never got a reply.

I don't have any inside information, but looking at this old post of Joel Spolsky https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev... I guess that Bill Gates is still coding.


I love this. I'm definitely appropriating the phrase "MBA-type." Seems to have fallen out of fashion as MBA-types become increasingly common in tech.


My dad isn't a software engineer, but more of a combination mechanical/aerospace engineer (MA in mechanical, PhD in aerospace). His official job is tech lead right now, which he likes, but he misses "real" engineering.

Pretty much every evening and weekend, he's mucking around with something in FreeCAD and knee-deep in equations that I don't fully understand, because he genuinely loves this stuff, and doesn't want to ever become rusty.

He's 58 now, but I get the impression that he'll be doing this until his deathbed.


That's when you know you've found a passion. A lot of guys work on cars on weekends and it's not a job. My dad was into it; I always found it a pain in the ass when something went wrong with a vehicle. He didn't, it always got him going full bore with equipment, wrenches, lifts, etc. He was an accountant by trade, but cars brought him to life.


Stand-up comedian Jerry Seinfeld mentioned in an interview that he still goes to comedy clubs to perform every week. Obviously he isn't doing it for the money. It's the process of testing out material before big shows that makes him good at his act.

My takeaway from such stories is that passion for work, and keeping in touch with the path that led them to success, helps them stay relevant and successful.


BTW, comedians literally call this "working out"—it's understood that if you don't use it, even for a week, you start to lose it. So along with a need to do it, part of it is that it's more difficult to get back into comedic shape than it is to stay in shape.


It's very similar to programming, right? I guess we can generalize it to any specialized skills/expertise. Use it or you start losing it.


It's fortunate that taking only a week off programming doesn't dull your edge. That sounds nerve wracking!


I bet it does. It's just that most programmers are used to working at 25% or less of their actual capacity due to constant interruptions, so it's less apparent.


He talks about this in the Comedians in cars getting coffee Obama episode (worth a watch!). He basically says he fell in love with the work and that’s what keeps him grounded.


Seinfeld also blames his audience when they don’t like his jokes. He’s a really, really bad example of that (common) practice.


Oh?


Yup.


I always argue that software is a new form of literacy - and if you think of it like that, it is not at all unusual that someone will keep reading and writing well into retirement.

Have fun with your unproductive beautiful coding :-)


Man it’s much easier to code if you don’t do it at work.


>I wonder if Bill Gates ever codes anything anymore, I emailed to him ask once but never got a reply.

He does (at least as of 2013): https://www.reddit.com/r/IAmA/comments/18bhme/im_bill_gates_...


I'm spending the weekend making a set of custom sleeved PSU cables - I'm in the middle of a crunch at work and writing code is the last thing I want to do ... so instead I'm doing something that's monotonous, but relaxing because of it. Don't need to engage higher brain functions, just cut, crimp, cut, melt, crimp, into the connector, next.

Whilst I do extoll the virtues of working on "passion projects" with programming, sometimes you need to do something else so that you can go back recharged, and that's OK too.


If you follow John on Twitter you'll occasionally see him mention C++ quirks he ran into.


I'll bet Bill Gates can still code some mean, lean C.


Amen


How does this relate to Arc?


The code I wrote to generate and test the Bel source is written in Arc, and Bel copies some things from Arc. Otherwise they're separate.


How is it an improvement over Arc? What issues does Arc have that Bel solves/addresses?


In the same way it's an improvement over other Lisp dialects. There's no huge hole in Arc that Bel fixes. Just a lot of things that are weaker or more awkward or more complicated than they should be.


> The code I wrote to generate and test the Bel source is written in Arc

Was this during bootstrapping, or is this still the case? Or in other words, do you now edit bel.bel directly or are there arc files you edit that compile into bel.bel?


It's still the case. bel.bel is generated by software. Most of the actual code is Arc that I turn into Bel in generating it. E.g. the interpreter and anything called by it, and the reader. But code that's only used in Bel programs, rather than to interpret or read Bel programs, can be and is written in Bel.

I had to change Arc a fair amount to make this work.

Curiously enough, though, I found doing development in Bel was sufficiently better that I'd often edit code in bel.bel, then paste a translated version into the file of Arc code, rather than doing development in the latter. This seemed a good sign.


I remember reading pg's article on Lisp and startups, and at the time questioned if it was just mere luck. Then having a cursory look at Lisp, I questioned its relevance to the modern world.

... fast forward a decade later and I'm reading books on functional and logic languages for work. After the first chapter of The Little Schemer, I was at first blown away by the content, but then sad that I had put off reading it until so late in my life.

If you're reading this comment and thinking "Lisp? What's the point?", take a deep dive. If you're still questioning why, I highly encourage you to read The Little Schemer (and then all the others in the series). Scheme, Lisp, and now Bel, are a super power... pg's article was spot on.


You and I both like Rust. Concrete performance is one key reason why I choose to use Rust for various things, and all Lisps I know of are just generally slower and use more resources; I’m not aware of any Lisp attempting to address this problem, and from what I do understand of Lisps (though I’ve never seriously coded in one) they seem at least somewhat incompatible with such performance. I’m interested in whether you have any remarks on this apparent conflict of goals.

(I’d love to be credibly told I’m completely wrong, because syntax aside, which I could get used to eventually, I rather like the model of Lisps and some of the features that supports.)


You can build a Lisp to stay close to the metal with zero-cost abstractions. Most people drawn to Lisp want productivity rather than max performance. Besides, the commercial Lisps and the fastest FOSS ones are really fast.

There was a systems type of Lisp called PreScheme that was closer to what you're envisioning. Carp also aims at no-GC, real-time use. Finally, ZL was C/C++ implemented in Scheme with both their advantages compiled to C. Although done for ABI research, I've always encouraged something like that to be done for production use with tools to automatically make C and C++ library bindings.

https://en.wikipedia.org/wiki/Scheme_48

https://github.com/carp-lang/Carp

http://zl-lang.org/


There's a post about the three tribes of programming. https://josephg.com/blog/3-tribes/

    Tribe 1: You are a poet and a mathematician. Programming is your poetry
    Tribe 2: You are a hacker. You make hardware dance to your tune
    Tribe 3: You are a maker. You build things for people to use
alfiedotwtf falls in the first tribe. You fall in the 2nd. Different tribes have different values, and hence all the back and forth.


What if I'm kind of all of them? Eternal damnation?

I've always strived to write beautiful code - for things people find very useful - in ways that push boundaries for what people think is possible with the machines we use.

I have succeeded in balancing any two of the above, to terrible detriment of the third. Never been able to juggle the 3 at the same time and this bothers me a lot. The upside is this pushes me to learn a lot, downside is I'm never content with my craft.

I'd have loved to solve domain specific important problems with a domain specific language I craft with a LISP and have state of the art performance while doing so.


Love this distinction. Makes a lot of sense, and is probably a key to happiness, to both realize which tribe you're in, and whether you're trying to argue with the tribe with another set of principles.


I like this :)

Programming is my poetry. Damn... that's a nice take


I always thought that SBCL was considered high-performance, and Chicken Scheme, for example, compiles to C. Both might be slower than Rust, but I always had the impression both of these were more performant than, say, Java. I haven't used either in a meaningful way to really know how they would rate for your performance needs, though.


Chicken's author also created Bones, a Scheme that outputs x86-64 assembly language.

http://www.call-with-current-continuation.org/bones/

README: http://www.call-with-current-continuation.org/bones/bones.ht...


SBCL is quite fast for a lot of things. I used to use it frequently for research projects that required performant code, and it generally wasn't too difficult to get within a factor of 2 of C code, though getting much closer could be hard. I haven't really kept up with it recently, but it regularly impressed me at the time.


There is no conflict of goals, languages offer certain features, some of them offer only "zero-cost" abstractions (C++ and Rust) and others offer more abstractions. You get what you pay for in terms of runtime executable speed / features trade-offs. The other factor is the compiler toolchain - see below.

If Rust had the same features as most LISPs have, then using those features would make Rust programs as "slow" as LISP programs. That is generally true for most programming language comparisons. What also matters for final executable speed is the toolchain, of course. Any language that compiles directly using GCC or LLVM and allows for compile-time typing will be roughly in the same ballpark. If they use their own compiler, such as Chez or Racket, then they usually don't match that performance, because there is not enough manpower in their teams to implement all those nifty optimizations GCC and LLVM have. SBCL is probably the fastest LISP with its own compiler. (Or Allegro?)

To give an example, here are some CommonLisp features that Rust lacks: Full object system with inheritance and multiple dynamic dispatch, dynamic typing, hot runtime code reloading/recompilation, garbage collection.


You seem to be describing the effects of conflicting goals.


You mean trade-offs? The point is that every language and every implementation has them.


I don't think "Lisps are just generally slower and use more resources" is true.

According to the "TechEmpower Web Framework Benchmarks" (which might not be perfect but at least give some indication), Clojure is one of the fastest languages (in terms of responses handled per second) for building a JSON API. Take a look at https://metosin.github.io/reitit/performance.html


I don’t think those results are actually particularly meaningful or that they scale up very well, but I’m going to skip that and address other relevant matters.

I mentioned resource usage, and that includes memory usage. The JVM can be surprisingly fast for some types of code, but it’s never a memory lightweight.

I suppose I might as well mention another feature I value: ease of deployment. In Rust, I can produce a single statically-linked binary which I can drop on a server, or on users’ machines, or whatever as appropriate, and run it. For anything on the JVM… yeah, without extremely good cause I will consider dependency on the JVM to be a blocker, especially for a consumer product.


True, but also Clojure is not the only Lisp, just one I remembered was mentioned in a actual benchmark as being fast at some things.

Nowadays you can either chose a Lisp that does compile to a single binary and doesn't require the JVM (like SBCL or something like that), or you could use Clojure but use GraalVM to get the single binary.

Anyways, I agree with your general point, Rust being in general faster. But I don't agree with the whole "Lisps are just generally slower and use more resources" thing, while it's certainly true in some conditions.


TechEmpower is (historically) a JVM shop. No surprise they do not bench the aspects the JVM stinks at (startup/warmup, memory usage, total installed size).


It’s rather hard to meaningfully measure any of these figures.

Startup you can measure, but the optimal tuning for faster general operation may start up slower, so now you might want both figures. And then you’re getting perilously close to development considerations, so once you’re doing that surely you should benchmark any required compilation, and so on.

Warmup is even harder to handle well, because it introduces another dimension; even if you were presenting just a single-dimensional figure (which you’re not quite), you’re now wanting to consider how that varies over time while warming up, so now it’s at least two-dimensional. But unless warmup takes ages, you’ll find it hard to get statistically significant figures, because you have so many fewer samples in each time slice.

Total installed size? Problematic because the number isn’t meaningfully comparable, as this is a minimal test, and a number in that scope gives no indication of how much is due to the environment (e.g. JRE, CPython, &c.), and how much further growth there will be as you add more features. People will also start arguing about what should count; if for example you count the delta from a given base OS installation, then you’re providing an advantage to something that uses the system version of, say, Python, and penalising something that requires a different and extra version of Python.

Memory usage? Suppose you pick two figures: peak memory usage, and idle memory usage after the tests. Both are easy to measure, but neither is particularly useful. Idle memory usage, similar problem to disk usage, and so the numbers aren’t usefully comparable. Peak memory usage is perhaps surprisingly the more useless of the two, because the increase in memory usage is strongly correlated with how many requests are being served at once—and so a slower contestant might artificially use less memory than a faster one; and so if you wanted those numbers to be comparable, you’d need to throttle requests to the lowest common denominator, which could then be argued as penalising light memory usage patterns.


You can have a single binary with the JVM and .Net core, so it’s a non issue.


The fastest for these sorts of things are usually C++, Rust, and C projects[1]. The fastest Java projects are Rapidoid, wizzardo-http, and proteus. The link you provided does not have a comprehensive comparison of different HTTP JSON frameworks.

1. https://www.techempower.com/benchmarks/#section=data-r18&hw=...


And here the author ports a Clojure program to Common Lisp with a 10x speed gain: http://johnj.com/from-elegance-to-speed.html. Sorry, 300x.


The issue is memory management. Henry Baker helped popularize linear types in the early 1990s (http://home.pipeline.com/~hbaker1/) in his proposals for Lisp, from which work Rust's affine types derive, but no one wants to write in a dialect with those restrictions. Garbage collection seems like an algorithmically insurmountable problem. The other overheads are constant time, and Common Lisp already includes mechanisms to optimize them away (declarations and efficient arrays).


I'm currently using a CPU-slow machine, and I've discovered everything is slow among the popular scripting languages, except Lua. I'd thought JS had good startup time, but some of my Lua scripts that do file I/O finish faster than Node.js does a 'hello world.' Lua is also very lightweight on memory, to my knowledge. I think it can even do true multithreading, via libraries.

So, there's a Lisp on top of Lua―compiled, not interpreted: https://fennel-lang.org

It's deliberately feature-poor, just like Lua: you use libraries for everything aside from simple functions and loops. And it suffers somewhat from the double unpopularity of Lua and Lisp. But it works fine.


I am using Fennel for this: https://itch.io/jam/autumn-lisp-game-jam-2019

I will not finish ... I suck at lisp. But it works.


Have you looked at Julia?


Julia is compiled to machine code, not interpreted


Many lisps compile to machine code too. Compilation vs. interpretation is a feature of an implementation, not a language.


Scripting languages are by definition interpreted. The question was about fast scripting languages. I'm skeptical that Lisp compiled to Lua would count in that category, but Julia unequivocally doesn't.


"Scripting languages are by definition interpreted" [[Citation Needed]].

There are Common Lisp implementations of both Python and Javascript (ES3) that compile to Common Lisp and then the Common Lisp compiles to machine code: as I said before, whether or not a language is interpreted or compiled is an implementation detail, not part of its spec.


https://www.webopedia.com/TERM/S/scripting_language.html

I chose this based off a random search on google of "scripting language." You can do the same and would get the same result.

It's not a rigorous term, because (you are right) it's a property of the implementation, not the language itself, despite the name.

But I don't think Julia has relevance to discussion of the speeds of interpreted Lisp.


That term was already seen as problematic in the Lisp community a long time ago, back when Ousterhout used it to market Tcl.

Basically a scripting language interpreter runs source code files, called scripts.

Lisp does that, too. But many Lisp implementations compile the code, often to machine code, since the runtime includes an incremental compiler. The incremental compiler can compile individual expressions to memory.

Thus when you use a Lisp system, running a script just looks like running an interpreter, even though the code gets compiled incrementally.

Lisp uses the function LOAD, which can load a textual source file and then read, compile, and execute expressions from it. Typically this is also available via command line arguments.

Like in SBCL one can do:

  sbcl --script foo.lisp
and it will read/compile/execute expression by expression from a textual source code file, actually incrementally compiling each expression to machine code.
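A quick way to see this in an SBCL session (a sketch; `add1` is just a made-up example function, not from any of the programs discussed here):

  * (defun add1 (x) (1+ x))
  ADD1
  * (compiled-function-p #'add1)
  T

Even a function typed interactively at the REPL comes back as a compiled function, because SBCL's evaluator compiles by default.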


> Scripting languages are by definition interpreted.

I don't think this is correct. Scripting languages aren't rigorously defined, of course, but it would be reasonable to say they are defined by practice (i.e. a scripting language is practical for scripting).

Most compiled languages knock themselves out of contention under this definition because the startup and/or compile times are too long to be practical. But I've used LuaJIT in scripting contexts before, so that at least muddies the waters. Also Common Lisp (an implementation that was always compiled).


It's a term for a simpler era where these divides made sense.

I've always understood it to mean interpreted and from a quick google search there are at least a sizeable number of people who agree.

I think now that there are a lot of high level compiled languages, it makes less sense.


That has nothing to do with how "lispy" it might be. Part of the compiler is implemented in femtolisp, no?


I suspect this does have to do with whether or not it is a fast scripting language, considering a language compiled into machine code is by definition not a scripting language.

Sure, flag me, but my comment was relevant to the discussion.


Julia is currently (only) JIT. AOT compilation is still a work in progress, AFAIK.

It's about as "compiled" as Luajit is.


Hey Chris :)

I totally get you with Lisps and performance, but I was more talking about the beauty and elegance of code written in them. Algorithms written in Lisps seem to be pieces of art.

And that's just Lisps... having a look at logic programming languages like Prolog and Mercury, it's amazing to see just how compact yet understandable an algorithm can be. They're beautiful!

But as for performance, take a look at Mercury: it's a declarative logic programming language with built-in constraint solving, which compiles down to C, and its performance is great. We've rewritten some Mercury in Rust, and yes, our Rust is faster, but not by the orders of magnitude you'd think.


How's the editor support with Mercury?


Not sure about other editors, but there's syntax highlighting for Vim in the repo, and mtags (ctags for Mercury).


I use SBCL to build command line applications. Reasonable enough memory use and very fast startup times. I for a while used to use Gambit Scheme for this but decided I like Common Lisp better. We no longer need hardware Lisp machines for good performance. That said, for some applications Rust is clearly better.


Do you have some examples you could share?


There is one in the new edition of my Common Lisp book https://leanpub.com/lovinglisp

You can read it online for free, and clone the github examples repo.


Why do you like Common Lisp better?


I have so much experience with Common Lisp, starting around 1984 on my 1108 Lisp Machine. I have used various Scheme implementations a lot, but I just have more history and experience with Common Lisp. Both are great choices, and there are many high quality Schemes, plus Racket. Experiment, and then choose your own favorite Lisp.


Not exactly lisps, but Carp (https://github.com/carp-lang/Carp) and Scopes (https://bitbucket.org/duangle/scopes/wiki/Home) might be worth a look. I haven't tried the former, and couldn't get the latter to do anything useful, though. (it looks like the author stopped responding to issues, sadly).


"they seem at least somewhat incompatible with such performance"

Lisp is for the other 99.5% of programming that doesn't require you to wring every last drop of performance out of your CPU.


After JavaScript’s success, I’m convinced any language can be made fast given enough time and money.

Some languages are easier to make fast than others, but I think JS almost proves there's no such thing as a language incompatible with high performance by design.

You’d be hard pressed to conceive a more dynamic language and here we are, with crazy fast performance through many JIT tiers, interpreters, compilers, each of which would take years for a bright engineer to understand.


JavaScript is an extremely small, well-defined language with relatively simple semantics and very limited metaprogramming. It’s not a challenge compared to something like Ruby.


When was the last time you looked at it? Javascript is not small anymore. ECMAScript specification is over 800 pages, only a few hundred pages smaller than Java's.


> When was the last time you looked at it?

I’ve read the latest version of the specs of all these languages in detail.

I’ve worked professionally on a team implementing Ruby, the latest version of JavaScript, and Java at the same time, using the same language implementation system, so I can directly compare.

JavaScript is simplest by far. There’s quite a lot of sugar now, but the core semantics are small and simple. The core semantics of a language like Ruby are an order of magnitude more complicated.


You may be wrong in thinking all programs require maximum performance?

I just find your question a little confusing. My house is only two floors, yet meets all my needs. A bigger house would just waste my time in cleaning, cost me more in maintaining, in heating, etc.

It's the same for programs. Target performance at all cost, and for a lot of programs you get a hard to maintain, non scalable, bug ridden, featureless program that took you way too long to build.

That's why for example I use Clojure and Rust. Rust gets the least use, because the programs I tend to write can manage fine with Clojure's performance. In fact, I mostly toy with Rust, because I really never needed to write anything higher performance.

So I'm just not sure what you mean. The world is full of useful programs that meet the needs of their users without being high performance. For all these programs, Lisp is a superpower.


Some pointers:

- how to make lisp go faster than C: https://www.reddit.com/r/lisp/comments/1udu69/how_to_make_li...

- https://github.com/marcoheisig/Petalisp "Petalisp is an attempt to generate high performance code for parallel computers by JIT-compiling array definitions. It is not a full blown programming language, but rather a carefully crafted extension of Common Lisp that allows for extreme optimization and parallelization."


I don’t know if you are wrong, but I would give fennel lang a look. It’s a lisp that compiles to Lua, so you can access LuaJIT performance while lisping away.


I don't think OP considers LuaJIT high performance, considering lots of Lisps are just as performant as LuaJIT.


What if there was hardware accelerated Lisp?: https://en.wikipedia.org/wiki/Lisp_machine


a Forth array-CPU-based Lisp machine would be a fun toy


>> they seem ... incompatible with such performance

Might answer your question [0]

[0] https://news.ycombinator.com/item?id=2192629


I don’t think so. My simple reading of things there is that it was for one particular micro-benchmark (and the Benchmarks Game is acknowledged to be unrealistic and not actually suitable for comparing language performance), that the results were not actually conclusive, and that it required telling SBCL to do some stupidly dangerous things that you should probably never turn on in real software, because they may well make it behave catastrophically badly rather than just crashing when you have a bug.

On https://benchmarksgame-team.pages.debian.net/benchmarksgame/... at present, the fastest SBCL implementation, which seems to include these optimisations, is at 2.0, with other SBCL implementations being slower, the first one (whatever that means) being 8.0; meanwhile, the Rust implementations range between 1.0 and 1.2. The SBCL implementations all use at least eight times as much memory as the Rust implementations, too.

I think this demonstrates my point pretty well, actually.


> … required telling SBCL to do some stupidly dangerous things …

No, that isn't required.

On the contrary SBCL is quite insistent that the arithmetic be made safe.

I'm not even a Lisp newbie, but with help from SO I've been able to tweak those spectral-norm programs to make the SBCL compiler happy without destroying the performance:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
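To give a flavor of what such tweaks look like (a hedged sketch, not one of the actual benchmark programs; `sum-squares` is a made-up example), type declarations let SBCL generate fast float arithmetic while keeping the default safety level:

  (defun sum-squares (v)
    ;; telling the compiler the array's element type lets it
    ;; use unboxed double-float arithmetic without (safety 0)
    (declare (type (simple-array double-float (*)) v))
    (let ((acc 0d0))
      (dotimes (i (length v) acc)
        (incf acc (* (aref v i) (aref v i))))))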


The (micro)benchmarks game is a gimmick and should not be taken seriously. You forgot to mention that the SBCL #3 implementation also sits at 2.0 and is fully memory safe.

I fail to see the reason for presenting the facts in this manner, especially the "stupidly dangerous things" part, which is of course totally inaccurate. (safety 0) has its uses (e.g. like Rust's unsafe).

Your conclusions are not indicative of real world performance.


> … benchmarks game is a gimmick …

What exactly is it designed to attract attention to?

> … should not be taken seriously.

Because?

> … fully memory safe.

Here's the problem: the programs are explicitly required to use function calls, but:

;; * redefine eval-A as a macro


> … acknowledged to be unrealistic and not actually suitable for comparing language performance …

By whom?

Maybe there are situations for which it is "realistic" — the point is that we don't know.

What would actually be suitable for comparing language performance?


I just read Fibonacci's Liber Abaci (from 1202). I think I'm not too far off the mark if I say Lisp in our day is what the Indian numeral system was in Fibonacci's day. A new writing system that has significant practical advantages.


10 years after college I finally understood the Lisp, ML, Prolog ladder

mainstream world is damaging to my soul


I know right :O


A major mistake is thinking that Lisp itself is magic.

The magic is actually in how Lisp changes the way you think.


No, no... that's exactly what I meant :)


> fast forward a decade later and I'm reading books on functional and logical languages for work

What work has you doing that?


I work at a wonderful company called YesLogic, and we try our best to do PDF generation. Mercury works wonders against all the constraints that CSS rendering can throw at us.


if you have any blog posts about that up i'd love to read them!


Can't say when, but we're planning on blogging about it in the near future. You'll most likely see it on "This Week in Rust" :)


If you want to, you can email a draft to hn@ycombinator.com and we might be able to give some tips on what the HN community tends to respond better to.

Same offer goes for anyone who's working on something they hope will interest HN. Just don't be disappointed if you don't hear back for a long time—and as a corollary, don't send it just a couple days before you plan to publish. We have terrible worst-case latency!


Awesome, thanks for that!


Guile + guix.

I work in devops, and that replaced a horrible mess of god knows how many containers with a straightforward, boring, and reproducible single large instance of GuixSD.


Very interesting. I played with Guile a while ago and came back impressed. I'd love to hear more about your work.


I'd be pretty interested in reading about this work. Do you have a blog post or any code you can share around this case study?

