
Show HN: Bel - pg
http://paulgraham.com/bel.html
======
cousin_it
Whenever I see a new programming language, this list of questions by Frank
Atanassow comes to mind:

    
    
        1. What problem does this language solve? How can I make it precise?
    
        2. How can I show it solves this problem? How can I make it precise?
    
        3. Is there another solution? Do other languages solve this problem? How?
           What are the advantages of my solution? of their solution?
           What are the disadvantages of my solution? of their solution?
    
        4. How can I show that my solution cannot be expressed in some other language?
           That is, what is the unique property of my language which is lacking in
           others which enables a solution?
    
        5. What parts of my language are essential to that unique property?
    

Do read the whole post ([http://lambda-the-ultimate.org/node/687#comment-18074](http://lambda-the-ultimate.org/node/687#comment-18074)); it has lots of elaboration on these questions.

From a skim of the Bel materials, I couldn't answer these questions. Maybe PG
or someone else can take a stab at answering them?

~~~
pg
I think the point of a high-level language is to make your programs shorter.
All other things (e.g. libraries) being equal, language A is better than
language B if programs are shorter in A. (As measured by the size of the parse
tree, obviously, not lines or characters.) The goal of Bel is to be a good
language. This can be measured in the length of programs written in it.

Lisp dialects have as a rule been good at making programs short. Bel is meant
to do the same sorts of things previous dialects have, but more so.

It's also meant to be simple and clear. If you want to understand Bel's
semantics, you can read the source.

~~~
jblow
I have to disagree; this is very clearly too simplistic. There are many
dimensions in which a language can be better or worse. Things like:

* How debuggable is it?

* Do most errors get caught at compile time, or do they require that code path to be exercised?

* How understandable are programs to new people who come along? To yourself, N years later?

* How error-prone are the syntax and semantics (i.e. how close is the thing you intended, to something discontinuous that is wrong, that won't be detected until much later, and that doesn't look much different, so you won't spot the bug)?

* How much development friction does it bring (in terms of steps required to develop, run, and debug your program) ... this sounds like a tools issue that is orthogonal to language design, but in reality it is not.

* What are the mood effects of programming in the language? Do you feel like your effort is resulting in productive things all the time, or do you feel like you are doing useless busywork very often? (I am looking at you, C++.) You can argue this is the same thing as programs being shorter, but I don't believe it is. (It is not orthogonal though).

* What is your overall morale regarding the code's correctness over time? Does the language allow you to have high confidence that what you mean to happen is what is really happening, or are you in a perpetual semi-confused state?

I would weigh concision as a lower priority than all of these, and probably
several others I haven't listed.

~~~
pg
One answer to this question (and an exciting idea in itself) is that the
difference between conciseness and many of these apparently unrelated matters
approaches zero. E.g. that all other things being equal, the debuggability of
a language, and the pleasure one feels in using it, will be inversely
proportional to the length of programs written in it.

I'm not sure how true that statement is, but my experience so far suggests
that it is not only true a lot of the time, but that its truth is part of a
more general pattern extending even to writing, engineering, architecture, and
design.

As for the question of catching errors at compile time, it may be that there
are multiple styles of programming, perhaps suited to different types of
applications. But at least some programming is "exploratory programming" where
initially it's not defined whether code is correct because you don't even know
what you're trying to do yet. You're like an architect sketching possible
building designs. Most programming I do seems to be of this type, and I find
that what I want most of all is a flexible language in which I can sketch
ideas fast. The constraints that make it possible to catch lots of errors at
compile time (e.g. having to declare the type of everything) tend to get in
the way when doing this.

Lisp turned out to be good for exploratory programming, and in Bel I've tried
to stick close to Lisp's roots in this respect. I wasn't even tempted by
schemes (no pun intended) for hygienic macros, for example. Better to own the
fact that you're generating code in its full, dangerous glory.
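
Anaphoric macros, for example, depend on exactly the kind of capture that
hygiene prevents. A Bel-style sketch (not from bel.bel):

    
    
      (mac awhen (expr . body)
        `(let it ,expr
           (if it (do ,@body))))
    

The variable it is deliberately captured, so the body of (awhen (cdr xs)
(car it)) refers to the value of the test.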

More generally, I've tried to stick close to the Lisp custom of doing
everything with lists, at least initially, without thinking or even knowing
what types of things you're using lists to represent.

~~~
jstimpfle
I'm still trying to sort this out, not sure if I will manage, so my apologies
if it's a little confused.

I agree with this idea to a degree. However, there are limits to this
"relation". Imagine a sophisticated piece of software that compresses program
sources. Let's say it operates at the AST level rather than the byte level,
since the former is a little closer to capturing software complexity, as you
mentioned somewhere else. Now, I know that's almost the definition of a Lisp
program, but maybe we can agree that 1) macros can easily become hard to
understand as they grow, and 2) there are many rituals programmers keep in
code that shouldn't get abstracted out of the local code flow (i.e.
compressed), because they give the necessary context to aid the programmer's
understanding; the programmer would never be able (I assume) to mechanically
apply the transformations in their head if there were literally hundreds of
these macros, most of them weird, subtle, and/or unintuitive. Think how gzip,
for example, finds many surprising ways to cut out a few bytes by collapsing
multiple completely unrelated things that only share a few characters.

In other words, I think we should abstract things that are intuitively
understandable to the programmer. Let's call this property "to have meaning".
What carries meaning varies from one programmer to the next, but I'm sure for
most it's not "gzip compressions".

One important criterion for a useful measure of "meaning" is likelihood of
change. If two pieces of code that could be folded by a compressor are _likely
to change and diverge into distinct pieces of code_, that is a hint that they
carry different meanings, i.e. _they are not really the same_ to the
programmer. How do we decide if they are likely to diverge?
There is a simple test: "could we write either of these pieces in a way that
makes it very distinct from the other piece, and the program would still make
sense?". As an aspiring programmer trying to apply the rule of DRY (don't
repeat yourself), I noticed at some point that this test is the best way to
decide whether two superficially identical pieces of code should be folded.
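
For instance (a toy example): these two definitions are superficially the same
code, but a change like "prices now get tripled" touches only one of them, so
folding them into a single function would fold two different meanings:

    
    
      (def doubled-price (p) (* 2 p))
      (def doubled-width (w) (* 2 w))
    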

I noticed that this approach defines a good compressor that doesn't require
large parts of the source to be recompressed as soon as one unimportant detail
changes. Folding meaning in this sense, and folding only that, leads to
maintainable software.

Following this line a little further: as we strip away meaning as a means of
distinction, we can compress programs more. The same can be done with
performance concerns (instead of "meaning"). If you start by ignoring runtime
efficiency, you will end up writing a shorter program, since its parts have
fewer distinctive features and so compress better. And if the compressed form
is what the programmer wrote down, the program will be almost impossible to
optimize after the fact, because large parts essentially have to be
uncompressed first.

One last thought that I have about this is that maybe you have a genetic,
bottom-up approach to software, and I've taken a top-down standpoint.

~~~
UncleEntity
> Imagine a sophisticated software that compresses program sources...

Isn't that the definition of a compiler?

~~~
ithkuil
[https://en.m.wikipedia.org/wiki/Partial_evaluation#Futamura_...](https://en.m.wikipedia.org/wiki/Partial_evaluation#Futamura_projections)
are an interesting point of view for the definition of a compiler

------
dfranke
Reproduced from feedback that I gave pg on an earlier draft (omitting things
he seems to have addressed):

When you say,

> But I also believe it will be possible to write efficient implementations
> based on Bel, by adding restrictions.

I'm having trouble picturing what such restrictions would look like. The
difficulty here is that, although you speak of axioms, this is not really an
axiomatic specification; it's an operational one, and you've provided
primitives that permit a great deal of introspection into that operation. For
example, you've defined closures as lists with a particular form, and from
your definition of the basic operations on lists it follows that the
programmer can introspect into them as such, even at runtime. You can't
provide any implementation of closures more efficient than the one you've
given without violating your spec, because doing so would change the result of
calling car and cdr on closure objects. To change this would not be a mere
matter of "adding restrictions"; it would be taking a sledgehammer to a
substantial piece of your edifice and replacing it with something new. If
closures were their own kind of object and had their own functions for
introspection, then a restriction could be that those functions are
unavailable at runtime and can only be used from macros. But there's no
sane way to restrict cdr.
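
For concreteness, here is the kind of introspection I mean, using the closure
representation from the guide (a REPL sketch; there is no implementation to
check it against):

    
    
      > (set f (fn (x) (* x x)))
      (lit clo nil (x) (* x x))
      > (car f)
      lit
    

Any cleverer internal representation of f would have to keep producing exactly
these answers.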

A true axiomatic specification would deliberately leave such internals
undefined. Closures aren't necessarily lists, they're just values that can be
applied to other values and behave the same as any other closure that's
equivalent up to alpha, beta, and eta conversion. Natural numbers aren't
necessarily lists, they're just values that obey the Peano axioms. The axioms
are silent on what happens if you try to take the cdr of one, so that's left
to the implementation to pick something that can be implemented efficiently.

Another benefit of specifying things in this style is that you get much
greater concision than any executable specification can possibly give you,
without any loss of rigor. Suppose you want to include matrix operations in
your standard library. Instead of having to put an implementation of matrix
inversion into your spec, you could just write that for all x,

    
    
        (or
         (not (is-square-matrix x))
         (singular x)
         (= (* x (inv x))
            (id-matrix (dim x))))
    

This, presuming you've already specified the constituent functions, is every
bit as rigorous as giving an implementation. And although you can't automate
turning this into something executable (you can straightforwardly specify a
halting oracle this way), you _can_ automate turning this into an executable
fuzz test that generates a bunch of random matrices and ensures that the
specification holds.
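
For instance, the harness might look something like this, where repeat,
random-matrix, and err are hypothetical helpers invented purely for
illustration:

    
    
      (def fuzz-inv (n)
        (repeat n                   ; hypothetical: run body n times
          (let x (random-matrix)    ; hypothetical: random test matrix
            (or (not (is-square-matrix x))
                (singular x)
                (= (* x (inv x))
                   (id-matrix (dim x)))
                (err 'spec-violation)))))
    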

If you do stick with an operational spec, it would help to actually give a
formal small-step semantics, because without a running implementation to try,
some of the prose concerning the primitives and special forms leaves your
intent unclear. I'm specifically puzzling over the `where` form, because you
haven't explained what you mean by what pair a value comes from or why that
pair or its location within it should be unique. What should

    
    
       (where '#1(#1 . #1))
    

evaluate to? Without understanding this I don't really understand the macro
system.

~~~
cousin_it
This is similar to the feedback Dave Moon gave to PG's previous language, Arc,
more than a decade ago.
[http://www.archub.org/arcsug.txt](http://www.archub.org/arcsug.txt)

 _Representing code as linked lists of conses and symbols does not lead to the
fastest compilation speed. More generally, why should the language
specification dictate the internal representation to be used by the compiler?
That's just crazy! When S-expressions were invented in the 1950s the idea of
separating interface from implementation was not yet understood. The
representation used by macros (and by anyone else who wants to bypass the
surface syntax) should be defined as just an interface, and the implementation
underlying it should be up to the compiler. The interface includes
constructing expressions, extracting parts of expressions, and testing
expressions against patterns. The challenge is to keep the interface as simple
as the interface of S-expressions; I think that is doable, for example you
could have backquote that looks exactly as in Common Lisp, but returns an
<expression> rather than a <cons>. Once the interface is separated from the
implementation, the interface and implementation both become extensible, which
solves the problem of adding annotations._

This paragraph contributed a lot to my understanding of what "separating
interface from implementation" means. Basically your comment is spot on.
Instead of an executable spec, there should be a spec that defines as much as
users need, and leaves undefined as much as implementors need.

~~~
Viliam1234
In Clojure, some data structures are "seqable": they are not implemented as
sequences, but they can be converted to sequences if needed. Functions for
head and tail are also defined for these
structures, for example by internally converting them to sequences first. That
means that any function that works with sequences can work with these
structures, too.

This seems to me like the proper way to have "separation of interface from
implementation" and "everything is a list" at the same time. Yeah, everything
is an (interface) list, but not necessarily an (implementation) list.

~~~
nikisweeting
Python and Rust both have similar "duck"-typed features, where you can impl
Iterator (Rust) or add the necessary __special__ methods (Python) to get
iteration support.

------
rntz
> 5. (where x)

> Evaluates x. If its value comes from a pair, returns a list of that pair and
> either a or d depending on whether the value is stored in the car or cdr.
> Signals an error if the value of x doesn't come from a pair.

> For example, if x is (a b c),
>
>     > (where (cdr x))
>     ((a b c) d)

That is one zany form.

1. How is this implemented?

2. What is the use of this?

3. What does (where x) do if x is both the car of one pair and the cdr of
another? E.g. let a be 'foo, define x to be (join a 'bar), let y be (join 'baz
a), and run (where a).

~~~
pg
It's used to implement a generalization of assignment. If you have a special
form that can tell you where something is stored, you can make macros to set
it. E.g.

    
    
      > (set x '(a b c))
      (a b c)
      > (set (2 x) 'z)
      z
      > x
      (a z c)
    

which you can do in Common Lisp, and

    
    
      > (set ((if (coin) 1 3) x) 'y)
      y
      > x
      (y z c)
    

which you can't.
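
The mechanics, roughly: where reports a pair and a slot, and the primitives
xar and xdr mutate them. A sketch of what a setter can do with that
information (not the actual bel.bel expansion):

    
    
      (let (cell loc) (where (2 x))
        (if (= loc 'a)
            (xar cell 'z)
            (xdr cell 'z)))
    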

~~~
soulofmischief
Thanks for sharing, Paul.

Still knee-deep in the source. Gonna steal some of this.

What made you settle on (coin)? Is that a Lisp trope? I flopped back and forth
between naming it (coin) and (flip) in my own Lisp before finally settling on
(flip). I'd honestly like to divorce the name entirely from its physical
counterpart.

~~~
defen
> I'd honestly like to divorce the name entirely from its physical
> counterpart.

How about (bit) or (randbit)?

~~~
soulofmischief
(randbit) isn't bad. My implementation also takes a float 0 <= x <= 1 to
determine the probability of the outcome, so (bit) would probably be too
ambiguous. I do like the brevity of a 4-letter function, though. A lot of my
lisp coding is genetic and probabilistic so it gets used a lot.

~~~
e12e
dice?

------
waterhouse
A couple of things on my checklist for mathematical purity are (a) first-class
macros and (b) no hardcoded builtins. It looks like Bel does have first-class
macros. As for (b)...

> Some atoms evaluate to themselves. All characters and streams do, along with
> the symbols nil, t, o, and apply. All other symbols are variable names

The definitions of "ev" and "literal" establish that nil, t, o, and apply are
in fact hardcoded and unchangeable. Did you consider having them be variables
too, which just happen to be self-bound (or bound to distinctive objects)? nil
is a bit of a special case because it's also the end of a list, and "(let nil
3 5)" implicitly ends with " . nil"; o might be an issue too (said Tom
_arguably_); but apply and t seem like they could be plain variables.

P.S. It looks like you did in fact implement a full numerical tower—complex
numbers, made of two signed rational numbers, each made of a sign and a
nonnegative rational number, each made of two nonnegative integers, each of
which is a list of zero or more t's. Nicely done.

------
lewisjoe
Welcome back PG!

HN: How do you get an intuitive understanding of computation itself? While
Turing machines kind of make sense in the context of algorithms, can I really
intuitively understand how lambda calculus is equivalent to Turing machines,
or how lambda calculus can solve algorithmic problems? What resources helped
you understand these concepts?

I'm currently following [http://index-of.co.uk/Theory-of-
Computation/Charles_Petzold-...](http://index-of.co.uk/Theory-of-
Computation/Charles_Petzold-Annotated_Turing-Wiley\(2008\).pdf) and a bunch of
other resources in the hope I'll "get" them eventually.

~~~
whoeverest
I can share my experience, because I was asking myself the same question 6
years ago...

My approach was to try and build a Lisp -> Brainfuck compiler. My reasoning
was: Brainfuck is pretty close to a Turing machine, so if I can see how code
that I understand gets translated to movement on a tape, I'll understand the
fundamentals of computation.

It became an obsession of mine for 2 years, and I managed to develop a
stack-based virtual machine, which executed the stack instructions on a Brainfuck
interpreter. It was implemented in Python. You could do basic calculations
with positive numbers, define variables, arrays, work with pointers...

On one hand, it was very satisfying to see familiar code get translated to a
large string of pluses and minuses; on the other, even though I built that
contraption, I still didn't feel like I "got" computation in the fundamental
sense. But it was a very fun project, a deep dive in computing!

My conclusion was that even though you can understand each individual layer
(eventually), for a sufficiently large program, it's impossible to intuitively
understand everything about it, even if you built the machine that executes
that program. Your mind gets stuck in the abstractions. :)

So... good luck! I'm very interested to hear more about your past and future
experiences of exploring this topic.

~~~
dorfsmay
Are you aware that GNU Guile, which is self-hosted (written mainly in Scheme),
can interpret Brainfuck?

[https://www.gnu.org/software/guile/manual/guile.html#Support...](https://www.gnu.org/software/guile/manual/guile.html#Supporting-
Multiple-Languages)

~~~
whoeverest
I wasn't aware, no. However, interpreting Brainfuck code is the easy part, as
I've learned. The hard part was creating a "runtime" that understands memory
locations, a.k.a. variables.

See this question that I asked (and later answered myself) around that time:
[https://softwareengineering.stackexchange.com/questions/2847...](https://softwareengineering.stackexchange.com/questions/284758/is-
is-possible-to-write-a-brainfuck-with-variables-compiler)

Most of the project was figuring out things similar to that. You get to
appreciate how high-level machine code on our processors really is, when
things like "set the stack pointer to 1234" are one instruction instead of
10,000.

------
alexkcd
I really like how type checking is implemented for parameter lists. I think
there's a more generalized extension of this.

Specifically, I think that there exists a Lisp with a set of axioms that
splits program execution into "compile-time" execution (facts known about the
program that are invariant to input) and a second "runtime" execution pass
(facts that depend on dynamic input).

For example, multiplying a 2d array that's defined to be MxN by an array
that's defined to be NxO should yield a type that's known to be MxO (even if
the values of the array are not yet known). Or if the first parameter is known
to be an upper-triangular matrix, then we can optimize the multiplication
operation by culling the multiplication AST at "compile-time". This compile-
time optimized AST could then be lowered to machine code and executed by
inputting "runtime" known facts.

I think that this is what's needed to create the most optimally efficient
"compiled" language. Type systems in e.g. Haskell and Rust help with
optimization when spitting out machine code, but they're often incomplete
(e.g., we know more at compile time than what's often captured in the type
system).

I've put "compilation" in quotes, because compilation here just means program
execution with run-time invariant values in order to build an AST that can
then be executed with run-time dependent values. Is anyone aware of a language
that takes this approach?

~~~
e12e
I'm not sure, but this seems a bit like how Julia specializes functions based
on the types of arguments? Or maybe it's the inverse - as Julia creates
specialized functions for you (e.g. add can take numbers, but will be
specialized for both int32 and int64 and execute via the appropriate machine
instructions).

In fact, I think Julia is a great example of taking some good parts of scheme
and building a more conventional (in terms of syntax anyway) language on top.

~~~
alexkcd
Yes, Julia has some of it. But you're still required to specify the template
parameters of a type (unless I'm mistaken). Whereas what I'm talking about is
that any value of a data type could be compile-time known. For example, some
or all of the dimensions of an nd-array, as well as some or all values of said
nd-array.

~~~
KenoFischer
Julia has explicit parameterization, but will also interprocedurally propagate
known field values at compile time if known (which happens a lot more because
our compile time is later), even if they weren't explicitly parameterized.
Since this is so useful (e.g. as you say for dimensions of nd arrays -
particularly in machine learning), there's been some talk of adding explicit
mechanisms to control the implicit specialization also.

~~~
alexkcd
Ah, that's good to know. This sounds exactly like what I'm looking for. Thanks
will read up on this in the docs!

------
Zenst
A short, direct intro, a link to a guide for the language, and a link to code
examples: these are the things I look for when reading about a new computer
language release here or anywhere, and this one just wins on that first
impression. It's kinda like the early days of quality usenet posts, direct and
to the point, and I love that.

------
SkyMarshal
_> Bel is an attempt to answer the question: what happens if, instead of
switching from the formal to the implementation phase as soon as possible, you
try to delay that switch for as long as possible? If you keep using the
axiomatic approach till you have something close to a complete programming
language, what axioms do you need, and what does the resulting language look
like?_

I really like this approach and have wondered in recent years what a
programming language designed with this approach would look like. There are a
few that come close, probably including Haskell and some of the more obscure
functional languages and theorem prover languages. Will be really interesting
to see a Lisp developed under this objective.

 _> But I also believe that it will be possible to write efficient
implementations based on Bel, by adding restrictions. If you want a language
with expressive power, clarity, and efficiency, it may work better to start
with expressive power and clarity, and then add restrictions, than to approach
from another direction._

I also think this notion of restrictions, or constraint-driven development
(CDD), is an important concept. PG outlines two types of restrictions above.
The first is simply choosing power and clarity over efficiency in the
formative stages of the language and all the tradeoffs that go with that. The
second is adding additional restrictions later once it's more clear how the
language should be structured and should function, and then restricting some
of that functionality in order to achieve efficiency.

This reminds me of the essay "Out of the Tar Pit" [1] and controlling
complexity in software systems. I believe a constraints-based approach at the
language level is one of the most effective ways of managing software
complexity.

[1]:[https://github.com/papers-we-love/papers-we-
love/blob/master...](https://github.com/papers-we-love/papers-we-
love/blob/master/design/out-of-the-tar-pit.pdf)

------
TekMol

        Bel has four fundamental data types:
        symbols, pairs, characters, and streams.
    

No numbers?

Then a bit further down it says:

    
    
        (+ 1 2) returns 3
    

Now suddenly there are numbers.

What am I missing?

~~~
pg
Numbers are represented using pairs. Specifically

    
    
      (lit num (sign n d) (sign n d))
    

where the first (sign n d) is the real component and the second the imaginary
component. A sign is either + or -, and n and d are unary integers (i.e. lists
of t) representing a numerator and denominator.
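
For example, 2/3 with a zero imaginary part would look like

    
    
      (lit num (+ (t t) (t t t)) (+ nil (t)))
    

where (t t) is two in unary, (t t t) is three, and the final (+ nil (t)) is
+0/1.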

~~~
sillysaurusx
Is an implementation compliant if it ignores this definition for the sake of
performance?

One nit with this definition is that it implies the car of a number would be a
well-defined operation. For complex numbers, it would be natural for car to
return the real component.

I admit, it was surprising you are defining numbers at all. It’s tricky to pin
down a good definition that isn’t limiting (either formally or for
implementations).

I once got most of Arc running in JavaScript, almost identical to your
original 3.1 code, and FWIW it was very fast. Even car and cdr, which were
implemented in terms of plain JavaScript object literals, didn’t slow down the
algorithms much.

But I suspect that requiring that (car x) always be valid for all numbers
might be much more tricky, in terms of performance.

I apologize if you have already explained that implementation isn’t a concern
at all. I was just wondering if you had any thoughts for someone who is
anxious to actually implement it.

EDIT: Is pi written as 31415926 over 10000000?

~~~
pg
I don't expect implementations to be compliant. Starting with an initial phase
where you care just about the concepts and not at all about efficient
implementation almost guarantees you're going to have to discard some things
from that phase when you make a version to run on the computers of your time.
But I think it's still a good exercise to start out by asking "what would I do
if I didn't count the cost?" before switching to counting the cost, instead of
doing everything in one phase and having your thinking constrained by worries
about efficiency.

So cleverness in implementation won't translate into compliance, but rather
into inventing declarations that programmers can use that will make their
programs dramatically faster. E.g. if programmers are willing to declare that
they're not going to look inside or modify literals, you don't have to
actually represent functions as lists. And maybe into making really good
programming tools.

As I said elsewhere, half jokingly but also seriously, this language is going
to give implementors lots of opportunities for discovering new optimization
techniques.

------
brudgers
I've read only the first twenty pages or so. I'd like to see a little
more disentangling of characters and strings as inputs to automata from text
as something people read. The narrow technical meaning is implied in the text,
but the use of "string" as a synonym for "text" is common enough that it might
be worth being a little more explicit or didactic or pedantic or whatever.

The second thought is that lists smell like a type of stream. Successive calls
to `next` on `rest` don't necessarily discern between lists and streams. The
difference seems to be a compile time assertion that a list is a finite
stream. Or in other words, a sufficiently long list is indistinguishable from
an infinite stream (or generator).

I'm not sure you can have a lisp without lists, but they seem more like
objects of type stream that are particularly useful when representing computer
programs than a fundamental type. Whether there are really two fundamental
types of sequences, depends on how platonic _really_ really is.

All with the caveat, that I'm not smart enough to know if the halting problem
makes a terminating type fundamental.

------
p4bl0
It's strange that the description of the language [0] starts using numbers
without introducing them at first and then far later in the file says that
they are implemented as literals. I haven't gotten to the source code yet (I'm
on mobile and have to randomly tap the text of the linked article to find
links…), but I don't understand the point of this, nor whether it's just a
semantic choice or actually implemented that way (I don't see how).

[0]
[https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=157...](https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=1570864570&)

------
stereolambda
I like the (English) syntax of the article. It seems to have the same concise
feeling as the language itself. I like to think that's why pg chooses falsity
over falsehood.

It's a little strange to have (id 'a) return nil. Also having car and cdr as
historical holdouts, when most of the naming seems to aim at an ahistorical
style.

Not very deep remarks, since I would need more time to digest.

------
galfarragem
Genuine questions that probably most people here want to ask but seem afraid
to:

Why Bel? What are the problems that Bel wants to solve? Is this a hobby
project or something more serious?

~~~
netcan
I can't answer the main question, but my guess from pg's other work is that
there is no dichotomy between hobby and serious. Frivolous-seeming starting
points can be powerful in unexplored ways. The www, most everything in it, and
the stuff it's built out of are all rich with examples.

~~~
lioeters
This seems relevant here, in his own words:

> Don't be discouraged if what you produce initially is something other people
> dismiss as a toy. In fact, that's a good sign. That's probably why everyone
> else has been overlooking the idea. The first microcomputers were dismissed
> as toys. And the first planes, and the first cars. At this point, when
> someone comes to us with something that users like but that we could
> envision forum trolls dismissing as a toy, it makes us especially likely to
> invest.

---

Related: [https://blog.ycombinator.com/why-
toys/](https://blog.ycombinator.com/why-toys/)

------
dangrossman
Welcome back to Hacker News

~~~
wtvanhest
It's been a long time!

~~~
giancarlostoro
"Its you... Its been a long time. How have you been?"

\- GLaDOS (Portal 2)

------
drcode
I think at the end of the day, the question is whether a compiler for this
type of language could efficiently handle a function like distinct-sorted:

    
    
       > (distinct-sorted '(foo bar foo baz))
       (bar baz foo)
    

This is a function that usually requires efficient hash tables and arrays to
be performant: a hash table for detecting the duplicates, an array for
efficient sorting. However, both the hash map and array could theoretically be
"optimized away", since they are not exposed as part of the output.

A language like Bel that does not have native hash maps or arrays and instead
uses association lists would have to rely entirely on the compiler to find and
perform these optimizations to be considered a usable tool.
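
For reference, the naive Bel version is a one-liner (a sketch, assuming sort's
< works on symbols the way it does on characters):

    
    
      (def distinct-sorted (xs)
        (dedup:sort < xs))
    

Nothing in that definition hands the compiler an array or a hash table; it
would have to discover them.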

~~~
sooheon
Interesting example of a unit test of sorts for language usability. Got any
others?

~~~
drcode
I'm no language expert, but the three things I can think of that make Bel
impractical without major compiler trickery are (1) lack of primitive hash
tables, (2) lack of primitive arrays, and (3) no support for tail call
optimization (though that third thing is probably fixable with the right
compiler tricks).

The other concern I have is the lack of a literal associative data structure
syntax (like curly braces in Clojure). It seems that would negatively impact
pg's goal of "code simplicity" quite a bit.

~~~
arundelo
_(1) lack of primitive hash tables (2) lack of primitive arrays_

I'll note that if there are primitive arrays and the compiler optimizes
arithmetic, the rest of the hash table can be implemented in Bel.

Also, maybe a Sufficiently Smart Compiler could prove that a list's cdrs will
never change, store it in cdr-coded form, and treat it like an array (with the
ability to zoom right to, say, element 74087 without chasing a bunch of
pointers).

------
cperciva
Is there a version of Bel written in some other language? If not, how do you
get started without a bootstrap?

~~~
npstr
As mentioned in the user guide, Bel is, like McCarthy's original Lisp, still
in its formal phase, and not usable as a programming language yet. The idea is
to extend the formal phase for a longer time before diving into the
implementation.
------
PostOnce
It amuses and pleases me to see that pg will continue to play around with lisp
presumably forever, regardless of how wealthy he becomes.

I hope I'll never stop coding passion projects myself.

John Carmack talked about this on Joe Rogan's show recently, about how he
still codes and how Elon Musk would like to do more engineering but hasn't
much time.

I wonder if Bill Gates ever codes anything anymore. I emailed him to ask once
but never got a reply.

Tim Sweeney is a billionaire and still knee deep in code.

It's Saturday night here, and I'm going to go write some code. Unproductive,
unprofitable, beautiful game engine code.

Hope all you other hackers get up to something interesting this weekend.

~~~
p4bl0
> regardless of how wealthy he becomes

It may be the reverse that happens here: he is wealthy enough to have time to
play with Lisp.

~~~
braythwayt
There are a LOT of things like that. Travel is a good example.

For people making a living wage, some think “I don’t have the money to
travel.” Then when they have the money, they think “I don’t have the time to
travel.” They don’t really travel until they retire, and then they have
trouble going anywhere that isn’t friendly to their knees and faltering
eyesight, so they buy an RV.

Others travel in college from hostel to hostel. When they get work, they
negotiate longer vacations to travel, or they deliberately become giggers so
they can travel.

I met a couple in Thailand who climbed: they worked five months a year around
the clock as nurses, and climbed seven months a year, literally everywhere.
They lived on the cheap so they would have more climbing days.

I have similar stories about people who ride bicycles and dive. If you are
passionate, don’t wait for some magical day when you can fly the Concorde to
go climbing in Europe with your friends (true story about some Yosemite
dirtbags who came into illicit cash).

If you’re passionate about a thing, go do that thing. Now.

~~~
badfrog
> they negotiate longer vacations to travel

Does that work? I've tried to negotiate more vacation days with every job
offer I've received, and it's never worked.

~~~
tbyehl
Coming up on my 4-year anniversary at a company, having just returned from 2
years in a foreign office, I told my Director I'd like a 3rd week of vacation
and no more money. Just treat me like I'd been working here an extra year.

He got me a $5,000 raise, no extra vacation time. He said he did his best, and
he was the type of person I'd absolutely believe.

Ever since then, in my mind, a vacation day has to be worth more than $1,000.
Simple economics, right? Somehow giving me $5,000 maximizes shareholder value
more than giving me 5 extra days off.

My next job was more generous with vacation. They used the accrual method with
a maximum and there was an extended period where it had become difficult to
take time off and they'd stopped putting our PTO balance on our paystubs... so
at some point I track down where to find the info and realize I'd lost 10 days
of vacation accrual.

I about lost my mind and HR couldn't understand why I felt like they'd cheated
me out of more than $10,000.

Felt trapped in the job forever 'cause eventually I'm earning I think 27
vacation days per year, and 10 to start is pretty typical, maybe 15 if you're
lucky... And as much as we might like to think everything is negotiable,
reality on the ground is that very little besides salary can be successfully
negotiated.

~~~
goatherders
I've negotiated with dozens of people who favor vacation over salary increase
and I'm glad to give the extra time off to them. It's definitely negotiable.

~~~
badfrog
Are you hiring?

------
arketyp
How does this relate to Arc?

~~~
pg
The code I wrote to generate and test the Bel source is written in Arc, and
Bel copies some things from Arc. Otherwise they're separate.

~~~
_emacsomancer_
How is it an improvement over Arc? What issues does Arc have that Bel
solves/addresses?

~~~
pg
In the same way it's an improvement over other Lisp dialects. There's no huge
hole in Arc that Bel fixes. Just a lot of things that are weaker or more
awkward or more complicated than they should be.

------
alfiedotwtf
I remember reading pg's article on Lisp and startups, and at the time
questioned if it was just mere luck. Then having a cursory look at Lisp, I
questioned its relevance to the modern world.

... fast forward a decade later and I'm reading books on functional and
logical languages for work. After the first chapter of The Little Schemer, I
was at first _blown away_ by the content, but then sad that I had put off
reading it until so late in my life.

If you're reading this comment and thinking "Lisp? What's the point?", take a
deep dive. If you're still questioning why, I highly encourage you to read The
Little Schemer (and then all the others in the series). Scheme, Lisp, and now
Bel are a superpower... pg's article was spot on.

~~~
chrismorgan
You and I both like Rust. Concrete performance is one key reason why I choose
to use Rust for various things, and all Lisps I know of are just generally
slower and use more resources; I’m not aware of any Lisp attempting to address
this problem, and from what I _do_ understand of Lisps (though I’ve never
seriously coded in one) they seem at least somewhat incompatible with such
performance. I’m interested in whether you have any remarks on this apparent
conflict of goals.

(I’d love to be credibly told I’m completely wrong, because syntax aside,
which I could get used to eventually, I rather like the model of Lisps and
some of the features that supports.)

~~~
iamwil
There's a post about the three tribes of programming.
[https://josephg.com/blog/3-tribes/](https://josephg.com/blog/3-tribes/)

    
    
        Tribe 1: You are a poet and a mathematician. Programming is your poetry
        Tribe 2: You are a hacker. You make hardware dance to your tune
        Tribe 3: You are a maker. You build things for people to use
    

alfiedotwtf falls in the first tribe. You fall in the 2nd. Different tribes
have different values, and hence all the back and forth.

~~~
dkarras
What if I'm kind of all of them? Eternal damnation?

I've always strived to write beautiful code - for things people find very
useful - in ways that push boundaries for what people think is possible with
the machines we use.

I have succeeded in balancing any two of the above, to the terrible detriment
of the third. I've never been able to juggle all three at the same time, and
this bothers me a lot. The upside is that this pushes me to learn a lot; the
downside is that I'm never
content with my craft.

I'd have loved to solve important domain-specific problems with a
domain-specific language I craft with a Lisp, and have state-of-the-art
performance while doing so.

------
excessive
This one seems backwards to me:

    
    
        (2 '(a b c))
    

In addition to being data structures, I like to think of lists/arrays as
functions which map integers to the contents. This nicely generalizes to hash
tables / associative arrays, and then further to actual functions. If that's
all reasonable, then

    
    
        ('(a b c) 2)
    

is the right order for application.

However, maybe pg is just thinking of 2 as a shorthand for cadr or similar.

~~~
pg
Initially I would have preferred that. I did it that way in Arc. But since
functions are lists in Bel, I couldn't do that, or you wouldn't be able to
call a function on a number.

As often happened with things I was forced into, though, I not only got used
to putting numbers first but started to prefer it. It means for example you
can compose them with other callable things.
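
For example (a sketch using the colon compose syntax):

    
    
      > (2:cdr '(a b c d))
      c
    

Here the index 2 composes with cdr just as any function would.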

~~~
e12e
Am I reading that right, that:

    
    
      (2 '(a b c))
    

Is equivalent to:

    
    
      (second '(a b c))
    

And that would work for strings as well:

    
    
      (5 "Hello, world!")
      > "o"
    

Which in turn means 'car and '1 are equivalent? (Probably means 'car should be
thrown out, because surely '1 is clearer?)

Ed: and with some notation to differentiate "element at N" and "tail behind
N" you could get even more mileage out of integers? And then generalize to
lists of lists to reference elements and sub-sections (sub-dimensions, like
cubes) of matrices?

Not sure what would be nice, perhaps star or ellipsis?

    
    
      (1... '(a b c))
      >'(b c)
    
      ('(1..) '(0..2)
        '(
          (a b c)
          (d e f)))
      >'(
        (b c)
        (d f))
    

Or something?

~~~
pg
It would not be clearer to use 1 instead of car when you were using a pair to
represent a tree, rather than a list, and you were traversing the left and
right branches using car and cdr.
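
For example (a sketch, using atom as defined in bel.bel):

    
    
      (def leafcount (x)
        (if (atom x)
            1
            (+ (leafcount (car x))
               (leafcount (cdr x)))))
    

Here each car is a left branch rather than a first element, so calling it 1
would mislead.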

~~~
e12e
Yes, I suppose that's something I've never grown quite used to with lisp -
that it's lists (as a special form of "leaning" trees) and trees - not arrays
and matrices.

I suppose even proper lists are incidental - it's really all about trees (and
also parse trees).

------
shpx
If Bel doesn't have irrational numbers, how do I use pi? Define a function
that returns an approximation to a given precision?

~~~
pubby
I'm curious how you use irrational numbers in any language. I'm not aware of
any mainstream languages that support them.

~~~
AYBABTME
Go does

~~~
jhoechtl
You probably confused irrational numbers with complex numbers?

~~~
AYBABTME
That's correct

------
pwpwp
No fexprs?
[https://web.cs.wpi.edu/~jshutt/kernel.html](https://web.cs.wpi.edu/~jshutt/kernel.html)

------
mark_l_watson
I like the syntax for function chaining: (dedup:sort < "abracadabra")

I assume that dedup and sort are separate functions.

------
imdhmd
I am unable to understand the difference between (a b c) and (a b . c)

~~~
SamReidHughes
(a b c) is (a b c . nil). That is, (a . (b . (c . nil))). On the other hand,
(a b . c) is (a . (b . c)). (Edit: rewrote post)

~~~
imdhmd
() is equivalent to an empty list and nil according to this notation, in which
case both would be proper lists. But according to pg, in Bel, (a b . c) is not
a proper list.

Edit: I think I understand now, hmm.

~~~
erik_seaberg
Think of a proper list like a degenerate tree with all the values on the left
and structure on the right. A dotted list puts the last value on the right,
where nil would usually be.

Dotted lists are pretty rare. They mostly show up in association lists, where
key/value pairs are stored as

    
    
      ((name . dave) (type . user))
    

and adding a pair to the front of the list shadows any other pairs with the
same key.

------
applecrazy
I can't say I'm familiar with Lisp (or its dialects).

How does one get started learning a Lisp variant (in terms of learning
resources/guides), and why use Lisp over other languages?

~~~
cylinder714
As I understand it, Scheme was intended to be a dialect suitable for teaching
programming. The usual first texts are _The Little Schemer_ , _How to Design
Programs_ ([https://htdp.org/](https://htdp.org/) but apparently a third
edition is forthcoming:
[https://felleisen.org/matthias/HtDP3e/index.html](https://felleisen.org/matthias/HtDP3e/index.html))
and the inestimable SICP:
[http://sarabander.github.io/sicp/](http://sarabander.github.io/sicp/) (HN
discussion:
[https://news.ycombinator.com/item?id=13918465](https://news.ycombinator.com/item?id=13918465))

"A bad day writing code in Scheme is better than a good day writing code in
C." —David Stigant

In another man's opinion, "Common Lisp is the best language to learn
programming":

[https://oneofus.la/have-emacs-will-hack/2011-10-30-common-
li...](https://oneofus.la/have-emacs-will-hack/2011-10-30-common-lisp-is-the-
best-language-to-learn-programming.html)

Another resource for learning Common Lisp is Stuart C. Shapiro's _Common Lisp:
An Interactive Approach_ :
[https://cse.buffalo.edu/~shapiro/Commonlisp/](https://cse.buffalo.edu/~shapiro/Commonlisp/)

~~~
sedachv
Harvey and Wright's _Simply Scheme_ is another excellent book that does not
get enough mention.

------
neya
Welcome back Paul :) Great to hear from you again. I almost thought you were
too busy for HN since you became less active here; it's really nice to see you
again!

------
ajju
It’s great to see a technical post from pg after a while!

------
repolfx
There's a typecheck function in the standard library, but with no
documentation in the Bel source anywhere it's hard to know what it does.

For people like me who want static typing and a powerful type system to catch
errors, does lisp have anything to offer? My understanding is that it predates
decent type systems and lisps will always be genealogically much closer to
Python than, say, a Haskell or even a Kotlin.

~~~
pg
What this

    
    
      (def typecheck ((var f) arg env s r m)
        (mev (cons (list (list f (list 'quote arg)) env)
                   (fu (s r m)
                     (if (car r)
                         (pass var arg env s (cdr r) m)
                         (sigerr 'mistype s r m)))
                   s)
             r
             m))
    

says is, first create a function call

    
    
      (list f (list 'quote arg))
    

in which the function describing the type (e.g. int) is called on the argument
that came in for that parameter. Its value will end up on the return value
stack, r. So in the next step you look at the first thing on the return value
stack

    
    
      (car r)
    

If it's true, you keep going as if the parameter had been a naked one, with no
type restriction

    
    
      (pass var arg env s (cdr r) m)
    

and if it's false, you signal an error

    
    
      (sigerr 'mistype s r m)
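    

So a definition with a type-restricted parameter, e.g. (my example, assuming
the (t var f) parameter syntax from the guide)

    
    
      (def inc ((t x int)) (+ x 1))
    

means (inc 3) evaluates (int (quote 3)), finds t on the return value stack,
and passes, while (inc 'a) finds nil there and signals mistype.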

------
jmccarthy
Open to sharing a bit about the name? No mention of it in the guide.

~~~
cperciva
"Characters that aren't letters may have longer names. For example the bell
character, after which Bel is named, is

\bel"

~~~
jmccarthy
There it is! Thanks.

~~~
tim333
Ah - brings back memories of the old teletype I first used that had an actual
mechanical bell that rang if it got that. Similar to
[https://trmm.net/Model_ASR33_Teletype](https://trmm.net/Model_ASR33_Teletype)

------
yellowapple
At the risk of some bikeshedding:

    
    
        The name "car" is McCarthy's. It's a reference to the architecture of 
        the first computer Lisp ran on. But though the name is a historical 
        accident, it works so well in practice that there's no reason to 
        change it.
    

While I understand the rationale here, would this not have been a good
opportunity to encourage something a bit less vestigial than "car" and "cdr"?
Say, "head" and "tail"? Or "left" and "right"? A lot of things come to mind
when seeing a bunch of cars and cdrs and such everywhere in Lisp code, and
"works so well in practice" ain't exactly one of them, IMO.

~~~
pg
As names, car and cdr are great: short, and just the right visual distance
apart. The only argument against them is that they're not mnemonic. But (a)
more mnemonic names tend to be over-specific (not all cdrs are tails), and (b)
after a week of using Lisp, car and cdr mean the two halves of a cons cell,
and languages should be designed for people who've used them for more than a
week.

~~~
yellowapple
> more mnemonic names tend to be over-specific

I feel like this applies just as much (if not more so) to calling something
the "contents of the address part of the register" or "contents of the
decrement part of the register", especially when in actuality the "car" and
"cdr" of a cons cell are implemented in a way that has nothing to do with an
IBM 704.

> short, and just the right visual distance apart

Even shorter (and similar visual distance apart) would be cl and cr (for "cons
left" and "cons right", i.e. the car and cdr, respectively). Or pl and pr if
we swap "cons"/"cell" for "pair". Like car and cdr, these can be combined into
other operations, like (cllllr foo) -> (cl (cl (cl (cl (cr foo))))). Heck, we
could go even shorter with just "l" and "r". They're even kinda pronounceable
("cull", "curr", "cullullullullurr"). Literally all the upsides of "car" and
"cdr" without any historical baggage.

Point being, if Bel is supposed to be a reconceptualization of Lisp, it feels
really weird to not reconceptualize how we talk about cons cells and the
contents thereof.

> after a week of using Lisp, car and cdr mean the two halves of a cons cell,

This could be true for any chosen terminology here. Unless you meant Lisp in
general and not this particular dialect, in which case there are
counterexamples to that (namely: Clojure, last I checked).

------
shrubble
Will someone please teach pg some APL? It seems to be the missing piece of
what he is working towards.

~~~
dang
Please say more. What would you bring from APL into Lisp?

~~~
shrubble
If Lisp is meant to be a formal model of computation ( a quote from the Bel
article by pg), then other formal models of computation should also be
examined; perhaps 'compare and contrast' of two different perspectives will
lead to a new thought or even a sort of amalgamation.

Lisp and Bel's basic data structure is the list. APL's is the array, which can
be seen as a list or in multidimensional form as a list of lists.

Lisp uses a form of notation that is quite different from the APL notation.

If there are advantages to both notations then it makes sense to study both of
them.

Bel's description of functions might benefit from looking at APL's niladic,
monadic and dyadic functions, as another example.

------
daoudc
I love the idea of trying to build a mathematically pure language. I wonder
how far it is possible to use optimisation techniques to make Bel efficient.
For example, can the representation of strings as pairs be automatically
optimised to a sequence of characters?

~~~
pg
One thing you can say for Bel for sure is that it will give implementors lots
of opportunities to develop new optimization techniques. That's partly
humorously euphemistic, but also true.

------
eggsyntax
There are two things I found confusing in the first part of the guide (ie the
part up to "Reading the Source"). The first is 'where', which is addressed in
another thread on this page. The second is this:

    
    
      This is a proper list:
      
      (a b c)
      
      and this is not:
      
      (a b . c)
    

Could someone clarify how to interpret '(a b . c)'? How would it be
represented in the non-abbreviated dot notation for pairs? It's not '(a . (b .
(c . nil)))' -- is it '((a . b) . c)'? The only Lisp I'm fluent in is Clojure,
so I'm not used to the dot notation; otherwise this might be obvious.

~~~
hollerith
(a b . c) is (a b) with the final nil replaced with c.

In other words, it is (a . (b . c)).

~~~
eggsyntax
Thanks, that clarifies it completely!

------
segmondy
Very nice, for both the depth and the quality. This could pass for a very good
grad school project. pg, how much effort did it take to work out the axiomatic
approach used by previous formal methods? And how long, and how much effort,
did it take to produce this?

------
kizer
O GLORY DAY, OUR PAUL HATH RETURNED! Also, is Lisp just syntactic sugar over
lambda calculus? All the Lisp people (i.e., the authors of books I've skimmed
on Lisp) worship it, but it's simply an encoding of an n-ary tree, no? I know
you can do elegant things like define Lisp in Lisp and the whole homoiconicity
thing, but don't these endless opportunities stem from n-ary trees? My point is
that of course Lisp can do this and that, if it's simply a representation of a
tree: trees, as models, capture virtually all structures of information.

------
orthoxerox
Why does apply evaluate to itself instead of (lit prim apply)?

~~~
pg
Because it's not a primitive. It's handled by a special case in the
interpreter,

    
    
      (= f apply)    (applyf (car args) (reduce join (cdr args)) a s r m)

rather than being something whose behavior you have to assume.

------
mrburton
PG is still alive and coding! :)

------
vessenes
Paul, this is really nice - I’m glad to see you doing some research about
stuff you clearly love.

Other than because you want to, do you have any sort of longer-form apology
for Bel worked out that you want to share?

I ask because I’m curious how you’re thinking about play, work and legacy at
this point in your career.

~~~
pg
I talk about this in the first section of The Bel Language:
[http://paulgraham.com/lib/paulgraham/bellanguage.txt](http://paulgraham.com/lib/paulgraham/bellanguage.txt)

~~~
vessenes
I did read that before I asked, I promise :)

I was curious more about how you think about your own time at this phase in
your career -- are you mostly 'playing'? Is this "serious play"? Is it
motivated by anything beyond personal interest?

I ask because I'm monitoring my own projects and time commitments more
seriously as I hit my mid 40s and trying to make sense of how and when I've
had the best impact, globally, personally, to my own happiness, etc.

One of the keys for me seems to have been to pay attention to what
interests/intrigues me, and I'm curious what your experience is on that front
as well -- you have a pretty unique amount of experience assessing and
watching companies that have in some cases made a major amount of change in
the world.

Anyway, I guess I'm just asking for a sort of 'mid career thoughts' essay from
you, or at least wondering if you're thinking much about it.

~~~
pg
This was something I'd meant to do for a long time, and wanting to work on it
was one of the reasons I retired from YC. Being overtly ambitious would have
provoked haters, but few will see this thread now, so I'll tell you: the goal
was to discover the Platonic form of Lisp, which is something I could always
sense lurking beneath the surface of the many dialects I've used, but hidden
by mistaken design choices. (T was probably the best in this respect.)

I don't know how much my experience translates to other people, because my
"career" has been unusually random, but when I retired from YC what I was
thinking was that at 49, if there was something I'd been meaning to do, I'd
better do it.

------
ericd
I haven't had a chance to dig in yet, just wanted to say congrats on getting
it out into the world!

------
quickthrower2
Seems a bit closer to the lambda calculus than most lisps. Building stuff up
from such primitive types.

------
gdubs
Aside: Nice to see you here again, pg.

------
iamwil
As for the name, I figured it has a couple connotations.

AT&T's Bell labs.

Belle is French for beautiful.

Bel is the 7th ASCII character, and it used to ring an electromechanical bell
in computers, before they had speakers or sound cards.

B is the letter after A, with which Arc, his first Lisp variant, was named.

And both have three letters.

~~~
stevelosh
> For example the bell character, after which Bel is named, is \bel

------
QueensGambit
This seems like a DSL masking itself as a programming language. Will this
axiomatic approach allow declaration of domain specific operators like
credit/debit? Is this an attempt to make application development axiomatic?

------
est31
The source is hosted on the Yahoo CDN. Interesting.

~~~
tosh
I always wondered about the back story.

I guess Yahoo Store & migrating away from Yahoo infrastructure?

Who owns the domain now? pg?

~~~
the_watcher
It was built using Yahoo Store, which is what Viaweb became when Yahoo
purchased it. He just continues to use it.

------
e12e
So, how does one bootstrap the interpreter?

Is it available as a Racket "language" or is it compatible with most/some
lisps/schemes?

------
juskrey
Have you considered posting this on HN under some pseudonym (including an
original website, etc.) to make the experiment more... interesting?

------
idm
Hi Paul - looks interesting!

What is the license? My presumption is that you intentionally did not embed a
license within either of the documents.

------
fao_
    
    
        > (dedup:sort < "abracadabra")
        "abcdr"

I really like this format. I think Clojure has something similar?
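
(For anyone wondering about the colon: in Bel, a:b reads as (compose a b), so
the call above is shorthand for the longhand below. A minimal sketch based on
the language guide; output shown as I'd expect it.)

    
    
        > ((compose dedup sort) < "abracadabra")
        "abcdr"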

------
jxub
Hyped to have you back in here pg :)

------
machawinka
There is no mention of licensing anywhere. Is it freely
distributable/modifiable?

------
Liron
How'd you pick the name?

------
z3phyr
I can't find the definition of id in the bel.bel file. Did I miss it? @pg

------
rayalez
Wow! So cool to see a new post from PG!

Really curious to see what this project will become.

------
varjag
Is 'sys' really necessary, when you already have streams?

------
dfischer
Just when I was getting curious about learning some Lisp. I was also saddened
by how much it's dropping on
[https://www.tiobe.com/tiobe-index/](https://www.tiobe.com/tiobe-index/)

~~~
erik_seaberg
“Lisp doesn't look any deader than usual to me.”—David Thornley via
[http://www.paulgraham.com/quotes.html](http://www.paulgraham.com/quotes.html)

~~~
dfischer
The actual graphs on the TIOBE index empirically show it's getting deader.

~~~
badcede
Lisp was around decades before "Tiobe", and will be for decades after.

~~~
dfischer
That wasn’t the point. It’s losing popularity. It’s been in a downtrend that
has never recovered. It’s only getting more and more esoteric.

Maybe it’ll come back in fashion one day.

------
SudoNhim
Exciting! This has a lot in common with Nock/Hoon

------
georgeEsb
Being so simple, it reminds me of RISC.

------
machawinka
Wonder why `no` instead of `not`.

~~~
pg
Because falsity is also the empty list.
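
For reference, the definition in bel.bel is just an identity test against
nil; quoting roughly from the source:

    
    
        (def no (x)
          (id x nil))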

------
netcan
Welcome back pg.

I hope it was a pleasant return.

------
codr7
I just don't think yet another Lisp is going to do it, and I've written a few
[0].

Lisp is fine for what it is, one extreme of the spectrum and a child of its
time; but far from the final answer to anything.

We would be better off triangulating new ideas than polishing our crufty
icons. Lisp was a giant leap, hopefully not the last.

[0] [https://github.com/codr7/g-fu](https://github.com/codr7/g-fu)

~~~
codingdave
> I just don't think yet another Lisp is going to do it

Going to do what? I wasn't aware there was a specific goal that had to be met
here?

~~~
soulofmischief
Hell, if we're talking about the next emergent layer of software, autonomous
code / artificial intelligence, I'd say Lisp has a better shot than most
languages. Performance-critical code can still be refactored into C, but Lisp
has much more built-in capability for genetic programming and reflection.

~~~
codr7
Lisp has had a shot since 1958; somewhere along the way it even made sense.

------
m0zg
Wish I could have Lisp without all the parentheses. Parentheses just sour the
experience for me.

~~~
dang
The parens are the shadow cast by Lisp's consistency, which is the reason to
wish for Lisp in the first place.

The solution is not to look at the parens. It's a bit like not listening to
tinnitus, which is harder for some than for others. Tools help.

~~~
m0zg
I get that, but I can't help but view them as visual noise.

~~~
capableweb
On the other hand, other languages use a bunch of different characters, all
with different meanings, that have to appear in different places depending on
lots of things.

In Lisp, you just have () and they are always in the same place, meaning the
same thing.
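
A quick Bel-flavored sketch of that uniformity (average here is a
hypothetical function, just for illustration):

    
    
        (def average (x y)     ; a definition is a list
          (/ (+ x y) 2))       ; arithmetic is a list
    
        (if (> x y) x y)       ; control flow is a list too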

~~~
breck
This is a good point, in that Lisp is better than the other languages. But why
not go all the way, and drop the parens? If the goal of Bel is to be a "good
language", with shorter programs, why oh why not also strive for shorter
syntax?

I love Lisp and have gotten more good constructive feedback from the Lisp
community than anywhere else (with the Haskell community being a close
second), but I don't understand why there have been only a dozen or so
serious attempts to do Lisp without parens (I-Expressions being the best so
far, better than the subsequent Wisp, etc.). There should be 1,000x as many.

This should be the top priority of most Lisp researchers. I of course think my
Tree Notation is the solution, but I could easily be wrong, and there could be
a better way, but the () need to go!

~~~
lispm
> But why not go all the way, and drop the parens?

Because Lisp is not about 'parens'; it's that code is written in a serialized
form of Lisp data, so it can be handled both as externalized textual data and
as actual data (by calling functions which work directly on this data).
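
A minimal sketch of what that means in practice (Bel syntax; the printed
results are illustrative):

    
    
        > (set code '(+ 1 2))         ; a piece of code, held as a list
        (+ 1 2)
        > (car code)                  ; take it apart like any other list
        +
        > (cons (car code) '(3 4))    ; build new code out of old code
        (+ 3 4)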

> dozen or so serious attempts to do Lisp without parens

MLISP, Lisp2, Logo, ML, Clisp, Dylan, RLISP, SKILL, CGOL, Julia, ...

> This should be the top priority of most Lisp researchers.

There are many more interesting areas for progress. A Lisp without
s-expression-based syntax is not a main Lisp and will form a new language
group - which has happened multiple times in the past - but it won't replace
the main core Lisp languages.

~~~
breck
Good references, thanks. I took a fresh look at CGOL and was again impressed.

> Because Lisp is not about 'parens',

Exactly, so why keep them around?

When you ditch the parens, things become easier. You no longer start your
programs with a syntax error. You no longer even have syntax errors. Program
concatenation is easier. Dictation of programs is easier: you speak "add 2 3"
instead of "open parens add 2 3 close parens". Program synthesis is easier....

> A Lisp without s-expression-based syntax is not a main Lisp and will form a
> new language group

All I'm saying is put S-Expressions into normalized form (indented cleanly),
drop the parens, and voila, everything can still work, and you've reduced
things to their simplest form.

~~~
lispm
> Exactly, so why keep them around?

Because Lisp source code is written in a data structure of nested lists, and
nested lists are written as s-expressions with parentheses. The same Lisp code
can then be executed by an s-expression-based interpreter.

> When you ditch the parens

Again, the parens are not what Lisp is about, it is the nested lists as source
code: internally and externally. It just happens that the lists are written
with ( and ).

What you propose is essentially not just getting rid of 'parens', but dropping
the source code idea based on explicit list representation.

> , things become easier.

Code generation, manipulation, and transformation become harder.

> You no longer even have syntax errors.

How so?

> Program concatenation is easier.

How so? Lisp actually makes many forms of source code transformations easy.

> All I'm saying is put S-Expressions into normalized form (indented cleanly),
> drop the parens, and voila, everything can still work, and you've reduced
> things to their simplest form.

But then we no longer have the 'Lisp code is explicitly written in a list-
based data structure' idea, which is very powerful and still relatively
simple.

Languages with syntax based on textual representations already exist in
abundance, and there is a place for a programming language which works
slightly differently.

It's possible to drop the explicit s-expression syntax, as has been
demonstrated many times over history, but then the language has a different
look and feel. It becomes something different and loses basic Lisp features,
or makes them considerably harder to use.

One can ask 'why keep the wheels around?' We can do that, but either the car
won't drive very well or one would transform it into something else: a boat, a
plane, a sled... It would lose its 'car nature'.

~~~
breck
> What you propose is essentially not just getting rid of 'parens', but
> dropping the source code idea based on explicit list representation.

No, I keep the nested list representation of data (as does I-Expressions:
[https://srfi.schemers.org/srfi-49/srfi-49.html](https://srfi.schemers.org/srfi-49/srfi-49.html)),
you just ditch the enclosing parens in favor of whitespace (or, to be more
precise, in favor of 3 syntactic tokens: atomBreakSymbol, nodeBreakSymbol and
edgeSymbol; by convention space/" ", newline/"\n", and space/" ").
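
To make the difference concrete, here's a small function in both forms. The
s-expression version is ordinary Bel; the indented version is my sketch of
SRFI-49-style I-Expressions applied to it, not something Bel itself supports:

    
    
        (def fac (x)              ; s-expression form
          (if (= x 0)
              1
              (* x (fac (- x 1)))))
    
        def fac (x)               ; indentation form: nesting by layout
          if (= x 0)
            1
            * x (fac (- x 1))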

>> You no longer even have syntax errors.

>How so?

See for yourself:
[https://jtree.treenotation.org/designer/](https://jtree.treenotation.org/designer/).
Try to generate a syntax error; it's impossible, for the same reason you
don't have syntax errors in binary notation. I think this is a very, very
important hint that something is going on here that ties into something in
nature (I don't know what that is, but it seems like there is a fancy prize
for the person who can explain it in mathy terms). However, using the
openParenSymbol and closeParenSymbol style of delimiters, you do have syntax
errors: unbalanced parens.

>> Program concatenation is easier.

> How so?

This one is mildly easier, but comes up all the time. We have a Tree Language
called Grammar for building other Tree Languages (and yes, Grammar itself has
an implementation in Grammar). We have a Tree Language called Hakon that
compiles to CSS. And we have one called Stump that compiles to HTML. Want to
build a new language called "TreeML" that has both? Just `cat hakon.grammar >
treeml.grammar; cat stump.grammar >> treeml.grammar` and you are just about
done (just concat your new root node). You don't have to worry about adjusting
any parens at the tails. A minor improvement, but lots of little things like
that add up.

> but then the language has a different look and feel. It becomes something
> different

This might be true. It seems you are taking the stand that Tree Notation is
something different from Lisp (which I actually lean toward agreeing with),
while most lispers have given me the flippant response "congratulations!
you've reinvented lisp/s-expressions". I think both arguments have merit. We
will see where it goes; although the differences from parenthesized
s-expressions are slight, perhaps there will always be a separate niche for
parens Lisp.

~~~
lispm
> I-Expressions

That's not an EXPLICIT list representation, where the nesting is notated by
explicit characters.

It's also harder to use in interactive interfaces like REPLs, debuggers, etc.,
where Lisp lists don't need to have vertical layout.

> you do have syntax errors--unbalanced parens.

If you use a structure editor for Lisp, you don't have unbalanced
parentheses. In editors with support for s-expressions, unbalanced parentheses
are basically a non-issue.

> cat hakon.grammar

Yeah, Lisp works very differently. It does not use grammars like that. Lisp
uses procedures in readtables to read s-expressions. Parsing the code is then
done by the evaluator traversing the source or by the compiler traversing the
source, with an integrated source code transformation phase (macro expansion).

We load a bunch of macros into Lisp and they then are available incrementally.

There are syntax-based structure editors (for example they were in
Interlisp-D), but they always have limits, since macros can do arbitrary
source transformations on s-expressions and those are not based on grammars.

[https://www.youtube.com/watch?v=2qsmF8HHskg](https://www.youtube.com/watch?v=2qsmF8HHskg)

You are really looking for a very different language experience. Lisp with
s-expressions works very differently from what you propose.

That's why I say: it's not about dropping parentheses, you propose to get rid
of some of the core parts of Lisp (how programs are represented, parsed, etc.)
and give it a different user interface. That's fine, but it is no longer Lisp
like we know it.

Other syntaxes, and IDEs for them, have been built several times on
Lisp-based language implementations. For example, the SK8 multimedia
development tool from Apple implemented something like AppleScript on top of
Lisp. The multimedia applications were implemented in an AppleScript-like
language (actually it was kind of the first AppleScript implementation) with
an IDE for it - implemented in Lisp.

[https://opendylan.org/_static/images/sk8.jpg](https://opendylan.org/_static/images/sk8.jpg)

Apple Dylan had an IDE written in Lisp for infix Dylan.

[https://www.macintoshrepository.org/_resize.php?w=640&h=480&...](https://www.macintoshrepository.org/_resize.php?w=640&h=480&bg_color=333333&imgenc=ZmlsZ56cXMvbWcvc2l0ZXMvbWcvZmlsZXMvc2NyZWVuc2hvdHMvYnJvd3NlcnMxXzEucG5n)

[https://www.flickr.com/photos/nda/4738803235](https://www.flickr.com/photos/nda/4738803235)

[http://www.dylanpro.com/picts/dylanProjectBrowser.gif](http://www.dylanpro.com/picts/dylanProjectBrowser.gif)

[https://pbs.twimg.com/media/D0KY_rGV4AAJfxH.png](https://pbs.twimg.com/media/D0KY_rGV4AAJfxH.png)

The original dream of Lisp was to have Algol-like M-expressions and use
s-expressions only for data inside M-expressions. But the internals of Lisp
were implemented with s-expressions, and M-expressions were manually
translated into s-expressions. Breaking into a Lisp execution and looking at
the interpreter state then revealed the s-expression representation of code
to the developer.

Then the cat was out of the bag...
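
The flavor of the difference, roughly (the M-expression details are
reconstructed from memory of McCarthy's notation, so treat them loosely):

    
    
        car[cons[A; B]]          ; M-expression: brackets and semicolons
        (car (cons 'a 'b))       ; the s-expression equivalent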

~~~
breck
> That's fine, but it is no longer Lisp like we know it.

I think this is fine, and now I can direct future commenters who tell me we've
just "reinvented lisp" to your thread.

Really helpful links, and I am looking forward to watching that youtube video
later this week when I have a moment. Thanks so much for all the information.

------
robobro
How do I install this?

------
vincent-toups
Seems goofy, but ok. I'll just use Scheme.

------
intricatedetail
Bit Edgy Language? Nice work!

------
dang
Some users are reporting that the links don't show up on some mobile browsers.
Pending a fix, here they are. (Edit: fixed now.)

The way Lisp began
[http://paulgraham.com/rootsoflisp.html](http://paulgraham.com/rootsoflisp.html)

A guide to the Bel language
[https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=157...](https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=1570864329&)

The Bel source
[https://sep.yimg.com/ty/cdn/paulgraham/bel.bel?t=1570864329&](https://sep.yimg.com/ty/cdn/paulgraham/bel.bel?t=1570864329&)

Some code examples
[https://sep.yimg.com/ty/cdn/paulgraham/belexamples.txt?t=157...](https://sep.yimg.com/ty/cdn/paulgraham/belexamples.txt?t=1570864329&)

------
markhowe
There’s no link styling, at least on mobile Safari, they’re kind of essential
to understand the article.

...sorry to be that guy.

~~~
dang
Pending a fix, here are the links. (Edit: should be fixed now.)

The way Lisp began:
[http://paulgraham.com/rootsoflisp.html](http://paulgraham.com/rootsoflisp.html)

A guide to the Bel language:
[https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=157...](https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=1570864329&)

The Bel source:
[https://sep.yimg.com/ty/cdn/paulgraham/bel.bel?t=1570864329&](https://sep.yimg.com/ty/cdn/paulgraham/bel.bel?t=1570864329&)

Some code examples:
[https://sep.yimg.com/ty/cdn/paulgraham/belexamples.txt?t=157...](https://sep.yimg.com/ty/cdn/paulgraham/belexamples.txt?t=1570864329&)

------
lostmsu
On mobile Firefox, links are indistinguishable from regular text.

The workaround is to view the full site (link at the bottom).

~~~
dang
Hopefully there will be a fix soon. In the meantime see
[https://news.ycombinator.com/item?id=21231416](https://news.ycombinator.com/item?id=21231416).

------
joak
I do not see any text file...

It must be hidden somewhere, or not showing in my browser (Firefox on Android
with ad blockers).

~~~
dang
[https://news.ycombinator.com/item?id=21231348](https://news.ycombinator.com/item?id=21231348)

------
kingofpee
I'm shocked this is Paul's first post since 2014.

I feel lucky now to be online ;)

------
1penny42cents
@pg: hyperlinks aren't visible on mobile web

------
foobar_
Almost every language has macros now. Lisp is pointless.

~~~
capableweb
Don't let the name "macro" fool you: macros in Lisp are completely different
from the macros in other languages.

In other languages, you're basically just outputting source code.

In Lisp, since the source code is actually the program, you can program the
program.

There is a good perspective from a Perl developer about Lisp macros here:
[http://lists.warhead.org.uk/pipermail/iwe/2005-July/000130.h...](http://lists.warhead.org.uk/pipermail/iwe/2005-July/000130.html)
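
As a concrete taste, here's a sketch of a when macro in Bel (the definition
in bel.bel may differ in detail): the macro receives the code of its body as
a list and returns new code.

    
    
        (mac when (test . body)
          `(if ,test (do ,@body)))
    
        ; (when (> x 0) (pr "positive") x)
        ; expands into:
        ; (if (> x 0) (do (pr "positive") x))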

------
whazor
Welcome back pg. You should check out JetBrains MPS; they are doing cool
stuff with projectional editing, which could mean no more brackets. I read via
Twitter that they announced MPS for Web in Amsterdam yesterday; unfortunately
I missed the conference because of my home situation :). Luckily they filmed
the presentations, so I will be waiting for them on YouTube.

------
wintorez
I have a hunch that this is going to become a big deal in the future. Not sure
why.

------
adreamingsoul
Instead of writing code on the weekend, I instead have been thinking about and
visualizing the code that I would like to write. That way I can use my hands
for other activities, like playing in the dirt.

------
Geee
paulgraham.com isn't served over HTTPS. The reason sites like this should be
secured is content-manipulation attacks: as it stands, I can't trust that the
Bel source code I see is actually what pg wrote.

~~~
segmondy
HTTPS doesn't guarantee that. Someone could manipulate the content on the
host.

