
I think the point of a high-level language is to make your programs shorter. All other things (e.g. libraries) being equal, language A is better than language B if programs are shorter in A. (As measured by the size of the parse tree, obviously, not lines or characters.) The goal of Bel is to be a good language. This can be measured in the length of programs written in it.

Lisp dialects have as a rule been good at making programs short. Bel is meant to do the same sorts of things previous dialects have, but more so.

It's also meant to be simple and clear. If you want to understand Bel's semantics, you can read the source.




Thanks for the response! Which features of Bel do you think will contribute the most to making programs shorter or clearer, compared to other Lisp dialects?


It's so broadly the goal of the language that many things do, often in small ways. For example, it turns out to be really convenient that strings are simply lists of characters. It means all the list manipulation functions just work on them. And the where special form and zap macro make it much easier to define operators that modify things. The advantage of Bel is a lot of small things like that rather than a single killer feature.
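
For readers who haven't used a strings-as-lists language, here is a rough sketch of what "the list manipulation functions just work" means in practice. It's in Haskell rather than Bel (Haskell's String is literally a list of Char), so it only illustrates the general idea, not Bel's actual operators:

    import Data.Char (toUpper)

    -- String is just [Char], so ordinary list functions apply to strings directly.
    shout :: String -> String
    shout = map toUpper                 -- map works on strings

    vowels :: String -> String
    vowels = filter (`elem` "aeiou")    -- so does filter

    main :: IO ()
    main = putStrLn (shout "bel" ++ " " ++ reverse "bel" ++ " " ++ vowels "onions")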

Making bel.bel shorter was one of my main goals during this project. It has a double benefit. Since the Bel source is a Bel program, the shorter I can make it, the more powerful Bel must be. Plus a shorter source is (pathological coding tricks excepted) easier to understand, which means Bel is better in that way too. There were many days when I'd make bel.bel 5 lines shorter and consider it a successful day's work.

One of the things I found helped most in making programs shorter was higher order functions. These let you get rid of variables, which are a particularly good thing to eliminate from code when you can. I found that higher order functions combined with intrasymbol syntax could often collapse something that had been a 4 line def into a 1 line set.
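
To make the higher-order-function point concrete, here is a hedged sketch in Haskell rather than Bel (ordinary composition stands in for Bel's intrasymbol syntax, and the names are invented for the example): the variables disappear when the definition becomes a pipeline of functions.

    -- Explicit version: intermediate variables are named and threaded by hand.
    countWordsVerbose :: String -> Int
    countWordsVerbose s =
      let ws = words s
          n  = length ws
      in  n

    -- Higher-order, point-free version: same behavior, no variables left to name.
    countWords :: String -> Int
    countWords = length . words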


> For example, it turns out to be really convenient that strings are simply lists of characters.

Statements such as these strike me as very academic and concerning when it comes to new languages, and they make me wary of whether the language will ever move out of the land of theory and into practice.

Continuing with the "strings are cons lists" theme... other very notable languages have tried this in the past: Erlang and Haskell immediately come to mind. And, without exception, they have ended up regretting that decision once the language started being used for real-world applications that require even a moderate level of performance.
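
To make the performance concern concrete, a small sketch in Haskell (one of the languages named; it needs the text package): a String is a linked list of boxed Chars, so every character costs separate heap cells and length is O(n), which is exactly why packed alternatives like Data.Text exist and are what production code reaches for.

    import qualified Data.Text as T

    -- [Char]: one cons cell plus a boxed Char per character; length walks the list.
    asList :: String
    asList = replicate 1000000 'x'

    -- Data.Text: a packed buffer, the usual choice once performance matters.
    asText :: T.Text
    asText = T.replicate 1000000 (T.singleton 'x')

    main :: IO ()
    main = print (length asList, T.length asText)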

Lisp programmers (among which I count myself) are very fond of pointing to new languages and identifying all their "features" that were implemented in Lisp decades ago (so not "new"). And they also bemoan when a language design doesn't take a moment to learn from the mistakes of those that came before them. Strings as lists is very much in the latter case.

The above said, the idea of streams as a fundamental type of the language (as opposed to a base class, type class, or what-have-you) is quite intriguing. Here's hoping they are more than just byte streams.


As I read about strings-as-lists, I tried to maintain sight of one of the premises of PG's exercise -- that the language is climbing a ladder of abstractions in the non-machine realm.

The reality of 2019 is that strings are not random access objects -- they are either empty or composed of the first char and the rest of the chars. A list is the proper primary abstraction for strings.

That is, if "list" is not a data structure but a mathematical concept -- based on the concept of "pair." If I were a Clojure or Swift programmer -- and I'm both, I'd say there are protocols that embody Listness and Pairness, and an implementation would have multiple dispatch that allows an object to deliver on the properties of those protocols. There are other fundamental concepts, though, that deserve inclusion in a language.

Is something suitable as the first argument of `apply`? Functions obviously are, but to Clojure programmers hash tables and arrays (and, to Schemers, continuations) are just as function- or relation-like as a procedure. This is so obviously a good idea to me that I am irritated when a language doesn't support the use of random-access or dictionary-like collections as function-like objects.

Which brings us to random-access and dictionary-like objects. Those should have protocols. And given that sets are 1) not relations and 2) incredibly important, a set-like (and perhaps a bag-like) protocol deserves support as well.

At minimum, a designer should think through the protocols they want to support in a deep way and integrate those into the language. Maximally, protocols would be first-class features of a language, which is what I'd prefer, because a protocol-driven design is so often so much more productive than an OO one.

PG's plans with respect to all of the above are what really interest me. IIRC Arc embodied some of these concepts (arrays-as-functions), so at the moment I'm content to watch him climb this ladder that he's building as he goes and see what comes of it.

[Edit: various typos and grammatical errors.]


I don't think they're claiming that strings should be random access. Rather, I think they're objecting to the notion that strings should be sequences at all, rather than scalars (a la perl).


Speaking of performance and streams, it seems streams are not even byte streams: they are bit streams (rdb and wrb).


> For example, it turns out to be really convenient that strings are simply lists of characters. It means all the list manipulation functions just work on them.

Why is it preferable to couple the internal implementation of strings to the interface “a list of characters”? Also, since “character” can be an imprecise term, what is a character in Bel?


> Also, since “character” can be an imprecise term, what is a character in Bel?

Exactly. How are Unicode and UTF-8 treated by this language?


While we're down here in the weeds[0]...

A good way to do it, IMO, is for each character to be a Unicode grapheme cluster[2].

0. "This is not a language you can use to program computers, just as the Lisp in the 1960 paper wasn't."[1]

1. https://sep.yimg.com/ty/cdn/paulgraham/bellanguage.txt?t=157...

2. http://www.unicode.org/reports/tr29/#Grapheme_Cluster_Bounda...


> A good way to do it, IMO, is for each character to be a Unicode grapheme cluster[2].

I would agree. However, for the sake of argument:

The width of a grapheme cluster depends on whether something is an emoji or not, which, for flags, depends on the current state of the world. The combining regional-indicator characters for the UK form one single grapheme cluster (a depiction of the UK's flag), which would stop making sense if the UK split up, for example. This is only one example; there are many others that demonstrate there is basically no 'good' representation of 'a character' in a post-UTF-8 world.
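
A tiny demonstration of the gap between code points and what a reader perceives (in Haskell; the flag is just the pair of regional-indicator code points, nothing language-specific):

    -- The UK flag emoji is two code points: REGIONAL INDICATOR SYMBOL LETTER G
    -- followed by LETTER B. As a [Char] its length is 2, even though it renders
    -- as one glyph; a grapheme-cluster-aware length would report 1.
    main :: IO ()
    main = do
      let flag = "\x1F1EC\x1F1E7"
      print (length flag)   -- prints 2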


At this point Bel doesn't even have numbers. A character is axiomatically a character, not some other thing (like a number in a character set encoding). Isn't it premature to talk about character set encoding or even representation yet? A character is a character because it's a character. But I haven't nearly read through these documents so perhaps I'm not understanding something.


As far as I know a "character" is a number in a character set. "A character is a character because it's a character" doesn't make any sense, at least not in the context of a programming language. A character is a character because it maps to a letter or glyph, and that requires a specific encoding.

The Bel spec contains the following:

    2. chars

    A list of all characters. Its elements are of the form (c . b), where
    c is a character and b is its binary representation in the form of a 
    string of \1 and \0 characters. 
In other words, even in Bel a character is still a "number" in a character set encoding. The "binary representation" isn't mentioned as being a specific encoding, which means it's assumed to be UTF-8 or ASCII.

Also, Bel does have numbers, apparently implemented as pairs. Go figure.
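
(For readers puzzled by "numbers as pairs": here is a hedged sketch in Haskell of how numbers can be built from nothing but pairs/constructors, Peano-style. This only illustrates the general idea, not Bel's actual encoding, which is described in the spec.)

    data Nat = Zero | Succ Nat

    toInt :: Nat -> Int
    toInt Zero     = 0
    toInt (Succ n) = 1 + toInt n

    add :: Nat -> Nat -> Nat
    add Zero     m = m
    add (Succ n) m = Succ (add n m)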


You're right that I need to review more of the Bel spec (which as you point out does in fact implement numbers).

But I am trying to read this in the spirit in which it's intended, as irreducible axioms. And so I don't think it's true that a character needs a representation from the perspective of Bel.

Consider if pg had approached this more like I would have, and started with integers rather than characters as a built-in. Does an integer need a "representation", at least in a language, like Bel, that isn't designed for executing programs? An implementation (with additional constraints like width, BCD vs. two's complement vs. whatever, radix, etc.) would need one, but the formal description would not.

Thanks for exploring the subject.


> You're right that I need to review more of the Bel spec (which as you point out does in fact implement numbers).

Can I ask why you claimed that it didn't have numbers or characters? It seems an odd thing to claim without actually having that knowledge in the first place.


To be fair, numbers aren't listed in the spec initially as one of its basic types (and they aren't, they're implemented as pairs, which are a basic type) and no one is disputing that Bel has characters, just whether it's appropriate to mention character encodings for what seems to be intended to be mostly an academic exercise.


That's actually an interesting point for a Unicode mailing list, but as long as I can farm grapheme-splitting out to a library, I'm happy.


> the shorter I can make it, the more powerful Bel must be

Yes, well, up to some limit, when the code becomes harder and harder, and eventually too dense and concise for humans to easily understand? :- )

I wonder if humans sometimes understand better, and read faster, with a little bit of verbosity. I sometimes expand a function-style one-liner I wrote into, say, three imperative lines, because that seems simpler to read the next time I'm at that place again. I suppose, though, this is a bit different from the cases you have in mind.

Best wishes with Bel anyway :- )


> One of the things I found helped most in making programs shorter was higher order functions. These let you get rid of variables, which are a particularly good thing to eliminate from code when you can. I found that higher order functions combined with intrasymbol syntax could often collapse something that had been a 4 line def into a 1 line set.

How do Bel's higher order facilities compare to Haskell's? E.g. https://wiki.haskell.org/Pointfree


I have to disagree; this is very clearly too simplistic. There are many dimensions in which a language can be better or worse. Things like:

* How debuggable is it?

* Do most errors get caught at compile time, or do they require that code path to be exercised?

* How understandable are programs to new people who come along? To yourself, N years later?

* How error-prone are the syntax and semantics (i.e. how close is the thing you intended to something discontinuous and wrong that won't be detected until much later, and that doesn't look much different, so you won't spot the bug)?

* How much development friction does it bring (in terms of steps required to develop, run, and debug your program) ... this sounds like a tools issue that is orthogonal to language design, but in reality it is not.

* What are the mood effects of programming in the language? Do you feel like your effort is resulting in productive things all the time, or do you feel like you are doing useless busywork very often? (I am looking at you, C++.) You can argue this is the same thing as programs being shorter, but I don't believe it is. (It is not orthogonal though).

* What is your overall morale about the code's correctness over time? Does the language allow you to have high confidence that what you mean to happen is what is really happening, or are you in a perpetual semi-confused state?

I would weigh concision as a lower priority than all of these, and probably several others I haven't listed.


One answer to this question (and an exciting idea in itself) is that the difference between conciseness and many of these apparently unrelated matters approaches zero. E.g. that all other things being equal, the debuggability of a language, and the pleasure one feels in using it, will be inversely proportional to the length of programs written in it.

I'm not sure how true that statement is, but my experience so far suggests that it is not only true a lot of the time, but that its truth is part of a more general pattern extending even to writing, engineering, architecture, and design.

As for the question of catching errors at compile time, it may be that there are multiple styles of programming, perhaps suited to different types of applications. But at least some programming is "exploratory programming" where initially it's not defined whether code is correct because you don't even know what you're trying to do yet. You're like an architect sketching possible building designs. Most programming I do seems to be of this type, and I find that what I want most of all is a flexible language in which I can sketch ideas fast. The constraints that make it possible to catch lots of errors at compile time (e.g. having to declare the type of everything) tend to get in the way when doing this.

Lisp turned out to be good for exploratory programming, and in Bel I've tried to stick close to Lisp's roots in this respect. I wasn't even tempted by schemes (no pun intended) for hygienic macros, for example. Better to own the fact that you're generating code in its full, dangerous glory.

More generally, I've tried to stick close to the Lisp custom of doing everything with lists, at least initially, without thinking or even knowing what types of things you're using lists to represent.


Most programming I do is exploratory as well. Sometimes it takes a couple of years of exploring to find the thing. Sometimes I have to rewrite pretty hefty subsystems 5-7 times before I know how they should really work. I find that this kind of programming works much better in a statically-typechecked language than it ever did in a Lisp-like language, for all the usually-given reasons.

I agree that there is such a thing as writing a program that is only intended to be kind of close to correct, and that this is actually a very powerful real-world technique when problems get complicated. But the assertion that type annotations hinder this process seems dubious to me. In fact they help me a great deal in this process, because I don’t have to think about what I am doing very much, and I will bang into the guardrails if I make a mistake. The fact that the guardrails are there gives me a great deal of confidence, and I can drive much less carefully.

People have expressed to me that functions without declared types are more powerful and more leveragable, but I have never experienced this to be true, and I don’t really understand how it could be true (especially in 2019 when there are lots of static languages with generics).


> Most programming I do seems to be of this type

> language in which I can sketch ideas fast

Can I ask, what are your programs about?

And sketching ideas? Is it ... maybe creating software models of how the world works, then inputting the right data and proceeding to simulate the future?


I'm still trying to sort this out, and I'm not sure I'll manage, so my apologies if it's a little confused.

I agree with this idea to a degree. However, there are limits to this "relation". Imagine a sophisticated piece of software that compresses program sources. Let's say it operates at the AST level and not at the byte level, since the former is a little closer to capturing software complexity, as you mentioned somewhere else. Now, I know that's almost the definition of a Lisp program, but maybe we can agree that 1) macros can easily become hard to understand as their size grows, and 2) there are many rituals programmers have in code that shouldn't get abstracted out of the local code flow (i.e. compressed), because they give the necessary context to aid the programmer's understanding, and the programmer would never be able (I assume) to mechanically apply the transformations in their head if there were literally hundreds of these macros, most of them weird, subtle, and/or unintuitive. Think of how gzip, for example, finds many surprising ways to cut out a few bytes by collapsing multiple completely unrelated things that only share a few characters.

In other words, I think we should abstract things that are intuitively understandable to the programmer. Let's call this property "to have meaning". What carries meaning varies from one programmer to the next, but I'm sure for most it's not "gzip compressions".

One important ingredient in a useful measure of "meaning" is likelihood of change. If two pieces of code that could be folded by a compressor are likely to change and diverge into distinct pieces of code, that is a hint that they carry different meanings, i.e. they are not really the same to the programmer. How do we decide if they are likely to diverge? There is a simple test: "could we write either of these pieces in a way that makes it very distinct from the other piece, and the program would still make sense?" As an aspiring programmer trying to apply the rule of DRY (don't repeat yourself), I noticed at some point that this test is the best way to decide whether two superficially identical pieces of code should be folded.
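
A tiny hypothetical example of that test (in Haskell; the names are invented): the two definitions below share a body today, but either could be rewritten quite differently and the program would still make sense, so they carry different meanings and should not be folded.

    -- Superficially identical, but one is about money and the other about ratios.
    -- Folding them into a single helper would be compression without shared meaning.
    dollarsToCents :: Double -> Double
    dollarsToCents d = d * 100

    fractionToPercent :: Double -> Double
    fractionToPercent f = f * 100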

I noticed that this approach defines a good compressor that doesn't require large parts of the source to be recompressed as soon as one unimportant detail changes. Folding meaning in this sense, and folding only that, leads to maintainable software.

A little further on this line we can see that as we strip away meaning as a means of distinction, we can compress programs more. The same can be done with performance concerns (instead of "meaning"). If you start by ignoring runtime efficiency, you will end up writing a program that is shorter since its parts have fewer distinctive features, so they can be better compressed. And if the compressed form is what the programmer wrote down, the program will be almost impossible to optimize after the fact, because large parts essentially have to be uncompressed first.

One last thought that I have about this is that maybe you have a genetic, bottom-up approach to software, and I've taken a top-down standpoint.


> Imagine a sophisticated software that compresses program sources...

Isn't that the definition of a compiler?


https://en.m.wikipedia.org/wiki/Partial_evaluation#Futamura_... are an interesting point of view for the definition of a compiler


> that all other things being equal, the debuggability of a language, and the pleasure one feels in using it, will be inversely proportional to the length of programs written in it.

Busting a gigantic nut when I see a blank file


I have a feeling this post got way more upvotes than it really deserves, partly because, duh, it's pg we're talking about.

Many other languages have been introduced here (e.g. Hy) that actually solve new problems or have deeper existential reasons, not just "shortening stuff".


IDK, this is exactly the kind of thing I enjoy reading while sitting down with a cup of coffee on a Sunday morning.

Sure, I'm not going to start programming in Bel but the thought processes in the design document are definitely something I could learn a thing or two from.


"Many other languages have been introduced here (e.g. Hy) that actually solve new problems"

Adding s-expression syntax to Python solves an important problem?


Don't take my sentence out of context. My sentence continues to state "or ..." which you clearly don't want to understand.


Ya, here's why I think "make your programs shorter" is a good criterion.

If a language compresses conceivably-desirable programs to a smaller AST size, then for all N, a higher fraction of size-N ASTs (compared to other languages) represent conceivably-desirable programs.

So "make your programs shorter" also implies "make bad behaviors impossible or extra-verbose to write". For example, failing to free your memory is bad code, and it's impossible(ish) to write in managed-memory languages.


> So "make your programs shorter" also implies "make bad behaviors impossible or extra-verbose to write".

It reminds me of "The Zen of Python": "There should be one — and preferably only one — obvious way to do it."

https://jeffknupp.com/blog/2018/10/11/write-better-python-fu...

https://towardsdatascience.com/how-to-be-pythonic-and-why-yo...


Make your programs as short as possible, but no shorter. Failing to free your memory makes for a shorter program. Using GC makes for a shorter program, but there is a whole class of problems that just can't be solved with GC.


I think deciding not to free your memory, or to use GC (instead of not), would count as having a different program.


"language A is better than language B if programs are shorter in A", so something like APL (not the weird characters, but rather the way each operator operates on entire arrays, and naturally chain together) or maybe Forth (or Factor, for a more modern Forth), since they are a bit more general (I've never heard of a website being built in an array-oriented language, for example) would be the holy grail?


Hello,

do you think you could find time to make a table of contents? I like .txt files, but a language spec is just a tad too long.

Or maybe, for the Lispers around, a short summary of what inspired you to make Bel after Arc? What were your ideas and problems (besides making programs more expressive/shorter)?

Thanks


I disagree that measuring program length by the size of the parse tree is a good measure of conciseness. To an author and reader, it is the number of words you read that matters, so I use word count as the benchmark value. People who write newspaper articles are given a word count to hit, you type a certain number of words per minute, and people read at a certain speed. So words are units of measurement that are easily counted, and there will be no dispute.

A parse tree is not particularly comparable between languages; most modern languages make extensive use of powerful runtimes to handle complex things that are part of the OS. There are typically over a million lines of code behind a simple text-entry field.

In my Beads language, for example, I go to great lengths to allow declarations to have a great deal of power, but they are not executable code and thus have hardly any errors. Lisp is not a particularly declarative type of language and is thus more error-prone. The more code you don't execute, the more reliable the software will be, so declarative programming is even better than Lisp.

Lisp-derived languages are notorious for their poor readability; hence the avoidance of Lisp by companies who fear "read-only" code bases that cannot be transferred to a new person. Lisp, Forth, and APL all win contests for fewest characters, but lose when it comes to the transfer phase. But like a bad penny, Lisp keeps coming back over and over, and it always will, because second-level programming, where you modify the program (basically creating your own domain-specific language for each app), is the standard operating methodology of Lisp programmers. That one cannot understand this domain-specific language without executing the code makes Lisp very unsuitable for commercial use.

Yes, there are a few notable successful commercial products (AutoCAD) that used Lisp to great effect, but for the next 5-10 million programmers coming on board in the next few years, I laugh at anyone with the audacity to imagine that Lisp would be remotely suitable. Lisp is an archaic language in so many ways. It cannot run backwards. It has no concept of drawing; it is firmly rooted in the terminal/console era from which it sprang. With no database, graphics, or event model, you have to use APIs that are not standardized to make any graphical interactive products, which is what the majority of programmers are making.


>> Lisp-derived languages are notorious for their poor readability

Are they, though?

I find at least 2 other languages more unreadable.

- Scala: with operators everywhere, you need to have a cheat sheet

- Perl: you know that joke that this is the only language that looks the same before and after RSA is applied to it?

Lisp, on the other hand, is pretty readable, at least to me. I have only used Clojure from the Lisp family, and it had a great impact on how I think and write code in other languages. The result is more readability. For a long time MIT taught CS courses using Lisp, and SICP is also a pretty amazing read.


Scala and Perl are even worse, I agree with you there. Some people love Haskell too, but I think it's awful.

MIT has replaced Lisp with Python, because even though they forced it upon their students for decades, they had to admit Lisp was archaic and not particularly readable. The Pharo IDE is arguably the most sophisticated IDE around today, but the fact remains that Lisp doesn't permit easily interchangeable parts, which is a major goal of the new programming languages being developed. Although very bright people can get very good at Lisp, the average person finds it extremely hard. Remember, you are reading it from the inside out, which is highly unnatural to someone who reads books left to right.


93% of Paint Splatters are Valid Perl Programs

See: https://famicol.in/sigbovik/

--

So, ignoring AST/character measures of shortness, is Perl reaching optimality on the noise-to-meaning ratio?

Am I allowed to say this makes Perl even more meaningful than other languages?

I'm not especially serious, but in the spirit of my own stupidity, let me suggest that this is how, finally, painters can truly be hackers.


> i use word count as the benchmark value

Words are called 'symbols' in Lisp. Making extremely expressive domain-level constructs reduces the code size a lot in larger Lisp programs.

> Lisp is not a particularly declarative type of language

Just the opposite. Lisp is one of the major tools for writing declarative code. The declarations are a part of the running system and can be queried and changed while the program is running.

> Lisp ... all win contests for fewest characters,

Most Lisps since the late 70s don't care about character count. Lisp usually uses very descriptive symbols as operator names. Lisp users invented machines with larger memory sizes in the late 70s to get rid of limitations in code and data size.

Paul Graham favors low-character-count code, but this is not representative of Lisp code in general, which favors low-symbol-count code through expressive constructs.

> understand this domain specific language without executing the code

The languages will be documented, and their constructs will be operators in the language. Other programming languages also create large vocabularies, but in different ways. Lisp makes it easy to integrate syntactic abstractions into the software, without the need to write external languages, which many other systems need to do.

For example, one can see the Common Lisp Object System as a domain-specific extension for writing object-oriented code in Lisp. Its constructs are well documented and widely used in Lisp.

> it is firmly rooted in the terminal/console era...

Use of a Lisp Machine in the 80s:

https://youtu.be/gV5obrYaogU

Interactive graphical systems have been written in Lisp since as early as the 70s. There is a long tradition of graphical systems written in Lisp.

For example, PTC sells a 3D design system written with a C++ kernel and a few million lines of Lisp code:

https://www.ptc.com/-/media/Files/PDFs/CAD/Creo/creo-element...


Also, don’t brains pursue an analogous, maximally compressed encoding of information via abstractions/ideals and heuristics of probably many, many kinds? (Of course lossy compression would evolve over “lossless” compression, which would grant minimal gains over lossy while being much more sophisticated/energy-expensive.)

I wonder if psychology on this topic could inform programming-language design as to a “natural” step size of abstraction, or an optimal parse-tree measure that would correspond to an equivalent mental model.


Well, for high-level languages, like you say. I guess you could say it holds for two languages of the same “abstraction level class”, as terseness would be valued between two low-level languages as well (from the human's POV, though!), since this is all related to easier conceptual expression.



