Avail Programming Language (availlang.org)
80 points by FractalLP on May 25, 2018 | 109 comments

I'm confused by the website. The intro has a clear example that reads like natural language processing, but the FAQ goes out of its way to stress that the language does not do NLP. At that stage it gets a bit ranty, vague and dense, and I kind of lost interest. Perhaps I'm not smart enough to get what they're trying to do. "Developing a domain-appropriate lexicon and phraseology". Is this a DSL? Regardless, I don't see this setting the world on fire.

It's interesting that they've called this paradigm Articulate Programming, because articulation of the domain is where the problem both starts and ends.

How many times have you worked at a company with staff who start off exasperated by how complicated IT makes solving a business problem, only to be surprised at just how many details their day-to-day processes contain once you've spent time covering all the edge cases and writing tests around the exceptions?

Code becomes complicated because the domain it models is complicated. Hence a good engineer's most important skill is gaining an understanding of the real-world problem domain and expressing that as code. It's also why I'm not worried about AI taking my job any time soon.

Same. I guess my takeaway is that I'm happy people are looking into this because in ten (or twenty (or a hundred)) years this research will hopefully have paid off. I, however, want nothing to do with it.

Also, they don't do natural language processing but allow you to write method names like "the square-root of _x^2 + _x + _" where the underscores are arguments.

Hence their "efficient" compiler, which parallelizes the parser to find an unambiguous parse.
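For intuition only, here is a toy Python sketch of that naming scheme, where "_" marks an argument slot in a method name (a hypothetical dispatcher with made-up method names, not Avail's actual parser, which works on tokens rather than whole names):

```python
import math

# Hypothetical table mapping Avail-style message names to implementations.
# Each "_" in the name is an argument slot.
METHODS = {
    "the square-root of _": math.sqrt,
    "_ plus _": lambda a, b: a + b,
}

def call(name, *args):
    """Dispatch on the full message name; arity must match the slots."""
    assert name.count("_") == len(args), "wrong number of arguments"
    return METHODS[name](*args)

call("the square-root of _", 16)  # → 4.0
call("_ plus _", 2, 3)            # → 5
```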

I don't know. This seems like it would be super fun to write and to play with, and that there are probably some really cool new things being discovered that, when matured and properly integrated, may be workable into a usable programming language.

While I believe 'plain English' type programming languages are a great concept, the reality is that it just introduces far more ambiguity into the mix, and you are just left trying to guess what the language designer's vagaries are.

I came across this years ago when trying to get to grips with the then-newfangled Ruby language. I kept having to go back to the documentation to remember the best way to convert a string to all uppercase... was it:

  str.capitalize (<- Don't even get me started on the regional differences of 'ise' vs 'ize' between US and UK English variants)
Even here, I would probably start using Avail, then in a few weeks I would be scratching my head and asking, was it:

  Print 1 to 10, as a comma-separated list

  Display 1-10, in CSV format

Yeah, that was my problem with AppleScript. It’s a read-only language to me. It took me years to try other languages and discover I could actually write code.

"The experiment in designing a language that resembled natural languages (English and Japanese) was not successful." -- William Cook

Any programming language that tries to do "natural language" should at least reference the AppleScript HOPL Paper[1] and say how they are doing things differently to address the (now) obvious problems.

(Oh, and there is a LOT of good stuff in the paper, definitely worth a read).

[1] http://www.cs.utexas.edu/~wcook/Drafts/2006/ashopl.pdf

I think all experiments with languages that resemble natural languages will fail until someone tries to ascend to a more high-level paradigm.

Today's programming languages are based on a "forklift paradigm": you describe the path of a forklift through a virtual warehouse where data sits in various places, and your forklift moves it from place to place. Natural language isn't designed to express that kind of semantics.

A very interesting paper published at PPIG (Psychology of Programming Interest Group) 2006 studied the metaphors shared by the core Java library authors, their mental model said in another way. It reveals very interesting facts: programmers tend to think of the world in terms of belief-intention agents. Agents are members of a society who trade and own data and are subject to legal constraints. A method call is a speech act. Execution is mentalized as a journey through a spatial world. Etc. There are a lot of metaphors like that, and they reveal tracks toward conceiving a more high-level language.

So there's a lot of work in this field (behavior-tree-based languages, a time dimension included inside the semantics, etc.).

Note that it is also possible to express unambiguous semantics in natural language: Attempto Controlled English is an example [2].

Here's a toy example of what you could do with that language:

"MyWebPage is a webpage. MyWebPage contains a textfield which a logical name is-named NameTextField. His label is "Name". MyWebPage contains a button which a logical name is-named SubmitButton. His label is "Submit". If User clicks on Submit and NameTextField is empty then NameTextField ' s css class becomes AlertEmptyInput. "

[1] http://www.ppig.org/sites/default/files/2006-PPIG-18th-black... [2] http://attempto.ifi.uzh.ch/site/

I tried to learn AppleScript at one point because I had to do some automation on a Mac. It is an exceedingly confusing language, because the syntax does not clearly reflect the underlying semantic and processing model. The English-like syntax just obscures what is actually going on. Yes, it is easy to read, and readability is important, but I think they forgot that code has to be written before it can be read.

Just to add another data point, I didn't find it so. I can code up AppleScript if I want to, relying completely on dynamic documentation. In fact, what surprised me was the ease with which I could pick it up again after nearly a decade of not touching it. The vocabulary explorer is awesome for scripting applications... if only applications would care about them. Script Editor offers coding in JavaScript as well as AppleScript, but I'd pick AS over JS any day for this purpose, because the mental model is clearly established.

The vocabulary explorer is indeed awesome and a great idea. Unfortunately not even Apple is very good at supporting them. And with Sal Soghoian's departure, automation at Apple will probably either change a lot or die.

My problem was the language syntax as a whole. But I'm particularly sensitive and picky on this issue. ObjC and Java's verbosity repel me so strongly I could never properly learn them, for instance, and so on.

I'm hopeful that these sorts of languages can enjoy a renaissance in the form of suggestive interfaces similar to the ones that Android/iPhone users get when typing. It'd be much easier to program in Applescript if I had an editor that was letting me choose from a list of valid words rather than me having to remember the perfect incantation to do what I want.

> I kept having to go back to the documentation to remember

1. Why would you want to remember this? It's in the docs. If you use it often, you'll learn it eventually. If you don't, you have no need to remember this. Don't burden your memory unnecessarily.

2. Well-written, searchable documentation adds very little overhead anyway, on the order of a few seconds. Yes, writing from memory is faster, but not by that much, and that advantage disappears almost completely if you have good auto-completion and docs support in your editor.

3. There are hundreds of languages out there; even if you learn the library of one language, it doesn't help with other languages at all. Learning to quickly search reference docs, along with learning some basic concepts/assumptions of each language, is much more sustainable if you're thinking of becoming a polyglot.

EDIT: please, don't turn HN into Reddit, write a comment if you disagree, instead of downvoting.

I agree with you in principle. Over the past 4 decades, I have programmed in dozens of programming languages, and remembering things like FOR or CASE loop structures is almost impossible for me now, and I constantly refer to docs.

However, as a starting point for me, it would be easier if languages all used a common naming convention for things like .upper() etc. A common mistake for me is to try .upper(), then .uppercase(), then have to leave my code and refer to the docs after repeated runtime errors. If I can make a reasonably intelligent guess within 2 or 3 tries, it bodes well for me. Otherwise it is a productivity hit.

To extrapolate this - the corollary to .upcase() is, to my mind, .lowercase(), but it is in fact .downcase(). Makes sense logically ('down' is the opposite of 'up'), but syntax-wise, I never say to anyone "You need to write your username in down case...". If the naming conventions for functions followed the English-language definitions, then my hit/miss ratio would improve, and so would my productivity.
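For comparison, these are Python's picks for the same operations (its actual built-in string methods, shown against the Ruby names discussed above):

```python
# Python's naming choices next to Ruby's, for the same operations.
s = "hello world"
print(s.upper())       # HELLO WORLD   (Ruby: upcase)
print(s.lower())       # hello world   (Ruby: downcase)
print(s.capitalize())  # Hello world   (Ruby: capitalize)
print(s.title())       # Hello World   (ActiveSupport: titleize, roughly)
```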

NB: I didn't downvote you (I can't anyway as I don't have the necessary karma to downvote immediate child answers to my posts). I thought you raised a valid point worthy of discussion.

Or you use tooling that is aware of the syntax and available methods and can provide you live assistance and documentation at write time.

> it would be easier if the languages all used a common naming convention for things like .upper() etc.

Well, it would definitely be easier if all the languages used the same conventions. The problem here, I think, is that all the possible conventions are arbitrary and objectively as good as any other, so there are many of them and settling on a single one across languages is not going to happen.

It's much more important to choose one convention for a language and to follow it everywhere - this improves guessability of identifiers in that language. The convention chosen may be familiar to you or not, but that doesn't mean the more familiar one is any better. So, I think, it's more important for a language to follow a single convention consistently, while choosing which convention to follow is less important.

That being said, I agree that it would be easier to learn new languages if they all followed a single standard for naming things, I just don't think it's going to happen.

> If I can make a reasonably intelligent guess within 2 or 3 tries, then it bodes well for me. Otherwise it is a productivity hit.

Agreed, but note that the 2-3 tries are informed by your experiences to date - someone with a different history of programming languages could find your guesses weird and would have completely different ones themselves.

Another thing, I agree on the productivity hit in principle, just want to note that there are ways of mitigating it. I use many languages regularly (I'm a hobbyist-polyglot, I learned at least 30-40 languages to varying degrees) and I think that a good editor integration (most notably "semantic auto-complete" and "inline docs") can make up for a lot of cases like the `upcase`. For example, if I write this (`|` is cursor position):

    a = "asd"
and hit <tab>, I get a popup with the suggestion of a correct name (`upper`) along with a bit of documentation. The level of support for this obviously varies a lot across languages, but where it's available I find it mostly fixes the problem of having to hit the docs too often.

> NB: I didn't downvote you

It's impossible to downvote immediate child posts, no matter the karma level, so it obviously couldn't be you :)

> it would be easier if the languages all used a common naming convention for things like .upper() etc.

It would be easier if everyone would just speak English all the time, but I'm not sure it would be better.

Thing is, Avail actually is not a 'plain English' type programming language. The website states things very confusingly.

But look at the FAQ: http://www.availlang.org/about-avail/documentation/faq.html

Commenter dcw303 in this thread puts it better than I can.

>While I believe 'plain English' type programming languages are a great concept

I will refute that notion and suggest that 'plain English' type languages are an incredibly foolish diversion into a world where we pretend formal languages don't exist, are unimportant, or that English is one, or that it somehow can be laboriously contorted into a useful approximation of one.

> I kept having to go back to the documentation to remember the best way to convert a string to all uppercase... was it:

Wouldn't you have the same issue with just any programming language?

Point taken. But in the above example, why does Ruby do it as 'upcase' instead of the more English-y and popular 'uppercase' as most other languages use?

Take a more esoteric example - to capitalise just the first letter of a string. By definition, this is called 'proper case', and most other languages I know use .proper() to achieve this. Except Ruby, which decided to use .titleize() (and there is that 'ize' again just to confound me further).

If a language is going to be 'English-y', then sticking to actual English words for things such as uppercase, proper case etc. would be handy. Going outside of those constraints just increases the guessing game workload for the programmer.

Pascal uses upcase, and so do many others, so it has a very long history.

.titleize() is from ActiveSupport, not Ruby.

But proper() would have confused me much more. First time I've ever heard it called that. 'proper' would sound ambiguous to me, because it doesn't indicate to me it's about case at all.

But of course this just reinforces the point that naming is hard.

Except titleize changes not the first letter of a string, but the first letter of every word. That's title case (https://en.wiktionary.org/wiki/title_case).

Capitalize (https://ruby-doc.org/core-2.2.0/String.html#method-i-capital...) on the other hand does something even weirder from your perspective, as it also changes letters to lower case.
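Python's built-in `str.capitalize` happens to behave the same way, which makes the surprise easy to demonstrate:

```python
# capitalize upcases the first character and downcases everything else,
# so it is not a pure "make the first letter big" operation.
print("hELLO wORLD".capitalize())  # Hello world
```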

The rule they’re following is that method names should be verbs, and upcase / downcase are shorter than alternatives like to_upper etc. So while it’s an unusual choice it’s not an arbitrary one

Inform 7 thoroughly explored this space: http://inform7.com/

I've tried several times to figure out Inform 7, and I always go back to the old C-ish Inform 6 instead. I think my brain is damaged by prolonged exposure to traditional programming languages.

I had the exact same experience. I really wanted to like Inform 7 (and plenty of people do) but it feels completely counter-intuitive because it wants to feel like natural language but it's still "just" a programming language.

It still has a fairly rigid syntax - it's just more verbose than a regular programming language. That is probably an oversimplification, but that's how it felt to me at least.

The language could include all of the possible expressions as valid alternatives. There are good reasons why most programming languages don't do that (it really helps comprehension if there's only one standardized way to do something), but if a language wants to be as close to English as possible, it will have to include the whole kitchen sink of synonyms. It might have to educate the users about the difference between a minus and a dash, though, if "1-10" and "1–10" (that's an en-dash) should evaluate to a number and a list, respectively.
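A minimal Python sketch of the minus-vs-en-dash disambiguation described above (purely hypothetical, not anything Avail actually does):

```python
# "1-10" (ASCII hyphen-minus) evaluates as subtraction;
# "1–10" (U+2013 en-dash) evaluates as an inclusive range.
def evaluate(expr: str):
    if "–" in expr:                        # en-dash → a list
        lo, hi = map(int, expr.split("–"))
        return list(range(lo, hi + 1))
    lo, hi = map(int, expr.split("-"))     # hyphen-minus → a number
    return lo - hi

evaluate("1-10")  # → -9
evaluate("1–10")  # → [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```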

Some of the keywords and operators (punctuation) of Avail methods are “completely optional”, in that the caller’s choice to include it or not (or alternative prepositions in some cases) is solely to improve readability of the result. For example, the ordinary field-getter has the form “_’s??thing”. The double-question-mark should actually be a single Unicode character (I’m on my phone). The question-mark makes the “s” optional, so you can say “cat’s thing” or “Jess’ thing”. But there’s nothing stopping you from saying “cat’ thing”, other than the derisive laughter of your peers. :-)

"an efficient compiler that concurrently explores all possible parses in pursuit of a semantically unambiguous interpretation."

This suggests they have a solution to the ambiguity problem.

Thanks for the clarification. I missed that on the initial skim of the article. (And yes, I appreciate the irony of me introducing the 'ambiguity' arguments into a topic which deals specifically with the removal of such) :)

It's still arguable that there could be some ambiguity. The compiler makes sure the statement is interpreted in an unambiguous way on the computer's end, but the writer might still not know which of a number of ways to express what she wants to state.

I went into this design decision, having looked very closely at the specific success of the SHRDLU system (1971?). Local disambiguation was very effective, and Avail goes even further by having exceptionally strong types to help distinguish meaning. We also have grammatical restrictions to express a unique form of precedence rules, and semantic restrictions to statically identify a large number of inappropriate uses of methods, like “dog”[4], which is a compile-time subscript bounds error.

This sounds extremely dangerous.

Millions of dollars are wasted each year on mistyped ==/= operators, but this is some next level evil.

Very true. The core Avail language uses “_=_” for comparison, “_:_” to declare a variable, “_:=_” for assignment, and “_::=_” for constants (think “final”). It also has “_:_:=_” for the case when you want to declare a variable of a general type, but initialize it with something more specific (say a counter that starts at 1).

But yes. There’s a lot of deeply evil things that can be done in Avail, right down to the lexical scanning. Like forbidding certain variable names, or manufacturing a run of tokens every time a certain emoji is encountered. There are also a lot of safeties, like sealing methods to prevent someone overriding “_+_” for [3’s type, 4’s type]->9’s type.

Oh, and writing “x = x + 1;” as a statement will be flagged as a syntax error, because it would produce a Boolean value which is not allowed to be silently discarded like in Java or C. “Discard:_” makes these rarely needed situations absolutely obvious.

Exactly my thoughts whenever I have to touch AppleScript.

Perhaps lojban would have been a better choice.

Constructive Criticism: I could not find any code examples on the website within 3 minutes of searching, gave up, and left.

I persevered following the "Learn" link. Then went to the Github page. https://github.com/AvailLang/Avail/blob/master/distro/src/ex...

Not impressed tbh.

    Module "Hello World"
    Uses
        "Avail" =
            "keyword lexer"
    Entries "Greet"
    Body

    Method "Greet" is [ Print: "Hello, world!\n"; ];

I have a blog post from a few years back with a sample, here: https://klibert.pl/posts/avail-and-articulate-programming.ht...

I'm pasting a link because I added on-hover popups explaining parts of the snippet, but here is the sample itself:

    Module "Hello World"
    Entries "Greet Router"

    Method "Greet Router" is [
        socket ::= a client socket;
        target ::= a socket address from <192, 168, 1, 1> and 80;
        http_request ::= "GET / HTTP/1.1\n\n"→code points;

        Connect socket to target;
        Write http_request to socket;
        resp_bytes ::= read at most 440 bytes from socket;

        Print: "Router says: " ++ resp_bytes→string ++ "\n";
    ];
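For readers who want the same behavior in a conventional language, a rough Python equivalent of that snippet might look like this (the host, port, and 440-byte read are copied from the sample; the function name is mine):

```python
import socket

def greet_router(host="192.168.1.1", port=80):
    # Connect, send a minimal HTTP request, read at most 440 bytes back.
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request.encode("ascii"))
        resp_bytes = sock.recv(440)
    return "Router says: " + resp_bytes.decode("utf-8", errors="replace")
```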

This looks like the result of a drug-and-alcohol-fueled unplanned pregnancy between COBOL and Ada.

Yeah, plus a bit of APL - it's not visible in this snippet, but Avail uses a lot of strange unicode characters/symbols. The `→` in `resp_bytes→string` is one example, but there are many more symbols used: http://www.availlang.org/about-avail/documentation/unicode-c...

If you grab the development branch, you can use the modular lexers to make that a little bit clearer. If memory serves, “a socket address from <192, 168, 1, 1> and 80” can now be written “”. Not because that syntax is designed into the language, but because the language is designed to allow an IPv4 lexer to be defined within the language. See distro/src/avail/Avail.avail/IO.avail/Address.avail:149 for the actual Avail code that lexes (produces tokens from module source) IPv4 addresses.

Apparently the code examples don't display with JavaScript turned off.

I had the same experience with JavaScript enabled

If you wouldn’t mind, please post a screenshot where you think an example is supposed to be displayed but isn’t. Or mail it to “mark” at the same address as the website.

There's a couple in this section[0] of the FAQ. I don't know whether it's because I'm too used to using traditional languages, but it does not look at all pleasant to write (IMO). Interesting idea though.

[0]: http://www.availlang.org/about-avail/documentation/faq.html#...

IMO, the landing page of any programming language site should include some code samples demonstrating what makes the language different from the crowd.

The landing page does


I tried to link to a more interesting page though. I stumbled upon someone mentioning the language, but don't really get it. It seems to promote building DSLs, but lots of languages like Lisp & Rebol have been doing that for ages.


I understand your intent, but I think it unfortunately ended up confusing matters for fellow readers.

Add to this the fact that the website itself is not very clear already...

The home page of the Pyret programming language [1] is great in this regard. You get a feel for the language and also get to see some quick comparisons with other languages. Too bad that it is targeted to be a pedagogic language -- it seems great for that, but it also seems like it would be good as a general purpose programming language for the masses.

[1] https://www.pyret.org/index.html

Yes definitely. The old racket-lang site used to be the PERFECT programming language landing page. Imo. But it had to be close to like actually perfect too.

Can you find it on archive.org and link it for reference?

You can't take ten seconds to browse there? Seriously?

I can’t know which previous version you are talking about. I would have to look at a lot of snapshots and still I could only guess.

Only you can identify the version quickly and link it for us.

> An infinite algebraic type lattice that confers a distinct type upon every distinct value. Intrinsic support for a rich collection of immutable types, including tuples, sets, maps, functions, and continuations.

Ok, I consider myself OK at type theory but I'm still lost in what this claim actually means. And if it is what I think it is (that all values have types), I wonder how this doesn't run afoul of decidability of fancy dependent type systems (perhaps 1 has a type, 2 has a type, but 1 + 2's type isn't 3?).

Avail does not seem to have dependent types. The term 'dependent type' is overused. It means that the type system allows the compile-time type of an expression to depend on a run-time value; checking such types is not decidable in general. Most type systems, including advanced ones such as Haskell's, do not have this property. Avail seems no different.

Haskell has infinite and recursive constructs, lazily computed. That’s what makes their type system undecidable. Avail is constructivist in that sense, so immutable structures must be finite (i.e., you can draw them as a dag). For cyclic structures, you have to include mutable “variable” objects as well, but in that case the type of the construct stops at the boundary of the variable. The type of a tuple of variables is a composition of the declared types of the variables, not their current content. This is essential to ensure an object’s type is permanent, and the immutable acyclicity condition ensures everything reachable without hitting a variable contributes to an object’s type.

> Haskell has infinite and recursive constructs, lazily computed. That’s what makes their type system undecidable.

Haskell (98) has infinite and recursive constructs and its type system is not undecidable.

Haskell has infinite and recursive types but its type system is perfectly decidable, mainly because Haskell's type system is nominal, so cycles are easily detected and dealt with.

IIRC, semilattices in type systems usually means subtypes, and lattices in type systems usually means subtypes plus a bottom type.

Yes, that agrees with the shape of Avail’s type hierarchy. There’s a top type and a bottom type, and the latter has no instances.

I think there's actually a very fundamental difference between natural and formal languages that makes this kind of project wrongheaded.

Formal languages, at root, have exact reference. In a programming language, a symbol ultimately refers to a block of memory, or an operation. The problems of writing a formal language are ones of trying to express a given concept when the relation between symbols and references is known, but the relationship between concept and symbol is not.

In natural language, a symbol ultimately refers to nothing. Its meaning is derived from context, convention, intention. As such, the relationship between concept and symbol is basically known - we know we are talking about red things when we use the word red. The relationship between concept and reference is absolutely unknown - we can never know for sure whether our concept 'red' is adequate to real red objects.

As such, natural languages are a poor model for formal ones. The problems are essentially different. In one, you know how the symbol 'red' relates to operations and memory. In another, you know how the symbol 'red' relates to intention and meaning. Each has different challenges associated.

There are more ways to define semantics for formal languages than you suggested. What you described seemed to be mostly operational semantics where each term (or statement) ultimately causes some memory to be referenced or changed or an operation carried out. It is quite possible to define the semantics denotationally where each term (or statement) simply becomes an element in a domain. Its ultimate meaning can change depending on which domain you are using.

Good point. I'd never heard of denotational semantics before. I'm coming from a more or less naive perspective of trying to pinpoint where the ambiguity is that you have to wrestle with in different kinds of languages. In formal languages, the classic problem is, what you say is not what you mean. In natural languages, the classic problem is, what you mean is not what really exists. So for the latter, we have the whole development of science, epistemology, empiricism etc.

For the former, we have the whole notion of semantics, the development of tools like valgrind, tests, etc.

Is there anything you can recommend to read? I'm pretty familiar with how computers work on a mechanical level, but I'm pretty ignorant about the theoretical intuitions behind all the more functional stuff.

Print 1 to 10, as a comma-separated list.

In e.g. Scala, you can do that:

print( 1 to 10 mkString "," )

It's not 100% human language/grammar, but close (and you have auto-completion using IDE). Why would you need another DSL?

(Not trying to bash Avail, nor promote Scala, just curious for its usecases)
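And the same one-liner in Python, for another point of comparison:

```python
# Join the numbers 1..10 into a comma-separated string.
csv = ", ".join(str(n) for n in range(1, 11))
print(csv)  # 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
```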

I'm pretty sure the goal of these "natural language style" programming languages extends beyond printing a comma-delimited list of numbers. This arbitrary example doesn't mean much.

It's not a very good example though - it's not showing me any sort of improvement over actual programming languages - just an intentionally bad bit of code.

No, it's an arbitrary example, of which there are many more on the site. You're just overly obsessed with that one.

It's the only code example on the 'Introduction' page. I'm sorry for expecting something better.

I agree; defining well-named functions seem a superior solution. In Haskell, it would just be:

putStrLn (intercalate ", " (map show [1..10]))

I don't buy natural-language-ish programming languages. The grammar becomes far too complicated very quickly. A simple but flexible grammar, a la most functional languages, is superior.

I hope you didn't mean to imply that `intercalate` is a _well-named_ function... I've no idea what it means, and even after a dictionary search it's still not obvious to me why you'd ever name a method like this!

It's kind of a bad name, but not entirely.

This is the definition of intercalate.


intercalate [in-tur-kuh-leyt]

verb (used with object), in·ter·ca·lat·ed, in·ter·ca·lat·ing.

1. to interpolate; interpose.

2. to insert (an extra day, month, etc.) in the calendar.

It'd be better if the tutorials are rewritten in a more concise manner. Do you really need that many words to explain Guess The Number? http://www.availlang.org/about-avail/learn/tutorials/guess-t...

I find the syntax hard to follow for the express reason that variable names and function names have no distinguishing features. If variable names were decorated somehow (a symbol, or color) it would be much easier to visually parse a function call. As is, my brain must remember exactly the (complex) names of functions and variables in scope to determine how to parse a function call. But I like the idea and am using something similar in a language I'm developing.

We’re working on tools for writing/viewing Avail more easily. Stay tuned, it’ll be worth the ride...

Here is some code from their examples page if anyone else (like me) is more interested in how the actual code looks like.

To be honest, I'm not sure how much clearer this is to read than for example Python.


It looks like GitHub is active [1], but documentation hasn't kept up. The blogs don't seem to have been updated since 2014, and the links to the mailing lists are broken.

[1] https://github.com/AvailLang/Avail

Working on it, sorry about that. We lost some things when rehosting some time ago and have prioritized other things. There is no mailing list any more.

Someone smart needs to explain what an infinite algebraic lattice is, because it sounds awesome. Potentially.

Edit: (I just googled "algebraic type lattice" and while ymmv, I don't recommend it unless you're well versed in scary black mathic)

I didn't get too in-depth with reading the docs, but any language that goes for non-ASCII symbols a la APL is going to be fighting an uphill battle right from the get-go.

Maybe it was a bit easier even for APL, because its interfaces were more immersive for non-ASCII input than what we have now, especially when mixed with regular ASCII.

Type type type, oh wait, backslash, dropdown, there's my symbol, enter, type type type. That's not very fun. It's even less fun when you're dividing your cognition between what you're actually trying to accomplish and what you have to type.

Just my two cents, no ill will

A lattice is a partial ordering in which any two elements have a supremum and an infimum, i.e., for any two types in Avail's lattice there is a unique least common supertype and a unique greatest common subtype. A partial ordering means that, given any two elements, either one is a subtype of the other or the types are unrelated. The supertype of every type is known as 'top', and the subtype of every type is known as 'bottom'. An infinite lattice just means there is an infinite number of types in the Avail type system.
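To make supremum/infimum concrete, here is a toy finite lattice in Python with invented type names (Avail's real lattice is infinite, so this is only an illustration):

```python
# Subtype order: bottom <: int <: number <: top, and bottom <: str <: top.
# SUPERTYPES[t] is the reflexive set of supertypes of t.
SUPERTYPES = {
    "bottom": {"bottom", "int", "str", "number", "top"},
    "int": {"int", "number", "top"},
    "str": {"str", "top"},
    "number": {"number", "top"},
    "top": {"top"},
}

def join(a, b):
    """Supremum: the most specific common supertype."""
    common = SUPERTYPES[a] & SUPERTYPES[b]
    return next(t for t in common if common <= SUPERTYPES[t])

def subtypes(t):
    return {s for s in SUPERTYPES if t in SUPERTYPES[s]}

def meet(a, b):
    """Infimum: the most general common subtype."""
    common = subtypes(a) & subtypes(b)
    return next(t for t in common if common <= subtypes(t))

join("int", "str")     # → 'top'
meet("int", "str")     # → 'bottom'
join("int", "number")  # → 'number'
```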

Well said, that’s exactly right. Most languages lack a bottom type, or screw it up by pretending that “null” fits that role and falsely promises everything that every other type promises.

Languages like Java (and many more) fail to even have a top type, which leads to kludgey add-ons to the type system — boxed, @Nullable, erased generics, annotations, dependency injection, purity annotations, etc., all of which should have been part of a single type system.

I forgot to add the interfaces versus classes, inner classes, and lambda notation, with the tons of trivial FunctionalInterface interfaces to deal with a smattering of the combinations of number of inputs, void or primitive or class output, which inputs or outputs can be null, and which exceptions can be thrown by the evaluation method (having many different names, etc).

Almost all of these monstrosities are caused by the “object purity” mindset that kept the designers from including first-class functions in the language. Avail attempts to include a large set of paradigms instead of trying to “axiomatize” the exposed surface (although we do axiomatize the construction of these constructs).

FWIW, lattices are a pretty easy concept once you grok the basic idea.

I first encountered them when doing work-related reading on dataflow analysis, and I'm very glad I did. Interesting stuff.

(edit) Summary: the topic might look scary at first blush, but it's actually not.

It’s a big concern, but technically most of the world doesn’t actually use ASCII. We use shortcuts in IntelliJ (and Eclipse before that), so if you want a right arrow, you type “rig”, pause a split second, and hit return to accept the “rightarrow” suggestion, which it replaces with a real right arrow. It’s as easy to use in that environment as any other kind of autocompletion.

Here is an infinite lattice (roughly speaking):

           o  o
          /\  /\
         o  o  o
        /\ /\  /\
       o  o   o  o
      /\ /\  /\  /\
     o  o  o   o   o
    ...   ...   ...

The example at the beginning strikes me as a bit over the top. Something like 'String.Join(",", Range(1,10))' (pseudocode, but you get the picture) would be better, and avoid all the ambiguities of the plain-English version.

My ideal syntax for that expression is very close to yours, something like str.join(1..10, ', '). Looking at our two approaches I noticed something -- yours has no space after the comma, and mine does, so how would Avail express that distinction without becoming even more verbose?
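For comparison, the conventional-notation version of the same task really is a one-liner. Here it is in Python, which is close to both pseudocode sketches above:

```python
# "Print 1 to 10, as a comma-separated list" in conventional notation.
# The separator string makes the space-after-comma choice explicit.
print(", ".join(str(n) for n in range(1, 11)))  # 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
```

The separator-as-argument answers the question directly: "," versus ", " is just a different string literal, with no extra verbosity.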

I really hate programming in AppleScript, which also attempted a similar syntax, because it's in the uncanny valley of semi-structured English. It's too close to the language I speak such that remembering all the special cases (which prepositions link which operations, sentence structure, etc.) becomes really difficult.

I like some well-structured separation in my coding languages. It's not a downside for me at all.

The core syntax of Avail does have a prose-like feel to it, and that’s intentional. But when you narrow it or extend it for specific linguistic domains (CSV, tensors, business rules, build rules, expert systems, or a vast number of existing notations), it makes the code read exactly at the right level. No noise. Have a look at the Silly Quiz example for what I mean about getting the right level.

Reminds me a bit of intentional programming, something that Charles Simonyi has been pushing for a few decades. As far as I know he might still be pushing this but I haven't seen much progress since 2002.

This reminds me of The Osmosian Order of Plain English Programmers.

The Osmosian Order is alive and well.


And we're about to release Español Llano, the Spanish version of Plain English. This new compiler compiles Spanish and English, or both, even in the same sentence. Kind of like a bilingual human.

First rule of programming language home pages - have a good selection of examples on the first page! This fails that horribly.

Super cool. Going to have to check this out.

Someone looked at COBOL and thought, "That looks great".

> But there are many career programmers who would rather say:

> Print 1 to 10, as a comma-separated list.

No, I would not. Don't make assumptions on behalf of others.

It says "many", not "every".

But even then, I'm not convinced that "many" programmers would rather write the latter.

There are many career mathematicians who would rather say: "add two to four and multiply the sum by three". Natural-language mathematics (arithmetic).

What I'm contesting is the assumption of the majority. There may be a small subset of programmers who will prefer the example, but until Avail's usage/interest is widespread, such an assumption has no validity.

"Many" doesn't mean "the majority" any more than it means "every".

Right, and that’s why Avail is nothing like that. Kind of the opposite when it gets down to it...

Cheesy, closed languages like C forgot that exponentiation was even a thing, or complex numbers. If you shift over to using C++'s "clevernesses", you still don't get exponentiation, because the traditional caret symbol is already used for exclusive-or, leaving exponentiation with no sensible ASCII punctuation.
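Python inherited the same caret-as-XOR convention from C, but also added a real exponentiation operator, which makes the collision easy to demonstrate:

```python
# `^` is bitwise exclusive-or, exactly as in C: 0b0010 ^ 0b1010 == 0b1000.
print(2 ^ 10)   # 8

# `**` is true exponentiation, an operator C's punctuation set lacks entirely
# (C makes you call pow() from <math.h> instead).
print(2 ** 10)  # 1024
```

The surprise value of `2 ^ 10` evaluating to 8 is precisely the ambiguity being complained about.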

As for someone’s distant comment that Lisp has been used to create languages for years... sure, if the language you wanted was parenthesized lists with keywords inside the left parenthesis. Which is just Lisp with a few more operations and macros. Yuck.

A few languages implemented in Lisp: C, Pascal, Ada, Prolog, Python, JavaScript, Fortran, Haskell, ML, ... none of those had parenthesized lists...

“Implemented in” is the key. If you want, say, a C compiler written in Lisp, you end up writing a compiler. If you want a Haskell compiler written in Lisp, you write another compiler.

If you want Avail to include all of C, you define C language capability in an Avail module and subtract the rest. Then your C program is also an Avail program, so you have the same tool chain you had with core Avail (still lacking, but getting there), you have dynamic optimizing compilation to JVM (and eventually native, perhaps), and you have a program that not only interoperates with programs written in other dialects, but allows direct connection between them.

Like if you want Avail exceptions with your C code, you import exceptions and just use them. You don't build them over and over again for each language. And if you want to support closures in your C, you'd probably just remove the limitation that treated the closures of Avail as contextless C functions. Similarly for backtracking, universal serialization, and sensible module dependencies (maybe call it #import, versus the horrible textual #include mess of C).

And finally, if you want your existing vanilla C code to be able to call “out” efficiently to some FORTRAN or Python code, you no longer have an impedance mismatch. Define those dialects as Avail, and you’ll have real garbage collection, multiplexed lightweight threads (fibers), dynamic optimization, and objects and functions that are reasonably compatible between these dialects.

As for SHRDLU, it was built atop a “language”, PROGRAMMAR, if I recall, which was really a big ball of Lisp, so the syntax was basically tons of parentheses with keywords inside the left one. But given the available languages, memory, and speed of that time, it was amazing that even a library-ish extension of Lisp could come into play.

Note that embedding PROGRAMMAR inside Lisp is exactly the kind of thing we’re doing with Avail… it’s just that the base metarules are more articulate for that sort of thing.

PROGRAMMAR rewritten today in Avail wouldn’t look at all like Lisp — thank goodness. But if you implemented that language via a compiler, you would have the limitations of the compiler technology constantly getting in the way of the linguistic forms you’re trying to express. For example, having to decide on a linear ranking of precedence levels for every operation, even if they couldn’t ever appear next to each other due to type or linguistic constraints.

Even C++ requires custom lexing tweaks because of spurious “>>” tokens in nested templates… and special backtracking rules to distinguish function definitions from stack object creation. But those bypasses aren’t readily usable in the available parsing tools. Avail’s compiling scheme is decades beyond that.

obligatory xkcd: https://xkcd.com/568/

Cobol 4ever ;)

Perhaps. But wouldn’t you prefer that COBOL become a mere dialect of Avail?

To be honest, this project is going the wrong direction.

Rather than trying to get programming languages to look like human language, we need to get human language closer to computer language.

By this I mean that every argument I've ever been in has turned out to either be an intrinsic disagreement about definitions (fixable, and usually we agree) or an intrinsic argument about god (probably not fixable, we will probably not agree).

If the average person understood the beauty of a solid (and unambiguous) definition, I dunno, world peace and rainbows and butterflies? Probably not, but I'd definitely not have to rage-quit socializing so often.

Still, with that said, from a purely intellectual curiosity standpoint this is neat. I hope that the general saltiness of the internet doesn't discourage the devs from working on this some more.
