It's interesting that they've called this paradigm Articulate Programming, because articulation of the domain is where the problem both starts and ends.
How many times have you worked at a company with staff who start off exasperated by how complex IT makes solving a business problem, only to be surprised at just how many details are in their day-to-day processes once you've spent time covering all the edge cases and writing tests around the exceptions?
Code becomes complicated because the domain it models is complicated. Hence a good engineer's most important skill is gaining an understanding of the real-world problem domain and expressing that as code. It's also why I'm not worried about AI taking my job any time soon.
Also, they don't do natural language processing but allow you to write method names like "the square-root of _x^2 + _x + _" where the underscores are arguments.
Hence their "efficient" compiler, which parallelizes the parser in order to find an unambiguous parse.
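To make the underscore idea concrete, here's a toy sketch in plain Ruby (every name in it is hypothetical, and it's nothing like Avail's actual implementation): the literal fragments of a method name become a pattern, and the underscores become recursively evaluated argument slots.

    # Toy mixfix evaluator: "_" in a name marks an argument slot.
    PATTERNS = {}

    def define_mixfix(name, &body)
      # "_+_" becomes /\A(.+?)\+(.+?)\z/: literal fragments are quoted,
      # slots become lazy capture groups.
      fragments = name.split("_", -1).map { |part| Regexp.escape(part) }
      PATTERNS[Regexp.new("\\A" + fragments.join("(.+?)") + "\\z")] = body
    end

    def evaluate(expression)
      PATTERNS.each do |regex, body|
        if (m = regex.match(expression))
          # recursively evaluate whatever landed in each slot
          return body.call(*m.captures.map { |c| evaluate(c.strip) })
        end
      end
      Float(expression) # a bare token is just a number
    end

    define_mixfix("_+_")                 { |a, b| a + b }
    define_mixfix("the square-root of_") { |a| Math.sqrt(a) }

    puts evaluate("the square-root of 4 + 5") # => 7.0, i.e. sqrt(4) + 5

Note the ambiguity this toy version just shrugs at: "the square-root of 4 + 5" could equally well be read as sqrt(9). The sketch takes whichever pattern was defined first; as I understand it, finding a genuinely unambiguous parse (and rejecting the rest) is exactly the work Avail's parallel parser does.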
I don't know. This seems like it would be super fun to write and to play with, and there are probably some really cool new ideas being discovered here that, once matured and properly integrated, could find their way into a usable programming language.
I came across this years ago when trying to get to grips with the then-newfangled Ruby language. I kept having to go back to the documentation to remember the best way to convert a string to all uppercase... was it:
str.capitalize (<- Don't even get me started on the regional differences of 'ise' vs 'ize' between US and UK English variants)
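For reference, here's how those guesses shake out in a current Ruby session (a quick sketch, core String methods only):

    "hello".upcase      # => "HELLO"  (the one Ruby actually has)
    "hello".capitalize  # => "Hello"  (first letter only)
    "hello".uppercase   # NoMethodError; neither this nor toupper exists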
Print 1 to 10, as a comma-separated list
Display 1-10, in CSV format
Any programming language that tries to do "natural language" should at least reference the AppleScript HOPL Paper and say how they are doing things differently to address the (now) obvious problems.
(Oh, and there is a LOT of good stuff in the paper, definitely worth a read).
Today's programming languages are based on a "forklift paradigm": you describe the path of a forklift through a virtual warehouse, where data sits in various places and your forklift moves it from place to place.
Natural language simply isn't designed to express that kind of semantics.
A very interesting paper was published at PPIG (Psychology of Programming Interest Group) 2006 studying the metaphors shared by the core Java library authors: their mental model, in other words.
It reveals some very interesting facts: programmers tend to think of the world in terms of belief-intention agents. Agents are members of a society who trade and own data and are subject to legal constraints. A method call is a speech act. Execution is mentalized as a journey through a spatial world. And so on.
There are a lot of metaphors like that, and they point toward ways of conceiving a more high-level language.
So there's plenty of work in this field (behavior-tree-based languages, a time dimension built into the semantics, etc.).
Note that it is also possible to express unambiguous semantics in natural language: Attempto Controlled English is an example.
Here is a toy example of what you could do with such a language:
"MyWebPage is a webpage.
MyWebPage contains a textfield whose logical name is NameTextField.
Its label is "Name".
MyWebPage contains a button whose logical name is SubmitButton. Its label is "Submit".
If User clicks on SubmitButton and NameTextField is empty then NameTextField's css class becomes AlertEmptyInput."
My problem was the language syntax as a whole. But I'm particularly sensitive and picky on this issue; ObjC's and Java's verbosity, for instance, repels me so strongly I could never properly learn them.
1. Why would you want to remember this? It's in the docs. If you use it often, you'll learn it eventually. If you don't, you have no need to remember this. Don't burden your memory unnecessarily.
2. A well written and searchable documentation adds very little overhead anyway, on the order of a few seconds. Yeah, writing from memory is faster, but not by that much, and that advantage disappears almost completely if you have good auto-completion and docs support in your editor.
3. There are hundreds of languages out there; even if you learn one language's library, it doesn't help with other languages at all. Learning to quickly search reference docs, along with learning some basic concepts/assumptions of each language, is much more sustainable if you're thinking of becoming a polyglot.
EDIT: please, don't turn HN into Reddit, write a comment if you disagree, instead of downvoting.
However, as a starting point for me, it would be easier if languages all used a common naming convention for things like .upper() etc. A common mistake for me is to try .upper(), then .uppercase(), then have to leave my code and refer to the docs after repeated runtime errors. If I can make a reasonably intelligent guess within 2 or 3 tries, then it bodes well for me. Otherwise it is a productivity hit.
To extrapolate: the corollary to .upcase() is, to my mind, .lowercase(), but it is in fact .downcase(). That makes sense logically ('down' is the opposite of 'up'), but syntax-wise I never say to anyone "You need to write your username in down case...". If the naming conventions for functions followed the English-language definitions, then my hit/miss ratio would improve, and so would my productivity.
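One way to soften the guessing game, sketched below in Ruby (this monkey-patches String, so it's fine in a personal script but risky in shared code), is to alias the names you keep reaching for:

    class String
      # purely illustrative aliases; not part of core Ruby
      alias_method :uppercase, :upcase
      alias_method :lowercase, :downcase
    end

    "Hello".uppercase  # => "HELLO"
    "HELLO".lowercase  # => "hello"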
NB: I didn't downvote you (I can't anyway as I don't have the necessary karma to downvote immediate child answers to my posts). I thought you raised a valid point worthy of discussion.
Well, it would definitely be easier if all languages used the same conventions. The problem, I think, is that all the possible conventions are arbitrary and each is objectively about as good as any other, so there are many of them, and settling on a single one across languages is never going to happen.
It's much more important to choose one convention per language and follow it everywhere, since that improves the guessability of identifiers within the language. The convention chosen may be familiar to you or not, but the more familiar one isn't thereby any better.
That being said, I agree that it would be easier to learn new languages if they all followed a single standard for naming things, I just don't think it's going to happen.
> If I can make a reasonably intelligent guess within 2 or 3 tries, then it bodes well for me. Otherwise it is a productivity hit.
Agreed, but note that those 2-3 guesses are informed by your experience to date; someone with a different history of programming languages could find your guesses weird and would have completely different ones themselves.
Another thing: I agree on the productivity hit in principle, I just want to note that there are ways of mitigating it. I use many languages regularly (I'm a hobbyist polyglot; I've learned at least 30-40 languages to varying degrees) and I think good editor integration (most notably "semantic auto-complete" and "inline docs") can make up for a lot of cases like the `upcase` one. For example, if I write this (`|` is the cursor position):
a = "asd"
> NB: I didn't downvote you
It's impossible to downvote immediate child posts, no matter the karma level, so it obviously couldn't be you :)
It would be easier if everyone would just speak English all the time, but I'm not sure it would be better.
But look at the FAQ:
Commenter dcw303 in this thread puts it better than I can.
I will refute that notion and suggest that 'plain English' type languages are an incredibly foolish diversion into a world where we pretend formal languages don't exist, are unimportant, or that English is one, or that it somehow can be laboriously contorted into a useful approximation of one.
Wouldn't you have the same issue with just any programming language?
Take a more esoteric example: capitalising just the first letter of a string. This is conventionally called 'proper case', and most other languages I know use .proper() to achieve it. Except Ruby, which decided to use .titleize() (and there's that 'ize' again, just to confound me further).
If a language is going to be 'English-y', then sticking to actual English words for things such as uppercase, proper case etc. would be handy. Going outside of those constraints just increases the guessing game workload for the programmer.
.titleize() is from ActiveSupport, not Ruby.
But proper() would have confused me much more. First time I've ever heard it called that. 'proper' would sound ambiguous to me, because it doesn't indicate to me it's about case at all.
But of course this just reinforces the point that naming is hard.
Capitalize (https://ruby-doc.org/core-2.2.0/String.html#method-i-capital...) on the other hand does something even weirder from your perspective, as it also changes letters to lower case.
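A quick demonstration of that behaviour (current core Ruby):

    "HELLO world".capitalize  # => "Hello world"
    # it upcases the first character and downcases everything else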
It still has a fairly rigid syntax - it's just more verbose than a regular programming language. That is probably an oversimplification, but that's how it felt to me at least.
This suggests they have a solution to the ambiguity problem.
Millions of dollars are wasted each year on mistyped ==/= operators, but this is some next level evil.
But yes. There’s a lot of deeply evil things that can be done in Avail, right down to the lexical scanning. Like forbidding certain variable names, or manufacturing a run of tokens every time a certain emoji is encountered. There are also a lot of safeties, like sealing methods to prevent someone overriding “_+_” for [3’s type, 4’s type]->9’s type.
Oh, and writing “x = x + 1;” as a statement will be flagged as a syntax error, because it would produce a Boolean value which is not allowed to be silently discarded like in Java or C. “Discard:_” makes these rarely needed situations absolutely obvious.
Not impressed tbh.
Module "Hello World"
Method "Greet" is [ Print: "Hello, world!\n"; ];
I'm pasting a link because I added on-hover popups explaining parts of the snippet, but here is the sample itself:
Module "Hello World"
Method "Greet Router" is [
socket ::= a client socket;
target ::= a socket address from <192, 168, 1, 1> and 80;
http_request ::= "GET / HTTP/1.1\n\n"→code points;
Connect socket to target;
Write http_request to socket;
resp_bytes ::= read at most 440 bytes from socket;
Print: "Router says: " ++ resp_bytes→string ++ "\n";
I tried to link to a more interesting page though. I stumbled upon someone mentioning the language, but don't really get it. It seems to promote building DSLs, but lots of languages like Lisp & Rebol have been doing that for ages.
I understand your intent, but I think it unfortunately ended up confusing matters for fellow readers.
Add to this the fact that the website itself is already not very clear...
Only you can identify the version quickly and link it for us.
Ok, I consider myself OK at type theory but I'm still lost as to what this claim actually means. And if it is what I think it is (that all values have types), I wonder how this doesn't run afoul of the decidability of fancy dependent type systems (perhaps 1 has a type, 2 has a type, but 1 + 2's type isn't 3?).
Haskell (98) has infinite and recursive constructs, and its type system is nevertheless decidable.
Formal languages, at root, have exact reference. In a programming language, a symbol ultimately refers to a block of memory, or an operation. The problems of writing a formal language are ones of trying to express a given concept when the relation between symbols and references is known, but the relationship between concept and symbol is not.
In natural language, a symbol ultimately refers to nothing. Its meaning is derived from context, convention, intention. As such, the relationship between concept and symbol is basically known - we know we are talking about red things when we use the word red. The relationship between concept and reference is absolutely unknown - we can never know for sure whether our concept 'red' is adequate to real red objects.
As such, natural languages are a poor model for formal ones. The problems are essentially different. In one, you know how the symbol 'red' relates to operations and memory. In another, you know how the symbol 'red' relates to intention and meaning. Each has different challenges associated.
For the former, we have the whole notion of semantics, the development of tools like valgrind, tests, etc.
Is there anything you can recommend to read? I'm pretty familiar with how computers work on a mechanical level, but I'm pretty ignorant of the theoretical intuitions behind all the more functional stuff.
In Scala, for example, you can do this:
print( 1 to 10 mkString "," )
It's not 100% human language/grammar, but close (and you have auto-completion using IDE). Why would you need another DSL?
(Not trying to bash Avail, nor promote Scala, just curious for its usecases)
import Data.List (intercalate)
putStrLn (intercalate ", " (map show [1..10]))
I don't buy natural-language-ish programming languages. The grammar becomes far too complicated very quickly. A simple but flexible grammar, a la most functional languages, is superior.
This is the definition of intercalate.
verb (used with object), in·ter·ca·lat·ed, in·ter·ca·lat·ing.
1. to interpolate; interpose.
2. to insert (an extra day, month, etc.) in the calendar.
To be honest, I'm not sure how much clearer this is to read than for example Python.
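For what it's worth, here's the same thing as a one-liner in Ruby (Python's version would look almost identical):

    puts((1..10).to_a.join(", "))  # => 1, 2, 3, 4, 5, 6, 7, 8, 9, 10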
Edit: (I just googled "algebraic type lattice" and while ymmv, I don't recommend it unless you're well versed in scary black mathic)
I didn't get too in-depth reading the docs, but any language that goes for non-ASCII symbols a la APL is going to be fighting an uphill battle right from the get-go.
Maybe it was a bit easier for APL because its interfaces were more immersive than what we have now for non-ASCII input, especially when mixed with regular ASCII.
Type type type, oh wait, backslash, dropdown, there's my symbol, enter, type type type. That's not very fun. It's even less fun when you're dividing your cognition between what you're actually trying to accomplish and what you have to type.
Just my two cents, no ill will
Languages like Java (and many more) fail to even have a top type, which leads to kludgey add-ons to the type system — boxed, @Nullable, erased generics, annotations, dependency injection, purity annotations, etc., all of which should have been part of a single type system.
Almost all of these monstrosities are caused by the “object purity” mindset that kept the designers from including first-class functions in the language. Avail attempts to include a large set of paradigms instead of trying to “axiomatize” the exposed surface (although we do axiomatize the construction of these constructs).
I first encountered them when doing work-related reading on dataflow analysis, and I'm very glad I did. Interesting stuff.
(edit) Summary: the topic might look scary at first blush, but it's actually not.
    o   o   o
   / \ / \ / \
  o   o   o   o
 / \ / \ / \ / \
o   o   o   o   o
       ...
I like some well-structured separation in my coding languages. It's not a downside for me at all.
From 2015: https://news.ycombinator.com/item?id=9043561
And we're about to release Español Llano, the Spanish version of Plain English. This new compiler compiles Spanish and English, or both, even in the same sentence. Kind of like a bilingual human.
No, I would not. Don't make assumptions on behalf of others.
But even then, I'm not convinced that "many" programmers would rather write the latter.
Cheesy, closed languages like C forgot that exponentiation was even a thing, or complex numbers. Even if you shift over to C++'s "clevernesses", you still don't get exponentiation, because the traditional caret symbol is already taken for exclusive-or, leaving exponentiation with no sensible ASCII punctuation.
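For contrast, here's a sketch (Ruby, nothing Avail-specific) of a language that simply picked different spellings for both:

    2 ** 10       # => 1024    exponentiation without the caret
    (1 + 2i) * 2  # => (2+4i)  complex numbers as literals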
As for someone’s distant comment that Lisp has been used to create languages for years... sure, if the language you wanted was parenthesized lists with keywords inside the left parenthesis. Which is just Lisp with a few more operations and macros. Yuck.
If you want Avail to include all of C, you define C language capability in an Avail module and subtract the rest. Then your C program is also an Avail program, so you have the same tool chain you had with core Avail (still lacking, but getting there), you have dynamic optimizing compilation to JVM (and eventually native, perhaps), and you have a program that not only interoperates with programs written in other dialects, but allows direct connection between them. Like if you want Avail exceptions with your C code, you import exceptions and just use them. You don’t build them over and over again for each language. And if you want to support closures in your C, you’d probably just remove the limitation that treated the closures of Avail as contextless C functions. Similarly for backtracking, universal serialization, and sensible module dependencies (maybe call it #import, versus the horrible textual #include mess of C).
And finally, if you want your existing vanilla C code to be able to call “out” efficiently to some FORTRAN or Python code, you no longer have an impedance mismatch. Define those dialects as Avail, and you’ll have real garbage collection, multiplexed lightweight threads (fibers), dynamic optimization, and objects and functions that are reasonably compatible between these dialects.
Note that embedding PROGRAMMAR inside Lisp is exactly the kind of thing we’re doing with Avail… it’s just that the base metarules are more articulate for that sort of thing.
PROGRAMMAR rewritten today in Avail wouldn’t look at all like Lisp — thank goodness. But if you implemented that language via a compiler, you would have the limitations of the compiler technology constantly getting in the way of the linguistic forms you’re trying to express. For example, having to decide on a linear ranking of precedence levels for every operation, even if they couldn’t ever appear next to each other due to type or linguistic constraints.
Even C++ requires custom lexing tweaks because of spurious “>>” tokens in nested templates… and special backtracking rules to distinguish function definitions from stack object creation. But those bypasses aren’t readily usable in the available parsing tools. Avail’s compiling scheme is decades beyond that.
Rather than trying to get programming languages to look like human language, we need to get human language closer to computer language.
By this I mean that every argument I've ever been in has turned out to be either an intrinsic disagreement about definitions (fixable, and usually we end up agreeing) or an intrinsic argument about god (probably not fixable; we will probably never agree).
If the average person understood the beauty of a solid (and unambiguous) definition, I dunno, world peace and rainbows and butterflies? Probably not, but I'd definitely not have to rage-quit socializing so often.
Still, with that said, from a purely intellectual curiosity standpoint this is neat. I hope that the general saltiness of the internet doesn't discourage the devs from working on this some more.