The Nile Programming Language (github.com)
80 points by ingve on Nov 10, 2015 | 25 comments



I first saw it in Alan Kay's talk 'Is it really "Complex"? Or did we just make it "Complicated"?' [0]

The GitHub repo hasn't been updated in more than three years, and maybe it's just my inexperience or I'm spoiled, but given the lack of proper documentation I have no idea how to set up and use this language. What platforms does it work on?

Also, looking at the examples, it seems somewhat APL-ish due to its use of special characters. How do I type them (and do it quickly) on a regular laptop?

[0] https://www.youtube.com/watch?v=ubaX1Smg6pY


Alan Kay talks about Gezira, a library built with Nile (and the motivation for making Nile, although Nile is much more general purpose). There's a JS version of Nile/Gezira, used for Bret Victor's demonstrations, which can be found here [1].

The main Nile compiler here is written in Maru, a programming language of Ian Piumarta's which is another awesome project on its own. It's essentially a Lisp where you can define how the evaluator behaves for a given type. [2] The version of Maru used is included in the Nile repo, though. It can be built with GCC or MinGW. The compiler translates Nile to either Maru or C code.

There's a Squeak version, which can be found at [3].

There's also a Haskell vector library which exposes an interface similar to Cairo's but is modeled after Gezira under the hood (used for purely functional image generation), although this implementation is function based and doesn't really provide a proper streaming abstraction like the one Nile provides. [4]

[1]: https://github.com/damelang/gezira/tree/master/js

[2]: http://piumarta.com/software/maru/

[3]: http://tinlizzie.org/updates/exploratory/packages/

[4]: https://hackage.haskell.org/package/Rasterific


Thanks for the link. I had problems understanding what Nile was all about from reading the slides, which provide very little in terms of context and explanations.

Watching that talk and hearing that Nile is a declarative streaming language from the research group where Bret Victor works, I now understand the practical applications of such a language.


Here's a more interesting demo of Gezira: http://tinlizzie.org/~bert/Gezira.ogv


> The GitHub repo hasn't been updated in more than three years, and maybe it's just my inexperience or I'm spoiled, but given the lack of proper documentation I have no idea how to set up and use this language. What platforms does it work on?

It looks very interesting; it's a bit sad to see things like:

https://github.com/damelang/nile/issues/4

Mostly hanging since this summer. I would love it if more of these code dumps from research projects at least included a section on how to get the examples running (and something on requirements, such as which OS it has been tested on, even if that turns out to be "only tested on DOS 2.1").


I'm wondering why this was implemented as a language, rather than as a library in, say, Python or Haskell.

Is "streaming" something that is overlooked in conventional programming languages?


I think the philosophy behind it is that they (Viewpoints Research Institute) want(ed?) to be able to create lots of domain-specific programming languages easily. So being able to implement those little languages from scratch, and not as part of an existing language, was part of the research effort, not just the language itself.

They have created tools like OMeta2 for this purpose, because they think that this way every problem is expressed in the natural language of the problem domain. A huge compiler like GHC, with its >= 100,000 lines of code, also runs against the goal of creating a system that needs orders of magnitude less code (I think their initial budget was ~10,000 lines of code).


The Viewpoints Research Institute is going to be the Xerox PARC of our generation.

They're developing this really cool computing model which is user-friendly and extensible, and it just makes sense for removing unnecessary details in programming, allowing you to focus on one aspect of the program at a time - reducing complexity in a way that classic text-based languages + IDEs are not able to.

It improves the current state of interaction with computers, in the same way that PARC's windowing desktop with direct manipulation improved over the existing form-based applications of the time. I expect the next 30 years of computing tools to be heavily based on that model.


There are certainly ideas coming from VPRI which will make it into mainstream use, but there's a big difference between it and Xerox PARC - namely, inertia. PARC was developing from mostly a clean slate and got to define how computing would become for the average user. (And let's face it, not all of what came out of PARC was good. The windowing desktop has its flaws.)

Nowadays everyone already has an electronic device and there are thousands of other companies working on related research. It's unlikely, in this context, that VPRI can have such a massive impact - particularly when they've decided to use tools (i.e., Smalltalk) that the rest of the computing industry rejected decades ago (whether for the right reasons or not, the superstition is already in place, and reversing that attitude isn't going to happen quickly).

I'm a fan of some of the work VPRI has done, especially Gezira, but it's only a small part of a much bigger picture, and there's no silver bullet in computing. I've seen work from outside VPRI which is likely to have a larger impact on the state of software development, and hardware developments which seem impossible to succeed given the current state of software. It will be a nice fairytale in 30 years if we can point at one company as having created what computing has become, but that ignores the tens of thousands of research papers that contributed to the current state of computing, and the state it will become.

What's more fascinating about VPRI isn't the results, but the methods: throw away the superstition about how computing is currently done and look at other ways we could do it, rooted in well-understood theory. It's a shame that this kind of development is rare and most of the industry is bandwagon-hopping.


> they've decided to use tools (i.e., Smalltalk) that the rest of the computing industry rejected decades ago

Hey, the Alto wasn't the technological basis for the desktop environment either - it took a different company to popularize the format (Apple's Lisa and the Macintosh). The good thing is that the ideas are spreading - Bret Victor's talks have had a significant impact in developer circles, and more people are taking parts of them into their own projects, or at least recognize them and understand their value.

If this means that, this time around, adopting the methods will take a distributed approach, with many people incorporating piecemeal parts of the global idea into their pet projects rather than everything coming together from a single entity, so be it. The times are a-changin'.

> Throw away the superstition about how computing is currently done and look at other ways we could do it, rooted in well-understood theory. It's a shame that this kind of development is rare and most of the industry is bandwagon-hopping.

Agreed. I believe it's largely a matter of inertia, of which the computing industry is a major culprit. Back in my time we learned to build the control flow of command-line applications using a top-down main menu that stalled waiting for user input and then called the subroutine for the selected command when the user chose its action number; that was simply how applications were made, and we didn't know there was a different way. Yet nowadays everyone knows how to use an active event loop that mixes user- and application-initiated actions.

But in the end, all that inertia is overcome when knowledge about the best practices spreads. The form fields and radio options in HTML appear in essentially the same shape as in banking applications from the '60s, which were the result of user-centered research and thus are well suited to the problem they solve. But we now use declarative languages for building the forms and have specific widgets for accessing their values, instead of relying on ad-hoc low-level code for showing the fields and reading their input.

So there's hope that, in the long term, the best ideas spread and ultimately become detached from the legacy practices where they were first seen.


You might consider shifting some of your enthusiasm from VPRI to CDG Labs, where a fascinating group has come together around Alan Kay.

http://www.bloomberg.com/news/articles/2015-01-29/sap-looks-...


Yeah - I'm already following the development of Apparatus, the visually programmable diagram editor. I didn't know they were related through Alan Kay, but it doesn't surprise me to learn it. :-)

[1] https://github.com/cdglabs/apparatus


VPRI and CDG operate in parallel, with Alan in LA and Bret Victor + others in San Francisco.


Here's a small comparison of Nile and (manually translated) JS code.

Nile[1]:

    DecomposeBeziers : Bezier >> EdgeSample
        ∀ (A, B, C)
            inside = (⌊ A ⌋ = ⌊ C ⌋ ∨ ⌈ A ⌉ = ⌈ C ⌉)
            if inside.x ∧ inside.y
                P = ⌊ A ⌋ ◁ ⌊ C ⌋
                w = P.x + 1 - (C.x ~ A.x)
                h = C.y - A.y
                >> (P.x + 1/2, P.y + 1/2, w × h, h)
            else
                ABBC    = (A ~ B) ~ (B ~ C)
                min     = ⌊ ABBC ⌋
                max     = ⌈ ABBC ⌉
                nearmin = | ABBC - min | < 0.1
                nearmax = | ABBC - max | < 0.1
                M       = {min if nearmin, max if nearmax, ABBC}
                << (M, B ~ C, C) << (A, A ~ B, M)
JS[2]:

    gezira.DecomposeBezier = function(downstream) {
        return function(input) {
            var output = [];
            while (input.length) {
                var a_x = input.shift();
                var a_y = input.shift();
                var b_x = input.shift();
                var b_y = input.shift();
                var c_x = input.shift();
                var c_y = input.shift();
                var _1_x = Math.floor(a_x);
                var _1_y = Math.floor(a_y);
                var _2_x = Math.floor(c_x);
                var _2_y = Math.floor(c_y);
                var _3_x = Math.ceil(a_x);
                var _3_y = Math.ceil(a_y);
                var _4_x = Math.ceil(c_x);
                var _4_y = Math.ceil(c_y);
                var _5_0 = _1_x == _2_x;
                var _5_1 = _1_y == _2_y;
                var _6_0 = _3_x == _4_x;
                var _6_1 = _3_y == _4_y;
                var _7_0 = _5_0 || _6_0;
                var _7_1 = _5_1 || _6_1;
                var _8 = _7_0 && _7_1;
                if (_8) {
                    var p_x = _1_x < _2_x ? _1_x : _2_x;
                    var p_y = _1_y < _2_y ? _1_y : _2_y;
                    var w = p_x + 1 - ((c_x + a_x) / 2);
                    var h = c_y - a_y;
                    output.push(p_x + 0.5, p_y + 0.5, w * h, h);  // EdgeSample: (P.x + 1/2, P.y + 1/2, w × h, h)
                }
                else {
                    var abbc_x = (((a_x + b_x) / 2) + ((b_x + c_x) / 2)) / 2;
                    var abbc_y = (((a_y + b_y) / 2) + ((b_y + c_y) / 2)) / 2;
                    var min_x = Math.floor(abbc_x);
                    var min_y = Math.floor(abbc_y);
                    var max_x = Math.ceil(abbc_x);
                    var max_y = Math.ceil(abbc_y);
                    var _9_x = Math.abs(abbc_x - min_x);
                    var _9_y = Math.abs(abbc_y - min_y);
                    var nearmin_0 = _9_x < 0.1;
                    var nearmin_1 = _9_y < 0.1;
                    var _11_x = Math.abs(abbc_x - max_x);
                    var _11_y = Math.abs(abbc_y - max_y);
                    var nearmax_0 = _11_x < 0.1;
                    var nearmax_1 = _11_y < 0.1;
                    var m_x = nearmin_0 ? min_x : (nearmax_0 ? max_x : abbc_x);
                    var m_y = nearmin_1 ? min_y : (nearmax_1 ? max_y : abbc_y);
                    input.unshift(a_x, a_y, (a_x + b_x) / 2, (a_y + b_y) / 2, m_x, m_y,
                    m_x, m_y, (b_x + c_x) / 2, (b_y + c_y) / 2, c_x, c_y);
                }
            }
            downstream(output);
        };
    };
It should be obvious why a DSL is preferred in this case. Although there are obviously some more succinct ways one could write the JS version, the constraints of trying to fit it into JavaScript syntax are probably not worth the effort, and it's unlikely you could ever get the same amount of information per character as an eDSL.
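
For what it's worth, here's a rough sketch of what a more succinct hand translation might look like, with a few small point helpers pulled out. It still mirrors the Nile structure above; the helper names are made up for illustration, not taken from gezira.js:

    // Sketch only: beziers arrive as {x, y} point objects rather than a flat
    // number stream, and the output stays flat like the version above.
    function vec(x, y)     { return { x: x, y: y }; }
    function map2(f, p, q) { return vec(f(p.x, q.x), f(p.y, q.y)); }
    function floor(p)      { return vec(Math.floor(p.x), Math.floor(p.y)); }
    function mid(p, q)     { return map2(function(u, v) { return (u + v) / 2; }, p, q); }

    function decomposeBeziers(downstream) {
        return function(input) {        // input: [A, B, C, A, B, C, ...]
            var output = [];
            while (input.length) {
                var a = input.shift(), b = input.shift(), c = input.shift();
                var fa = floor(a), fc = floor(c);
                var insideX = fa.x === fc.x || Math.ceil(a.x) === Math.ceil(c.x);
                var insideY = fa.y === fc.y || Math.ceil(a.y) === Math.ceil(c.y);
                if (insideX && insideY) {
                    var p = map2(Math.min, fa, fc);          // P = ⌊A⌋ ◁ ⌊C⌋
                    var w = p.x + 1 - (c.x + a.x) / 2;
                    var h = c.y - a.y;
                    output.push(p.x + 0.5, p.y + 0.5, w * h, h);
                } else {
                    var ab = mid(a, b), bc = mid(b, c), abbc = mid(ab, bc);
                    var snap = function(v) {                 // snap to a nearby integer
                        var lo = Math.floor(v), hi = Math.ceil(v);
                        return Math.abs(v - lo) < 0.1 ? lo
                             : Math.abs(v - hi) < 0.1 ? hi : v;
                    };
                    var m = vec(snap(abbc.x), snap(abbc.y));
                    input.unshift(a, ab, m, m, bc, c);       // requeue both halves
                }
            }
            downstream(output);
        };
    }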

Conventional languages don't really overlook streaming; there's not really any one solution that fits all when it comes to it, so it's typically left as a library problem.
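
To make the "library problem" point concrete, here's a minimal sketch of how kernels in the downstream-callback style above can be composed into a pipeline: each stage takes a downstream callback and returns an input handler, so a pipeline is just right-to-left function composition (the names here are illustrative, not the actual gezira.js API):

    // Compose stages so that data flows left to right through them.
    function pipeline(/* stage1, stage2, ... */) {
        var stages = Array.prototype.slice.call(arguments);
        return function(sink) {
            return stages.reduceRight(function(downstream, stage) {
                return stage(downstream);
            }, sink);
        };
    }

    // Two toy stages in the same shape as DecomposeBeziers above.
    function positiveOnly(downstream) {
        return function(input) { downstream(input.filter(function(x) { return x > 0; })); };
    }
    function doubled(downstream) {
        return function(input) { downstream(input.map(function(x) { return 2 * x; })); };
    }

    var run = pipeline(positiveOnly, doubled)(function(out) { console.log(out); });
    run([-1, 2, 3]);   // logs [4, 6]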

[1]: https://github.com/damelang/gezira/blob/master/nl/rasterize....

[2]: https://github.com/damelang/gezira/blob/master/js/gezira.js#...


Unsurprisingly, this looks like it would benefit a lot from JS SIMD (when it rolls out).

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

https://blogs.windows.com/msedgedev/2015/05/21/intel-and-mic...
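
For the curious, here's a rough sketch of what the midpoint math from the kernel above could look like with the SIMD.js API as currently proposed (SIMD.Float32x4 with splat/add/mul/extractLane); this is purely illustrative, not code from gezira.js:

    // Compute ABBC = (A ~ B) ~ (B ~ C) for x and y in one chain of vector ops,
    // assuming an engine that implements the SIMD.js proposal.
    function abbcMidpoint(a_x, a_y, b_x, b_y, c_x, c_y) {
        var half = SIMD.Float32x4.splat(0.5);
        var a = SIMD.Float32x4(a_x, a_y, 0, 0);
        var b = SIMD.Float32x4(b_x, b_y, 0, 0);
        var c = SIMD.Float32x4(c_x, c_y, 0, 0);
        var ab   = SIMD.Float32x4.mul(SIMD.Float32x4.add(a, b), half);   // A ~ B
        var bc   = SIMD.Float32x4.mul(SIMD.Float32x4.add(b, c), half);   // B ~ C
        var abbc = SIMD.Float32x4.mul(SIMD.Float32x4.add(ab, bc), half); // (A~B) ~ (B~C)
        return [SIMD.Float32x4.extractLane(abbc, 0),
                SIMD.Float32x4.extractLane(abbc, 1)];
    }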


A proper [e]DSL should almost always be preferred to a library, for very obvious reasons.


I don't think that's obvious, and my experience is the opposite. Domain-specific languages make for pretty example code, but tend to lose some of the flexibility of the underlying general-purpose language, which can cause you trouble later on when you want to feed the output of one command into another, tweak the input format slightly, or do just about anything the author of the DSL didn't anticipate.

Libraries tend to be more composable, and can be used with other libraries. Mashing together two DSLs can be close to impossible.


Weird position. Libraries are:

- Limited to the host language syntax

- Littered with unrelated abstractions from the host language

- Not flexible and not composable enough to be usable

- SLO-O-OW - you cannot do domain-specific optimisations for libraries and so-called "fluent" DSLs

- Very poor tooling (or no tooling at all)

With eDSLs you have:

- Pure essence of the problem domain, no unrelated abstractions

- Nice tooling (syntax highlighting, semantic navigation, context-sensitive suggestions, etc.)

- Flexibility - you can drop your entire DSL implementation and swap for another one without changing a single line of the user code

- Composability - it's very easy to mix DSLs together, to build new DSLs out of bits and pieces of the existing ones.

- Performance - you can perform as many domain-specific optimisations as you like (see the toy sketch after this list)

- Quality - add any kind of static type system to your DSL to check for the problem domain-specific constraints.
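
To make the optimisation point concrete, here's a toy sketch (all names made up for illustration): an embedded DSL that builds an expression tree instead of computing eagerly, so a domain-specific rewrite can run before evaluation.

    // Toy eDSL: expressions are plain objects, built by ordinary function calls.
    function num(v)      { return { op: 'num', v: v }; }
    function add(a, b)   { return { op: 'add', a: a, b: b }; }
    function scale(e, k) { return { op: 'scale', e: e, k: k }; }

    // Domain-specific optimisation: fuse nested scales into one.
    function optimise(e) {
        if (e.op === 'scale' && e.e.op === 'scale') return optimise(scale(e.e.e, e.k * e.e.k));
        if (e.op === 'scale') return scale(optimise(e.e), e.k);
        if (e.op === 'add')   return add(optimise(e.a), optimise(e.b));
        return e;
    }

    function evaluate(e) {
        switch (e.op) {
            case 'num':   return e.v;
            case 'add':   return evaluate(e.a) + evaluate(e.b);
            case 'scale': return e.k * evaluate(e.e);
        }
    }

    var expr = scale(scale(add(num(1), num(2)), 2), 3);
    console.log(evaluate(optimise(expr)));   // 18, with the two scales fused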

> Mashing together two DSLs can be close to impossible.

What?!? Unbelievable.


> > Mashing together two DSLs can be close to impossible.

> What?!? Unbelievable.

Let me put the unbelievable into concrete terms for you. On a previous project, I had a program that needed to repeatedly do the following:

1) Download an XML config file
2) Parse said XML file for a set of voltages
3) Send the voltages to a set of power supplies
4) Retrieve a new array of voltages from our ADCs
5) Plot the voltages
6) Display the plots on a webpage

At an early point in the project, everything was using the best DSL for the job. The XML was parsed via XSLT. LabVIEW was used to control the power supplies and ADCs. OriginPro provided the plots. The web pages were written in PHP.

However, the amount of effort that went into channeling the data between AWK, LabVIEW, OriginPro, and PHP was found to be far greater than what it took to write a new application that did all of it in Python. No single stage of the process was as easy as it had been in its DSL, but the impedance mismatch between the languages was nightmarish.


sklogic is talking about DSLs embedded in a host language. You are comparing that to external DSLs, some of which weren't built to play particularly well with others.

You can, of course, still have impedance mismatch between eDSLs, but you're just not talking about the same things.


Sorry about that. Looking back, he even said eDSL in his post.

Yeah, embedded DSLs are awesome.


Of course. Not having a single common fabric for all the DSLs can be a nightmare.

For this reason, eDSLs built on top of a common, sufficiently powerful metalanguage are much better than any standalone DSL.

The aforementioned Maru seems to be sufficiently powerful.


Composability, optimization, type checking per DSL, and type checking globally in the host language can combine to offer real power to someone who wants productivity and robustness in combination. There's not enough work in this area, considering the potential benefits.


Neat. The speed-up with respect to cores is particularly impressive (though it would be interesting to know the exact conditions - or is it just some kind of average case?).


A very well-designed DSL. And I didn't know about Maru before; it's quite impressive.



