I'm not sure if anyone is still reading this thread, but I wrote up some thoughts on the differences between Sequoia and Legion and some of the lessons we learned. Maybe this is a little too much "inside baseball", but hopefully some will enjoy it.

Sequoia was basically an answer to the question, "what could a compiler do for you if it knew everything about your program?" And I do mean everything: in addition to your source code, Sequoia had to be given, at compile time, the sizes of every input array as well as the exact layout of the machine it was to execute on. While these restrictions were lifted to some degree later on, Sequoia basically had to be able to statically compute every task that it would execute along with the exact sizes of all inputs and outputs. And with this information it was able to do some pretty [fantastic optimizations](http://theory.stanford.edu/~aiken/publications/papers/ppopp0...). I believe, though I can't find the reference at the moment, that the compiler was able to generate (from completely hardware-agnostic source code) implementations of BLAS routines competitive with Intel MKL. This was a pretty mind-blowing achievement, and to this day I'm not aware of any work that has really been able to equal it.
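To make "fully static" concrete, here's a minimal sketch in plain C++ (not Sequoia syntax; the SAXPY example and all names are mine). The point is that the problem size, the block size, and therefore every leaf task and its exact operands are fixed at compile time, which is the information a Sequoia-style compiler got to exploit.

```cpp
// Illustrative C++ only, not Sequoia source: every size below is a
// compile-time constant, so the full tree of leaf tasks and their exact
// operands is known before the program ever runs.
#include <array>
#include <cstddef>

constexpr std::size_t N = 4096;  // total problem size, fixed at compile time
constexpr std::size_t B = 512;   // block (leaf task) size, also fixed

// Leaf "task": works on one block whose extent is known statically.
template <std::size_t Block>
void saxpy_leaf(float a, const float* x, float* y) {
  for (std::size_t i = 0; i < Block; ++i) y[i] += a * x[i];
}

// "Inner task": statically decomposes into exactly N/B leaf tasks. A
// Sequoia-style compiler can enumerate every one of these calls, their
// operand extents, and where they should run.
void saxpy(float a, const std::array<float, N>& x, std::array<float, N>& y) {
  static_assert(N % B == 0, "decomposition is exact by construction");
  for (std::size_t b = 0; b < N / B; ++b)
    saxpy_leaf<B>(a, x.data() + b * B, y.data() + b * B);
}

int main() {
  std::array<float, N> x{}, y{};
  saxpy(2.0f, x, y);
}
```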

Requiring programs to be fully static also had a dramatic cost, one which it turned out not to be so easy to lift. In what I believe was the [final paper to be published on Sequoia](http://theory.stanford.edu/~aiken/publications/papers/ppopp1...), Mike Bauer and others showed how to extend Sequoia to support limited forms of dynamic behavior. But the solution wasn't all that satisfying and the work to get there was fairly excruciating. Instead of continuing to work with the language, Mike ended up switching gears to start a new project entirely, Legion.

As an aside, one of the other lessons learned from the Sequoia project is that research projects tend to be tied to the Ph.D. students who work on them. In the Sequoia project, there was a gap in continuity between the first and second generations of students to work on the project. This meant that second-generation students like Mike basically ended up picking up a big legacy code base on their own. This turned out to be a huge drain on productivity and was among the reasons that Sequoia was eventually abandoned.

The biggest decision made about Legion up front was that it had to be dynamic. This was a huge risk because the analysis performed by Sequoia simply could not have been performed at runtime with reasonable cost. Sequoia relied on knowing every single task that would ever execute and the exact bounds of every array input and output. This meant that Sequoia could do perfect alias analysis on these input and output arrays. Because Legion would be a dynamic system, any analysis would have to be performed during the execution of the program. There was no way we were going to do this in Legion without fundamentally changing the abstractions of the programming model, and frankly it wasn't obvious up front if this would even be possible to make tractable.

One of the central insights of Legion was to [explicitly reify the notion of data partitioning](http://legion.stanford.edu/pdfs/oopsla2013.pdf) in the programming model. Sequoia didn't need this because it simply knew the inputs and outputs to every task. Legion couldn't afford to pay the $O(N \log N)$ cost to perform this analysis at runtime. By lifting partitioning into a first-class primitive, we were able to radically reduce the runtime cost of this analysis by leveraging what the user already knows about their data partitioning. Originally this required users to declare certain properties of their partitions, such as disjointness, in order to make things efficient. Eventually we discovered a [partitioning sublanguage](http://legion.stanford.edu/pdfs/dpl2016.pdf) which allowed us to statically discover most of these important properties, and to provide many static guarantees even in the presence of data-dependent behavior.
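Here's a toy sketch of why a first-class, user-declared partition helps; this is not the Legion API, just illustrative C++ with made-up names. If the application declares (or the system can verify) that a partition is disjoint, the runtime can skip the pairwise overlap tests it would otherwise need between the pieces of data touched by concurrent tasks.

```cpp
// Toy model (not the Legion API): why declared disjointness lets a runtime
// skip pairwise alias checks between subregions used by concurrent tasks.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

struct Subregion { std::int64_t lo, hi; };  // inclusive index bounds

struct Partition {
  std::vector<Subregion> subregions;
  bool declared_disjoint;  // property supplied (or verified) up front
};

bool overlaps(const Subregion& a, const Subregion& b) {
  return a.lo <= b.hi && b.lo <= a.hi;
}

// Without the declaration, the runtime must compare subregions pairwise --
// the analysis cost the post refers to. With it, the answer is immediate.
bool tasks_may_conflict(const Partition& p, std::size_t i, std::size_t j) {
  if (p.declared_disjoint) return false;               // O(1): trust the declaration
  return overlaps(p.subregions[i], p.subregions[j]);   // fall back to checking
}

int main() {
  Partition p{{{0, 99}, {100, 199}, {200, 299}}, /*declared_disjoint=*/true};
  std::cout << tasks_may_conflict(p, 0, 1) << "\n";  // 0: provably independent
}
```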

The other big insight of Legion was what we call index tasks. Index tasks are basically collections of tasks that can be described in $O(1)$ space. This turns out to be essential if you want to scale your system to very large distributed machines. Any representation which is $O(N)$ inevitably becomes a scalability bottleneck, either because of memory-capacity constraints or because of runtime complexity. By making the index task representation $O(1)$ we were able to design an [optimization to keep the runtime analysis complexity $O(1)$ as well](http://legion.stanford.edu/pdfs/cr2017.pdf).
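A rough sketch of the index-task idea, again with invented types rather than the real Legion launcher API: one constant-size descriptor (a launch domain plus a projection from points to data) stands in for an arbitrarily large set of point tasks, so nothing in the representation grows with N.

```cpp
// Toy sketch (not the Legion API): an index task describes N point tasks with
// a constant-size launch descriptor instead of materializing N launches.
#include <cstdint>
#include <iostream>

struct Rect1D { std::int64_t lo, hi; };  // launch domain, O(1) to store

// O(1) description of a whole family of tasks: the task to run, the domain
// of points, and a projection from each point to the piece of data it uses.
struct IndexTaskLaunch {
  int task_id;
  Rect1D domain;
  std::int64_t (*projection)(std::int64_t point);  // point -> subregion index
};

int main() {
  // One descriptor stands for a million point tasks; nothing here grows with N.
  IndexTaskLaunch launch{/*task_id=*/7, {0, 999999},
                         [](std::int64_t p) { return p; }};
  std::cout << "points: " << (launch.domain.hi - launch.domain.lo + 1) << "\n";
}
```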

One consequence of being a dynamic runtime system is that Legion itself isn't able to optimize the low-level code executed by the system. It seems to be a general principle that the closer you are to the hardware, the more static you need to be. Conceptually, Legion solves this problem by splitting it into two parts: the kernels, or low-level code that has to execute as efficiently as possible on the processor, and the orchestration of the parallelism that exists between those kernels. Legion focuses nearly entirely on the upper layer, leaving the lower layer to be handled by programmers. We've recovered some of this capability via the [Regent programming language](http://regent-lang.org/) which sits on top of Legion, but we haven't taken this nearly as far as Sequoia did.

Despite being a dynamic runtime system, we still put a lot of effort into making sure that it had a principled foundation. Legion is one of the few runtime systems I'm aware of that has its own [type system and operational semantics](http://legion.stanford.edu/pdfs/oopsla2013.pdf). This also made it relatively easy to develop Regent.

With Regent in some ways we've now come full circle, since we're back to having a programming language again. However, one key difference between Regent and Sequoia is what happens when the compilers run into behavior they can't analyze. In Sequoia, it's of the utmost importance that the compiler understand your program, because if the compiler can't prove that parallelism is safe, then it will serialize that part of the program. In contrast, if Regent can't determine that parallelism is safe, it will simply fall back to the runtime system. This means that in general Regent suffers from fewer performance cliffs, or places where performance can suddenly (and sometimes inexplicably) degrade.
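The contrast can be sketched roughly like this (conceptual C++ of my own, not code from either compiler): when static analysis is inconclusive, a Sequoia-style compiler's only safe answer is to serialize, whereas Regent can hand the question to Legion's dynamic dependence analysis instead.

```cpp
// Conceptual sketch only: how the two systems react when the compiler cannot
// prove two tasks independent.
#include <iostream>
#include <optional>

enum class Schedule { StaticParallel, RuntimeChecked, Serial };

// Hypothetical analysis result: true/false if provable, nullopt if unknown.
std::optional<bool> statically_independent(int task_a, int task_b) {
  (void)task_a; (void)task_b;
  return std::nullopt;  // pretend the compiler could not decide
}

Schedule sequoia_style(int a, int b) {
  if (statically_independent(a, b).value_or(false)) return Schedule::StaticParallel;
  return Schedule::Serial;          // can't prove it -> lose the parallelism
}

Schedule regent_style(int a, int b) {
  if (statically_independent(a, b).value_or(false)) return Schedule::StaticParallel;
  return Schedule::RuntimeChecked;  // can't prove it -> defer to the runtime
}

int main() {
  std::cout << (sequoia_style(1, 2) == Schedule::Serial) << " "
            << (regent_style(1, 2) == Schedule::RuntimeChecked) << "\n";  // 1 1
}
```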

There were some lessons we failed to learn from Sequoia. Sequoia made heavy use of [graph-based IRs](http://theory.stanford.edu/~aiken/publications/papers/ppopp0...) for its compiler. This was a design I copied when developing the [RDIR format](https://github.com/StanfordLegion/rdir) for Regent, a decision I now consider to be a mistake. Graph-based IRs are superficially appealing because they seem to represent the fundamental invariants that you want to reason about. That is, if two tasks are mutually unreachable in the IR, then they are provably safe to execute in parallel. However, working with the graph-based IR as a representation of the program turns out to be a huge pain. RDIR has been by far the biggest source of bugs in Regent and continues to be the largest maintenance burden. The other big motivation for a graph-based IR, that it makes certain aliasing properties of the program more explicit than a traditional CFG, is something that I now think could be done adequately well in more traditional formats.
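For what it's worth, the invariant a graph-based IR makes explicit is easy to state. Here is a small, self-contained illustration (mine, not RDIR): two tasks may run in parallel exactly when neither can reach the other along dependence edges.

```cpp
// Illustrative only: the property a graph-based IR makes explicit. If neither
// task can reach the other along dependence edges, they may run in parallel.
#include <cstddef>
#include <iostream>
#include <vector>

using Graph = std::vector<std::vector<std::size_t>>;  // adjacency list (DAG)

bool reaches(const Graph& g, std::size_t from, std::size_t to) {
  std::vector<bool> seen(g.size(), false);
  std::vector<std::size_t> stack{from};
  while (!stack.empty()) {
    std::size_t n = stack.back();
    stack.pop_back();
    if (n == to) return true;
    if (seen[n]) continue;
    seen[n] = true;
    for (std::size_t m : g[n]) stack.push_back(m);
  }
  return false;
}

bool may_run_in_parallel(const Graph& g, std::size_t a, std::size_t b) {
  return !reaches(g, a, b) && !reaches(g, b, a);  // mutually unreachable
}

int main() {
  // Task 0 feeds tasks 1 and 2: tasks 1 and 2 are independent; 0 and 1 are not.
  Graph g{{1, 2}, {}, {}};
  std::cout << may_run_in_parallel(g, 1, 2) << " "
            << may_run_in_parallel(g, 0, 1) << "\n";  // prints: 1 0
}
```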

One final note about hardware support: Sequoia made a big bet on the [Roadrunner supercomputer](https://en.wikipedia.org/wiki/IBM_Roadrunner). Roadrunner was an excellent match for Sequoia's programming model, because it used scratchpads (instead of caches) for the compute-heavy processing units. This meant that data (and even instructions!) had to be explicitly copied in and out of the memories of the individual processors, something which Sequoia excelled at. Unfortunately, Roadrunner ended up not being so representative of the machines that would follow, and the effort required to retool all the infrastructure ended up being a big drain on the project. Though it would not surprise anyone if these ideas ended up resurfacing in the future, a project like this simply can't afford not to work on contemporary hardware.

In contrast, Legion made its bet on GPUs and heterogeneous architectures more generally. So far this appears to be turning out well and has positioned us to work well on architectures such as [Summit](https://en.wikipedia.org/wiki/Summit_(supercomputer)). However, investment in technologies such as LLVM means that we're also in a better position to adapt if future technologies end up going in a different direction.




Thanks for this write-up. I can't wait to see what comes out of the Legion project next. It's one of the most exciting things going on right now, I think.



