MLIR: A Compiler Infrastructure for the End of Moore's Law (arxiv.org)
211 points by xiaodai on Feb 27, 2020 | 68 comments



I'm the developer of a language that I think is in the target audience of MLIR: https://futhark-lang.org

I'm still not completely certain MLIR addresses the difficulties I face, which are precisely about designing the specific IR that can express the passes I need. Having outside constraints seems like it would just make this work harder, and I'm not sold that the reusable components offered by MLIR offset this. At some point I probably will try generating MLIR from the compiler midend, though, just to see whether it might be a good idea.

The thing that concerns me the most might be that using MLIR seems to require writing your compiler in C++. LLVM can be accessed by non-C++ languages just fine, because it's just a consumer of bytecode, but the whole point of MLIR is for the IR to be extensible, and that will presumably require custom C++ to introduce the desired bespoke structures. In comparison, it's rare that a compiler needs custom LLVM passes.


Cool, I saw the PLDI talk a couple of years ago! Would you mind describing your potential use case on https://llvm.discourse.group/c/llvm-project/mlir? There may be more people there who are able to help or who have similar needs.

Regarding C++, I would argue that MLIR is roughly as accessible from other languages as LLVM. That is, if you just want to run passes or construct pieces of IR using already-defined constructs, adding the bindings is trivial. However, if you want to add new concepts into the IR, you should do it in the language that the IR is defined in. Most MLIR operations are actually generated from TableGen definitions; there's not that much hand-written C++ for that. We can also provide bindings to inspect the IR from different languages, at which point you can envision calling back from C++ into another language to implement passes. The risk is ending up with the "JSON of compiler IRs", as we call it, where an operation has a string id, a string kind, a list of strings for the ids of the operations producing its operands, and so on, all the way to a fully stringly typed system.
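
To make the risk concrete, here is a hypothetical sketch (plain Python, not any real MLIR or binding API) of what such a stringly typed representation tends to look like:

    # Hypothetical "JSON of compiler IRs": every piece of structure is a bare
    # string, so nothing is checked by the host language until some pass
    # mis-parses it at runtime.
    stringly_typed_op = {
        "id": "%2",
        "kind": "mydialect.add",       # the operation name is just a string
        "operands": ["%0", "%1"],      # producers referenced by string id
        "result_types": ["i32"],       # types are strings too
        "attributes": {"overflow": "nsw"},
    }

Nothing here is validated by the host language's type system, which is exactly what the strongly typed C++/TableGen-generated definitions are meant to give you.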


> However, if you want to add new concepts into the IR, you should do it in the language that the IR is defined in.

And isn't that what most users will want to do? The point of MLIR is to provide a framework for writing mid-level IRs, citing among others the Rust compiler as an example. This will invariably involve adding both new concepts and new passes (for example, the Rust IR will presumably wish to represent and check ownership information explicitly), but the Rust compiler is written in Rust, not C++.

Don't get me wrong, I can't really think of a better language for implementing MLIR than C++. Everything that I would find subjectively better would be too obscure to gain industrial traction, and using straight C would probably be too painful. I just suspect that it will limit its use as a universal mid-level IR.


> I can't really think of a better language for implementing MLIR than C++.

C++ being the language of LLVM, it makes sense. But surely almost any language would do, and memory-safe ones with stronger type systems might even lead to higher-quality output, no?


I hadn't heard of Futhark before, very cool project. I've actually had "learn how to write a compiler by writing an APL that supports GPU-optimized numerical ops" on my list of projects for a while. Part of the reason that I was interested in working on such a project (besides the fact that I find APL, GPU compute, and compilers really interesting) is that I feel like the rise of new compute platforms (GPUs, BLAST accelerators, Neural Net accelerators, Crypto ASICs, etc.) and open hardware (RISC-V) could motivate a new wave of research in compilers (I guess that is what MLIR is).

It's really easy to picture how having a family of programming languages for every class of accelerator could go very wrong, but it's kinda cool to picture what it would be like if biology, chemistry, meteorology, astrophysics, and other quantitative fields had open-source hardware accelerators and software stacks that became foundational components which "normal" programmers rarely ever touched. Then when we finally get practical quantum computing down to the size of something that could go in a PCI-E slot, there will already be a compilation toolchain waiting to target it.


I think you are right about needing to use C++ to implement MLIR dialects. It does look like you can define a lot of it in TableGen (a sort of Python-like declarative language used in LLVM) which generates a lot of the C++ for you. I regret too that MLIR is tied closely to C++ at this point, but I'm still very excited about the possibilities of languages like Futhark that make heterogeneous data processing easier to write and more efficient to run.

By the way, do you think futhark could support automatic differentiation?


> By the way, do you think futhark could support automatic differentiation?

I don't see why not. It's similar to other languages that have been the subject of AD research: https://www.microsoft.com/en-us/research/uploads/prod/2019/0...

Concretely, we've only experimented with the usual forward-mode stuff with dual numbers implemented as a library. It works well, but is of course limited in utility.
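
For the curious, the dual-number idea is small enough to sketch in a few lines (Python here for brevity and purely illustrative, not the actual Futhark library):

    # Forward-mode AD with dual numbers: carry (value, derivative) through
    # every arithmetic operation using the sum and product rules.
    class Dual:
        def __init__(self, value, deriv=0.0):
            self.value = value
            self.deriv = deriv

        def __add__(self, other):
            return Dual(self.value + other.value, self.deriv + other.deriv)

        def __mul__(self, other):
            # product rule: (uv)' = u'v + uv'
            return Dual(self.value * other.value,
                        self.deriv * other.value + self.value * other.deriv)

    def f(x):
        return x * x + x

    print(f(Dual(3.0, 1.0)).deriv)  # 7.0, i.e. f'(3) for f(x) = x^2 + x

The limitation is inherent to forward mode: each evaluation gives the derivative with respect to a single input, so gradients of functions with many inputs really want reverse mode.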


I just fundamentally disagree with the premise that it’s hard to build an IR.

It’s amazingly easy if you know how to do it and if you avoid overengineering. It takes about 3 months for 1-2 folks on average.

This new IR scaffold appears to be just general enough to handle things that Chris understands well (C compilers) and things Chris just learned about (ML). That’s cool but:

- there are other compilers that have constructs that require a fundamental paradigm shift compared to MLIR. That’s a good thing. Diversity is cool and shit. One example: the insane and wonderful IRs in JSC, which do things that aren’t compatible with MLIR.

- probably for ML it would be fine to just build your own IR. 3 months and 1-2 engineers, assuming they’ve done it before, and you’ll have yourself something purpose built for your larger team to understand, to fit into your coding style, and to surgically attack your particular problem.

On the flip side, I’m happy to see that location tracking is first class. The overall structure of this IR, like other IRs that Chris designed, is quite sensible and it seems like it’s not terrible to hack on. It’s just that it’s too much of an answer to a question nobody should be asking.


"3 months and 1-2 engineers"

That's nowhere close to the manpower you need for writing optimizations for your IR.


It’s exactly the amount of time it took to write JSC’s B3.


Basically 6 man months of some of the most senior Apple engineers? Seems fair. But at this point you "just" have an IR, and it is dedicated to your own use case, which has its advantages but also some inconveniences, especially when it comes to interacting with other tools/frameworks. So if every time you want to create a DSL for a use-case you need this amount of investment, that is a non-trivial cost.

Now think about use-cases like https://drive.google.com/file/d/19pSpEsi4I9-MKLRodD-po82HFCW... and what the MLIR ecosystem (if it develops) can provide them. After 6 months of work, instead of having "just" an IR for their abstraction, they would likely have iterated on the IR, revisited their abstraction, written optimizations, and most importantly mapped their use-cases to heterogeneous HW! They can benefit fairly easily from the work of PlaidML on the affine/polyhedral optimizations, for instance.

On the other hand, if a new HW vendor would like to expose their accelerator to a wide range of use-cases (whether it is Fortran OpenACC or the Stencil DSL), plugging a lowering/backend into MLIR is a much more efficient way than the alternatives (which are almost non-existent today, by the way).

If it were so easy to write one's own compiler end-to-end from scratch, wouldn't LLVM be out of users by now?


After 3 months we had an IR, lowering to it, lowering out of it, and an optimizer. That IR is suitable for a broad range of use cases for us - basically anytime we want to JIT something and are willing to pay for compiler optimizations.

In the other two IR design efforts I’ve seen recently, it’s true that after 3 months you have less of the lowering and optimizations. But maybe for one of them it was because the person doing all the work was an intern who hadn’t ever designed an IR before.

You’re assuming that:

- 3 months is an outlier. It’s not.

- 3 months is more than it would take to glue your infrastructure to MLIR. Most likely there is a time cost to adopting it.

- that just because someone uses MLIR, they can somehow leverage - at no cost - the MLIR optimizations that someone else did. But MLIR is just a scaffold, so it’s not obvious that optimizations implemented by one MLIR user would be able to properly handle the semantics of code generated by a different MLIR user. This assumption reminds me of how folks 20 years ago thought that if only everyone used XML then we would be able to parse each other’s data.

I don’t buy that having MLIR makes it easier to interface different compilers. I’m used to writing lowerings from one IR to another. It’s not that bad, and the hardest part is the different semantics; MLIR encourages its various users to invent their own instructions and semantics, so I don’t see how it makes this any easier.

I think that it’s unfortunate that llvm has as many users as it does. It’s a good C compiler but whenever I see folks doing a straight lowering from something dynamic, I just want to cry. I think that the llvm community is doing a lot of up and coming compiler writers a disservice by getting them to use llvm in cases where they really should have written their own. It’s a big problem in this community.


"wonder IRs in JSC" Can you please provide a link/explanation to what JSC is? Did you mean this? https://github.com/eatonphil/jsc



B3 and Air are well documented: https://webkit.org/docs/b3/

DFG ThreadedCPS and DFG SSA are not well documented. That might be a monumental task at this point. They started simple but got so insane. But here are some slides: http://www.filpizlo.com/slides/pizlo-splash2018-jsc-compiler... http://www.filpizlo.com/slides/pizlo-speculation-in-jsc-slid...

And no, the thing you linked to is not the JSC I think of.


I have a very hard time believing this will be as good as the abstract claims for anything but what the team at Google who developed it is using it for. The claims are quite extravagant and the case studies supplied were underwhelming (though I am willing to give them the benefit of the doubt that it's been useful for TensorFlow).

In particular, I would bet that MLIR is overfit to its goal, which is not a problem for anything other than the claim that it's supposed to be a more abstract setting for all manner of DSLs and compiler infrastructure. Compilers often have many intermediate languages, so saying that Rust and friends have one on top of LLVM is neither surprising nor convincing.


Obviously you're entitled to your opinion, but I think it's completely wrong. (Note: I don't work for Google, or know anyone on the MLIR team.)

Take a look at this recent talk by the people behind IREE[0] (who are not part of the MLIR team): https://drive.google.com/file/d/1os9FaPodPI59uj7JJI3aXnTzkut...

In a nutshell, they have made HUGE gains by taking advantage of the MLIR infrastructure for a use case not anticipated by the MLIR authors (i.e. making the TensorFlow-to-hardware path easier to develop).

In my opinion, MLIR is going to be much more successful long-term than LLVM is today, and by long-term, I mean "in the next few years." The number of teams consolidating already around MLIR is astounding, and having a multi-dialect SSA-based IR is a freaking game changer for compiler writers.

[0] https://github.com/google/iree


Thanks for the video.

I'd like to highlight minute 46, when they praise MLIR and explain how it enabled them to accomplish feats that were either too hard or impossible before.


I don't see how it _can_ improve anything by anywhere near the claimed amounts. Most practical work is basically dot product, which has been so heavily hand-optimized (and then some - via Fourier/Winograd for convolutions) that there's no blood left to squeeze out of that particular stone. Kernels of this quality cannot be arrived at by the compiler - a competent human always beats the compiler when she has to.

As a result, your optimization efforts will necessarily have to improve things elsewhere, where a competently written deep learning framework spends single digit percentages of its inference time. As a result of that, in turn, the best you can really hope for is single digit percentage gains, barring some exotic architecture which doesn't consist almost entirely of dot products or some degenerate cases where hand-coded convolution and dot product kernels suck, which are, by definition, uncommon.
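
For concreteness, here is the back-of-the-envelope version of that argument (the numbers are assumptions for illustration, not measurements):

    # Amdahl-style estimate: if the hand-tuned kernels already dominate,
    # optimizing everything else barely moves end-to-end time.
    kernel_fraction = 0.95   # assumed share of time in dot-product kernels
    other_speedup = 10.0     # even a 10x win on the remaining 5%...
    total_speedup = 1.0 / (kernel_fraction + (1.0 - kernel_fraction) / other_speedup)
    print(total_speedup)     # ~1.05, i.e. roughly a 5% end-to-end gain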

I mean it's great to have this - a rigorous, testable, theoretically grounded approach to execution graph manipulation is a hard thing to achieve for something as general as this, and everyone will benefit from not rolling everything from scratch every time, but let's be realistic here - this is not going to "fix" Moore's law or anything like that.


The sad thing is that there is so much NIH around MLIR. The infrastructure investments in LLVM are massive, but now lots of people are rushing off to reinvent the world rather than bringing the ideas to LLVM itself.


Whatever it is, it can't be NIH, given that Chris Lattner invented LLVM ...


Then call it reinventing the wheel. It still deserves to be weighed properly against the alternatives.


Modifying/extending LLVM is very hard to infeasible when the problem is with the fundamental architectural design. For example, according to the MLIR project, parallel compilation of a compile unit is basically not possible with LLVM.


They refer mostly to how globals and constants maintain use lists as linked lists, which is indeed a nightmare for synchronization. And yes, fixing that would require some difficult surgery.

But let's be honest about what is more work: refactoring that, or rewriting all users of LLVM out there. My bet is on the first one being cheaper by at least three orders of magnitude.


Experimentation in something easier to change than LLVM to make sure they know what exactly they want to do to LLVM later?


> The sad thing is that there is so much NIH around MLIR. The infrastructure investments in LLVM are massive, but now lots of people are rushing off to reinvent the world rather than bringing the ideas to LLVM itself.

What if your idea is completely opposite to LLVM's goals? For example, you want a very small (e.g. in the sense of LOC) compiler?


I don't know why but for some reason everyone loves to shout "NIH" complaints on HN. I don't believe in the success of MLIR but there is no reason to not attempt a fresh start. LLVM is a gigantic project that would have to be significantly rewritten to make MLIR possible. Your development team would need a lot of LLVM expertise even though it doesn't care about LLVM. The end result would be something completely different from LLVM so what benefit is there to base it on LLVM? It's not like the rust compiler will automatically update to MLIR just because MLIR is a descendant of LLVM IR.

LLVM IR is also clearly not suitable for anything other than LLVM. It failed in the browser (Webassembly won), it failed for GPUs (SPIR-V is its own thing) and now it might fail again when it is replaced by MLIR. There is clearly demand for something that is better than LLVM IR.


LLVM IR wasn't competing with WebAssembly. WebAssembly is designed as an abstract machine _target_ instruction set whereas LLVM IR is an abstract _internal_ intermediate instruction set for compiler passes. I think they fit nice together but don't have deep expertise in either of these IRs, just general compilers.


A similar argument applies to SPIR-V. While I believe Apple are still trying to use LLVM bitcode for some purposes, the reality is that for a spec like Vulkan, you really want a shader program representation that is properly designed with a view to compatibility and independent extensibility (in the sense of hardware or platform vendor extensions), which LLVM IR isn't, pretty much on purpose.


It makes sense to me that SPIR-V be a complete instruction set for an abstract GPU shader core. You can then have a GPU compiler translate to raw shader instructions. Much like a WASM compiler can translate from wasm to raw CPU instructions. I think an important design consideration in either case is "no surprises" meaning the translation should be fairly tight and predictable and serve just to smooth out some small surface area differences. If they do any funky magic/startling behavior the combined stack will start to exhibit weird performance properties


Does IREE compete with TVM? I think it has a backend used for designing hardware.


As far as I understand, it is entirely different: IREE is a runtime system.


It will be as good as the sum of work put into it by the people in the ecosystem. That is the reason why MLIR was open-sourced very early in the development process, instead of thrown over the fence after being driven to completion by Google for a particular task.

Rust and friends do have IRs on top of LLVM IR. Developing and maintaining those IRs comes with an engineering cost. MLIR lets them partially share that cost with other projects that have similar needs.


Totally agree. Like, it has no story for managing memory dependencies that arise from aliasing and no particular goodies for handling OSR at scale, etc.


> Instead of using φ nodes, MLIR uses a functional form of SSA where terminators pass values into block arguments defined by the successor block.

Does anyone know if this equivalent (or very similar) to SSI (static single information) form? It would be cool to see something like this in a real-world compiler, since it is strictly more powerful than SSA. For example:

    if (x == 0) { f(x); } else { g(x); }
Here SSA doesn't help you express "x == 0, but only in the first branch, and x != 0, but only in the second branch", while with the automatic renaming you get with SSI/"functional form" you can express this as something like (ugly syntax I made up on the spot):

    if (x == 0)
        { f(x1); }[x1 = x]
    else
        { g(x2); }[x2 = x]
and then just record the (now flow-insensitive) facts that x1 == 0 and x2 != 0.


Basic blocks with arguments is purely an alternate syntax for SSA.

From a teaching perspective, I like blocks-with-arguments a lot better, since it lets students reuse their intuitions about tail-calling functions. However, there's no difference in expressive power between it and the standard SSA representation.
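
To illustrate the intuition with the if/else example from upthread (a rough sketch; f and g are stand-ins for the hypothetical callees in that example):

    # Each basic block becomes a function: block arguments are its parameters,
    # and the terminator "passes" values by tail-calling the successor block.
    def f(x): return x + 1        # placeholder bodies
    def g(x): return x - 1

    def then_block(x1):           # reached only when x == 0
        return f(x1)

    def else_block(x2):           # reached only when x != 0
        return g(x2)

    def entry(x):
        return then_block(x) if x == 0 else else_block(x)

    print(entry(0), entry(7))     # 1 6

Each block's parameters (x1 and x2 here) are distinct names, just as block arguments would be in the IR.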


So it doesn't actually introduce new, separate names for the values in the different successor blocks? There's no good example in the paper how this is actually used in practice.


See also: https://mlir.llvm.org/docs/LangRef/#blocks

"Context: The “block argument” representation eliminates a number of special cases from the IR compared to traditional “PHI nodes are operations” SSA IRs (like LLVM). For example, the parallel copy semantics of SSA is immediately apparent, and function arguments are no longer a special case: they become arguments to the entry block [ more rationale ]."

Block Arguments vs PHI nodes: https://mlir.llvm.org/docs/Rationale/#block-arguments-vs-phi...


Thanks for these pointers.

> https://mlir.llvm.org/docs/LangRef/#blocks

From the example there it's clear that the same value can have copies with different names, which would allow one to associate extra infos with the value used under certain names. In ^bb4 the same value can be accessed as %c, %d, or %e. In addition, this example really cannot be represented in the SSA flavor used by LLVM because LLVM does not have a copy instruction.

> Block Arguments vs PHI nodes: https://mlir.llvm.org/docs/Rationale/#block-arguments-vs-phi....

That contradicts itself: "This choice is representationally identical (the same constructs can be represented in either form) [...] LLVM has no way to represent values that are available only in one successor but not the other".

I'm still not convinced that these are just different syntaxes for the same thing.


You're right - BB arguments are more general than LLVM-style PHI nodes. They are a strict superset, but are otherwise isomorphic for the cases they both support.

The talk we gave at CGO yesterday includes a few slides talking about the tradeoffs here. Those slides should be public in the next day or three.


I will be sure to check those out, thank you!


I've read from previous descriptions of MLIR that "dialects" could be reused in new compiler projects, such as embedding XLA in a non-TensorFlow product. This paper mentions a new example I had not considered: language-independent OpenMP.

In other words, an MLIR dialect of OpenMP could be reused in both Fortran and C. But it can also be used in a brand new programming language that just wants to enable multiprocessing. With OpenACC extensions, targeting GPUs would be nearly automatic for a new language!

One question I've had, and which the paper does acknowledge, is the AST. The paper states that MLIR does not currently target syntax and recommends ANTLR instead. The AST is still a manual process (my own language uses a port of CPython's ASDL).


I wouldn't say that we "recommend" ANTLR. Yes, it is likely "possible" to represent some AST in MLIR, but we just aren't convinced that MLIR is the best tool there so far.


So is it like an extension of the way LLVM separates the "front end" from the "back end", but with many more layers of abstraction that can be mixed and matched to more easily give new languages the benefits of long-standing compiler code?


In some ways LLVM becomes an extension of MLIR. MLIR has dialects where LLVM ops can be one of those dialects. Other sets of operations can be dialects too (like SPIR-V). This allows many more layers to exist in the ecosystem but they can't just be mixed and matched. The compilation flow still needs to be thought out carefully.


I played with a heterogeneous computation simulator [1] a little bit when I was an undergrad; I wonder how these efforts could work together.

For example, if I could build a simulation of some MLIR in M2S across a variety of hardware setups, I feel like that could be pretty valuable. Granted the M2S project is a bit outdated, so I'm not sure what would need to be changed... boy time flies.

[1]: http://www.multi2sim.org/


I work in one of the labs that helped build M2S! Small world. Unfortunately I had no role in the matter (just got here) but it’s really cool to stumble into some users for the type of software I work on.

I am not entirely sure how you imagine a simulator would run on MLIR, I wasn’t aware it was capable of running IR.


It's been years now, but my memory is that it could simulate arbitrary ELF executables. So I'd imagine there's got to be a path for this kind of thing. But I don't really understand what targets MLIR works with or anything, so this is all just some rambling.

I'd love to learn about newer work done to simulate things like these weird tensor cores. Or other fancy new hardware (quantum anyone...) in a heterogeneous way.

P.S. I bet we go to the same school... if you wanna hang out and talk about this kind of thing (or anything else), hit me up.


I fail to see why such an infrastructure would've been unnecessary if Moore's law had not ended.

Architecture has become crazy heterogeneous by now. It was heterogeneous enough to warrant such a thing 10 years ago. Better late than never.


> I fail to see why such an infrastructure would've been unnecessary if Moore's law had not ended.

This is because Moore's Law in the heyday of the 1990s provided a 2X boost every 18 months to software for "free". This meant that in many cases it was a waste of engineering time to optimize beyond selecting good algorithms.


It's been dead for 20 years. Fucking people need to start caring about clock cycles again, and not running bloated Electron UI elements.


I am in full agreement with you here; however, I got in before the obligatory ‘developer time is worth more than CPU time’ post which is often the response to comments like yours. It seems to be the most frequent dismissive comment anytime performance comes up on HN. I find this analogous to the ‘premature optimization is the root of all evil’ comment that seems to be the most popular on StackOverflow.

As a note, I’m not too interested in MLIR because I have not seen it applied to any of the areas that interest/affect me. But I guess I will remain slightly optimistic about the possibilities.


I remember seeing someone's rough estimates of the effort required to automate a task vs. the time saved by not having to perform it manually. Their claim was that automation wins a lot more often than you would think.

Consider you have 4 workers using your software. It doesn't take much of them waiting around for your slow-ass software before you need 5 workers.


Here's the reference:

Is It Worth the Time? https://xkcd.com/1205/


Agreed. Widely distributed software should be power optimized. Megawatts shouldn't be needlessly wasted.


It hasn't been. Dennard scaling is what ended 20 years ago, hence multicore and vector architectures.


I am not disagreeing with you. I am describing what the case was when it was not dead.


"for the End of Moore’s Law" seems like meaningless clickbait in this context.


Unless End of Moore’s Law is synonymous with the Rise of Accelerators?


Exactly: the "why not just LLVM" can be greatly explained by the need to target heterogeneous hardware from DSLs.


I have no idea why you’re being downvoted. I thought the same.

Like, even before the end of Moore’s law, folks were making spectacular advancements in IR design and compiler design generally. SSA was invented before it ended for example. The lineage of graph coloring register allocators that culminates in Briggs and then later IRC started in the 80s or earlier at IBM Yorktown. It’s crazy to suggest that the end of Moore’s law means we now suddenly have to start thinking about how to build good compiler architectures.

[edit: fix date, 80s not 60s]


It goes earlier than that, after all most mainframes, starting with Burroughs Large Systems have been making use of IR based toolchains.

The research paper about the PL/8 (also known as PL.8) compiler toolchain, used by IBM for their RISC research, is the oldest one I have read where they prided themselves on having a toolchain quite similar to the initial LLVM design.


No matter the computer technology, I find that IBM did it first and had their own name for it. Someone ™ should compile a list of IBM nomenclature and list it next to contemporary names. Bonus points for including other languages, IBM had translations for French, German, Swedish and who-knows-what other languages.


(Early disclaimer: I am an author of the paper and of MLIR, but this is a personal opinion)

I am somewhat surprised by the reaction, but at the same time I expected as much. We chose to make MLIR open source early in the design and development, maybe half a year after the first line was written, because we value open source and believe that a community can help build significantly better things that serve a broader audience. I find it unreasonable to expect a new infrastructure to have the same amount of frontends/optimizers/backends/utilities/bells-and-whistles as a project that has been around for 17 years and has received contributions from hundreds of developers employed by dozens of companies. 15 years ago, LLVM only had a fraction of what it has today... By making MLIR open from the start, we let everybody add the whistles they need, or collaborate on them with other people who also need them. That is sort of the point of an open-source community. Personally, I am happy to work with anybody who shares my needs, but I won't do somebody else's job instead of them. In a sense, MLIR does not intend to solve any specific compiler problem, but rather to give people tools that help them focus on the problem at hand instead of IR infrastructure, storage, serialization, etc.

The alternative would have been to develop it internally for a long time, driven exclusively by the internal needs, and then just throw it over the fence into public. It would have had significantly more "features". I am certain some people would have complained about that process as well.

As for the need for MLIR when LLVM already exists: yes, you could express a lot of things by (ab)using LLVM IR, but sometimes it's worth considering whether you should. Loop optimizations are the canonical example of LLVM IR's limitations [1,2]. Peculiarities of the GPU execution model are another [3,4]. People have spent years trying to fit those into LLVM's constraints. Generic passes like DCE, CSE and inlining are (re)implemented on many levels of IR, which seems more like time-consuming wheel reinvention than building an infrastructure that can be reused for them.

TL;DR: if you want a compiler giving faster code for language X today, MLIR is probably not for you, also today; if LLVM already does everything you need, MLIR is probably not for you either. There are plenty of other cases around.

[1] http://polly.llvm.org [2] http://lists.llvm.org/pipermail/llvm-dev/2020-January/137909... [3] http://nhaehnle.blogspot.com/2016/10/compiling-shaders-dynam... [4] http://lists.llvm.org/pipermail/llvm-dev/2019-October/135929...


Nice things first: I think that MLIR is a great solution to the problem of reusable IR scaffold (infrastructure, storage, serialization). If you believe that this is a problem and you don’t do anything that MLIR can’t express at all or well (OSR at scale, non-SSA forms), and you don’t have a plan to change the scaffold to fit your use case, then I think that MLIR is a pretty nifty achievement.

I just don’t buy the premise because IR scaffolds are something I’m used to building quickly.


You are making a good point, but I think you're missing some aspects, and the title hints at it: we are also trying to address today's heterogeneity. By having a flexible infrastructure and (hopefully) an ecosystem that lowers the cost of interaction, you can more easily re-assemble custom compilers for specific use-cases.

This does not make MLIR the best infrastructure for building an industrial embedded Javascript compiler for instance (just like you wrote B3 to move away from LLVM), but I am not convinced that between these two, MLIR is the niche ;-) Time will tell!


But MLIR is so limited in what it can do - a specific style of SSA, a specific module and procedure structure, etc. Even the features that make it general (regions) represent a specific choice of how to do it.

Great IRs represent an ideal fit between data structures, data flow style, control style, and the domain. LLVM is successful because it fits C so darn well - it’s like SSAified C. I’m not sure what MLIR is an ideal fit for. It just feels like another Phoenix.


If one believes Jim Keller, the end is pretty far off.



