It will be interesting to see if this solves any issues that aren't already addressed by the likes of MATLAB / SciPy / Julia. Reading the paper, it sounds a lot like "SciPy but with MLIR"?
It's more like OpenXLA or the PyTorch compiler: it codegens Kokkos C++ kernels from MLIR-defined input programs, which can, for example, be emitted from PyTorch. Kokkos is common in scientific computing workloads, so outputting readable kernels is a feature in itself. Beyond that, there's a lot of engineering that can go into such a compiler to specifically optimize sparse workloads.
What I am missing is a comparison with JAX/OpenXLA and PyTorch with torch.compile().
Also instead of rebuilding a whole compiler framework they could have contributed to Torch Inductor or OpenXLA, unless they had some design decisions that were incompatible. But it's quite common for academic projects to try to reinvent the wheel. It's also not necessarily a bad thing. It's a pedagogical exercise.
I think exactly the opposite: if someone were able to build a framework that doesn't overly constrain the problem, and doesn't require weeks of screwing around with the build, integration of half-baked components, and insane amounts of boilerplate, that would be a fantastic contribution in and of itself, even if it didn't advance the state of tensor compilation in any other way.
FORmula TRANslation, the clue is in the name. It's great at math, but yeah, strings and OS stuff is a PITA. The modern vector-based syntax is still really nice and I've yet to come across a C++ library quite as slick.
But I think what it was really missing last time I looked at it was good access to compiler intrinsics (or otherwise) to hit vectorizations and math optimization instructions. The OpenMP simd pragmas weren't really doing a fantastic job. I hope that's better now it's in LLVM.
I suspect the main benefits are that they have no need to maintain the hardware or software for any longer than it makes sense for their own needs, and they don't have to handhold users through a constantly evolving minefield of performance and technical capabilities.
I think the current generation of tools has a long way to go before I'd trust any numerical algorithm they implement, based on our recent experiments trying to get one to implement some linear algebra by calling LAPACK. When we asked it to write some sparse linear algebra code based on some more obscure graph algorithms, it produced some ugly stepchild of Dijkstra's algorithm instead, which needless to say did not achieve the desired aim.
Have meetings to figure out how to interact with the other 9990 employees. Then try to get the skeleton app working that was left behind by a team of transient engineers who moved on to their next gig after 18 months, before throwing it out and starting again from scratch.
Exactly. What Meta accomplished could have been done by a team of fewer than 40 mediocre engineers. It's really just not even worth analyzing the failure. I am in complete awe when I think about how bad the execution of this whole thing was. It doesn't even feel real.
Actually, I would like to see a post-mortem that shows where all the money actually went; they somehow spent ~85x what RSI has raised for Star Citizen, and what they had to show for it was worse than some student projects I've seen.
Were they just piling up cash in the parking lot to set it on fire?
At least part of the funding went to research on hard science related to VR, such as tracking, lenses, CV, and 3D mapping. And it paid off: IMO Meta has the best hardware and software foundation for delivering VR, and projects like Hyperscape (off-the-shelf, high-fidelity 3D mapping) are stunning.
Whether it was worth it is another question, but I would not be surprised if it gets recycled to power a futuristic AI interface or something similar at some point.
Even within the XR industry, we had no clue where all that money went. During the metaverse debacle, the entire industry stagnated. Once metaverse failed, XR adjacent shops started to fail. There was no hardware or technique innovation shared with the rest of the industry, and at the time the technology was pretty well settled.
Since then we lost all the medium players and it's basically just Facebook, Valve, and Apple.
Big company syndrome has existed for a long time. It’s almost impossible to innovate or move fast with 8 levels of management and bloated codebases. That’s why startups exist.
This book provides a high-level overview of many methods without (on a quick skim) really hinting at their practical usage. Basically this reads as an encyclopedia to me, whereas Nocedal and Wright is more of an introductory graduate course, going into significantly more detail on a smaller selection of algorithms (generally those that are more commonly used).
Picking on what I'd consider one of the major workhorse methods of continuous constrained optimization: Interior Point Methods get a 2-3 page, super-high-level summary in this book, while Nocedal and Wright give an entire chapter (~25 pages) to the topic (which of course is still probably insufficient detail to implement anything like a competitive solver).
But it can be even worse than that. It's "we assassinated the phone": the algorithm says the vehicle has a suspicious travel history and must die. There's no real thinking human in the loop for some of this stuff; some model decided the metadata has a high probability of being associated with an opponent of some flavor, and then everyone in the vicinity is blown to bits because the computer said kill.