One of the interesting things you can do with LLVM is bully it into emitting a SPIR-V compute shader. That's nominally for GPUs, but I think you could probably throw it into a very contrived, very ugly SYCL pipeline and somehow get it onto an FPGA.
The FPGA tooling is pretty nice once you get past the "how did this ever work for anyone else?" bugs, albeit with limited card support, and I haven't seen many others jumping on the bandwagon. As far as I can tell it's mostly just "SYCL with feature limitations" plus a couple of niceties for connecting kernels together.
I've wondered how much work it would be to get a Rust -> SPIR-V -> oneAPI pipeline compiling, but it sounds like an exercise in masochism, so I haven't gone down that route yet.
With work like this it becomes clear (even in GPU-land) why picking the right components and sticking with them (where sensible) is so much more important in hardware than in software.
In software we want nice abstractions and so on; with FPGAs, if those abstractions are even possible, you probably can't afford to write them.
I spoke to an ASIC designer recently; the EDA industry is weird.
The big thing with SYCL and oneAPI and so on, for me at least, is that you have all this stuff bolted onto C++. C++'s ability to do generative programming is just not as good as its competitors', and on top of that C++ is just a bad language. If things are more language-agnostic than I realize, then yippee.
The clever thing about the DCompute project I linked above is that it uses a simple but powerful introspection system (provided by D, though it could really be any competently designed language) to pick up the right stuff and generate the compute kernels in the background for you.
Unpopular opinion: trying to synthesize a language designed to run on a CPU into logic gates tends to produce terrible results, and we would be far better served by building better HDLs instead (e.g. Chisel).
(The commercial ASIC design world is very conservative, for some good reasons and some bad, and tends to use neither approach, preferring the VHDL/Verilog of our ancestors.)
Both approaches have virtues. HLS is simpler for software programmers, but tends to rely more on 'magic' and is not a very clean abstraction of the hardware that's generated.
That said, loops aside, most compiler IR is a sort of dataflow graph which could reasonably be synthesized into hardware. But most control flow does not map very well.
Higher-level HDLs such as Chisel (although I have never used it) instead abstract directly over standard HDLs like Verilog or VHDL. These abstractions are simpler because they do not alter the fundamental design ideas; they just offer increased flexibility. For example, VexRiscv is written in SpinalHDL and has most of its functionality in 'plugins' that extend CPU components, which is a very powerful form of metaprogramming that is not possible in plain Verilog.
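To give a rough flavour (a minimal sketch based on my reading of the Chisel docs, not something I've run; the module and parameter names are made up for illustration): the body is straight-line dataflow that elaborates to combinational logic, and the bit width is an ordinary Scala constructor argument, which is exactly the kind of parameterization plain Verilog makes painful.

    import chisel3._

    // Width is a plain Scala parameter: one description, many concrete modules.
    class MulAdd(width: Int) extends Module {
      val io = IO(new Bundle {
        val a = Input(UInt(width.W))
        val b = Input(UInt(width.W))
        val c = Input(UInt(width.W))
        val y = Output(UInt((2 * width).W))
      })
      // Straight-line dataflow: each operation becomes a hardware operator and
      // each intermediate value a wire. No clocks or state involved here.
      io.y := io.a * io.b + io.c
    }

I've left out the elaboration driver that actually emits the Verilog, since its API has shifted between Chisel versions; the point is just that the hardware-generation layer is ordinary Scala, so generators and plugin-style composition come for free.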
If your issue is with Scala, there are other high-level HDLs such as Clash (based on Haskell), or Migen and Amaranth, which use Python.
Ah. Indeed, I don't have an FPGA on hand to synthesize all this onto real hardware, but it would totally be possible (Digital has support for FPGA synthesis).
An HLS flow would compile Rust directly to Verilog or some netlist, as opposed to running the ARM core with a different technology. This is an awesome project, great writeup :)