If you want something more interesting, see High Level Synthesis (aka C to RTL, SystemC, etc.). But this is still very beta (and has been for over a decade), so it sees almost no industrial use on real designs.
The majority of hardware design time is now spent on verification, which in practice means SystemVerilog.
HLS is also an incredible time saver: you can rapidly unit test your designs (outside of a simulator), and you can usually generate bindings for the SoC you're targeting, so you don't have to write any headers/interface code for the onboard processor yourself.
IMO, people who don't think HLS is a win either have years of experience invested in older toolchains and are more productive with them (and don't see HLS as worth their time), or haven't actually tried it.
I've mostly used a slightly hamstrung C++ as an HLS language with a Xilinx SoC. C++ is infinitely nicer than an HDL, but I can't help but feel that Rust's safety model more closely matches how an HLS tool wants to structure its invariants, so I feel it could be an excellent HLS language in the future, given the chance.
For complex designs I always came back crawling to vhdl and systemverilog.
(Jury is still out on Chisel, but it doesn't look good right now. Looks more like it was designed by CS folks who just didn't get modern hardware design)
I get the 20% number from a real-world case: guys who converted a huge existing VHDL design to HLS with the help of several Xilinx FAEs, and the application was ideal for HLS.
> On the other hand, if you write code in a way that naturally maps to the hardware you are using then the results can be every bit as good RTL
You only believe this if you're deep in the Xilinx marketing bubble. HLS covers maybe ~20% of the use cases of FPGAs. Even the guys who teach HLS will not tell you it's a general solution.
> I think that this has more to do with the quality of the current compilers, not some inherent limitation with the concept.
This concept has been researched for more than 25 years; C-to-FPGA has failed except for the aforementioned case. Btw, I'm not saying that a general high-level synthesis solution isn't possible, I'm saying that it should never be based on C or C++.
It can be done, but all the advantages of HLS are gone. The code is filled with a ton of pragmas that make it unreadable and a lot longer than the VHDL or SV equivalent.
Register rebalancing (other companies call it retiming) is a very old technique. You can do it with SV & VHDL: just add delays and the synthesizer will know what to do. Vivado has caught up with the solutions from Altera, but there are better (more expensive) synthesizers that easily beat both; they have supported this feature for at least 15 years.
This is actually a good thing. Verilog (and even more so SystemVerilog) was designed by people who don't have a clue about language design, resulting in an incredible mess of a language.
Example Ada: http://perso.telecom-paristech.fr/~pautet/Ada95/e_c16_p5.ada
The more promising avenue to me is structured synthesis: using straightforward abstraction but keeping the HDL model. Chisel (embedded in Scala) seems to be the biggest open source one; Bluespec is a more established run at the same concept, embedded in Haskell, and interestingly they seem to be throwing all of their marketing weight behind RISC-V solutions. So in a way, high-level functional HDL is dominated by RISC-V today.
It was announced six years ago as far as I know, but still only a few commercial products have it inside.
Chisel itself would sink along with it, since it uses RISC-V as an ad.
NVIDIA is already embedding RISC-V deeply into every one of their chipsets (all Tegras, all mobile discrete GPUs, all desktop discrete GPUs).
I don't see how somebody with any knowledge of the embedded industry could conclude that RISC-V will fail to penetrate that market.
As for consumer mobile devices, such as smartphones and tablets, you could make the argument that ARM has a lot of traction. However, the cost and flexibility benefits of RISC-V are not moot. For platforms like the Chromebook, you have a lot more flexibility in terms of the hardware platform. There's no practical reason why RISC-V would fail in Chromebooks.
Workstations, desktops, and general purpose (usually windows) laptops are a lot tougher. There is a lot of consumer value in the dominance of x86 in these markets so I probably wouldn't expect much to budge there in a short amount of time.
As for servers, however, there is no strict reason why any new ISA would fail. The biggest handicaps would be lack of out-of-the box distro support, platform inconsistency (huge problem for ARM, where you can't generally share images between boards), and low quality of compilers and language runtimes.
If there is a good JVM port, a good v8 port, a good GCC port, and a good LLVM port, and maybe a few other VMs (like BEAM), there is no reason why RISC-V would be doomed to fail on servers. In fact, I see huge reasons why RISC-V would win the server market. Imagine an SoC which has the ethernet MAC, the remote management system/iKVM (with low-level access to the platform serial console) and a big wide superscalar OoO for running your server application on it. Imagine that the clean separation of ABI, SBI, HBI, and MBI drives up the quality, isolation guarantees, and performance of virtual machines. Imagine that you could have a TPU on the same die as the application processor, with a low-latency preemptible interconnect and unified memory.
I see RISC-V's technical decisions as almost uniquely suitable for the server market, and I think the server market is probably the most amenable to new ISA penetration.
(Notice which language won, who won, and what company he worked for...)
It was a PR stunt by the Synopsys guys, who at that time wanted to kill VHDL. The VHDL guys had to work with slow & broken VHDL simulators, and the problem itself was devised by Verilog enthusiasts. All the VHDL engineers who showed up (in much smaller numbers than the Verilog engineers) felt like they didn't have a fair chance.
1. The guy worked for Synopsys
2. Unlike the competition, Synopsys didn't have a VHDL product
For the language subset needed for this test, the languages are EQUAL (different syntax, same structure and outcome).
Also, the guy who ran the competition (and his employer) had for years tried to kill VHDL (for personal and $$$ reasons).
Chisel with Scala, MyHDL with Python, and Bluespec and Lava with Haskell are all low-level HDLs, to name a few that piggyback on typical programming languages.
And if you want a higher level integrated solution you could use Altera OpenCL or Xilinx HLS.
Not to say we couldn't ask for a lot more from FPGA tools!!
MyHDL parses a normal Python AST and translates it to Verilog statements. In the end, you seem to end up with exactly the same abstraction layer as Verilog/VHDL, just with Python syntax. There is a very limited subset of advanced Python composability constructs you can use without hitting a limitation of the AST walker and translator, and when you do hit it, it's fairly annoying to debug. In the end, I don't see its use over writing straight Verilog/VHDL, apart from it being easier to test.
Migen, on the other hand, is a system where you write native Python code that doesn't pretend to be the code that then runs on the FPGA. Instead, your code stitches together an RTL AST, which can then be either simulated or converted to Verilog. Your Python code is effectively an RTL logic generator (AFAIK this is exactly how Chisel/SpinalHDL work, too). You can then compose these low-level constructs into higher-layer abstractions; for instance, the stock Migen framework comes with an FSM abstraction, which takes care of all the boilerplate of declaring state machines (state register and values, separate synchronous/combinatorial logic statements, etc.). It also comes with abstractions for arrays, (B)RAM, FIFOs, priority encoders...
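The "Python as RTL generator" idea can be shown with a toy sketch. To be clear, this is not Migen's actual API; the class and function names below are invented purely to illustrate the pattern of Python objects building a netlist AST that a backend then emits as Verilog:

```python
# Toy illustration of the generator pattern: the Python below runs on your
# machine, builds an AST of the design, and a backend walks it to emit
# Verilog text. Names are invented; Migen's real API differs.
class Signal:
    def __init__(self, width, name):
        self.width, self.name = width, name

class Assign:
    def __init__(self, dst, expr):
        self.dst, self.expr = dst, expr  # expr kept as a string for brevity

def emit_verilog(module_name, signals, assigns):
    ports = ", ".join(s.name for s in signals)
    body = "\n".join(f"  assign {a.dst.name} = {a.expr};" for a in assigns)
    return f"module {module_name}({ports});\n{body}\nendmodule"

# Stitch a trivial design together in Python, then convert it to Verilog:
a, b, y = Signal(8, "a"), Signal(8, "b"), Signal(8, "y")
print(emit_verilog("adder", [a, b, y], [Assign(y, "a + b")]))
```

Because the generator is ordinary Python, loops, functions, and classes compose freely at elaboration time without any AST-walker restrictions.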
Finally, Migen is the base of the MiSoC project, which abstracts away enough digital logic to let you connect, in Python, high-level constructs like buses, CPUs, DRAM controllers, etc. in order to dynamically construct a system-on-chip.
As a solo founder who was CEO, CTO, Electrical, Mechanical, Software, and FPGA Engineer and, let's not forget, Janitor, there was one thing I valued far above anything else at the time: Time.
VHDL, I found, is very verbose. Verilog not so. To me this meant having to type twice as much to say the same thing. This didn't sit well with me and that was the primary driver in deciding in favor of Verilog.
Another driver was familiarity with C. I had a need to switch contexts with some frequency. I was using C and C++ for embedded and workstation code development. As a mental exercise, switching to VHDL felt, for lack of a better word, violent. Switching between those languages and Verilog felt natural. Again, a time saver.
It also seemed far easier to find and hire a good Verilog engineer when the time came for me to let go of that title and focus on being a better Janitor. To me, Verilog was the clear winner on all fronts. At the time it lacked a few constructs that were later added to the language, yet I never found that to be a problem and managed to design and manufacture many successful products.
Verilog isn't C, it's C-ish: just different enough to make me make mistakes all the time. 'not' in Verilog is '~' instead of '!', there's no overloading, the rules for signed & unsigned numbers are weird, wires are declared implicitly, etc. Verilog is full of surprises.
Do you like determinism? Have you ever tried running the same (System)Verilog design on multiple simulators? Almost every time you get different results; VHDL doesn't have this issue.
Also, who said Verilog is C? It feels like C but it isn't. It's simply a lot easier to context switch between C and Verilog than between C and VHDL. That's my opinion. You don't have to agree with it. There is no implicit obligation to agree with anything at all.
I have shipped millions of dollars in product very successfully using Verilog. Others have done so using VHDL. In the end it is a choice.
My intent was to give the OP one criterion he or she might be able to use in some way in making a similar choice.
And, please, it isn't code it is a Hardware Description Language. We all use "code" for short but let's not lose sight of what it is.
Many people come to FPGA design treating the thing as software. It isn't software. It's a hardware design, and it is a hardware description language. Maybe it helps that my work in electrical engineering predates FPGAs and even PALs/PLAs. In other words, I spent years designing "raw" electronics.
When typing Verilog I think about circuits not software and I don't make a lot of mistakes because the circuits are designed on paper before typing code. Code is the hardware description, not the design environment.
I find that older hardware engineers are far better at this. Younger engineers treat it like software and go into this crazy type->debug->type->debug cycle that simply isn't the way you design hardware. Decades ago you had to know your shit: you couldn't throw a bunch of chips at a board and then redesign it over simple mistakes. Again, it ain't software.
So, no, I have no issues with Verilog debugging. I can't remember any serious debugging events in, say, twenty years.
(Note there are also Haskell-oriented HDLs, but my experience with those wasn't great, at least 4-5 years ago. Proprietary, poor interop, and let's face it, it's not Haskell. On the surface FP seems like a good fit for FPGAs, but in the trenches you constantly need lower-level tricks to keep your program's gate count from exploding.)
But please enjoy (System)Verilog with its random, often undefined coercions, implicit wire declarations, non-deterministic simulation, and lack of any meaningful compile-time checks. Honestly, as a huge Haskell fan, I can't believe you're a Haskell fan.
SV's simulation features are really top-notch. If you're going to pick one to learn, I'd go with SV because you'll probably be using it anyways for verification tasks.
OpenCL is a bad language for synthesis. The strength of an FPGA is deeply pipelineable sequential algorithms. C-based languages just aren't good at expressing/controlling pipelining, so the synthesizer/compiler has to infer it, and that problem is too hard. You could change one line of code (a minor fixup), the clever optimizer fails, and your design is suddenly 10x as big.
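For readers without a hardware background, the pipelining point can be modeled in plain Python. This is a toy software model (not HLS output): each "clock cycle" every stage register advances, so a full pipeline accepts a new input and produces a result every cycle, even though each individual result takes several cycles to emerge.

```python
def simulate_pipeline(inputs, stages):
    # One register after each stage; None models an empty slot (a bubble).
    regs = [None] * len(stages)
    outputs = []
    # Extra None inputs at the end drain (flush) the pipeline.
    for value in list(inputs) + [None] * len(stages):
        # Update registers back-to-front so each stage consumes the
        # *previous* cycle's value, as real flip-flops do on a clock edge.
        for i in range(len(stages) - 1, 0, -1):
            regs[i] = stages[i](regs[i - 1]) if regs[i - 1] is not None else None
        regs[0] = stages[0](value) if value is not None else None
        if regs[-1] is not None:
            outputs.append(regs[-1])
    return outputs

# Three stages computing ((x + 1) * 2) - 3; once full, one result per cycle:
result = simulate_pipeline([1, 2, 3],
                           [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3])
print(result)  # [1, 3, 5]
```

The complaint in the comment is that in C/OpenCL this stage structure is implicit, so the compiler must discover it, and a small source change can silently break that inference.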
Strong typing is useful when you use the same types for multiple instances, as in SW. In HW, however, you construct your data as a set of bits, and then take as many as needed for the particular case. The type you need is a bit vector. Bit-slicing checks can be done in the linter tool, if needed. In DSP designs you can get benefits from VHDL's signed and unsigned integer types, but in my opinion even that does not make up for VHDL's other deficiencies.
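The "everything is a bit vector" style translates directly into software. A minimal sketch of packing fields into a word and slicing them back out (the field names and widths here are made up for illustration):

```python
def bits(value, hi, lo):
    """Extract bits [hi:lo] of value, inclusive, like a Verilog part-select."""
    return (value >> lo) & ((1 << (hi - lo + 1)) - 1)

# Pack a made-up 16-bit "header": 4-bit opcode, 4-bit flags, 8-bit length.
word = (0xA << 12) | (0x3 << 8) | 0x7F

assert bits(word, 15, 12) == 0xA   # opcode
assert bits(word, 11, 8) == 0x3    # flags
assert bits(word, 7, 0) == 0x7F    # length
```

The commenter's point is that with this style the only "type" is the width of the vector, and any field checking is a lint-time concern rather than a type-system one.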
When you are working with the synthesizable subset, you will never have problems with race conditions in simulation. You stick to a strict coding style, and it becomes second nature. For beginners there are always issues, including this one.
I find comparing the merits of SW languages to HW languages a bit silly. Describing SW does not benefit from the same features that describing HW does. For example, Functional Programming (aka the Haskell people) has nothing to do with HW. In FP you want to separate state from your functions; in HW the intention is the exact opposite. You want to capture everything as locally as possible, in order to enable maximum performance in speed, power, and area. This implies that OOP is pretty close to what HW requires, but not FP.
Can't be done.
One language suffers from a serious lack of expressiveness and forces its users to effectively carve their gates from stone with incessant low level bit twiddling. The other lets you describe high level behavior and have the tooling do all the hard work.
The basic structure was:
1. Write a function that calculated the next bit from the existing state and the target poly.
2. Write a second function that called that first function repeatedly (once per bit in the target word).
3. Call the second function on the input data for a combinatorial output.
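The three steps read like a classic LFSR/CRC construction. A hedged Python sketch of the same shape (the original design's actual polynomial and bit ordering are not given, so the values below are made up):

```python
def next_bit(state, poly):
    """Step 1: feedback bit = XOR (parity) of the state bits selected by poly."""
    masked = state & poly
    bit = 0
    while masked:
        bit ^= masked & 1
        masked >>= 1
    return bit

def step_word(state, poly, width):
    """Step 2: apply next_bit once per bit of the word, shifting feedback in."""
    for _ in range(width):
        state = ((state << 1) | next_bit(state, poly)) & ((1 << width) - 1)
    return state

# Step 3: call it on the input data. In an HDL, a fixed-bound loop like this
# unrolls into purely combinatorial logic, which matches the poster's account.
print(step_word(0b0001, 0b1001, 4))  # 14 (0b1110)
```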
However, speaking to many FPGA professionals I know, VHDL has much more traction in industry, particularly among defense contractors.