VHDL or Verilog? (fpgasite.blogspot.com)
115 points by chclau on May 1, 2017 | 57 comments

Little known in the industry, VHDL's record (struct) syntax allows for a very neat and concise design method called [Gaisler's method](http://www.gaisler.com/doc/vhdl2proc.pdf). The entire LEON processor and architecture is written using it (http://www.gaisler.com/index.php/downloads/leongrlib), and it is fantastic to work with: it feels like a piece of object-oriented programming while using the low-level language of VHDL. Not sure if anyone has tried to apply Gaisler's method to SystemVerilog, but Verilog on its own falls quite short in this regard.
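For readers who haven't seen the method, here is a rough Python analogue of its structure (a sketch under my own naming, not Gaisler's actual VHDL): all registers live in one record, a single combinational process computes the next state as a pure function, and a single clocked process latches it.

```python
# A Python analogue of the two-process structure (illustrative names, not
# Gaisler's VHDL): one record of registers, one pure combinational function,
# one trivial clocked process that latches the computed next state.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Regs:            # plays the role of the VHDL record holding every register
    count: int = 0
    done: bool = False

def comb(r: Regs, enable: bool, limit: int) -> Regs:
    """Combinational process: next state from current state and inputs."""
    if not enable:
        return r
    nxt = (r.count + 1) % (limit + 1)
    return replace(r, count=nxt, done=(nxt == limit))

def clock_edge(r: Regs, nxt: Regs) -> Regs:
    """Clocked process: nothing but 'r <= rin' on the rising edge."""
    return nxt

r = Regs()
for _ in range(3):                       # three clock ticks of a mod-4 counter
    r = clock_edge(r, comb(r, enable=True, limit=3))
print(r.count, r.done)                   # prints: 3 True
```

The appeal in VHDL is the same as here: adding a register means touching only the record and the combinational function, and the clocked process never changes.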

Was Gaisler the first to write about it? I actually first heard about it from Mike Treseler, who is mentioned in this Powerpoint (http://ens.ewi.tudelft.nl/Education/courses/et4351/structure...)

On slide 22 of this PowerPoint, Mike explicitly calls it Gaisler's Method. The term he uses, "Structured VHDL", comes on the next slide.

VHDL and Verilog are virtually no different from each other. It's basically equivalent to ARM assembly vs. MIPS assembly -- they are both machine-level assembly languages.

If you want something more interesting, see High-Level Synthesis (aka C-to-RTL, SystemC, etc.) [1]. But this is still very beta (and has been for over a decade), so it's almost unused in the industry on real designs.

The majority of hardware design time is now spent in the hardware verification space, aka SystemVerilog.

[1] https://en.wikipedia.org/wiki/High-level_synthesis

I programmed FPGAs using both VHDL and Verilog for many years. Recently I started at a start-up where we predominantly program using C++ HLS. I never want to go back to full-time HDL again. We have found it is possible to get the same performance as carefully written RTL, but you still have to write with the underlying device architecture in mind. HLS has real advantages: simulation is vastly faster, and C++ templates can be used. This makes it easy to try many iterations and find clever optimizations. Trying to do the same with HDL on a large design would be a nightmare. More people should move to HLS and push for the tools to improve; the world would be a better place.

Be honest though, HLS works well for DSP-like applications, not for much else. Not every digital design is image processing, and even when it works well you're still sacrificing ~20% of your LUTs.

HLS lets the compiler optimize your layout for you, since you're expressing your algorithm's goals at a much higher level (and have left more of the actual implementation details unspecified). I've implemented the same algorithm in both HLS and Verilog, and with a few pragmas in the HLS code the result was far more tweakable on the space/speed gradient: I was pretty much able to use all the area available to achieve maximal execution speed, and when limited to the same area I had used for the Verilog implementation, it had almost identical performance.

It's also an incredible time saver in that with HLS you can actually rapidly unit test your designs (outside of a simulator), and you can usually generate bindings for the SoC you're designing for, so you don't have to write any headers/interface code for the onboard processor yourself.

IMO, people who don't think HLS is a win either have years of experience invested in older toolchains and are more productive with them (and don't see HLS as worth their time), or haven't actually tried it.

I've mostly used a slightly hamstrung C++ as an HLS language with a Xilinx SoC. C++ is infinitely nicer than an HDL, but I can't help but feel that Rust's safety model more closely matches how an HLS wants to structure its invariants, so I feel like it could be an excellent HLS language in the future, given the chance.

Having worked with a few variations that promised gold and delivered very little, allow me to disagree.

For complex designs I always came back crawling to VHDL and SystemVerilog.

(The jury is still out on Chisel, but it doesn't look good right now. It looks more like it was designed by CS folks who just didn't get modern hardware design.)

I don't agree with this, but I think I see what you are trying to say. Typically a full design would include some interface portion, maybe DMA or PCIe, that would be done in HDL, and maybe a processor or not. HLS works for the processing that is done inside the FPGA. If this is what you mean by DSP-like, then sure, but it does not have to be image processing; it could be anything done in fabric. It is possible to write a specific truth table and similar basic elements in C++, so why would you be forced into any specific type of application?

Where did you get the 20% number? At least for Xilinx HLS, I don't think that has anything to do with anything (maybe for some other compiler?). If you take some generic C or C++ code and try to put it in an FPGA, the number will be more like 80%: the results will be horrible. On the other hand, if you write code in a way that naturally maps to the hardware you are using, then the results can be every bit as good as RTL. But this is not necessarily easy to do. I think this has more to do with the quality of the current compilers, not some inherent limitation with the concept.

I like HDL, but I would much, much rather program in C++. It is a more sophisticated language. I think Intel/Altera is supposed to release some HLS tool, and I know there are others that I haven't tried. What I am saying is that it would be nice if enough effort were put into these tools that we didn't have to worry about whether there are limitations, even more so since the newer C++ standards are moving towards multi-threading/concurrency.

> Where did you get the 20% number?

I got the 20% number from a real-world case: guys who converted a huge existing VHDL design into HLS with the help of several Xilinx FAEs, and the application was ideal for HLS.

> On the other hand, if you write code in a way that naturally maps to the hardware you are using then the results can be every bit as good as RTL

You only believe this if you're deep in the Xilinx marketing bubble. HLS covers maybe ~20% of the use cases of FPGAs. Even the guys who teach HLS will not tell you it's a general solution.

> I think that this has more to do with the quality of the current compilers, not some inherent limitation with the concept.

This concept has been researched for more than 25 years, and C-to-FPGA has failed except in the aforementioned case. By the way, I'm not saying that a general high-level synthesis solution isn't possible; I'm saying that it should never be based on C or C++.

Forget about Xilinx marketing. I am curious: what did you find causes a design to fall outside the '20%' use case? Are you talking about asynchronous clocks? Feedback? IO configurations? Or what? Why do you say HLS should not be based on C++? Is this related to concurrency or something else? I am not necessarily disagreeing with you. I would say that C++ has the good/bad quality that it is possible to express the same thing 20 different ways.

Also, behavioral simulation is vastly faster. Given that a design realistically has to be done in a limited amount of time, there is an advantage to being able to iterate rapidly and make many structural changes to optimize a large design for both area and clock frequency. Only trivial designs or very specific blocks would be hand placed. I want the compiler to do register re-balancing and other optimizations, the same way that almost no one could beat the performance of a modern C compiler by writing machine code by hand. Vivado is definitely not there yet, but it should be.

There are many examples: a PCI Express bus, an application-optimized DDR controller, a full TCP/IP stack, a caching/prefetch system, or any advanced processor with feedback... these kinds of systems require precise control.

It can be done, but all the advantages of HLS are gone. The code is filled with a ton of pragmas that make it unreadable and a lot longer than the VHDL or SV equivalent.

Register re-balancing (other companies call it retiming) is a very old technique. You can do it with SV and VHDL: just add delays and the synthesizer will know what to do. Vivado has caught up with the solutions from Altera, but there are better (more expensive) synthesizers that easily beat both; they have supported this feature for at least 15 years.

Uh, yes, that is all true (except maybe the processor-with-feedback bit is debatable). I agree with all this, and yet my arguments for why C++ HLS is a good thing remain the same.

What tool do you use for this? Also, are you targeting ASICs or FPGAs? I think for ASICs there are probably a ton of custom non-public tools that use C++; one is, for example, described in http://scale.eecs.berkeley.edu/papers/krashinsky-phd.pdf. The Mill Architecture people seem to plan the same, but I haven't seen any in the open so far.

I last did either VHDL or Verilog in 1997, which is 20 years ago (and reminds me I'm getting old), but if things are remotely similar, the apt comparison is not ARM assembly vs. MIPS assembly, but rather Verilog=~C vs. VHDL=~Ada.

I don't know Ada; I have always thought that Verilog is like old C and VHDL like C++. Not that VHDL is object-oriented, but it is a strongly typed language.

VHDL is an old version of Ada combined with a built-in discrete event simulator. The syntax, type system, and general semantics are all copied from Ada.

This is actually a good thing; Verilog (and even more so SystemVerilog) was designed by people who don't have a clue about language design, resulting in an incredible mess of a language.

VHDL's syntax was borrowed from Ada (and much more closely resembles it than Verilog, which was inspired by C but not adapted from it).

Example Ada: http://perso.telecom-paristech.fr/~pautet/Ada95/e_c16_p5.ada

VHDL's syntax and type system are quite Ada-ish, from what I gather.

In the latest releases of Vivado, Xilinx is making a big push for HLS. Personally I still have not had the chance to learn and try it.

I think there is room for something in-between. High-level synthesis is probably never going to meet the needs of most ASIC designers, especially as progress on manufacturing processes has effectively stopped.

The more promising avenue to me is structured synthesis: using straightforward abstraction but keeping the HDL model. Chisel (embedded in Scala) seems to be the biggest open-source one; Bluespec is a more established run at the same concept, based on Haskell, and interestingly they seem to be throwing all of their marketing weight behind RISC-V solutions. So in a way, high-level functional HDL is dominated by RISC-V today.

I doubt that RISC-V will take a visible portion of the mobile/embedded core market any time soon.

It was announced 6 years ago, as far as I know, but still only a few commercial products have it inside.

Chisel itself would sink along with it, since it uses RISC-V as an ad.

At least the embedded space seems to be embracing RISC-V like it's a race. Microsemi markets RISC-V tooling, Lattice Semiconductor markets it, and basically all of Bluespec's marketing material is built around their RISC-V development tools (for formal and differential verification of new RISC-V cores).

NVIDIA is already embedding RISC-V deeply into every one of their chipsets (all Tegras, all mobile discrete GPUs, all desktop discrete GPUs).

I don't see how somebody with any knowledge of the embedded industry could conclude that RISC-V will fail to penetrate that market.

As for consumer mobile devices, such as smartphones and tablets, you could make the argument that ARM has a lot of traction. However, the cost and flexibility benefits of RISC-V are not moot. For platforms like the Chromebook, you have a lot more flexibility in terms of the hardware platform. There's no practical reason why RISC-V would fail in Chromebooks.

Workstations, desktops, and general purpose (usually windows) laptops are a lot tougher. There is a lot of consumer value in the dominance of x86 in these markets so I probably wouldn't expect much to budge there in a short amount of time.

As for servers, however, there is no strict reason why any new ISA would fail. The biggest handicaps would be lack of out-of-the-box distro support, platform inconsistency (a huge problem for ARM, where you can't generally share images between boards), and low quality of compilers and language runtimes. If there is a good JVM port, a good V8 port, a good GCC port, and a good LLVM port, and maybe a few other VMs (like BEAM), there is no reason why RISC-V would be doomed to fail on servers. In fact, I see huge reasons why RISC-V would win the server market. Imagine an SoC which has the Ethernet MAC, the remote management system/iKVM (with low-level access to the platform serial console), and a big wide superscalar OoO core for running your server application. Imagine that the clean separation of ABI, SBI, HBI, and MBI drives up the quality, isolation guarantees, and performance of virtual machines. Imagine that you could have a TPU on the same die as the application processor, with a low-latency preemptible interconnect and unified memory.

I see RISC-V's technical decisions as almost uniquely suitable for the server market, and I think the server market is probably the most amenable to new ISA penetration.

Whenever the VHDL vs Verilog argument is brought up, I am reminded of this competition from a long time ago:


(Notice which language won, who won, and what company he worked for...)

This is a very old competition (I think > 15 years ago); the results say nothing about the situation today, and both languages have changed a lot.

It was a PR stunt by the Synopsys guys, who at that time wanted to kill VHDL. The VHDL guys had to work with slow and broken VHDL simulators, and the problem was devised by Verilog enthusiasts. All the VHDL engineers who showed up (in much smaller numbers than the Verilog engineers) felt like they didn't have a fair chance.


1. The guy worked for Synopsys.

2. Unlike the competition, Synopsys didn't have a VHDL product.

This post was heavily criticized when it was published.

For the language subset needed for this test, the languages are EQUAL (different syntax, same structure and outcome).

Also, the guy who ran the competition (and his employer) had for years tried to kill VHDL (for personal and $$$ reasons).

Verilog and VHDL are certainly fundamental, but there's plenty of alternatives...

Chisel with Scala, MyHDL with Python, and Bluespec and Lava with Haskell are all low-level HDLs, to name a few, that piggyback on typical programming languages.

And if you want a higher level integrated solution you could use Altera OpenCL or Xilinx HLS.

Not to say we couldn't ask for a lot more from FPGA tools!!

I'm hoping to try MyHDL someday. If it's good, great; otherwise I'll probably try to develop my own metaprogramming system (likely based on R7RS Scheme) on top of VHDL.

When it comes to Python 'frontends'/'preprocessors'/'high-level-synthesis tools' I prefer Migen over MyHDL, mostly because of the vastly different abstraction level.

MyHDL parses a normal Python AST and translates it to Verilog statements. In the end, you seem to end up with exactly the same abstraction layer as Verilog/VHDL, just with Python syntax. There is a very limited subset of advanced Python composability constructs you can use without hitting a limitation of the AST walker and translator, and when you do hit one, it's fairly annoying to debug. In the end, I don't see its use over writing straight Verilog/VHDL, apart from it being easier to test.

Migen, on the other hand, is a system where you write native Python code that doesn't pretend to then be running on the FPGA. Instead, your code stitches together an RTL AST, which can then be either simulated or converted to Verilog. Your Python code is effectively an RTL logic generator (AFAIK this is exactly how Chisel/SpinalHDL work, too). You can then compose these low-level constructs into higher-layer abstractions; for instance, the stock Migen framework comes with an FSM abstraction, which takes care of all the boilerplate of declaring state machines (state register and values, separate synchronous/combinatorial logic statements, etc.). It also comes with abstractions for arrays, (B)RAM, FIFOs, priority encoders...
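The "your code stitches together an RTL AST" idea can be shown with a deliberately tiny Python miniature (this is NOT the real Migen API; `Signal`, `Assign`, `counter`, and `to_verilog` are all invented here for illustration):

```python
# A miniature of code-as-RTL-generator: plain Python builds a little AST,
# and a separate step converts that AST to Verilog text.
class Signal:
    def __init__(self, width=1, name="sig"):
        self.width = width
        self.name = name

class Assign:
    def __init__(self, dst, expr):
        self.dst, self.expr = dst, expr

def counter(width):
    """An RTL generator: ordinary Python returning an AST, not logic that runs."""
    count = Signal(width, "count")
    nxt = f"({count.name} + 1) % {2 ** width}"  # expression kept as a string
    return count, [Assign(count, nxt)]

def to_verilog(sigs, stmts):
    """'Conversion' step: walk the little AST and emit Verilog text."""
    lines = [f"reg [{s.width - 1}:0] {s.name};" for s in sigs]
    lines += [f"always @(posedge clk) {a.dst.name} <= {a.expr};" for a in stmts]
    return "\n".join(lines)

count, stmts = counter(8)
print(to_verilog([count], stmts))
```

The point is that `counter` is ordinary Python: you can loop over it, parametrize it, and compose it into larger generators, and only the final AST ever gets turned into Verilog.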

Finally, Migen is the base of the MiSoC project, which abstracts away enough digital logic to let you connect, in Python, high-level constructs like buses, CPUs, DRAM controllers, etc. in order to dynamically construct a system-on-chip.

Thanks. I'll definitely give Migen a try. Looks right up my alley.

Back many years ago when I had to make a choice between these two I took a very practical approach. I learned both of them to a level adequate enough to implement a few small test projects and then I made my decisions.


As a solo founder CEO/CTO/Electrical, Mechanical, Software and FPGA Engineer and, let's not forget, Janitor at the time there was one thing I valued far above anything else: Time.

VHDL, I found, is very verbose. Verilog, not so much. To me this meant having to type twice as much to say the same thing. That didn't sit well with me, and it was the primary driver in deciding in favor of Verilog.

Another driver was familiarity with C. I had a need to switch contexts with some frequency. I was using C and C++ for embedded and workstation code development. As a mental exercise, switching to VHDL felt, for lack of a better word, violent. Switching between those languages and Verilog felt natural. Again, a time saver.

It also seemed far easier to find and hire a good Verilog engineer when the time came for me to let go of that title and focus on being a better Janitor. To me Verilog was the clear winner on all fronts. At the time it lacked a few constructs that were later added into the language yet I never found that to be a problem and managed to successfully design and manufacture many successful products.

The verbosity of VHDL isn't 2x; it's more like 20% bigger on average, and since VHDL-2008 it's pretty much the same. VHDL can be wordy, but it also reads a lot easier and looks more structured.

Verilog isn't C, it's C-ish: just different enough to make me make mistakes all the time, like 'not' in Verilog being '~' instead of '!', the lack of overloading, the weird rules with signed and unsigned numbers, the implicit wire declarations, etc. Verilog is full of surprises.
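Python happens to have the same bitwise-versus-logical pair (`~` vs. `not`), so this class of trap is easy to demonstrate outside an HDL: on a multi-bit value the two operators disagree, and picking the wrong one in a condition silently changes the logic.

```python
# The ~ vs. ! trap, shown with Python's analogous pair (~ vs. not).
# In Verilog, ~4'b0010 is 4'b1101 (still truthy), while !4'b0010 is 1'b0.
flags = 0b0010

bitwise = ~flags & 0b1111   # complement every bit: 0b1101
logical = not flags         # "is it zero?": False

assert bitwise == 0b1101 and bool(bitwise)  # a condition on this stays true
assert logical is False                     # a condition on this never fires
```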

Do you like determinism? Have you ever tried running the same (System)Verilog design on multiple simulators? Almost every time you get different results; VHDL doesn't have this issue.

You seem to be taking this personally. What I described was my reasoning for choosing Verilog. It worked fine for me. I am not going to debate minutiae.

Also, who said Verilog is C? It feels like C but it isn't. It's simply a lot easier to context switch between C and Verilog than between C and VHDL. That's my opinion. You don't have to agree with it. There is no implicit obligation to agree with anything at all.

I have shipped millions of dollars in product very successfully using Verilog. Others have done so using VHDL. In the end it is a choice.

My intent was to give the OP one criterion he or she might be able to use in making a similar choice.

For many years now I have been using mainly Altera and VHDL. Lately I started a new job and returned to Xilinx, which I had not used in a long time. Part of my job is integrating three major IPs, and guess what? One of them is VHDL and the two others are Verilog. So yes, sure, you can wrap Verilog code with VHDL and forget about it... but the Verilog IPs also come with their own Verilog testbenches, etc. So for me it will be a bilingual reality for some time to come.

The time it would take to debug and correct your Verilog code would greatly exceed the extra time needed to write in VHDL, in my opinion.

Not if you know what you are doing.

And, please, it isn't code, it is a Hardware Description Language. We all use "code" for short, but let's not lose sight of what it is.

Many people come to FPGA design treating the thing as software. It isn't software. It's a hardware design, and it is a hardware description language. Maybe it helps that my work in electrical engineering predates FPGAs and even PALs/PLAs. In other words, I spent years designing "raw" electronics.

When typing Verilog I think about circuits not software and I don't make a lot of mistakes because the circuits are designed on paper before typing code. Code is the hardware description, not the design environment.

I find that older hardware engineers are far better at this. Younger engineers treat it like software and go into this crazy type->debug->type->debug cycle that simply isn't the way you design hardware. Decades ago you had to know your shit. You couldn't throw a bunch of chips at a board and have to redesign it due to simple mistakes. Again, it ain't software.

So, no, I have no issues with Verilog debugging. I can't remember any serious debugging events in, say, twenty years.

What you are describing is just one aspect of programming with HDL that sounds similar to what one would do with a schematic editor. Fortunately, with HDL you can work at either a primitive level, or a more abstract behavioral level or anywhere in between. What if someone wants to design a library without knowing the specific device that will be targeted? You could use behavioral algorithmic style code using parameterized functions and lots of generate statements that could support multiple architectures. In that case there would be a lot of 'code' that does not have anything to do with the actual circuit but is still perfectly valid HDL. It is the software-like functionality of HDL that made schematic editors obsolete in my opinion.

As a Haskell fan I was drawn toward VHDL because of its "strong typing". Haha, it's nothing like what I expected, mainly just a bunch of annoying forced casts from signed to unsigned etc. I ended up liking Verilog better: more characters are dedicated to your intent than to your casts. But ultimately they're not that different.

(Note there are also Haskell-oriented HDL's, but my experience with those wasn't great, at least 4-5 years ago. Proprietary, poor interop, and let's face it, it's not Haskell. On the surface FP seems like a good fit for FPGA, but in the trenches you constantly need lower-level tricks to keep your program's gate count from exploding.)

Let me guess, you were downcasting everything to std_logic_vector? If you have to cast things all the time in VHDL you're not using it correctly.

But please enjoy (System)Verilog with its random, often undefined coercions, implicit wire declarations, non-deterministic simulation, and lack of any meaningful compile-time checks. Honestly, as a huge Haskell fan, I can't believe you're a Haskell fan.

Geesh, does reporting my experience with the two languages really warrant this kind of snarky ad-hominem attack?

I'd say go with Verilog. SystemVerilog 2013 and VHDL 2008 are competitive when it comes to synthesis features, but synthesis is only half of the story.

SV's simulation features are really top-notch. If you're going to pick one to learn, I'd go with SV because you'll probably be using it anyways for verification tasks.

Both. The differences are not that great. I don't believe anyone who says they can do something in one that can't be done in the other; they just don't know how to do it. After staring at a large design day after day, it is a pleasure to switch to the other one just to break up the monotony. If you inherit someone else's design, I prefer to have the component list in VHDL to get an overall picture of what is going on. I guess it is easier to write Verilog, but easier to read someone else's VHDL.

At a recent Intel event I spoke to a senior Altera engineer and he recommended just using OpenCL and not bothering with either VHDL or Verilog. Apparently the OpenCL to FPGA compilers these days have become so good that the performance is within 10%-30% of hand-written VHDL / Verilog, which for most use-cases is very much in the "good enough" range.

No they haven't and they are not even close.

OpenCL is a bad language for synthesis. The strength of an FPGA is deeply pipelineable sequential algorithms. C-based languages just aren't good at expressing/controlling pipelining, so the synthesizer/compiler has to infer it, but that problem is too hard. You could change one line of code, a minor fixup, and the clever optimizer fails and your design is suddenly 10x as big.

If you're happy to throw away up to 30% of your performance, I think you've got to question why you're designing hardware at all, even on FPGAs. It probably is good news for Altera, though, as you'll now need a higher-spec device to solve your problem.

Having used both for multiple years, I prefer Verilog. I think that for the synthesizable subset of RTL that HDL is typically used for, Verilog is way more productive than VHDL. The endless type conversions required by VHDL are just too much mental burden and take away from the actual design effort. VHDL is not practical.

Strong typing is useful when you use the same types for multiple instances, as in SW. In HW, however, you pack your data into a set of bits and then take as many as needed for the particular case; the type you need is a bit vector. Bit-slicing checks can be done in a linter tool if needed. In DSP designs you can get benefits from VHDL's signed and unsigned integer types, but in my opinion even that does not make up for VHDL's deficiencies.

When you work with the synthesizable subset, you will never have problems with race conditions in simulation. You stick to a strict coding style, and it becomes second nature. For beginners there are always issues, including this one.

I find comparing the merits of SW languages to HW languages a bit silly. Describing SW does not benefit from the same features that describing HW does. For example, functional programming (aka the Haskell people) has nothing to do with HW. In FP you want to separate state from your functions; in HW the intention is the exact opposite. You want to capture everything as locally as possible, in order to enable maximum performance in speed, power, and area. This implies that OOP is pretty close to what HW requires, but not FP.

Try to implement a general purpose, synthesizable CRC for any polynomial of any size in Verilog without resorting to hard-coded lookup tables.

Can't be done.

One language suffers from a serious lack of expressiveness and forces its users to effectively carve their gates from stone with incessant low level bit twiddling. The other lets you describe high level behavior and have the tooling do all the hard work.

I actually did this exact thing in Verilog at a former employer, and got pretty good results (at least with the Xilinx toolchain). It just took some creative use of Verilog's 'function' construct.

The basic structure was:

1. Write a function that calculated the next bit from the existing state and the target poly.

2. Write a second function that called that first function repeatedly (once per bit in the target word).

3. Call the second function on the input data for a combinatorial output.
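That unrolled-function structure translates directly to ordinary code. As a sketch of steps 1 and 2 in Python (function names and the CRC-8 example are mine, not from the post):

```python
def crc_next_bit(state, bit_in, poly, width):
    """Step 1: next CRC state from the existing state, one input bit,
    and the target polynomial (given without its x^width term)."""
    msb = (state >> (width - 1)) & 1
    state = (state << 1) & ((1 << width) - 1)
    if msb ^ bit_in:
        state ^= poly
    return state

def crc_word(state, word, nbits, poly, width):
    """Step 2: call the bit function once per bit of the input word, MSB first."""
    for i in reversed(range(nbits)):
        state = crc_next_bit(state, (word >> i) & 1, poly, width)
    return state

# Step 3 in hardware would instantiate the word function combinationally.
# Example: CRC-8 (poly x^8 + x^2 + x + 1, i.e. 0x07) over the byte 0x31.
print(hex(crc_word(0x00, 0x31, 8, 0x07, 8)))  # prints: 0x97
```

Because `poly`, `width`, and `nbits` are just parameters, the same two functions cover any polynomial of any size with no lookup tables, which mirrors what the Verilog `function` trick achieves at elaboration time.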

Verilog, of course. Try to do a nested if without an else in VHDL!

Neither, get yourself some Chisel.

I've done both VHDL and Verilog extensively in school but not professionally. From my experience, Verilog was much nicer to use. I designed a complete MIPS pipelined CPU: https://github.com/eggie5/SDSU-COMPE475-SPRING13

However, speaking to many FPGA professionals I know, VHDL has much more traction in industry, particularly with defense contractors.

OCaml with HardCaml (http://www.ujamjar.com/hardcaml/).

VHDL + VUnit. HLS will do equally well (latency/bandwidth)-wise, provided you have some spare resources (a large enough FPGA).

I recommend Clash. I've used it in production.


As an Ada fan, I would say VHDL.

Of course, SystemVerilog!
