However, I now know why FPGAs aren't as popular as they could be: the tools are horrible. There is no way FPGAs will become popular among software developers unless the tooling improves dramatically. You can really tell it was all built by hardware engineers who know their electronics very well but not good software design.
Care to expound on that? I don't disagree with your opinion completely, but I suspect our reasons differ.
IMHO, the barrier isn't so much the tools, but a clashing of paradigms. All too often, traditional software developers without a solid foundational background in digital hardware architectures think they can easily pick up an HDL as if it were yet another programming language. The HD part is quickly forgotten as behavioral processes are hacked away using all-too-familiar constructs like if-then-else, for/while loops, etc. Inference warnings are ignored, timing constraints are shrugged off, metastability and synchronization aren't even a thing, let alone driver/receiver selection, pinmap planning, and signal integrity considerations at the PCB level...set to default and things should just work, right?
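To make that concrete, here's a minimal sketch (module and signal names hypothetical) of the classic trap: an incomplete if-else in a combinational process. Synthesis has to preserve the output's old value when enable is low, so it infers a transparent latch and emits exactly the kind of inference warning that gets ignored.

    // Hypothetical example: looks like harmless software, synthesizes
    // to a latch. The missing else branch means q_out must hold its
    // previous value, which combinational logic cannot do.
    module latch_trap (
        input  wire       enable,
        input  wire [7:0] d_in,
        output reg  [7:0] q_out
    );
        always @* begin
            if (enable)
                q_out = d_in;
            // no else: tool infers a transparent latch and warns
        end
    endmodule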
Anecdotally speaking, the only thing HDLs have in common with traditional programming languages is a superficial charset. I think every software developer who has ever used tools like VS, Eclipse, Emacs, Vim, Make, Doxygen, Git, etc. would agree that each has its quirks and it takes a bit of use before a comfortable flow is discovered. FPGA vendor tools are no different, except that, testbench simulations aside, there's that immutably distinct custom hardware integration endgame that most traditional software has the pleasure of conveniently abstracting away.
Any kernel/driver developers with HDL proficiency care to chime in?
P.S. Building ground-up PCIe/Ethernet FSM+datapath architectures after only a few months, without prior HDL experience, is really impressive.
I totally agree that HDL tools are astonishingly crap. The market is just so much more specialized, expensive, locked down and closed.
Imagine that you had to use the Intel compiler to get source code compiled for Intel processors. You had to use their special headers; you practically had to use their huge 8 GB Intel IDE with a million lists and buttons; you had to choose what features were on your Intel processor (VT-x? SSE4?). Imagine AMD is the same, but separate. This is all closed source stuff, of course. And you have to use it to run on any smartphone/laptop/desktop/server class processor.
Imagine that there's no GCC, no CLANG, no GDB, no Jetbrains, no Eclipse. All the related tools we use wouldn't have much of a reason to exist because just about everyone had to use the huge IDEs for each vendor anyway.
The vendor stuff is huge, inefficient, and crap, because features sell, and crap keeps selling because there's no meaningful competition.
To be more specific, "crap" means high bloat and low reliability: crashes, inexplicable errors and failures. But you can make a tweak, try again, and move on, so any sane hardware person just gets used to it; it's not like they can do anything about it.
> Imagine that there's no GCC, no CLANG, no GDB, no Jetbrains, no Eclipse.
This is quite easy to imagine when you remove the key element which allows these L7 abstractions to be meaningful: an underlying kernel with a well defined interface. What equivalent does the reconfigurable world have when every target device requires its own unique "kernel", if you will? I can't think of any...which would explain why third-party tools are constrained to synthesis, while place-and-route remains an explicit function of the vendor tool. Perhaps we too easily conflate the size of the tool with the size of the target?
On the flip side, even supposing there were some open reconfigurable interface standard, I don't think it would fly in the current market given the high-performance nature of these devices. The top two FPGA vendors apparently change their slice/CLB structure every generation; they're hardly going to agree on an open fabric interface standard.
> Crashes, inexplicable errors and failures. Huge amounts of bloat.
Putting the whole kit and caboodle aside and focusing on just synthesis, isn't it strange that even the big, specialized third-party vendor tools (e.g. Cadence, Mentor Graphics, Synopsys) suffer just as much? I think it's a genuinely difficult problem given the multi-disciplinary nature of the things EDA engineers have to deal with. As much as I dislike dealing with flaky tools, I'm nevertheless humbled by their challenges.
The tools are trying to help. The end product is the physical device. The various models are all just abstractions of the physical device. The tools report problems against those abstractions to help you improve the physical device. If you can understand the reports, you can improve things, either by altering the RTL or by adding more constraints.
The point being that the only time RTL is actually run like a software program is during simulation, and that simulation is only an approximation of how the actual thing will work. It is not like SW. The tools do a lot of other things with that RTL. Maybe if people don't throw garbage in, it won't crap itself trying to figure it all out.
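To ground that: the snippet below is a hedged sketch (names hypothetical) of the one context where RTL really does "run" like a program. The initial block and $display exist only in simulation; synthesis sees just the counter.

    // Hypothetical testbench: the always block maps to hardware, but
    // the clock generator and the initial block are simulation-only.
    module tb;
        reg clk = 0;
        reg [3:0] count = 0;
        always #5 clk = ~clk;           // sim-only clock source
        always @(posedge clk)
            count <= count + 1;         // this part describes hardware
        initial begin
            #100;
            $display("count = %d", count);  // software-like I/O, sim only
            $finish;
        end
    endmodule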
Thankfully, we now have a pretty decent open source alternative, and you are welcome to contribute!
P.S. The only dabbling I've had with Lattice devices was designing a custom tool to configure their legacy (pre-JTAG) ispLSI family of CPLDs using our tech, not theirs. The goal was to eliminate their obsolete toolchain (ispLever Classic and a piece-of-shit USB dongle) from the configuration management loop for existing stable applications in long-term lifecycle sustainment mode.
I don't need to imagine, none of them existed when I started programming.
I bought a Xilinx Spartan dev board 10 years ago and it was just like this. ISE was 4 GB of Java bloatware, plus 2 GB (!) more of "updates"; it crashed frequently, barely ran, and imploded under its own weight. I managed to make some lights go blinky and said "eff this".
It's sad to hear that 10 years later nothing has changed.
And yes, maybe my goals were too ambitious. Ethernet and PCIe have proven to be hard to master.
You describe a physical and electrical reality at the register transfer level when writing implementation code in Verilog and VHDL for FPGAs and ASICs.
I've seen quite a few implementations (especially in VHDL) by people not understanding that what they write will end up as hardware.
As an example, if your design contains numerous wide variables written to and read by several instances, that means there will be buses going back and forth between your logic gates. There are only a few levels of wires crossing each other (that can either be built in an ASIC or are available in an FPGA device). When that limit has been reached, you will have to route around.
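A rough sketch of the point (module and signal names hypothetical): every port below is a physical bundle of wires, so fanning one 128-bit bus out to three consumers means 3 x 128 routed nets competing for the same wiring levels.

    // Hypothetical illustration: each connection is real metal, not a
    // pointer. Three readers of one wide bus = three full bus routes.
    module consumer (
        input  wire         clk,
        input  wire [127:0] d,
        output reg  [127:0] q
    );
        always @(posedge clk) q <= d;
    endmodule

    module fanout_example (
        input  wire         clk,
        input  wire [127:0] wide_bus,
        output wire [127:0] r0, r1, r2
    );
        consumer u0 (.clk(clk), .d(wide_bus), .q(r0));
        consumer u1 (.clk(clk), .d(wide_bus), .q(r1));
        consumer u2 (.clk(clk), .d(wide_bus), .q(r2));
    endmodule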
Think HW (at fairly ideal RTL) and write code to efficiently describe that in a way that _all_ tools in your chain can understand and parse correctly.
As for the tooling, to me it looked too low-level, with too many knobs for a beginner without significant domain knowledge to grasp. Maybe that or [Footnote] was what intimidated GP. I understand the reasons why high-level building blocks in high-level programming are easy-to-use, self-contained modules, while high-level modules in HDL have that many twists and knobs to twiddle with.
I guess software developers are so used to taking prebuilt modules and something-something-fudge-fudge-rinse-repeat-until-works'ing them together (not meant as an insult in any way; I am myself guilty of that) that we sometimes forget that `x if y else z` actually moves the magic smoke and pixies in ICs. So I think I can sum up your comment on traditional developer sloppiness and GP's comment on the atrociousness of the tooling in one sentence: too often we forget that HDL programs must be formally correct.
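To ground the magic-smoke point, a minimal sketch: the familiar conditional below isn't a branch that executes, it's a description of a physical 8-bit 2:1 multiplexer that exists whether or not y is ever true.

    // x if y else z, hardware edition: this elaborates to an 8-bit
    // 2:1 mux. Both inputs always exist and are driven; y just selects.
    module mux_not_branch (
        input  wire       y,
        input  wire [7:0] x, z,
        output wire [7:0] out
    );
        assign out = y ? x : z;
    endmodule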
[Footnote]: I remember some fragments of my fights with FPGAs when things worked intermittently because my signals were not stable (race conditions, dear past me) or whatever I had written did not even make sense on the given hardware (probably no hardware would support such weird trigger conditions, but there I was, staring at the screen with blank eyes and only beginning to seriously consider actually thinking), yet the tooling happily synthesised my code.
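A hedged sketch of one such self-inflicted race, for anyone curious: with blocking "=" assignments the two updates below would collapse into one register's worth of behavior, while non-blocking "<=" makes both registers sample their pre-edge values, the way real flip-flops do.

    // Hypothetical two-stage pipeline. With "=" instead of "<=", s2
    // would see the freshly updated s1 in the same cycle, so you'd get
    // one effective register instead of two. Non-blocking assignments
    // make both updates use the values from just before the clock edge.
    module pipeline_race (
        input  wire       clk,
        input  wire [7:0] d,
        output reg  [7:0] s2
    );
        reg [7:0] s1;
        always @(posedge clk) begin
            s1 <= d;
            s2 <= s1;   // sees s1's old value, as hardware would
        end
    endmodule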
I guess another issue is that most EE guys are happily using GUI tools and don't have any ideological problem with commercial tooling, as what matters is the final result.
It comes down to choice...or rather lack thereof. EDA tools are very expensive and tend to have a steep learning curve. Professional divisions tend to standardize to keep costs in check. Here's an example of one that's "affordable".
If you do software you should be fluent with data structures, while on the FPGA/hardware side you "speak" metastability control and timing constraints. Just separate fields of interest.
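For the software folks wondering what "speaking metastability" looks like, here's the standard two-flop synchronizer as a minimal sketch; the matching constraint (a set_false_path or an ASYNC_REG-style attribute, depending on your tools) is just as important as the RTL.

    // Classic two-stage synchronizer. The first flop may go metastable
    // when async_in violates setup/hold; the second gives it a full
    // clock cycle to resolve before anything downstream reads it.
    module sync_2ff (
        input  wire clk,
        input  wire async_in,
        output wire sync_out
    );
        reg meta, stable;
        always @(posedge clk) begin
            meta   <= async_in;  // may sample a changing signal
            stable <= meta;      // resolution window
        end
        assign sync_out = stable;
    endmodule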
When high-level synthesis tools arise (such as Xilinx HLx: C straight to FPGA implementation), I still feel they are aimed at the wrong audience: these tools still require more-than-average knowledge of the FPGA fabric and implementation to work with. That may be a good thing, but it defies the HLS paradigm at its roots, IMHO.
Source: humble experience through my job as an FPGA designer
I'm just wondering what you used for PCIe out of interest? A while ago I picked up an Igloo2 board, but found their software impossible to use under Linux alas.
I wish I could simply synthesize my algorithms from C code instead of writing that VHDL or Verilog code. SystemVerilog is a bit better, but the company does not allow using it.
A reason for this is that we have so many tools, and the subset we can use is the GCD of all parsers in all tools.
Linter, EC-checker, simulator (at least one, often more than one), planner, synthesis/build tool, integration tool etc.
The synthesis subset of Verilog 2001 and 2005 is, as far as I have seen, accepted by virtually all tools. I tend to err on the side of caution and use Verilog 2001.
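For reference, a small sketch of what staying inside that conservative Verilog-2001 subset looks like in practice: ANSI-style ports, plain always blocks, and none of the SystemVerilog conveniences like logic or always_ff.

    // Verilog-2001 that virtually every linter, simulator, and
    // synthesis tool in the chain should parse identically.
    module counter #(
        parameter WIDTH = 8
    ) (
        input  wire             clk,
        input  wire             rst_n,
        output reg [WIDTH-1:0]  count
    );
        always @(posedge clk or negedge rst_n) begin
            if (!rst_n)
                count <= {WIDTH{1'b0}};
            else
                count <= count + 1'b1;
        end
    endmodule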
Luckily I read  and realized that VHDL can be written in a better way, even with old toolchains. This two-process style is so much easier to debug than the typical RTL style. It was unfortunate that the 2010-era Quartus II toolchain did not optimize my behavioral code well. The CPU caches were the worst offenders, which isn't too surprising: tons of enormous, almost certainly inefficient muxes really pushed the LE limits on the FPGA, and my patience, during the very long synthesis time.
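For readers who haven't seen the style: here's the same two-process split sketched in Verilog rather than VHDL (names hypothetical). All decision logic lands in one combinational process and all state in one clocked process, which is what makes it so much easier to debug.

    // Hypothetical two-process structure: "process 1" computes the
    // complete next state, "process 2" only registers it, so every
    // next-state decision is visible in a single place.
    module twoproc (
        input  wire       clk,
        input  wire       rst,
        input  wire [7:0] d,
        output reg  [7:0] q
    );
        reg [7:0] next_q;
        always @* begin              // process 1: pure combinational
            next_q = q + d;          // all decisions live here
        end
        always @(posedge clk) begin  // process 2: registers only
            if (rst) q <= 8'd0;
            else     q <= next_q;
        end
    endmodule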
(no idea why emddudley is claiming otherwise)
In short, not directly: it's the front-end tool, and Magic is a different front-end tool (direct, full-custom layout).
Of course an ASIC cannot be made with only HDL, and I'm not sure "writing" an ASIC is even the correct expression.
I think both Verilog and VHDL are fairly terrible and unnecessarily stuck at the connecting-wires-together paradigm. I have high hopes for systems like Clash/Chisel/SpinalHDL gaining more widespread usage and finally making higher abstraction levels and metaprogramming standard in the industry (see the sketch after the links below for roughly where plain Verilog tops out).
 - http://www.clash-lang.org/
 - https://chisel.eecs.berkeley.edu/
 - https://github.com/SpinalHDL/SpinalHDL
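For contrast, the generate loop below is a hedged sketch of about the ceiling of metaprogramming in plain Verilog: parameterized structural replication, nothing more. Clash/Chisel/SpinalHDL let you do this kind of structural construction in a full functional or object-oriented language instead.

    // Hypothetical N-stage pipeline built with a generate loop, about
    // as "meta" as plain Verilog-2001 gets. One flat vector carries all
    // the inter-stage taps; stage i registers tap i into tap i+1.
    module pipe #(
        parameter N = 4,
        parameter W = 8
    ) (
        input  wire         clk,
        input  wire [W-1:0] d,
        output wire [W-1:0] q
    );
        wire [W*(N+1)-1:0] taps;
        assign taps[W-1:0] = d;
        genvar i;
        generate
            for (i = 0; i < N; i = i + 1) begin : stage
                reg [W-1:0] r;
                always @(posedge clk) r <= taps[W*i +: W];
                assign taps[W*(i+1) +: W] = r;
            end
        endgenerate
        assign q = taps[W*N +: W];
    endmodule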
As a better piece of advice: it is worth writing something in both languages to get a feel for which one you like better. Both have their own advantages and disadvantages (VHDL: strongly typed; Verilog: less verbose).
Verilator is only useful if you don't need backannotated timing constraints.
And how do you propose simulating asynchronous logic and delays? How do I back annotate synthesized net and cell delays into your one-true-simulator?
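For anyone unfamiliar with what's being asked: in a standard event-driven Verilog simulator, backannotation looks roughly like the fragment below ($sdf_annotate is the IEEE 1364 system task; the netlist stub and file name are hypothetical). Verilator can't do this because it compiles delay-free, cycle-accurate models.

    // Stand-in for the synthesized gate-level netlist.
    module chip_netlist (input wire clk);
    endmodule

    module tb_gatelevel;
        reg clk = 0;
        always #5 clk = ~clk;
        chip_netlist dut (.clk(clk));
        initial begin
            // Load post-route net/cell delays from the P&R tool's SDF
            // file onto the dut instance before running stimulus.
            $sdf_annotate("post_route.sdf", dut);
            #100 $finish;
        end
    endmodule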
I do FPGA design as part of my job. This is the most concise document I've found that covers everything you need to know to get started. Many Verilog books just focus on the language; this short PDF focuses on how to use the language to do hardware design, which is what most people actually want to learn.