Transistors are abstracted into logic gates. Logic gates are abstracted into higher-order digital functions like flip-flops, muxes, etc. The mapping of algorithms/functions onto gates is the essence of digital design. This is difficult work that would be impossible at today's scales (5 billion+ transistors) without synthesis tools and HDLs. And, given that an ASIC mask set costs $1M+ at a modern geometry, it needs to be done right the first time (or at least the second). Furthermore, the mapping to gates needs to be efficient: throwing more gates at a problem increases area, heat, and power, all of which need to be minimized in most contexts.
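To make "mapping onto gates" concrete, here is a minimal Verilog sketch (my own toy example, not from the article): the behavioral module is what a designer writes, and the structural one below it is roughly what synthesis produces, with built-in gate primitives standing in for cells from a real library.

    // Behavioral description: what the designer writes.
    module mux2_behav (
        input  wire a,
        input  wire b,
        input  wire sel,
        output wire y
    );
        assign y = sel ? b : a;
    endmodule

    // Structural equivalent: roughly what synthesis maps it to.
    module mux2_gates (
        input  wire a,
        input  wire b,
        input  wire sel,
        output wire y
    );
        wire sel_n, a_path, b_path;
        not g0 (sel_n, sel);
        and g1 (a_path, a, sel_n);
        and g2 (b_path, b, sel);
        or  g3 (y, a_path, b_path);
    endmodule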
My first job out of college was designing 386 motherboards. Back then we were still using discrete 74xx ICs for most digital functions. The boards were huge. PLDs allowed better integration and were cost effective, since a single device could implement many different functions while reducing board area and power consumption. CPLDs moved this further along.
FPGAs grew out of PLDs/CPLDs and allowed a significantly higher level of integration and board area reduction. They offered a way to reduce the cost of a system without requiring the investment and expertise an ASIC demands. But an FPGA is itself an ASIC, implemented with the same technology as any other ASIC. So FPGAs are a compromise; the LUTs, routing, etc. are all a mechanism for making a programmable ASIC. Compared to an ASIC, however, FPGAs require more power and can implement less capability for a given die size. But they allow a faster and lower-cost development cycle. To bring this back around: the LUTs and routing mechanisms are functions that have been mapped to gates. To use an FPGA, algorithms still need to be mapped onto the LUTs, and this is largely the same process as mapping to gates.
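To make the "same process, different target" point concrete, here is the 2:1 mux from my sketch above retargeted at a single 4-input LUT. The LUT4 primitive, its ports, and the INIT truth-table encoding follow the Xilinx convention as I recall it (I0 is the low bit of the truth-table address); other vendors use different primitives, so treat the details as illustrative.

    // The same 2:1 mux packed into one 4-input LUT instead of discrete gates.
    // LUT4/INIT follow the Xilinx unisim convention; other vendors differ
    // (e.g. Lattice iCE40 uses SB_LUT4 with a LUT_INIT parameter).
    module mux2_lut (
        input  wire a,
        input  wire b,
        input  wire sel,
        output wire y
    );
        // Truth table for y = sel ? b : a with I0=a, I1=b, I2=sel, I3 unused:
        // O = INIT[{I3,I2,I1,I0}], which works out to 16'hCACA.
        LUT4 #(.INIT(16'hCACA)) u_mux (
            .O  (y),
            .I0 (a),
            .I1 (b),
            .I2 (sel),
            .I3 (1'b0)
        );
    endmodule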
This article was pointless; even the author acknowledges: "I don’t know what abstraction should replace RTL for computational FPGAs." And, "Practically, replacing Verilog may be impossible as long as the FPGA vendors keep their lower-level abstractions secret and their sub-RTL toolchains proprietary." As I have argued above, knowing the FPGA vendors' lower-level abstractions won't make the problem any better. The hard work is mapping onto gates/LUTs. And that analogy, "GPU : GPGPU :: FPGA : ", is wrong: an FPGA is the most general-purpose hardware available.
The best FPGA/ASIC abstraction we have today is a CPU/GPU.
Where did you argue that? Why is it reasonable to expect that proprietary synthesis tools are going to be better than open-source ones? That definitely was not the case, long term, with the proprietary C compilers of yesteryear. LLVM is the future, and mostly because of LLVM-IR. ASTs are well optimized in a general format, so why shouldn't digital logic circuits be similar? Yes, actually mapping this to Xilinx (etc.) primitives is going to be different for each vendor, in the same sense that mapping LLVM-IR to aarch64 and amd64 is going to be different. So what? That doesn't mean that all is lost.
> The hard work is mapping onto gates/LUTs.
I think it's reasonable to expect that things like FIRRTL have the potential to outperform the synthesis tools that exist currently. The closer the representation gets to a pure graph theory problem, the better chance we have of reasoning about it.
The author makes a good point about Verilog being the current interface. Look at how FIRRTL has to be transpiled back to Verilog to be piped into synthesis tools. That's madness: it's very opaque, and a lot of information is lost that we just have to trust the tools to recover. Verilog is a lossy format; that's the takeaway from this article for me, and you haven't addressed that point at all.
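To illustrate what I mean by lossy, compare a hand-written priority select with roughly what generated Verilog tends to look like after lowering. This is a toy illustration, not actual FIRRTL emitter output: the priority structure survives only as nested ternaries and machine-generated names, and downstream tools have to rediscover the intent.

    // What the designer (or a Chisel/FIRRTL front end) means: a priority select.
    module pick_intent (
        input  wire [1:0] req,
        input  wire [7:0] d0, d1,
        output reg  [7:0] out
    );
        always @* begin
            casez (req)
                2'b?1:   out = d0;   // req[0] wins
                2'b10:   out = d1;
                default: out = 8'h00;
            endcase
        end
    endmodule

    // Roughly what flattened, generated Verilog for the same logic looks like.
    module pick_lowered (
        input  wire [1:0] req,
        input  wire [7:0] d0, d1,
        output wire [7:0] out
    );
        wire       _T_1   = req[0];
        wire [7:0] _GEN_0 = req[1] ? d1 : 8'h00;
        assign out = _T_1 ? d0 : _GEN_0;
    endmodule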
I never argued anything regarding proprietary vs. open-source tools. I love open-source tools and appreciate that projects such as FIRRTL, SymbiFlow, Yosys, Chisel, Clash, etc. have amazing potential. Having access to the FPGA vendors' low-level abstractions enables the broader use and development of these tools, which is important. My point was only that gates/LUTs are the fundamental building blocks of all digital computing. They are not easily abstractable, and saying that FPGAs have the wrong abstraction is not the best way to look at the problem. FPGAs aren't going to fundamentally change; they aren't going to evolve from FPGA to GPFPGA (to answer the author's analogy). But tools can always be improved to make FPGA design more accessible.
Lol, citation needed. Last I checked, the performance-oriented commercial closed-source C/C++ compilers still outperform Clang and LLVM. And so does GCC in most cases, for that matter.
New languages typically are not implemented with GCC, though. So, long term, LLVM probably wins.
Though GCC still trumps Clang in some areas, neither GCC nor LLVM beats commercial compilers like ICC in a wide range of workloads. Obviously there is still a market for ICC and AOCC.
The original point was that open-sourcing would necessarily lead to better-performing tooling. And again, to that I maintain... citation needed.
When I started with MCUs, I started with an Arduino. The thing it did for me was to give me a feeling for when to use a microcontroller and when to use something else entirely.
Of course the level of control I had with an Arduino was far from optimal, but it worked out of the box and guided me into the subject (a bit like a children's bicycle: neither fast nor special, but it helps the learner avoid pain and frustration).
I wish I had this kind of thing in an affordable FPGA form: simple enough to get me hooked, with examples, good sane defaults, etc.
This is what mainstream means: idiots like me who didn’t get a formal education on the subject but want to try things out.
Cheap FPGA boards for educational purposes: https://store.digilentinc.com/fpga-for-beginners/
The software is free: https://www.xilinx.com/products/design-tools/ise-design-suit...
The hard part is the several semesters' worth of textbooks to go through, covering everything from digital logic (try Mano's "Digital Design: With an Introduction to the Verilog HDL" to start with) through computer architecture, in order to know what to do with the board.
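For a sense of where those textbooks start you on a board like that, the classic first project is something like this; the clock frequency and pin names are placeholders that depend on the particular board's constraints file.

    // A first "hello world" for an FPGA board: divide the onboard clock
    // down and blink an LED. Assumes a 100 MHz oscillator; adjust the
    // counter width for your board's actual clock.
    module blinky (
        input  wire clk,
        output wire led
    );
        reg [26:0] count = 0;
        always @(posedge clk)
            count <= count + 1'b1;
        assign led = count[26];  // toggles roughly every 0.7 s at 100 MHz
    endmodule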
Community support is indeed just beginning, but the RISC-V community and the HiFive1/SiFive community support these PolarFire FPGAs. Even 50 RISC-V soft cores fit on this FPGA.
Just a happy user, not affiliated with Microchip/Microsemi
If you want something even cheaper, look for iCE40 boards, like the UP5K MDP or TinyFPGA.
A few interesting things to try out:
An intro to Verilog:
About the simplest environment for doing what you just described is http://papilio.cc/
While using the existing Xilinx Webpack tools for the actual synthesis, place and route, etc., the Papilio Design-IDE will LITERALLY let you add peripherals to a virtual Arduino like appendages!
It takes advantage of a number of community projects like the Wishbone bus, and achieves a nearly drag-and-drop level of visual design.
Once you have loaded your custom Arduino chip onto the Papilio board's FPGA, you can program it with a modified version of the Arduino IDE!
One of the virtual chips you can start with IS the Arduino's ATmega328!
Another is the ZPU-ino, an implementation of the Zylin ZPU (a 32-bit MCU) done by Alvie Boy, which lets you program this much more powerful device ALSO through the Arduino IDE!
That some difficulties are hidden doesn't mean they are easy. Unless you have at least a proposed solution (a language, compiler, architecture, or ASIC) that lets people solve problems similar to those the FPGA toolchain solves, it's just complaining.
Fundamental misunderstanding of FPGAs; presents no alternatives. A zero-worth article.