Recommendations on how to get started with FPGA development? What cards are good? Do they make external cards to plug into a laptop like eGPUs?
2) The two major FPGA manufacturers are Intel and Xilinx. Each has its own tools, and the tools are very different. Pick one (flip a coin) and stick with it, at least while you're starting out.
3) FPGA manufacturers typically have a free tier of their tools, and a paid tier. The paid tier is usually thousands of dollars. You'll only need the paid tier if you're using a high-end FPGA (those are usually also very expensive - thousands of dollars for an evaluation board).
4) Buy a popular FPGA board in the $100-$1000 range. I recommend something from Terasic (Intel FPGA boards) or Digilent (Xilinx FPGA boards).
5) Accept that you won't have a high-speed connection to a PC. Inexpensive FPGA boards normally only have gigabit ethernet, at best. You can get a PCIe FPGA board, or connect a USB 3.0 PHY via FMC connector, but you're looking at more money invested, and you'd have to learn a lot more about the inner workings of the FPGA.
6) The tools you're looking for are Intel Quartus Prime (with the Intel HLS Compiler) or Xilinx Vivado HLS.
7) Look into OpenCL for Intel and Xilinx FPGAs. This is the ultimate goal you're trying to attain - getting arbitrary algorithms optimized for FPGA fabric. Good luck!
Unfortunately, the tooling and hardware aren't currently at the point where HLS languages, or OpenCL for that matter, have had a substantial impact on real-world problems, especially if you're targeting a low-tier FPGA. This is especially true when implementing interfaces to PHYs or other IP.
I think part of the problem is that successful HDL design requires a fundamentally different approach to program architecture - especially when it comes to closing timing - and there's a fairly steep learning curve to getting everything set up.
I appreciate the idea of skipping Verilog or low-level logic introductions, but it seems to me that's asking for substantial problems later on when you actually want to do something practical.
Then again, I can't really think of a better (i.e., easier) learning pathway than the one you mentioned.
A fun FPGA project which doesn't require too much skill or knowledge is to implement a VGA graphics card.
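If you go the VGA route, the core of the design is just a pair of counters generating sync pulses. As a sanity check before writing any HDL, you can work out the standard 640x480 @ 60 Hz timing from the commonly published VESA-style tables:

```python
# Standard 640x480@60Hz VGA timing (visible, front porch, sync, back porch)
H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 640, 16, 96, 48
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 480, 10, 2, 33
PIXEL_CLOCK_HZ = 25_175_000  # the classic 25.175 MHz pixel clock

h_total = H_VISIBLE + H_FRONT + H_SYNC + H_BACK  # horizontal counter rolls over here
v_total = V_VISIBLE + V_FRONT + V_SYNC + V_BACK  # vertical counter rolls over here
refresh_hz = PIXEL_CLOCK_HZ / (h_total * v_total)

print(h_total, v_total, round(refresh_hz, 2))  # 800 525 59.94
```

On the FPGA, the horizontal counter counts pixel clocks up to h_total and the vertical counter counts lines up to v_total; in practice a 25 MHz clock from a PLL is close enough for most monitors.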
An Ethernet card might also be an interesting first project, albeit a bit harder.
For Lattice, the open-source IceStorm project is a good way to go.
I'm slightly surprised by this - are there really toolchains/kits that are turnkey enough that you can drop a compute kernel and some kind of high speed comms onto the board without having to do any HDL?
Having said that, if you don't know what your design would look like in HDL, then your performance will never be good enough to justify doing it on an FPGA in the first place. Even if you do, the performance will likely not be as good as a GPU's, and probably an order of magnitude worse than an HDL implementation's.
Firstly, development kits are great - they provide the hardware you need and all of the IP to drive that hardware. Grab a dev kit - there are lots of other posts advising you which one. The IP that actually interacts with the real world is by far the most difficult bit.
Secondly, 99% of hardware design is simply knowing what actually maps to hardware. It's important to understand that hardware description languages target several platforms: FPGAs, ASICs, and simulators. So what can be coherently written in VHDL/Verilog won't necessarily work if it doesn't map to the hardware platform you're writing for. You need to actually understand the structures of an FPGA: LUTs, registers, DSPs, and memories. Once you know what you're writing, you're 90% of the way there. Real FPGA devs spend months in simulators before they get to actual hardware testing. The simulator will allow you to do things that the compiler will warn you about and the hardware will simply do wrong, and each tool (ModelSim, Vivado, Quartus, VCS, etc.) has its own interpretation of the language you're using.
Finally, in terms of tools, use whatever comes with your dev kit; in terms of simulators, ModelSim is the industry standard and Intel (Altera) ships a free version.
I don't really have advice for learning material - I learnt at uni.
> [black box components] often have been validated much more than anything you can write and may be much smaller or lower power than you would naively write.
Sure but if an entire community was validating and improving these common components, they’d be way better still. (See gcc/llvm, linux, git, etc).
Software is like music: as long as the instrument works, all you need is the time and will to develop and create.
First thing people need to understand: this whole world operates at scale. You don't bother making an ASIC unless you really need one -- you're either making millions/hundreds of millions of them, or the individual application is so high-value that the hassle, complexity, and expense of moving off a general-purpose CPU onto something completely custom is worth it.
Second, 4K is chump change. You need an entire team of people to do anything meaningful with FPGAs/ASICs. Less than that, just buy a $100 CPU that will do more, be immediately available, and have an entire software stack (OS, user-space programs, compilers, etc) for it. And teams that know how to build this stuff well cost MONEY. A lot of it. Not $100K/year. More like $250-300K minimum per person at Silicon Valley rates.
I get that the OP wants to do this as a sort of hobbyist undertaking; just realize he's cutting against the grain of the entire ecosystem. Everything about this stuff is made for huge companies to do massive projects that will ship at gargantuan scale. Not hobbyists in a garage.
The major part of hardware development is NOT tooling. It is manufacturing the hardware.
You are the first person in this thread about FPGAs to mention making an ASIC.
They're really, really not.
Linting tools, simulation tools, formal verification and synthesis tools are all the same. Verification methodology is also the same.
However, for open source to flourish, high quality open source simulators are required. I think if we had that, synthesis and place&route would follow. You don't need synthesis or par to do design and verification, but you do need simulation.
I look into the state of open-source simulators periodically. Some are impressive, but they mostly just handle the design basics... an open-source SystemVerilog simulator - static and dynamic - would be a game changer.
It's a Herculean task though. I don't know how much Xilinx makes off tooling. I have to think they would make more from FPGA sales if tooling were free, and if it were open source, it would no doubt be improved upon by degrees I can only fantasize about.
Simulators, and by extension synthesis and place & route tools, are complementary goods to FPGA hardware. Make your complementary goods cheap, and you make a lot of money.
But there are tool companies out there such as Mentor and Synopsys that make a lot of money from such proprietary tooling. I imagine simulators have a tough patent field to navigate.
It takes a critical mass of knowledgeable people who don't necessarily want to be compensated monetarily for their efforts for this to happen (think early GCC and Linux teams). It took a few decades of largely unpaid effort before GCC and Linux convincingly took over as the leading compiler and OS respectively, and people started being paid to work on them. To achieve the end you have in mind, you will need to start building that community.
I'm not sure you understand FPGA development. Until FPGA manufacturers do something like software vendors where they bundle actual malware with the closed-source hardware they sell, it doesn't really bother me.
Personally, I kind of prefer having detailed documentation of a black box rather than trying to read source code. The latter is all too common in software development.
I don’t really care to explain the obvious net win that open source designs (and tools) are for society. The only argument I’d consider reasonable is that the FPGA/hardware world is too small to justify the initial investment.
Unfortunately, the number of people who are competent to design, test, and implement these cores is fairly small, and usually well funded. Oftentimes, these cores represent very non-trivial time savings, especially when it comes to characterizing performance and closing timing, so there is tremendous value in the marketing and selling of these cores.
On the other hand, sites like opencores are great for finding useful IP that people have contributed and may be adapted to your device.
I used the Lattice iCEstick ($20 board) and IceStorm (quick to install, open-source toolchain) to get started a few years back, and the community around the toolchain has really come a long way since (new boards and great documentation). Check it out : )
In my reconfigurable computing course we used the Digilent Zybo board, which has a Xilinx Zynq SoC on it that includes two ARM cores alongside the FPGA fabric (I think the new Xilinx boards also have a GPU). This was really cool, but I wouldn't recommend starting with it, since dynamic reconfiguration has a big learning curve in itself.
I actually don't know what a good way to learn is, since I just followed my professor's notes, which were helpful, but I'm sure there are resources online to help. What I can say is that FPGA development isn't the same as software development. You have to think differently when implementing designs, since you're literally describing the circuitry instead of writing code that runs on pre-determined hardware. I used Xilinx Vivado WebPACK (it's free) throughout my FPGA usage.
There was a good article yesterday on HN that went through the process of taking the CORDIC algorithm and implementing it in Verilog; it's a good example of using FPGAs to implement a specific algorithm.
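For anyone curious what CORDIC actually does before reading the Verilog: it computes sin/cos using only shifts, adds, and a small table of arctangents, by rotating a vector through progressively smaller fixed angles. Here's a floating-point sketch of rotation-mode CORDIC (a hardware version would use fixed-point arithmetic and typically one pipeline stage per iteration):

```python
import math

def cordic_sin_cos(angle, iterations=16):
    """Rotation-mode CORDIC: returns (sin, cos) for angle in [-pi/2, pi/2]."""
    # Precompute the constant gain accumulated by the micro-rotations
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = k, 0.0, angle  # start on the x-axis, pre-scaled by 1/gain
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0  # rotate toward zero residual angle
        # Each step is just shifts and adds in hardware: 2**-i is a shift
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)  # arctan table lookup in hardware
    return y, x
```

With 16 iterations the result is accurate to roughly 2^-15, which is why CORDIC maps so well onto FPGA fabric: no multipliers needed at all.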
Then, take a look at http://www.fpga4fun.com which has lots of tutorials to help you get started.
They had a chat with the guy who wrote them a few weeks ago and there are more coming. Very accessible and you don't have to use the icestick very much -- most of it is done with free tools.
A nice free one for Verilog is Icarus Verilog. The FPGA vendors have toolchains that include simulation, so you could also download Xilinx or Altera's free tools and start there.
I have to say I like it, but the learning curve is steeper, and you can still code foot-shooting solutions since it's backward compatible - so it's better to learn the new, more correct way than to start with old Verilog and then learn the new features.
Agree with another commenter that an AWS EC2 F1 instance might be closer to prod for your purposes.
There's also the Arduino MKR Vidor board which combines an Arduino programmable processor with an FPGA.
As some others mentioned, the most important distinction is knowing what the logic synthesizer and place/route tools (which could be looked at as compiler passes) can do to infer circuits from your code.
Fundamentally you’re designing digital logic circuits and subsequently describing them in code. So go get an introductory digital logic textbook and read it (e.g. Logic and Computer Design Fundamentals, Mano & Kime). You’ll need to understand the basics of “timing” as data propagates through logic circuits. Further into the book you’ll get some ideas of basic digital logic patterns.
Here’s one nifty trick: if you can make sure that the function to compute the next value of a bit fits into one lookup table (LUT), you can be pretty sure that your circuit will run fast. For the uninitiated, a typical LUT has about 4 to 6 inputs and feeds a single-bit memory (flip-flop) on its output.
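To make that concrete: a k-input LUT is literally a 2^k-bit truth table addressed by its inputs, which is essentially how synthesis tools store your logic. A small model (the example function is arbitrary, and the bit ordering here is illustrative rather than any vendor's exact INIT format):

```python
def lut_init(f, n_inputs=4):
    """Pack a boolean function of n inputs into a LUT init bitmask:
    bit `addr` of the mask holds f evaluated at that input combination."""
    init = 0
    for addr in range(2 ** n_inputs):
        bits = [(addr >> i) & 1 for i in range(n_inputs)]
        if f(*bits):
            init |= 1 << addr
    return init

def lut_read(init, *bits):
    """What the hardware does each cycle: use the inputs as an address."""
    addr = sum(b << i for i, b in enumerate(bits))
    return (init >> addr) & 1

# Example: majority-of-3 with an enable fits comfortably in one 4-input LUT,
# so the next-state bit it drives can run at full fabric speed.
maj_en = lambda a, b, c, en: en and (a + b + c >= 2)
INIT = lut_init(maj_en)
```

The point of the trick: if your next-state function needs more inputs than one LUT provides, the tools must cascade LUTs, and every extra LUT level adds logic plus routing delay to the critical path.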
You can try to hack it with OpenCL, and you may have some success. But do not expect GPU-like ndrange kernel duplication to get you very far. Instead use single work item kernels and make a systolic array. Look up systolic array matrix multiplier.
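For intuition, here's a cycle-by-cycle software model of an output-stationary systolic array doing a matrix multiply: rows of A stream in from the left, columns of B from the top, each skewed by one cycle per row/column, and every processing element (PE) multiply-accumulates whatever flows past it. This is an illustrative model of the dataflow, not OpenCL code:

```python
def systolic_matmul(A, B):
    """Cycle-accurate sketch of an n x n output-stationary systolic array."""
    n = len(A)
    acc = [[0] * n for _ in range(n)]    # each PE's local accumulator
    a_reg = [[0] * n for _ in range(n)]  # A values moving left -> right
    b_reg = [[0] * n for _ in range(n)]  # B values moving top -> bottom
    for t in range(3 * n - 2):           # enough cycles to drain the array
        # Shift registers one step (reverse order so moves take one cycle)
        for i in range(n):
            for j in range(n - 1, 0, -1):
                a_reg[i][j] = a_reg[i][j - 1]
        for j in range(n):
            for i in range(n - 1, 0, -1):
                b_reg[i][j] = b_reg[i - 1][j]
        # Feed skewed inputs at the edges: row i / column j lag by i / j cycles
        for i in range(n):
            k = t - i
            a_reg[i][0] = A[i][k] if 0 <= k < n else 0
        for j in range(n):
            k = t - j
            b_reg[0][j] = B[k][j] if 0 <= k < n else 0
        # Every PE multiply-accumulates in parallel
        for i in range(n):
            for j in range(n):
                acc[i][j] += a_reg[i][j] * b_reg[i][j]
    return acc
```

The FPGA win is that all n^2 PEs fire every cycle with purely local, nearest-neighbour wiring - which is exactly what the fabric is good at, and what naive ndrange kernel duplication fails to give you.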
Also, FPGA tools suuuuck compared to software tools! So expect bizarre pains in the butt.
Good luck! It is doable.
The last time I really did FPGAs was over 10 years ago, but unless the tools have gotten orders of magnitudes better, in addition to the other concepts mentioned you need to understand clock domains, metastability, pipelining, etc.
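On metastability specifically: the standard back-of-the-envelope model says the MTBF of a synchronizer grows exponentially with the time you give the flip-flop to resolve, which is why a two-flop synchronizer (a full extra clock period of resolution time) is normally considered enough. The constants below are invented but in a plausible range for a modern FPGA flop; real values come from your device's characterization data:

```python
import math

def synchronizer_mtbf_s(f_clk_hz, f_data_hz, t_resolve_s,
                        tau_s=50e-12, t0_s=1.0e-9):
    """Mean time between synchronizer failures, in seconds:
    MTBF = exp(t_resolve / tau) / (t0 * f_clk * f_data).
    tau_s and t0_s are device-dependent constants (hypothetical here)."""
    return math.exp(t_resolve_s / tau_s) / (t0_s * f_clk_hz * f_data_hz)

# 100 MHz clock sampling 10 MHz asynchronous data:
one_flop = synchronizer_mtbf_s(100e6, 10e6, 1e-9)  # ~1 ns of slack left
two_flop = synchronizer_mtbf_s(100e6, 10e6, 9e-9)  # ~a full extra period
```

With these (made-up) constants the single-flop case fails every few minutes while the two-flop case is effectively never - the exponential in the resolve time does all the work.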
I preferred VHDL to Verilog because you shouldn't need a lint checker for your HDL. But unless it's changed, Verilog is a lot more popular.
Training is available, for example https://www.xilinx.com/training/atp.html . It probably costs more than you're interested in paying, and you should have the basics covered first.
Also seconding the need to understand clock domains and metastability, timing constraints, etc.
You're looking for (1) probably a first course in the basics of circuits, (2/3) a first and second course in "digital design" or "logic design", (4) an operating systems course (emphasizes how this stuff all works with the real world), a basic programming class if you don't have one, and some domain-specific stuff such as digital signal processing, graphics, statistics, finance, or whatever else you're trying to do with this thing.
Probably 5-6 good university classes in total. If you could do these classes a la carte, it's probably the fastest way to learn this stuff (skip the gen eds and all other non-degree requirements). Frankly, it's a lot if you have no prior experience, but I'm not sure that's true.
External non-serial memory (DDR/Flash etc.)
Things that are relatively easy to add, and are not so much of a big deal to wire up yourself.
Character (e.g. 16x2) LCDs
Anything I2C/SPI and relatively low speed
VGA (with low colour depth)
I like having a board with many (at least 8) SPST switches and LEDs, and momentary buttons. Unlike a microcontroller where it's relatively easy to spit debug information out of a serial port or to an LCD with a single C function call, debugging FPGA designs is a bit harder. LEDs provide a zero fuss way to break out internal signals for visualisation - if you're tracking the progress of a complex state machine, you can light up an LED when it gets to a certain point without adding any extra logic. While these are easy enough to add yourself, I find that it's better to get a board that has them so that you don't waste valuable user IOs or waste time investigating failures caused by your terrible soldering skills.
Found here: https://joelw.id.au/FPGA/CheapFPGADevelopmentBoards
So that's how my experience went. I don't know whether modern resources are more or less computation-heavy.
Any FPGA dev kit in that class will work. I was able to jump right into it with the basic examples and had my own hardware UART up in a couple of weeks.
The only downside is that when I was using it, the software was Windows-only and very bloated.
I am sure you will find something suitable there.
You can write your algorithm in C/C++/OpenCL and let the SDx tools do the job of creating the RTL and building the system. SDx supports an x86 flow for functional verification, which saves a great deal of time over RTL verification. Then you can move to hardware emulation, which can verify timing-related issues with your algorithm. Finally, you can run the hardware flow from the same tool. SDx supports an Eclipse-based debug perspective in all flows.
As for hardware, an Amazon AWS F1 instance is the easiest thing to start with. For people with a GPU background, SDAccel will be a nice way to go.
Xilinx recently announced the Versal product line, which is a GPU-like card that can be connected to a machine with PCIe support.
Happy Programming Hardware!
Free Range VHDL: The no-frills guide to writing powerful code for your digital implementations, by Fabrizio Tappero and Bryan Mealy, www.freerangefactory.org
There's a very nice article on choosing a text editor here: https://www.fpgarelated.com/showarticle/37.php
ModelSim is the industry-standard simulator and (as someone has already mentioned here), there is a free version on the Intel (was Altera) web site. You will need to sign up to get the download. This version of ModelSim will let you run any design up to a certain number of lines of code, but it's plenty to get started with.
Xilinx Vivado has a built-in simulator, but I have spent most of my career using ModelSim, and I haven't had the need to switch. You also need to sign up to download Vivado; the free version is called Vivado HL WebPACK (Device Limited).
Synthesis and Place/Route:
Synthesis turns your HDL code into a netlist which uses the primitive logic blocks of the FPGA that you're targeting and defines how they are logically connected to each other. Place and Route takes those blocks, finds a place for them to go in the FPGA and tries to connect them up so that the logic delay from one flip-flop (through combinatorial logic + wire delays) to the next flip-flop will meet your target clock speed.
When it comes to seeing what your HDL code will turn into once it's been put into an FPGA, I find that Xilinx have the most advanced FPGA toolchain (Vivado) with some excellent visualisation features for learners. Stay away from Vivado HLS for now; if you want to learn HDL code, the HLS tool is a distraction that won't teach you core HDL design concepts. Vivado does not support Spartan-6 parts; only 7-series and newer.
Using Vivado, take a sample piece of code and try running it through synthesis (don't place/route yet). Look at the result in the schematic viewer. Look at the hierarchy viewer, too. Try to trace a line in the source code to its synthesised result. Once you're happy that you can do this, try running place/route (Run Implementation). You can cross-probe from the schematic view to the chip view and highlight wires and blocks with your choice of bright colours. Take a look at how the schematic primitive blocks end up getting placed and routed in the chip.
Note: If you don't care about connecting up I/O pins in Vivado and you just want to see what a small bit of logic looks like, you can specify "-mode out_of_context" under "Project Manager -> Settings -> Project Settings -> Synthesis -> Options -> More options".
Anyone can make an FPGA design work at 10MHz or 20MHz, so try something harder. Start off by setting a slow clock speed constraint (50MHz), then try increasing the speed in 50 or 100MHz increments until you hit 300MHz (400MHz or 500MHz if you're adventurous). When does your timing fail? What do you need to do to your logic to make it meet timing?
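That timing-closure loop comes down to one equation: the clock period must cover the whole register-to-register path (clock-to-Q, combinatorial logic, routing, and setup time). A toy calculation (the delay numbers are invented for illustration) shows why pipelining - splitting the combinatorial logic with an extra register - is usually the answer when timing fails:

```python
def fmax_hz(t_clk_to_q, t_logic, t_routing, t_setup):
    """Maximum clock frequency supported by one register-to-register path."""
    return 1.0 / (t_clk_to_q + t_logic + t_routing + t_setup)

# Hypothetical path: 0.5 ns clk-to-Q, 4 ns of logic, 1.5 ns routing, 0.5 ns setup
before = fmax_hz(0.5e-9, 4e-9, 1.5e-9, 0.5e-9)  # ~154 MHz
# A pipeline register splits the logic into two 2 ns stages
after = fmax_hz(0.5e-9, 2e-9, 1.5e-9, 0.5e-9)   # ~222 MHz
print(round(before / 1e6), round(after / 1e6))
```

Note the fixed overheads (clock-to-Q, routing, setup) don't shrink when you pipeline, which is why f_max doesn't double and why routing congestion, not logic, often dominates at high clock targets.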
I don't mean FPGAs are worthless. I mean, as a former EE student, that an FPGA career requires pretty deep expertise in areas quite far from typical CS topics, such as HDLs, signal processing, and low-level logic design. I've hardly ever seen job openings for FPGA devs that require less than a master's degree in that specific field.
For material career help it depends. Having some experience with it will give you more flexibility, though it might be hard to get resume worthy experience with it.
I have an undergraduate degree in computer engineering (basically this stuff) but do mostly software these days.
You're better off going deep in one or the other, either how to design this stuff or the algorithms/math/CS-heavy parts.
I've written a lot about this on my personal blog.
Another specific question that comes to my mind -- does anyone have anything to say on languages? VHDL vs Verilog? Others? Which is best for beginners?