Cheap FPGA Development Boards (joelw.id.au)
113 points by peter_d_sherman on July 29, 2018 | 60 comments



Unfortunately, cost of the development boards is in no way the problem blocking wider use and knowledge of FPGAs.

FPGA tooling is stuck in the dark ages. FPGA pipelines are Windows-only, the development environment UIs look like a toddler was given access to VB6, and the licensing is crazy.

Lattice Semiconductor will give a free licence for iCEcube2 to a resident of the USA in less than 24 hours, but will deliberate for weeks over giving one to someone in Australia (so you should just lie and say you live at the White House; that's what I did after three weeks of no response to my licence request).

Xilinx's free licence for ISE states that by using the free licence you agree to let ISE upload all your FPGA code to them, and they get to use it for whatever they want. It actually states directly in the licence that if you don't like that, you should install the software and then airgap the computer from the internet forever. Cheeky as hell in my opinion. I installed mine in a VirtualBox machine and then removed the virtual network card.

The reality is, FPGA manufacturing and tooling is stuck where 3D printing was a decade ago, but for slightly different reasons. Designing and manufacturing FPGAs requires pretty high investment, which is a major barrier to entry for competitors (the same way patents were for 3D printers). Since FPGAs are desirable to many high-end tech companies, it's more profitable for Lattice/Xilinx/et al. to charge a small number of companies a squillion dollars for high-end development boards and commercial software licences than it is for them to lower the barriers to entry and sell to a wider audience.

I'd put a lot of support behind an open hardware FPGA design, even if it was rather shitty in performance, cost and/or features compared to the current top-of-the-line FPGAs.


Regarding open-source FPGA design, you're in luck! "Project IceStorm" reverse-engineered the iCE40 line of FPGAs from Lattice, and has a fully open-source toolchain for them. They're not super high-end chips, but the HX8K line isn't terrible (~7680 LUTs, I believe).

That being said, Xilinx and Intel/Altera have both supported Linux for years, and the free versions of their software support basically all of their chips that don't cost several hundred dollars apiece. The licensing and such is pretty awful if you're a purist about these things, but they are giving you a fully fledged hardware design suite.

I think the main issue is that using FPGAs is actually doing hardware design, and most people don't want to do hardware design, they want an automagical way to make their software go faster. FPGAs are a great way to implement digital logic without having to incur the huge capital costs of taping out a chip, not a magical 'go faster' device for a random algorithm.


>Regarding open-source FPGA design, you're in luck! "Project IceStorm" reverse-engineered the iCE40 line of FPGAs from Lattice, and has a fully open-source toolchain for them.

I've used it. It's at the "technically doesn't not work" stage. I take my hat off to them for the work already done, but it's night and day between IceStorm and, say, Arduino Studio.

In fairness to them, they are fighting an uphill battle reverse engineering the chip, but as much respect as I have for their work, I don't believe there's a long-term open source option in there. At some point Lattice will stop making the iCE40 FPGA line. When that happens, the IceStorm team will have to reverse engineer a new chip, which is pretty close to starting from scratch.


People are already working on that; there are reversing projects currently underway for both Xilinx 7-series chips and Lattice ECP-series chips. FPGAs tend to have long lifespans anyway, and the iCE series shows no indication of going away soon. Nobody is standing still here.

Besides, I'd argue the biggest problem isn't even reversing the chips, but it's recreating all the useful middleware outside of the chip that will take a long time (e.g. ChipScope alternatives, interactive floorplanners, etc). A lot of this isn't even vendor-specific work, it's just a huge effort. On top of the effort of already writing synthesizers and routing descriptions for VTR...


I don't understand your comparison between Arduino Studio and Project Icestorm. I'm going to assume you mean the ease of use.

If your pain point is a lack of a good IDE, then this isn't something any other suite of FPGA vendor tooling will fix. They all have terrible UX. And in my opinion, that responsibility lies in tooling around your text editor, anyway. It's going to happen sooner or later, as the difficult problem of linting and analysing RTL is already solved in the F/OSS world (Verilator). I've also seen some effort on Twitter of building an IDE around Yosys.

Yosys/Arachne/Icestorm are lacking in features compared to vendor suites (timing driven PnR, floorplanning, ...), but none of them really should be required for beginners. And the fact that the entire suite is cross platform and installable from a package manager more than makes up for those deficiencies for simple projects. All in all, the pipeline for running the fully open toolchain is still much less complex than today's average frontend project.
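For a sense of that simplicity: a basic iCE40 build with the open toolchain is only a handful of commands. A rough sketch (assuming a `top.v` design with a matching `top.pcf` pin constraints file and an HX8K part; the device and package flags vary by board):

```
# Synthesize Verilog to a BLIF netlist with Yosys
yosys -p "synth_ice40 -blif top.blif" top.v

# Place and route for an HX8K device with arachne-pnr
arachne-pnr -d 8k -p top.pcf top.blif -o top.asc

# Pack the ASCII bitstream into a binary bitstream
icepack top.asc top.bin

# Program the board over USB
iceprog top.bin
```

All four tools install from a package manager on most distros, which is what makes the flow so approachable for small projects.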

(disclaimer: I work with the authors of Yosys & IceStorm, but I'm speaking for myself)


>I don't understand your comparison between Arduino Studio and Project Icestorm. I'm going to assume you mean the ease of use.

Let's call it "time to hello_world".

If you buy an Arduino, you can go from opening the box and reading the pamphlet to altering how quickly the LED blinks in about 5 minutes, and most of that time is dependent on your internet speed.

A lot of that is because of really basic things like having a friendly list of the different boards (development boards, _not_ chips) you might be using so you can just pick the name you recognise from the list without actually having to know what you're doing. It's really important for people to be able to learn by flailing around without knowing what they are doing if you want mass adoption of a technology.

On the other hand, pretty much the first thing you notice on the IceStorm page is a massive list of different packages and the command line arguments you need to use for them, which means you need to know the exact FPGA package you are working with. I'm not saying that all the other FPGA toolchains don't do the exact same thing, because they definitely do.

Nevertheless, this whole process feels a lot more like circa-1995 gcc commands than it does "go build". It's such a little thing, but enough little things can make something too hard to bother with, and there are _lots_ of little frustrations with FPGAs. The hardware of the Arduino was fairly unremarkable; the revolution was that you could just buy it and start bashing around with it in minutes on any computer. No signups, no technical manuals and no compiling from source.


> It's really important for people to be able to learn by flailing around without knowing what they are doing if you want mass adoption of a technology.

I'm not really sure that this is applicable to hardware design.

I see it this way: Arduino got really popular because it pushed the software paradigm to electronics, which was previously dominated by careful planning, studying of datasheets and having at your disposal at least a multimeter and a bench power supply. Now you could just write a few lines of imperative code instead of using a few transistors for turning a fan on and off when you press a button. And if things went wrong, you just serial.println until you figure things out.

FPGAs are entirely unlike this. By the point where you're first synthesizing a design onto silicon, you should've thoroughly tested it on a simulator, just because of how much easier it is to debug there than on real hardware. And that all requires discipline - something that is not really achievable by changing things around until they work.

Sure, you can get by making hacked-together designs that will probably synthesize - but they'll be unstable, slow and unmaintainable. And you will be no wiser by the end about what you did wrong until you actually pick up an EE book and understand setup/hold times, metastability, clock domain crossing, FSM design and signaling techniques. Or even what an always @(*) block with mixed blocking and nonblocking statements actually synthesizes to.
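To make the blocking/nonblocking trap concrete, here is a classic Verilog sketch (signal names are invented for illustration) where two nearly identical always blocks synthesize to very different hardware:

```verilog
// Nonblocking (<=): both right-hand sides are sampled before
// either update, so a and b genuinely swap - two registers.
always @(posedge clk) begin
    a <= b;
    b <= a;
end

// Blocking (=): statements execute in order within the block,
// so b just reads back the value already copied into a -
// both registers end up holding the old value of b.
always @(posedge clk) begin
    a = b;
    b = a;
end
```

Nothing in the language stops you from writing the second version; only understanding the synthesis semantics does.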

I don't know, maybe I'm just underestimating people. Sure brings me closer to actually starting a series of lectures or blog posts on this.


>FPGAs are entirely unlike this. By the point where you're first synthesizing a design onto silicon, you should've thoroughly tested it on a simulator, just because of how much easier it is to debug there than on real hardware.

In fairness, I have not experimented much with FPGA simulators. My casual research into them suggests they're less reliable than testing on actual hardware, and even harder to set up and get licences for than the FPGA toolchains. Aka they're a deeper level of the same shitshow as FPGA design suites.

If I'm mistaken I'd love to hear it. Is there a non-terrible free-as-in-beer simulator out there I can try on a linux machine?


For pre-synthesis, try Verilator (converts your Verilog module into a C++ class - super cool for cosimulation) or Icarus Verilog (takes traditional Verilog Testbenches). They both can emit VCD waveform trace files that you can then analyze using GTKWave.
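As a sketch of the Icarus + GTKWave flow (the module and file names here are invented for illustration), a minimal testbench that dumps a waveform might look like:

```verilog
// tb.v - minimal testbench assuming some counter.v module
`timescale 1ns/1ps
module tb;
    reg clk = 0;
    always #5 clk = ~clk;          // 10 ns period = 100 MHz clock

    reg rst = 1;
    wire [7:0] count;
    counter dut (.clk(clk), .rst(rst), .count(count));

    initial begin
        $dumpfile("tb.vcd");       // waveform trace for GTKWave
        $dumpvars(0, tb);
        #20 rst = 0;               // release reset after two cycles
        #200 $finish;
    end
endmodule
```

Compile and run with `iverilog -o tb tb.v counter.v && vvp tb`, then open the resulting `tb.vcd` in GTKWave.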

I have never heard of a simulator being less accurate than hardware, unless we're talking about very obscure bugs (and those are mostly limited to UI bugs).


If you code in VHDL, I have had a good experience with GHDL: http://ghdl.free.fr/


Excellent, I'll check it out, thanks!


> I've used it. It's at the "technically doesn't not work" stage.

What did you try, and when? Why does it only "technically not not work"? I haven't tried it myself, but I've seen a few little videos of people synthesising and running little things on the iCE40 using an all open source toolchain. One of them was a little open source CPU. I could understand "technically works but not practical for many things", but not working at all?


> I think the main issue is that using FPGAs is actually doing hardware design, and most people don't want to do hardware design

Half the time FPGAs come up, people ask "which FPGA can I get that has an integrated ARM core". So I suspect you may be on the money.


The tools have been running very well on Linux for at least the last 5 years - Xilinx as well as Altera.

I've never heard such a ridiculous claim, that ISE with WebTalk enabled sends every piece of code back to Xilinx. And I was an intern at Xilinx! You can always see in the WebTalk report what is leaving your computer. You also proposed a usable solution for avoiding WebTalk.

The problem is that FPGAs will stay a niche product and never become mainstream. What will you do as a hobbyist with some UltraScale+ device? What can't you do with a $50 SoC that you can do with an FPGA in the same price range? SDR is cool; for everything else, development cycles are just too long or complex peripherals are needed. How many hobbyists can properly route a 64-bit-wide DDR3 interface?


> What can't you do with a $50 SoC that you can do with an FPGA in the same price range?

Among other things...

Timing-accurate reproduction of vintage platforms, for maximum compatibility with original software and peripherals [1] [2] [3] [4]. Modern full-power desktop processors can do some of the earlier/slower ones, but virtually no SoC-type chips, even the ones in flagship smartphones, have the performance to keep up. The software architecture is like splitting the difference between an HDL simulator and a traditional computer/console emulator, so you can probably guess how slow it is.

Conversely, modern peripherals compatible with the original hardware ([5] [6]) can also benefit a great deal from FPGA, since many modern capabilities and densities are only available in chips with more complex interfaces than the flat system bus typically used by old-school expansion slots/ports. Because the host interfaces are parallel (and often directly driven by the timing of reads/writes on the system bus rather than dedicated handshake signals), the required timing is hard to consistently achieve by bit-banging the bus with any kind of microcontroller or SoC (the PRUs in a Sitara might be able to keep up with the raw interface, but are very limited in how much data they can fetch that fast).

[1] http://kevtris.org/Projects/console/index.html

[2] https://github.com/MiSTer-devel/Main_MiSTer/wiki

[3] http://www.fpgaarcade.com/core-status/

[4] http://c64upgra.de/c-one/

[5] https://github.com/mntmn/amiga2000-gfxcard

[6] https://sd2snes.de/blog/


>I've never heard such a ridiculous claim, that ISE with WebTalk enabled sends every piece of code back to Xilinx.

Well then buddy, Jesus Christ, email your former colleagues and get them to reword the installer for ISE. That's absolutely how it reads. I didn't make that shit up out of thin air; that's how I interpreted the licence agreement in the ISE installer.


WebTalk sends usage statistics and nothing more (i.e. device, resources, number of LUTs, runtime, etc.). ISE has been deprecated since 2012, as Vivado has replaced it.


Both tools are monstrosities. Xilinx should once and for all decide if they are in the business of selling hardware or selling software.

If the latter they're terrible at it and it's painfully obvious that the culture of the shop is not software-centric.

[edit] the hobbyist argument doesn't hold water. Software and normal computer manufacturers made the exact same argument in the 70's (why would anyone ever need a computer at home?).

If Xilinx open-sourced their toolchain - or, much better, open-sourced their protocols - my bet is their chip sale volume would at least double in a year. In particular because folks would choose their HW to avoid vendor lock-in.

My bet is they're so ashamed of the pile of spaghetti code they're peddling to their customers that they just can't open-source it for fear of losing face and reputation forever.


> they're so ashamed of the pile of spaghetti code they're peddling to their customers that they just can't open-source it for fear of losing face and reputation forever.

LOL no. Everyone knows how this particular sausage is made. It's obvious we are talking of an organically growing decades old codebase. It won't surprise anyone.

To be more on topic: ever since I learned of the PicoEVB a few months ago, I've been just giddy about the possibilities. It's amazing we finally got a mobile, PCIe-connected FPGA. I wasn't even looking for one because I hadn't thought it possible. We truly live in the future.


> It's obvious we are talking of an organically growing decades old codebase. It won't surprise anyone.

Indeed, I'd be surprised if the current teams even know how to actually set up a working build environment, as opposed to something like "Here's a pile of VM templates, each one from a former subsystem lead who left the company 5-10 years ago, and a how-to document that assumes you're running VMWare Workstation 4.0 on Windows XP. Talk to Joe if you need licenses for anything.".


Do you use Vivado or ISE? They are like night and day. Vivado is amazing, and constantly innovating on how to make hardware design more tractable and robust. Check out a video of the Vivado IP integrator and tell me you're not impressed.

I've no idea what the code looks like, but it's clear whatever code was there for ISE was replaced by a ground-up rewrite when Vivado came along.

Open sourcing FPGA tools won't make them better. Look at any of the gEDA offerings. Compare free Verilog simulators like Icarus with commercial ones like VCS - even the SystemVerilog standard that was first released in 2005 is still not supported in Icarus. If open source would work for EDA, then why is there such a chasm here?


I've never used a "real" commercial simulator to compare with (I'm assuming the student/locked ModelSim doesn't count), but Verilator seems to work pretty darn well.


Don't get me wrong, Verilator and Icarus are very impressive in their own right, but neither supports the full SV 2005 standard - each chooses to support either most of Verilog (pre-SV) or only the synthesizable subset. Unfortunately this is a far cry from where the commercial tools are these days, and the standard has moved on also.

Similarly for FPGA tools: one FPGA family seems to have been cracked (iCE40), so there is the Yosys toolchain for it, and again, while very impressive for what it does, it's a long way from any commercial offering in terms of usability and capability. So while everyone calls for open source EDA to solve all the 'vendor issues', we must ask why, in the cases where it does exist, it has not beaten the commercial offerings.


The reason Xilinx won't open source their toolchain, I'm guessing, is that it would reveal to their competitors the optimizations they are using to get more speed or density.

Benchmarks and business are won on speed and density (how much I can fit on a given part, and how fast it runs).

I can't think of many software projects that had active business competitors (taking sales away from them) that went open source.

I don't think code quality has anything to do with it.


There are tons of PhD works in open access on the topic.

Here things are a mirror image of what you have in "physical IP" world. TSMC, Intel and co. all publish tons of research in open access, but real world commercial value of such works is near 0.


> the development environment UIs look like a toddler was given access to VB6

But does anyone really use the GUI? I was always under the impression nobody, including Xilinx, really cared about the GUI because most commercial projects probably did things headless, invoking the toolchain directly. In other words, it wasn't really meant to be a full time development environment like an IDE.


You can use it in batch or GUI mode. The Vivado GUI is very good for design visualisation and interactive feedback, but as it's Tcl driven you can run headless for regular builds/regressions. Personally I use the GUI more often than not.


The toolchain has gotten so gnarly at this point that the only way to build a batch version of it is to launch the GUI, build a project and mine what the toolchain does from the log.


This is completely false. The toolchain is now Tcl driven, and all commands are fully documented in the programming guide: https://www.xilinx.com/support/documentation/sw_manuals/xili...

Basically: synth_design; place_design; route_design; write_bitstream

What could be more straight forward and transparent than that?
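As a sketch of what that batch flow looks like in practice (file names and the part number are placeholders, not from the thread), a minimal non-project Tcl script run via `vivado -mode batch -source build.tcl` would be roughly:

```tcl
# build.tcl - minimal Vivado non-project build flow
read_verilog top.v
read_xdc top.xdc
synth_design -top top -part xc7a35ticsg324-1L
place_design
route_design
write_bitstream -force top.bit
```

The same commands can be typed interactively in the Vivado Tcl console, which is what makes the GUI and batch flows interchangeable.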


When did you last use an FPGA or the tools?

Linux is fully supported by all tools from Xilinx (and, I'm sure, Altera). Vivado is an amazing tool, and has a very intuitive UI, with great design visualizations and cross-probing, IP integrator, etc. (ISE was replaced by it years ago; ISE better fits the VB6 style you referred to.)

There are advanced tools like SDK for embedded SW development, HLS for automatic C-to-RTL conversion, and SDx for automatic HW/SW offload (SDx basically drives Vivado, HLS and SDK under the hood). The FPGA world has moved past the ASIC design world in terms of tool usability, and I speak from the experience of using both toolsets almost daily.

Take a look at the latest offerings; I think you'll be pleasantly surprised.


Very nice list, thanks to Peter for putting this together.

It's worth noting that for learning/basic experimentation there's lots of even cheaper chinesium stuff available on eBay:

https://www.ebay.com/sch/i.html?_from=R40&_nkw=FPGA+board&_s...

[edit]: For those who want to try: a while back I bought a couple of low-end boards from an eBay vendor called qmtech and I'm very happy with them.


> thanks to Peter

Odd, the list was put together by Joel...?


Looks like submitted before I was done, and replied to the wrong person....


My bad, Joel indeed.


The cost of developing on one of these cheap boards is only meaningful if you're able to leverage a vendor's free but limited-to-select-architectures development suite. E.g. briefly looking through the list, the MiniZed with a Zynq XC7Z007S is listed at $89, but according to Xilinx, this isn't targetable by Vivado WebPACK[1]; a node-locked license will run you an additional US$3k.

EDIT: My manual browsing missed the XC7Z007S in the list; it is indeed targetable by WebPACK. The list of dev boards is lengthy, but the principal takeaway is not to presume that a cheap dev board is targetable by a vendor's free development suite.

[1] https://www.xilinx.com/products/design-tools/vivado/vivado-w...


It's good to check for that, and it is good to know that you can sign up for a class at a community college (pretty much any class) and qualify for the "student" rate on the bigger development packages[1]. That said, it looks like most of the chips mentioned in the list that I can check with either WebPACK or Quartus seem to be supported.

[1] I took a class on machining and realized I could pick up a copy of Solidworks for $50 in the bookstore.


Your link does list the Zynq XC7Z007S as supported by Vivado WebPACK.

It also shows up as an option when I create a new project in my installed copy.


Manual browsing on mobile...whoops! You are indeed correct.

However, the principle nevertheless stands: it would behoove anyone interested in tinkering with one of these boards to verify that the architecture is indeed targetable by its respective free tool offering.


It's only the really expensive ones that require a paid license. You generally aren't going to be getting a $3000 FPGA just to tinker around.


I ordered a snickerdoodle ~3 years ago now. It's been changed and delayed a lot, and I'm still waiting for mine to ship. I'm still excited for it though; people who have gotten theirs seem to really like them.

A list like this should only include things that are in stock to ship when you order.


FPGAs are very good for learning digital logic design, but the dev tools are terrible: they are not open source, binaries may fail to work on a given Linux distro due to broken binary dependencies, closed file formats, inefficiently long synthesis, vendor-specific black-boxed libraries, deprecated interfaces.

Just compare it with Beaglebone or Raspberry Pi dev tools. They are much better for any DIY home-automation or replica projects.


I'm not sure why this meme is constantly repeated - the modern tools are not terrible. I guess it underscores how difficult it is to shake a bad first impression - ten+ years ago they were very poor, but today they are very impressive. In fact the FPGA based design tools are much better than ASIC based HW design tools.

You are right about the underlying architecture and synthesis engines being closed source, but that does not necessarily make a tool bad. There is a strong drive now to have the tools conform to industry standard interfaces for sharing designs and IP (IP-XACT, SDC constraints, OpenCL/OpenCV, etc.).

FPGAs are more difficult to use than an RPi / BeagleBone, but that's because HW design is very different. They can be used to run SW-only stacks (e.g. Linux), but that's somewhat missing the point, as FPGAs are all about programmable hardware - creating custom accelerators, doing things you simply cannot do efficiently in SW. So low-frequency things like basic home automation might not be a good fit, as that can be done in SW only.


It's really not a meme that the tools suck. They still suck. They suck a lot less than they used to, but they still suck. The latest version of Vivado still doesn't properly support basic constructs of SystemVerilog [1]. That doesn't fly in the world of software engineers.

The compile times are still on the order of hours, and there's no guarantee that you won't get to the end of your compile and find that either it didn't work, or it did something that you couldn't possibly have meant to happen. You don't know what went wrong because you've got a 27MB log file, 97% of which is 'WARNING's.

[1]:https://www.xilinx.com/support/answers/55135.html


Perhaps it's the case that EDA sucks, full stop. I think in the general case that's true, but FPGA EDA tools, by and large, are better than ASIC EDA tools, and much better than any open-source tool available - and they are essentially free. If you've ever used the ASIC tools Design Compiler or PrimeTime, you'd see what a difference tools like Vivado have brought in terms of design visualization, ease of use and robustness, all while still being compatible with the industry standard constraints and file types, etc.

The list of unsupported items you showed is interesting, but I'd imagine they rarely affect HW design and are mostly in the testbench / simulation-only subset (except maybe arrays of interfaces).

For any HW design I've done, Vivado has been well able to handle it. In fact, I've used Vivado to 'sanity check' RTL that's intended for an ASIC flow.

Runtimes are long, but I guarantee that if you look at the runtime of any open source synthesis tool you'd see an order of magnitude worse, with worse results. The FPGAs these tools are handling are also huge compared to ten years ago, so while the tools have progressed in a direct comparison, it can look like they have not if you're targeting later devices.

The one bugbear I have is that DRCs are not checked as early as possible; only at the end are some things flagged, like an unconstrained IO port. Things that can be caught earlier should be, in all cases.


One thing that I miss from this list is Lattice's breakout boards. You get a board with all the FPGA/CPLD pins broken out onto 0.1" headers, an integrated USB programmer, a 5V input power supply cascade, a few LEDs and not much else, for a price that is comparable to the part itself in one-off quantities.

Edit: I'm not sure that Lattice still sells these minimal development boards.


They are still sold. Lattice MACHXO2 and MACHXO3A series boards.


FPGAs are cool and all, but the window of applications where FPGAs are the best tool for the job is getting narrower and narrower.

For compute-heavy applications, GPUs usually offer more performance per watt and performance per dollar, especially if your application requires lots of add/subtract/multiply/floating point operations.

For large scale designs, low-end ASICs are surprisingly cheap (starting from $10k), and with enough volume they by far beat an FPGA on perf per watt and per dollar.

"I have a big state machine I need to run" is usually misleading. Usually your state machine isn't big enough to require an FPGA, and just converting it to a bunch of switch statements on a microcontroller will do the job just fine.

Microcontrollers usually get close to FPGAs on the "super accurate timing" front too - there are lots that can bit-bang things into the hundreds of MHz range with fancy serialization hardware.


What you're missing is the fundamental reason industry uses FPGAs: flexibility and parallelism. An ASIC will always outperform an FPGA on some specialized front, but the development time and cost associated with an ASIC are significantly longer and higher, respectively. An FPGA can be plopped into your design, and use cases can be added without refabbing the entire PCB. Good luck doing that with an ASIC...

A microcontroller is more flexible than an FPGA, but it lacks the ability to parallelize as many tasks as an FPGA can. Hardware interrupts on a uC are definitely not free and can impact performance through the overhead associated with processing the interrupt, whereas an FPGA can process any interrupt for "free" without impacting the performance of any parallel module. Real-time apps can be implemented on uCs and DSPs, but may not be able to meet their deadlines, depending on the timing constraints of your system.

FPGAs and GPUs were never meant to compete. GPUs will destroy an FPGA on any parallel data processing task, but that data needs to first be populated onto the GPU. A GPU's bottleneck is (usually) never its processing speed, but its data population (RAM) bandwidth. A GPU will never have direct access to hardware, whereas an FPGA does. The overall speed of a GPU exceeds that of an FPGA for specialized parallel processing, but the latency of a GPU is larger.

The point is that FPGAs are specialized, and so are other co-processors/hardware. They should be used only when it makes sense to use them, and when more ad-hoc or flexible solutions are not viable.


I agree completely, but I would like to point out that FPGAs are fantastic educational tools. Being able to get your hardware design into actual hardware for < $100 is amazing, even though the performance isn't the greatest.


A good example of their application is low-end oscilloscopes. The heart of the machine is an FPGA running the sampling, DSP, and all the channels. It's very parallel, does a good amount of compute, and also provides lots of arbitrary IO. It's too expensive to spin an ASIC for such a low volume product. A serial processor is orders of magnitude too slow. I'm not familiar enough with DSPs to know why not a DSP; possibly too specialized.


Nice list, but considering the warnings about not having the software to program the FPGA with, can some good samaritan recommend a few chips for which you can definitely get free or cheap tools - the "synthesiser" and the "simulator"?

Yeah, I have no idea about FPGAs except that I once did a project that was reading data from one over SPI. I always thought the dev/hobbyist boards were much more expensive than that list says. But if you can get away with $150... and if the needed software doesn't cost $15000... I'd like to try :)


The Xilinx SW has a free edition called WebPACK that works for the lower-end devices (basically it will cover all of these starter boards), and it includes all the SW you need, including a simulator.

Interestingly, I didn't see the Ultra96 board listed - http://zedboard.org/product/ultra96 - this is the next-gen Zynq, which has A53s plus a load more features, and is part of the 96Boards initiative. I had a chance to play with one briefly and was very impressed. It can also run the PYNQ environment, which combines the power of the FPGA with Python very seamlessly - so it gives the possibility of custom-built HW offload/processing and interfacing directly in Python.


Thanks. I see the WebPACK download requires registration - does that mean manual approval after you sign something in blood, or just making an account on their site?


I believe it's just an account and immediate download - I did it so long ago I can't remember. Either way, there's no way you'd be refused access, as it's intended to be a free-to-use version, so I can't imagine any approval being required.


See Sir_Substance's problems below with getting approved for Lattice's iCEcube2 because he's in Australia. Mind, different vendor, but since I'm in Eastern Europe I had to ask.


Which ones of these have companion textbooks (preferably a nice one) associated with them?

Ideally I'd love one with gigabit ethernet and a multicore arm that slots into a pcie slot and doesn't cost the earth but I'm probably dreaming...


You don't need such a powerful device if you're learning from scratch. It will be more of an impediment because of its high complexity and low count of easy-to-access breakout pins for experimenting.


I've heard Digilent has a book for their Zynq boards. I haven't tried it myself: https://store.digilentinc.com/the-zynq-book/


I have this book. It will teach you neither digital logic nor hardware design.

I highly recommend against learning any of these with a complex board like the Zynq; instead, get a simple iCE40-based FPGA and start by blinking a few LEDs and writing a UART peripheral.


I agree. My first project was a dual-port SRAM emulator on an iCE40, with UART bus snooping to enable real-time tuning of really old BMW Motronic ECUs. This was a pretty fun/easy first project and had about the right scope. I started in iCEcube and eventually moved over to the OSS toolchain, which was nice as well. If I had started by trying to accelerate some kind of complex algorithm attached to a microcontroller, I doubt I'd have finished.



