
Xilinx Vitis and Vitis AI Software Development Platforms - tfmkevin
https://www.eejournal.com/article/xilinx-vitis-and-vitis-ai-software-development-platforms/
======
gsmecher
Note that all of this is predicated on High Level Synthesis (HLS) becoming
real, after many years of industry over-promises and under-deliveries.

A jaded RTL engineer may fairly ask, "what's different this time?" I'm
convinced it's two things:

1\. These chips have scaled to the point where the toolflow isn't just
annoying, it's crippling. (I say that as an RTL designer who likes Vivado,
believe it or not.) Making effective use of the largest FPGAs in traditional
FPGA applications is difficult enough. Plus, the marginal cost of baking e.g.
ARM cores onto an FPGA is now so low that these parts are quickly becoming
heterogeneous SoCs and it requires extensive cross-domain knowledge to get a
modern design off the ground. More importantly, though, Xilinx is trying to
expand FPGAs beyond traditional FPGA markets, and it can't do so without
expanding the pool of available talent to program them. Finally, new FPGA
applications (e.g. AI) come with their own technical arcana and finding FPGA
designers who are proficient in all of these domains is only growing harder.
It is necessary for the tools to help, and Xilinx is amply incentivized to
pour money into the R&D programs necessary, even if the new tools are simply
bundled alongside the old ones.

2\. This may be a surprise, but LLVM. Doing HLS correctly is a forced march
through every conceivable corner of compiler, language, and RTL design, and to
date, that's been much too difficult for EDA companies. I believe that LLVM as
an accessible compiler workshop has been instrumental to Xilinx's success to
date.

I hope Xilinx does genuinely open-source some of the pieces. (Not the
synthesis flow, in this case, but all the stuff built on top of it.) The open-
source EDA community is small but motivated and talented; it's crippled,
though, by the balkanization the commercial EDA market causes. This might be
an interesting shot in the arm.

~~~
spamizbad
I'm curious: why is FPGA tooling so poor? I used a Spartan-3 over a decade ago
for a simple project, and at a cursory glance it doesn't seem like things have
changed much despite the devices becoming significantly more diverse and
complicated.

~~~
gsmecher
It's not quite fair to say the tooling is flat-out awful. Vivado is much
better than ISE. The place/route algorithms matured (analytical placement
rather than simulated annealing), and the software it's wrapped in grew up
(it's now tcl-driven and much more script- and revision-control-friendly.)
Vivado was a heroic effort and Xilinx deserves credit for taking software
seriously, and getting so much of it right. If you don't believe me, try
hiring an FPGA engineer today to work on an ISE project. You'll get an earful.

That said, a solid, open-source, mixed-language simulator with good
SystemVerilog and VHDL-2008 support would change my life. Bonus points if I
can embed it in C/C++ code a la Verilator. And, while I'm asking, can it
simulate encrypted IP too?

This is one of the places where HLS may sideswipe the traditional RTL market:
if you can effectively develop FPGA designs in C++, the entire approach to
development/testing/integration changes completely. It may sound like a
detail, but it touches every single pain point associated with a complex FPGA
design.

~~~
jhallenworld
I'm using Vivado right now. Things I hate about it:

\- It's sluggish. The underlying algorithms may be fast (they handle huge
designs), but even with an empty design it's sluggish. I'm not sure if this is
from the Java or the TCL. On the other hand, Altera/Intel Quartus is faster
(and it also uses Java and TCL).

\- I hate TCL. The whole concept of an integrated extension language in a
massive tool is terrible. It's the standard approach in the EDA world, but
it's not good at all. I prefer simpler tools tied together with Makefiles.
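For what it's worth, Vivado's non-project batch mode gets reasonably close to
that style: a single Tcl script a Makefile can invoke. A minimal sketch
(the part number, file paths, and top-module name here are placeholders, not
from any real project):

```tcl
# Hypothetical non-project flow; run as: vivado -mode batch -source build.tcl
# Part number, source paths, and top name below are placeholders.
read_verilog [glob src/*.v]
read_xdc constraints/top.xdc
synth_design -top top -part xczu9eg-ffvb1156-2-e
opt_design
place_design
route_design
report_timing_summary -file build/timing.rpt
write_bitstream -force build/top.bit
```

A one-rule Makefile whose recipe is just `vivado -mode batch -source build.tcl`
is then enough to rebuild from a clean checkout, with no project files to
track.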

\- I hate the schematic-like block designs. I just want a single Verilog file
that represents the design, no XML. The block designs are difficult to
integrate with git.

\- I hate projects. They are difficult to integrate with git: you have to
export the project as tcl, then regenerate the project from this checked-in
file. It's awful and breaks when the design gets complicated.
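Concretely, that export/regenerate dance looks something like this (script
name is arbitrary):

```tcl
# From the Vivado Tcl console: export a rebuild script instead of
# committing the project directory itself.
write_project_tcl -force rebuild_project.tcl

# Commit rebuild_project.tcl plus your sources; regenerate later with:
#   vivado -mode batch -source rebuild_project.tcl
```

The pain point is that `write_project_tcl` has to be re-run (and re-reviewed)
every time the project settings change, which is exactly where complicated
designs start to break.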

\- I'm using Zynq UltraScale+ and I hate Eclipse. The EDK is Eclipse-based. I
just want a command-line tool to generate a project from the hardware
description so that I never have to open Eclipse. True, once your hardware
is stabilized, you can avoid it (it does generate a Makefile). I haven't tried
Altera's SoCs, so I'm not sure if theirs is better.

That being said, Xilinx did a good job with the BSP software and the Linux
support. At least it's better than what we see on some embedded processors
(Nvidia, ugh..)

Also I recently ported a design from Zynq (32-bit) to Zynq Ultrascale+
(64-bit). Not bad at all.

~~~
btashton
You can generate the sdk including the makefiles from the command line. I
don't think I have opened Eclipse on my latest project.

They also do have some articles on preparing the projects for git which work
OK, but yeah there is a lot more to be done.

I'm not sure how you would replace the tcl scripts with Makefiles, though.
Most of them contain functions that wire things together, not build targets.

------
kop316
"The adoption of FPGA technology in the market has always been limited by the
severe learning curve required to take advantage of it."

I vehemently disagree with this. FPGAs and their toolchains are notoriously
expensive and proprietary. I have no doubt that if the toolchains were opened
up and an average person could program for them, you would see much more rapid
adoption, because then an average person could start on the learning curve
much more easily.

~~~
jhallenworld
I also disagree. The main problem is much simpler: they cost too much, for
fundamental reasons. An FPGA with compute power equivalent to an Intel CPU or
Nvidia GPU would need a huge, expensive die.

~~~
aseipp
What does "equivalent power" even _mean_? It's completely useless without any
qualifiers or specific metrics/goals.

My Nvidia GPU also doesn't run x86 programs at the speed of my desktop, but I
never expected it to. It's true in the vacuous sense only.

~~~
jhallenworld
They are all Turing machines; all that matters is whether I can run my
algorithm at higher performance than an equivalently priced conventional
solution.

~~~
rowanG077
This is incredibly naive. There are algorithms where a CPU beats a GPU easily.
There are also algorithms where a GPU beats a CPU easily. And there are
algorithms where an FPGA beats both a GPU and a CPU easily. You simply can't
say x is better than y in general. They all have their own domains.

