
The Lack of Open Tooling for FPGAs - cgag
http://curtis.io/others-work/open-tooling-for-fpgas
======
ChuckMcM
This does a good job of capturing the first part of the problem. Note that
Xilinx did make an "open" FPGA part, the 3000 series; it did not do well for
them.

The critical thing to understand is that FPGA _users_ (the big ones, not the
casual ones) don't want them to be open. They want the information they
program into them to be secret so that their hardware is not easily duplicated
by the folks in China and mass produced to undercut them. You can't patent
schematics. So these folks want a way that a board can be assembled in
China but not reproduced there in a way that isn't an exact clone (which you
can get blocked as a counterfeit product).

Ok, so where does that leave us? Well it might be easier to create your own
FPGA design, have TSMC make it on their cheapest process. And then try to sell
those chips.

And if you're saying "Jeebus Chuck! That isn't 'easy' at all." you would be
right. And that is why a 200 MHz general purpose CPU is easier to turn into a
"custom chip" than an actual custom chip.

So where does that leave us? Well, all the bits between HDL and chip can be
done, in a low scale way, either in simulation or on bulk small scale
hardware. You can program CPLDs to be simple Logic Units (LU) for your FPGA
and wire them together on a PC board. You can find the things that need to be
parameterized (intra-LU timing, global clocks, I/O configuration) and build
tools which can synthesize, place & route, and download to your "pseudo" FPGA.
And if you have all that tooling, and you can show it works, you might be able
to get a partner to make some small scale parts for you (200 - 1000 LUs).
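
The "Logic Unit" at the heart of that pseudo-FPGA is conceptually tiny: a k-input LUT is just a truth table, so its configuration "bitstream" is 2^k bits. A minimal sketch of that idea (all names here are illustrative; this models no vendor's actual format):

```python
def make_lut(truth_bits):
    """Build a k-input LUT from a list of 2**k output bits."""
    def lut(*inputs):
        # The input pins select one row of the truth table.
        index = 0
        for bit in inputs:
            index = (index << 1) | (1 if bit else 0)
        return truth_bits[index]
    return lut

# "Program" a 2-input LUT as XOR: rows 00, 01, 10, 11 -> 0, 1, 1, 0
xor = make_lut([0, 1, 1, 0])
print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

A synthesis tool's job is then to reduce a design to a netlist of these tables, and place & route's job is to assign them to physical LUs and wires.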

It won't happen overnight; that is a 5 year plan minimum, I think. And you
have to get over the hump of developing tooling for an FPGA that may never
exist. On the plus side, it should be good for several Masters-level projects
and probably a PhD or two, and if your simulation work was robust you might
become the 'standard model' for testing assumptions about reducibility of
different constructs into hardware[1].

[1] Sort of the Lena[2] for FPGAs.

[2] A Playboy playmate image digitized at USC which became the standard test
image for image processing research.

~~~
phkahler
>> They want the information they program into them to be secret so that their
hardware is not easily duplicated by the folks in China and mass produced to
undercut them.

So how does having a closed tool chain achieve that? Aren't the FPGAs
programmed either on the line, or from ROM at power up? Either way, board
producers should be able to replicate the product. How does being closed help?

~~~
fpgaminer
> So how does having a closed tool chain achieve that?

It doesn't. It slows reverse engineering, but doesn't itself prevent cloning.
I'm not sure what ChuckMcM was trying to argue. I don't know of any big
companies that care whether or not the toolchain/devices are open source. They
just want a way to get their design into the world. What we (users of FPGAs)
care about is price; the toolchains are sufficient (though yes, cruddy in
various ways).

And to be honest, this conversation about the FPGA (and ASIC) toolchains being
closed source crops up again and again. They're closed source not because
Altera/Xilinx/Microsemi need to keep secrets. They already know each other's
"secrets", and the FPGAs themselves are rather trivial devices. They're closed
source not because users want them to be; again, they couldn't care less.
They're closed source because the userbase is absolutely tiny. A small
userbase means we will only ever see tools that are sufficient. There is no
benefit to Altera/Xilinx/Microsemi to put in extra time and effort open
sourcing their work. And honestly, I wouldn't want them to. I'd rather their
time and resources go into continuing to drive the price down and features up
(as does every other company).

Not that open source toolchains/devices wouldn't be great. I
follow the MiGen/Milkymist mailing list where some open source FPGA tools are
being developed. But in the commercial space, open sourcing FPGAs and their
tools isn't the highest priority.

Going back to protecting designs, that's what bitstream encryption is for,
which all modern FPGAs possess. Though Altera was quite late to the ballgame
on that front with their Cyclone series...

~~~
diamondman
This feels like circular logic to me. We do not want the hardware companies
spending time 'open sourcing' their docs because they need to spend that time
and money 'making more features', which they wouldn't have to do alone if it
was open; but we can't have it open because of the time and money spent 'open
sourcing' their docs...

I think there is a big difference between 'users wanting tools to be closed
source' and 'users not caring if their tools are closed source'. And I would
argue that even if the primary user base does not care, we can do better than
that. It is no excuse to keep using these fragile, lumbering behemoths of bad
design and super seeeeecret tricks that you can learn at a university or on
the internet. And history keeps showing the problems of big black boxes with
the words 'trust us' written on the outside. All we need is the layout and a
map of the bitstream, and open developers will do most of the work for these
companies.

As for spending their resources 'driving prices down', that is quite relative.
And if you look at the pricing model of their software, it is clearly not their
goal to make that reasonable (at least for Xilinx). And every version of the
software stretches to provide arbitrary bullet points on the back of a box
that either mean nothing, were tested in suspicious conditions, or conflate
multiple optimized tests together to say 'we are better than everyone at
everything'. How many times will that be said by all competitors at the same
time, and how many times will it be believed? Anyways, with all those
features, somehow ISE is still unstable on Windows.

~~~
gedrap
>>> which they wouldn't have to do alone if it was open

Well, that's an assumption. The user base is very small, and the potential
contributor base is even smaller. So that would be a big bet, hoping that
someone would contribute enough to make it worthwhile.

However, your statement is true when it comes to more mainstream technologies
like Ruby or JavaScript.

~~~
phkahler
But each company has its own tools. That means redundant development. If they
just worked on one set of open source tools, it should reduce development
costs. It also opens the door for researchers to try things directly in the
mainstream tools. That means not having to re-implement things published in
papers, but instead deciding whether or not to merge a feature from a branch.

------
kristoffer
A big reason why the fpga companies keep everything secret is to protect
themselves against patent lawsuits (exactly as gpu companies). So once again
we can see how the patent system is good for innovation....

~~~
diamondman
I am OP

Hmm, that is actually a good point. Might as well not draw any attention to
yourself in case some asshole has a patent on the concept of moving electrons
down copper or something insane.

------
scottchin
Having done my graduate studies in FPGA architecture and software, I can
definitely see where the author is coming from. In fact, it seems like the
entire hardware development industry has to face the issue of most tools being
closed-source. Although I don’t have a solution for the technology specific
phases (Place and Route, Bit Stream Generation, etc.), I am actually part of a
company that is trying to help solve this problem higher up in the tool chain.

One of the biggest barriers to developing EDA (electronic design automation)
software tools is knowing all the nuances of the various hardware-description
languages like VHDL, Verilog and System Verilog. We learned this the hard way
through our previous startup (later acquired) which built a hardware
verification tool.

I don’t want to be self-promoting here, but in case anyone in this thread is
interested, we are building a platform called Invio that lets you build your
own EDA tools. We try to solve more than just the language support side of
things and all of our platform’s inputs and outputs are open standards:
Python, TCL, Verilog, SystemVerilog, VHDL, etc. You can look at my profile to
find more info, or google “Invio”.

~~~
jjoonathan
Do you have an offering that would allow me to code up a
SystemVerilog->Verilog translation script on a hobbyist budget? I'm stuck
coding for my SP605 in Verilog (or VHDL) but I would _really_ like to be able
to use structs and interfaces from SystemVerilog.

Coming from software dev it's insane: The $500 devkit doesn't support structs
and I would have to pay $1300 to get one that does. Meanwhile C has had
structs for 40 years... :(

~~~
elovlie
I've had similar needs but couldn't find an existing tool, nor a parser on
which to build. There are some open source parsers [0], but they don't seem to
do preprocessing and hence lose a lot of context.

So I made a parser that might work for your use-case given some work:

[https://github.com/svstuff/systemverilog](https://github.com/svstuff/systemverilog)

It works for some fairly big codebases, so I know it's not completely broken.
I'm not very proud of the scala code, it's quite ugly in places. But at least
there are some tests :p

[0]: this also seems active and worth checking out:
[https://github.com/gburdell/parser](https://github.com/gburdell/parser)

EDIT: added link to other parser.

~~~
jjoonathan
Thanks, looks promising! My needs are pretty minimal at the moment but it will
be both refreshing and educational to give this a go.

Duplicating all the DRAM wires at each and every level of abstraction was just
hideous.

------
pslam
I've been wondering recently how viable it would be to implement an FPGA-
within-an-FPGA:

    
    
      * Create a model of a simple, open FPGA.
      * Create tools to support the open FPGA.
      * Synthesize the FPGA for an existing proprietary FPGA.
      * Install bitstream in SPI NOR.
      * Ignore the proprietary FPGA from now on - work with the open FPGA that's inside it.
    

Yes, it would be absurdly inefficient. However, you can buy pretty huge FPGAs
for very little these days. If the ratio of host:target LUTs isn't too bad,
and you can find constructs which synthesize efficiently, it might yield
something usable.

Perhaps it could be a way of bootstrapping an "open" FPGA effort.
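
The viability really does hinge on that host:target LUT ratio. A back-of-envelope sketch (both numbers below are pure assumptions for illustration; emulating one "open" LUT plus its configurable routing might plausibly cost tens of host LUTs):

```python
# Assumed figures, not measurements from any real device.
host_luts = 100_000           # LUTs in a mid-range commercial FPGA
overhead_per_target_lut = 50  # host LUTs consumed per emulated open LUT

target_luts = host_luts // overhead_per_target_lut
print(target_luts)  # 2000 usable "open" LUTs under these assumptions
```

2000 LUTs would be enough for small soft cores and glue logic, so even a heavily inefficient emulation could be a workable bootstrap target.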

~~~
diamondman
Amusingly, the Xilinx license states that you are not allowed to use the
bitstreams generated by their software in any device that they did not
manufacture. This means if you implemented a Xilinx FPGA in an Altera FPGA, it
would be against the agreement to program the inner Xilinx FPGA with the
bitstream. The more you know!

------
zwieback
The JTAG mess resonates with me. I probably have 10 different debugger dongles
now and the higher end ones from GreenHills or WindRiver are pricey.

As to open FPGA tools, I think the main problems are:

- there's a ton of competitive advantage in the algorithms to generate the
bitstream, so Altera and Xilinx have no good reason to give that up until a
viable competitor emerges

- FPGAs are usually in small to mid size designs, so it's hard to scale up to
the point where just selling the chips makes enough money

~~~
diamondman
I am sorry you suffer from the dongle collection woes too. The only reasons I
can see to buy expensive dongles are the ones that have a big blob of RAM on
them for quickly reading debug output from the chip without slowing it down,
and the ones that can adapt to highly exotic voltages and pinouts
automatically. But usually the software for these only works on Windows, has
secret drivers, and is NOT worth two thousand dollars.

Xilinx and Altera would not have to give up their custom algorithms. It is
like building GCC instead of using Intel's compiler. We just want to know what
the chip's instructions are; we will find a way to make it optimal ourselves.

~~~
snops
In the microcontroller world, expensive debug dongles are essentially hardware
keys for the software included. Lauterbach's[1] Trace32, for instance, doesn't
even have software keys; the license for their very extensive debugging
software is stored in the dongle itself.

Another piece of software that isn't often mentioned in these discussions is
the driver for the embedded flash memory inside parts. USB has quite poor
latency (1ms) and flash peripherals require quite a few read-modify-write
operations, which soon adds up. More expensive JTAG dongles can run these
operations locally on the dongle, with USB just for bulk data transfers, or
even download and run code on the target itself for even lower latency,
greatly increasing download speed. Since this software needs to be tested for
each target, requiring a physical chip to be bought, this gets very expensive
to develop. Even a "cheap" dongle like the Segger J-Link has an incredibly
long list of targets[2], and I suspect a substantial amount of the purchase
price is in engineering and testing flash drivers for those.

ARM seem to be trying to solve some of these problems in the ARM world with
CMSIS-DAP[3], an open standard for the USB protocol of JTAG/SWD dongles. It
uses USB HID, so no drivers should be needed on any platform, and they have
even created an Apache-licensed implementation[4], which Freescale are now
using in their FRDM boards.

[1][http://www.lauterbach.com/](http://www.lauterbach.com/)
[2][https://www.segger.com/jlink_supported_devices.html](https://www.segger.com/jlink_supported_devices.html)
[3][http://www.keil.com/support/man/docs/dapdebug/dapdebug_intro...](http://www.keil.com/support/man/docs/dapdebug/dapdebug_introduction.htm)
[4][https://github.com/ARMmbed/CMSIS-DAP](https://github.com/ARMmbed/CMSIS-DAP)

~~~
diamondman
Very well written.

I had no idea some companies went so far as directly making their debugger the
software key. That is intense but makes a lot of sense.

In the case of the Xilinx Platform Cable I wrote firmware for, they have a
CPLD that does the JTAG writing of data, and a Cypress FX2LP handling the USB
side. They use a quad-buffered USB endpoint to let the PC load as much crap as
possible into USB, and in my experience the buffer fills up fast and the JTAG
work is what takes the time. Of course this is only because multiple 'pages'
were preloaded; otherwise the delay to request more pages would be insane.

Are you talking about the flash driver for loading external SPI flash? If so,
you likely know this, but you have to load the FPGA with a program that loads
the flash with the program you want the FPGA to load on reboot. Each of these
programs has to be compiled and tested per chip. While looking through
OpenOCD I found they had drivers for the actual flash, and I am not sure if
this is required for the FPGA flash bootstrapper I just described.

I will look over these links. I recently started messing with ARM chips and
they are programmed very differently than FPGAs so I have some stuff to learn
on that.

~~~
snops
>Are you talking about the flash driver for loading external SPI flash?

I was slipping into talking about ARM microcontrollers there, which have
internal flash. Reads from it are memory mapped, but writes generally require
feeding peripheral registers with commands, hence the latency sensitive read-
modify-write. Some ARM microcontrollers do have more direct methods though,
some of Freescale's Kinetis range have a feature called EzPort, where you hold
a pin low on boot and it pretends to be SPI flash instead, and tiny 8 bit
micros like the AVR all have something similar as full JTAG would be too big
for them.

>If so you likely know this but you have to load the FPGA with a program to
load the flash with the program you want the FPGA to load on reboot. Each of
these programs have to be compiled and tested per chip.

Yeah, this is what I meant by having code run on the "target", this being the
microcontroller you are programming. You are right in thinking this means the
JTAG dongle no longer needs to know how to program flash itself, but it still
needs the code to load in and instructions on how to do it (setting up clocks,
etc.), which are also very platform specific. You could do the same for
external flash, but it's often easier just to connect to it directly and
bypass the microcontroller.

Feel free to PM me if you have any more questions.

------
UncleOxidant
Xilinx's tools are horribly buggy. And they don't want to get bug reports from
you anymore unless you're a top-tier account. So their tools are a shit-show
and they don't want to hear about how they can be made better unless you're
already buying a lot of their parts every quarter. I don't have any experience
with Altera's tools to be able to comment, but I don't hear good things from
their users either.

The only way around this is to create a completely open source FPGA
architecture. Most of the basic patents have expired so this should be doable
now (it wasn't doable 10 years ago because too many of the basic patents were
still in effect). An open FPGA architecture is the only way we're going to get
open FPGA tools. You'd think that if such an architecture were created that
several semiconductor companies could then produce parts.

~~~
scottchin
One of the major challenges in creating a _competitive_ FPGA architecture is
tuning the architecture. Commercial vendors do this tuning through direct
insight into their customers' designs. And since the vendors have a large
portfolio of customer designs, they have a lot of data points to perform this
tuning. That's one of the many reasons why we haven't seen a successful FPGA
startup in a long time (decades, probably). It is kind of a chicken-and-egg
problem.

------
goodcanadian
I'll offer a guess as to why the FPGA manufacturers are hostile to open tools.
Open tools reveal the internal details of their FPGA hardware which is their
bread and butter. I think it is probably the wrong approach, but that sort of
thinking is common around intellectual "property."

~~~
diamondman
I am OP

You are correct that some of the information required to build a compiler for
these chips is 'internal' detail. However, they already have strict patents on
most of it, and knowing the configuration bits of a chip and the delay of
various paths is not going to give away their fabrication technique. They
could potentially release just enough details to satisfy compiler writers
instead of making us rip every detail out of it ourselves.

~~~
bravo22
Oftentimes there are clever optimizations that have to be implemented to work
around internal limitations that they don't publish, so as not to give their
competitors marketing advantages.

What we should be pushing for are cross-platform tools. Being open source
isn't something that I necessarily would care about as an EE in this
particular case. It doesn't get me anything I don't get with vendor tools. The
BIGGEST thing in FPGA/ASIC design is certainty. Errors in the tool cost me
time and money.

You'll find it difficult to convince anyone to use a tool that isn't supported
by the vendor, because errors and bugs, in the tool or the tool data, won't
get resolved quickly.

Most of the money in ASIC/FPGA is spent on "verification" either as pre-
build/pre-tested cores or as tools that do verification, such as formal logic
tools.

~~~
diamondman
Hmm, that is an angle I did not think about. Documenting hardware limitations
for compilers gives competitors leverage to say theirs is better because it
does not suffer from X.

I understand that open source is not particularly important to you, but I am a
bit more skeptical about the verifiability of a product that is all secret
sauce and promises than I am about something with open, checkable code and
test suites. Open source software very rarely tries to hide its flaws to
prevent a PR issue and then lazily fix them later because they are 'low
priority'; instead they are fixed by whoever can, verified, etc. There are
always counterexamples, but I think the verifiability of the tool is in the
same world and of similar importance to the verifiability of the output.

You do make a point about the catastrophic cost of a screw-up when casting an
ASIC, though.

~~~
bravo22
FPGA and ASIC are interchangeable for the purposes of verification. There are
two types of verification: A) does the post-synthesis gate-level netlist match
my RTL, and B) does my RTL do what I want it to do, i.e. does it match the
spec?

For B you have a whole host of third party IP, verification libraries,
assertions, etc. that you can use.

For A there are formal verification tools. They mathematically prove the
netlist matches the RTL. There is no need for that tool or anything in the
chain to be open.

Synthesis is complex and an optimized synthesis is very important. Timing
closure is where a lot of this stuff comes to the forefront, and that's the
part that vendors won't release. That's their secret of what sucks in their
chip, or what workarounds they have to use. You'll pry it from their cold,
dead hands.

As an engineer I don't care about their secrets. I care about making sure that
I don't have to chase down a synthesis bug and that their compiler gives me
the most optimized, fastest design. Synplify used to be a third party product
that did FPGA synthesis (Synopsys bought them). They discontinued it, even
though they had full specs/details from Xilinx/Altera. The main reason is that
Xilinx/Altera tools are excellent. They know their chips better than anyone,
and for marketing purposes it is in their interest to give you the tool that
produces the fastest or smallest design. Otherwise you would switch to their
competitor.

I love the idea of what OP is trying to do but it is a solution in search of a
problem.

There is a bigger problem that an open source tool could solve and that is
Verilog simulation. Currently we have Icarus Verilog but someone should
improve it and add SystemVerilog support. Simulation is much easier to solve,
since there is a spec to design against, and has many more users. There aren't
good, inexpensive simulation tools. Simulation is as important as compilers.
Imagine a world where gcc didn't exist and you had to pay to get a good
compiler.

People will always stick to the big vendor for synthesis. I can't imagine a
day that they wouldn't.

~~~
diamondman
OP here

You are right that my particular project is not the biggest piece, but it is
the part that pisses me off. I can deal with Xilinx's crappy compiler if I can
program my chips with ease..... or maybe I should say UNTIL I can program my
chips with ease. Then my focus will change hehe.

Icarus works, but it has been maintained by one 'eccentric' guy for quite a
while and the code base is semi-unapproachable. I believe this is why the
Yosys guys started from scratch.

You are also right that place and route, as well as verification of the
LAYOUT in the chip, are super big problems. It is in fact what my friends
toying with the compiler side are dreading, because we have no idea of the
delays of individual traces in the chip.

I disagree that people will stick with the vendors' tools. Open compilers
dominate most of the Intel CPU market except on Windows, since Visual Studio
is the only thing that deals with all the quirks reasonably well. But the
Windows case is more a lack of interest.

ARM compilers are more interesting to me because most people use Keil. Keil...
works. But it feels crazy retro paying for a compiler. I believe the only
reason Keil is used for most embedded ARM projects is that there simply are
not enough people with compiler knowledge using ARM regularly yet to pour
enough work into a better alternative.

But it will come.

And one day we will have synthesis tools for FPGAs that are comparable to the
big guys or better.

~~~
bravo22
I fully support the spirit of what you're doing btw. I hope you don't take my
comments as somehow dumping on your work ;).

What is "crappy" about Xilinx tools aside from the UI? I'm genuinely curious.

I don't think the comparison between synthesis tools and compilers holds. They
are different beasts. I think the compiler equivalent is sim tools, which
should be open and free.

Everyone that I've talked to who uses Keil uses it because of SUPPORT. If they
need something or there is a bug, they know it'll get fixed. That's the ONLY
reason anyone has EVER cited to me. It is not true that the lack of
alternatives is due to a lack of knowledge; there just isn't a big enough
market for it. ARM itself also makes all the patches for the GCC tools. I use
the gcc toolchain, as do many people.

~~~
coryrc
Xilinx iMPACT (the FPGA programmer) bugs:

On one version, it can load the project it saved, but then crashes when you go
to program. So you have to scan and reload all the data files every time you
start it.

A later version failed to program the chip at the last stage of the process. I
diagnosed that over phone tag, where the user was non-technical and on a
machine not connected to the Internet. Exact same chip as above.

Another, later version has different iMPACT bugs than the first version above,
but right now I haven't been able to get it to work on any Win 8.1 machine, so
I don't remember which bugs it has.

~~~
diamondman
You have to swap out a certain DLL on post-Windows-8 systems in order to get
ISE to behave and not crash randomly. YAY! Xilinx said they didn't care, but
somewhere on their support forum there is some kind soul who said which DLL to
rename (they ship two versions of the DLL in the install anyway, but they did
not use the Windows 8 one by default and refuse to patch).

~~~
copascetic
This is entertaining. I've used Xilinx's tools for a while, almost exclusively
on Linux (CentOS usually). I thought that renaming/symlinking libraries to get
things to work was something only Linux users needed to do, and I figured this
was just because Xilinx tests more on Windows (makes sense). If these kinds of
problems also affect the Windows versions, then, I don't even know what to
say. At some point you just can't make excuses for this stuff anymore.

------
astrodust
The tools for FPGAs are horribly clunky and extremely difficult to script
since most are Windows GUI based.

Glad to see people working to rectify this.

~~~
kornholi
Not just FPGA tools; pretty much the whole EE industry.

A few days ago I had trouble saving results from a semiconductor analyzer. The
problem? The path had spaces in it. If you have the time to display a warning
that it doesn't support spaces (on Windows!), you should be able to fix the
issue. Just imagine how bloated their codebase must be that they can't fix
issues like this easily.

Recently I was trying to use ModelSim (an HDL simulation tool) on a Linux
machine, which unfortunately was not running a 10-year-old distribution. It
was failing with obscure Tcl/Tk errors :). I should have known better than to
waste my time. I reinstalled it on a Windows machine, and then it was having
problems connecting to our license server, so I had to use a crack from a
Chinese BBS (ugh).

I really want work to be done in this area, but one of the giants will either
sue you into oblivion or make an offer you can't refuse. Either way nothing
productive will get done... :(

~~~
diamondman
I am OP

astrodust: That is exactly the problem :). It turns out ISE (Xilinx's tool)
has several command line tools that are run by the GUI in order, but the
arguments are not documented. I was able to make a Makefile that did the ISE
compilation so I could edit in emacs, but that was way too much crap to be
real. The new generation of Xilinx's tools, Vivado, is written in Java and
does not output any temporary files, so during a compile it can fill over 32
gigs of RAM and crash. Xilinx suggests using ISE for larger chips until
maximum RAM sizes get higher.

kornholi: I do not want to insult hardware engineers, since they are very
intelligent; they just often do not respect the same things software engineers
do. I went to an Atmel event where they were demonstrating how to use their
new ultra-low-power chips. It turns out that most of the people who went were
software people driving down from SF. They started off by reminding us that
they have a new IDE based on the powerful and versatile Visual Studio.
Everyone in the audience groaned. Almost everyone in the audience asked for
assistance to find where the Makefile was. The following exchange happened
over and over.

Host: Oh, but you see, with AVR Studio, you do not have to _worry_ about the
Makefile, it does it for you.

Guest: Yeah, but I do not want to use it.

Host: But... why would you not want to use the tool we provide? It works.

It seems to me that most of the hardware industry will take whatever tools
they can get, even if they have to have the company pay several thousand
dollars. The ones I have met forget that they can write tools. When tools need
to be created they just pick whoever is the most comfortable in Java, or
outsource it to anyone.

This all sounds very critical of professional hardware engineers, but it is
not exclusively their fault. It is also a culture thing of the companies and
the industry. Everything feels to me like how software engineering was (as I
am told) in the 70s-80s, where everyone is super paranoid about secrets and
rushing to be first. There is reasonable concern for being first: if you build
the first flash RAM chip, 40 years in the future, when we have moved from
flash to crystallized light or something, everyone who wants to compete will
be using the same pinouts you picked for your first product so that boards
never have to be redesigned.

As for ISE and Vivado, I hear they suffer from design by committee, where
every feature has to be checked off as working before it will ship so there is
a new bullet point. Hell, they have a C-to-FPGA compiler which you would
suspect could do some crazy things and took thousands of engineering hours to
make work. But instead it just implements a CPU in the FPGA with slightly
accelerated operations for your setup, completely missing the point of FPGAs.

~~~
elwin
> a C to fpga compiler which you would suspect could do some crazy things and
> took thousands of engineering hours to make work. But instead it just
> implements a CPU in the FPGA

Is that seriously how Vivado HLS works? Now I'm glad I decided not to buy it.

~~~
MootWoop
How well HLS does depends highly on the source code (this is true in general,
not just for Vivado HLS). If your code is a simple loop over an array and you
add a vendor-specific #pragma directive (such as #pragma unroll), the tool
will unroll your loop and extract the parallelism from there. This actually
works quite well in practice for regular DSP code (like FIR and FFT) and
floating point. Anything else is another story, though.

The thing is that unless you're writing your code as the tool expects it, with
the proper pragmas, etc., there's no way it can be transformed into fast
hardware. A way around that is for vendors to ship "customizable IP", kind of
like Altera's Megafunctions... so much for portability and high-level.
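
To make the unrolling point concrete, here is the transformation sketched in Python rather than C: the rolled loop is one multiply-accumulate per step, while the unrolled form splits the work into independent accumulator chains that hardware could evaluate in parallel. The coefficients and samples are made-up illustration values.

```python
coeffs  = [1, 2, 3, 4]
samples = [5, 6, 7, 8]

# Rolled form: a sequential chain, one multiply-accumulate at a time.
acc = 0
for c, x in zip(coeffs, samples):
    acc += c * x

# Unrolled by 2: two independent accumulator chains, summed at the end.
# In hardware, each chain maps to its own multiplier/adder instances.
acc0 = coeffs[0] * samples[0] + coeffs[2] * samples[2]
acc1 = coeffs[1] * samples[1] + coeffs[3] * samples[3]

print(acc, acc0 + acc1)  # both compute the same dot product, 70
```

The rewrite only works because the partial sums are independent; that independence is exactly what the pragma asks the tool to find.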

I'm not sure which tool OP is referring to though, I remember Altera had a C2H
tool that they discontinued in favor of their OpenCL SDK.

------
aswanson
Posts like this make me happy I didn't get that "dream" job from Altera back
in the day and got pulled into software. Closed tools, expensive proprietary
equipment to get started, hostage to large vendors. I dodged a career bullet.

~~~
diamondman
Those jobs also pay you to sell your ideas, which you are never allowed to use
again unless you continue working for the company you sold them to. And if you
have the gall to use your own idea: lawyers!

~~~
aswanson
Wow. The beatings just continue. I was headed straight into the worst
engineering subfield for starting a company, economically as well as legally.

------
scottchin
First, just wanted to say that I find this topic really interesting. I'm
trying to understand who the target user is for open-source FPGA tools.

For many hardware companies, the risk of using an unproven tool is too severe.
Unlike software, you can't just push out a patch if there is a bug. I mean, in
theory, I guess you can since FPGAs are reconfigurable, but it is probably not
very straightforward from a deployment point of view (I actually don't know so
feel free to correct me).

So, since you cannot push out fixes easily, it's quite scary for the
engineering teams to work with unproven tools. For the consumer electronic
space, product life-cycles are quite short (mobile phones get updated every
year!). So you definitely don't want to risk cutting into your product's time-
to-market due to bugs in the FPGA tools.

Also, the bigger the company, the more likely that they will have high-
priority direct-support from the FPGA Vendors. Whereas, it's probably harder
to get support for open-source tools. So I can't see consumer electronic
companies choosing open-source tools.

So again, I'm not against this work at all! Just trying to understand the
target audience.

~~~
TD-Linux
>(I actually don't know so feel free to correct me).

Okay, I will :) Many FPGAs load their program from a SPI chip on board, which
can't be reprogrammed. However, it's increasingly common for another
microcontroller or SoC to be on the same board. In this case, it's cheaper and
more convenient to store the FPGA bitstream on that controller's flash and
send it over SPI to the FPGA on powerup, which makes upgrades much easier,
too.

On the Xilinx Zynq, an SoC combining an FPGA with two Cortex-A9 cores running
Linux, you can simply cat your bitstream to a character device to reconfigure
the FPGA.

------
catern
>I want to pave the way for FPGAs to be usable in everything from
laptop/desktop computers to phones (if the static power consumption gets
better). I have some very interesting ideas of what an average user could do
with in system FPGAs that can be reconfigured at runtime.

What are these ideas?

~~~
dominicgs
I can't speak for OP, but my interest is in signal processing. DSP and FPGAs
are a great match and with the boom in software defined radio, having
reconfigurable DSP capabilities is extremely useful.

For example, if 802.11b devices had been built using FPGAs, an upgrade to
802.11g could have been an OTA update. Or when a new Bluetooth variant emerges
(e.g. BLE), support could be added to computers and phones overnight.

For these applications it's useful to think of an FPGA as a chip that you can
patch, upgrade or reconfigure for new applications. However, looking at it
from another angle, we can think of it as software that isn't limited by the
CPU architecture.

This second category opens up the possibility of crypto algorithms that don't
suffer from the timing attacks that they do on the CPU. Or a video codec that
can be designed without having to worry about which extensions the CPU
supports.

I'm sure there are much better examples and some of these are bad ideas that
would work better on a CPU, but hopefully that gives you some ideas.

~~~
bravo22
SDR uses orders of magnitude more power than an ASIC implementation, which is
why no one uses it except in base stations -- such as cell towers.

FPGA-based crypto designs would have the same timing attacks as on a CPU. They
can be worked around the same way they would be for a CPU.
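The standard workaround in both cases is to make the operation's timing independent of secret data. A minimal sketch of a constant-time comparison (as opposed to an early-exit memcmp, whose running time leaks where the first mismatch is):

```c
#include <stddef.h>
#include <stdint.h>

/* Compare two equal-length buffers without branching on their contents.
 * The loop always runs to completion and only accumulates differences,
 * so timing does not reveal the index of the first mismatching byte.
 * Returns 0 if the buffers are equal, nonzero otherwise. */
int ct_compare(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff;
}
```

The same idea carries over to hardware: a fixed-latency datapath in an FPGA or ASIC is the circuit-level equivalent of this branch-free loop.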

~~~
dominicgs
Sure, ASICs are always going to beat FPGAs and CPUs on power consumption for
the same task, but there are also applications where the ability to update the
system far outweighs the power requirements.

Interestingly, SDR is being used for the UK's small scale digital radio
station trials this year. Off the shelf SDR hardware appears to be performing
well enough at a lower cost than bespoke DAB hardware. Again, this fits in
with your base station category.

You're right about timing attacks, I have no idea what I was thinking there.
Would they be better for power analysis attacks? I guess if we're concerned
about that then we'll end up back at ASICs again.

~~~
bravo22
FPGAs _ARE_ ASICs :) Just a special case of ASICs. The fabrication and
processes are the same. There is no advantage of one over the other.

FPGAs are more vulnerable in some ways than ASICs because someone could steal
your design and duplicate it much much more easily than an ASIC.

A lot of crypto chips have built in protection against many side-channel
attacks.

------
skwuent
Would somebody knowledgeable care to comment on how tools like Bluespec
SystemVerilog play into this discussion if at all? Disclaimer: I know little
enough about EE that this question might be nonsensical.

~~~
vonmoltke
SystemVerilog is a Hardware Description Language (HDL), which is the first
step in the process. HDL synthesis sits just above the part of the process
where the major tooling issues referenced exist, since synthesis generally
isn't specific to a particular chip vendor and, as noted, is also a necessary
step before laying out an ASIC.

~~~
diamondman
Several friends complain about the arcane nature of Verilog and VHDL and like
the idea of better languages written by actual language designers. But since
FPGA compilation first converts the HDL into a netlist, which is then used for
all other operations, once a toolchain exists we can just replace the Verilog
compiler with a VHDL or Bluespec compiler and everything will still work.

------
zackmorris
Open FPGAs would be disruptive to the status quo, that's why they are closed.
Right now many technologies like video cards are wandering in the desert,
adding on more and more opaque layers rather than providing general purpose
computing. So that's set simulation back at least a decade or two and hindered
progress in fields like AI.

I just want to address one of the main complaints about FPGAs - that they
require more chip area to route logic. This is not as big of a deal as it
sounds because today 3/4 of a chip's area or more is often dedicated to
mundane things like cache.

Also since there has been little progress in breaking the 3 GHz barrier since
the early 2000s, everything is moving towards higher transistor counts. So the
added cost of layout goes down over time and after about 3 years is on par
with the previous generation. So we have a chicken and egg problem where true
general purpose parallel computing can't get off the ground because it's
perceived as too expensive, but it's too expensive because it hasn't gotten
off the ground yet.

Breaking that chicken and egg cycle by opening FPGAs could trigger an
overnight adoption of them, even greater than when Bitcoin triggered renewed
interest in ASICs.

------
mekarpeles
About @diamondman's Adapt Framework (github.com/diamondman/adapt)

Adapt is an open, modular framework which offers a streamlined approach for
JTAG controllers to speak to target devices (currently CPLDs and soon FPGAs).

Adapt is built to be extensible and currently includes open/reversed drivers
for the following Controllers and target devices:

\- Digilent & Xilinx PC1 (JTAG controllers) & XC2C-256 (CPLD)*

*Note: Map files are required for CPLDs to avoid legal recourse

FEATURE ROADMAP:

1\. Support for Controllers: Currently Adapt only supports JTAG, but
@diamondman intends to support dbw & SPI. Patches for other serial protocols
are welcome.

2\. Support for Target Devices: The next milestone is to support Spartan-3 &
Spartan-6 FPGAs

3\. Extending Adapt / Contributing Drivers: Adapt can be extended to
additional controllers and target devices -- @diamondman will write a tutorial
on how this is done + what the limitations are, if there is enough interest.

Please express interest in driver support by replying to this comment thread.

~~~
voltagex_
So why aren't these targets for OpenOCD instead of a new system?

~~~
mekarpeles
@diamondman responded to this in another thread:
[http://www.reddit.com/r/programming/comments/338t25/the_lack...](http://www.reddit.com/r/programming/comments/338t25/the_lack_of_open_toling_for_fpgas/cqiz0gh)

"I actually want to hook OpenOCD into my Daemon to bring some sanity to how it
manages devices. Several of my IRC friends prefer using closed source tools to
OpenOCD, calling it 'Open Obsessive Compulsive Disorder', frustrated the
license is so strict (mine will be LGPL), and annoyed that everything has to
be exhaustively specified in TCL files for it to do ANYTHING."

The TL;DR is OpenOCD's driver model itself is fixed/limited/standardized to a
degree which prevented him from, in many cases, optimizing underlying
controllers (or getting them to work at all -- e.g. OpenJTag, I believe).
Also, see licensing disagreements above + unreasonable configuration
requirements outlined in @diamondman's answer above.

------
scottchin
Thinking about this topic some more, it seems hard to gain the critical-mass
of contributors for such an open-source tool.

Place-and-route, for example, requires a very specific set of knowledge in
both optimization (comp-sci) and hardware (electrical). Most of the people who
have these skill sets are probably already employed by the major FPGA vendors
and under NDA not to contribute to such an open-source tool. I've seen this
first-hand, having been in the academic space of FPGA research. New master's
and PhD grads typically go straight to Xilinx or Altera. And without really,
really good place-and-route, you won't have a competitive tool.

I'm not sure how you would solve this problem.

~~~
diamondman
OP here

This is a severe problem that saddens me greatly. For now I am just writing
tools to make JTAG/SPI/etc. loading of chips ubiquitous for the developer
writing software/netlists for them. I do not know enough about the
place-and-route math and routines, nor have I seen enough details of real-life
chips to know the types of real challenges faced. But I will cross that bridge
when I get there (maybe it will be someone else :)).

I personally have a huge issue letting anything fundamental I figure out be
marked down as the property of a company to have and hold for 20+ years. But I
understand the financial rewards and interesting problems are more than enough
for many people, so I cannot hold it against a student with a PhD's worth of
debt to need cash. Sigh.

Hopefully if a good enough extensible base exists, people will add pieces on
(whatever they can get away with) over a long time. That is all I can really
hope for.

------
elevensies
Delete the "A" from the first link to see the referenced comment thread:
[https://news.ycombinator.com/item?id=9388751](https://news.ycombinator.com/item?id=9388751)

~~~
cgag
fixed thanks

------
boxerab
I have done a lot of GPU programming, and am interested in looking into FPGAs
for low-power solutions. Will this situation improve with the release of
OpenCL support?

~~~
diamondman
OpenCL support for FPGAs is super limited and only done through
vendor-specific tools. The point of these tools is more to LET you write your
FPGA program (which runs outside of a desktop computer) in OpenCL than it is
to make an FPGA speed up your OpenCL program running on a desktop. So it is
not like 'I will pop in a new FPGA card so I can play better games.' Though
that is what I want to happen in the next... let's say 5-10 years.

~~~
DrHoppenheimer
You're confusing OpenCL with high level synthesis (HLS). OpenCL is explicitly
a platform for software developers. It targets accelerator boards attached to
a host (via PCIE), just like GPGPU.

[https://www.altera.com/solutions/partners/opencl-board-
partn...](https://www.altera.com/solutions/partners/opencl-board-
partners.html)

------
progman
Is it actually a good idea to reverse engineer the internal structure of
FPGAs, since the manufacturers likely won't ever disclose their proprietary
knowledge?

The question is, do we really need FPGAs? What about emulating FPGAs on GPUs?
Or using massively parallel chips like the Parallella or GreenArrays GA144
(Forth) to simulate logic gates? Such architectures are usually much more open
than FPGAs.

~~~
deutronium
I can't see how you could possibly create an FPGA from a GPU, whereas you
could/can create a GPU from an FPGA; for example, Altera have OpenCL running
on FPGAs, and people have also implemented GPUs using an FPGA.

~~~
diamondman
Note that Altera's OpenCL compiler does not make your OpenCL code run faster.
It is intended so you can write your FPGA programs (which usually run outside
of your computer) in OpenCL. Most of the imagined benefits kind of vanish, at
least to me, when you realize that this is just a tool so that if all you can
hire is OpenCL developers, you can still get the job done.

------
alexweber
I'm still reading above the level I currently understand when talking about
FPGAs and chips and such… but how does Bunnie's Novena[1] factor into all
this?

[1]
[https://en.wikipedia.org/wiki/Novena_(computing_platform)](https://en.wikipedia.org/wiki/Novena_\(computing_platform\))

------
anon_agin
An interesting talk on "How to Build an Open Source FPGA toolchain" was
presented at the Open Hardware Workshop a few years back:
[http://www.ohwr.org/attachments/821/sebastien.pdf](http://www.ohwr.org/attachments/821/sebastien.pdf)

------
wcunning
I'm not the best FPGA engineer out there, but I am relatively educated. I
agree that most of the tools suck, but I'm quite impressed with Vivado. Xilinx
has really upped their game there. Does anyone have any experience with it,
and if so, do you agree?

~~~
mng2
Vivado is good and bad. Vivado's schematic and device views are much better
than the earlier FPGA Editor et al. The interface for adding Chipscope debug
signals is quite nice.

However, there are lingering bugs. The hardware manager crashes a lot if you
don't handle your debug nets carefully. My colleague had an issue with
constraint priority. I _think_ I've encountered an issue with VHDL synthesis
being incorrectly case-sensitive, but I never bothered to make certain.

Overall it's getting there. The Zynq stuff is neat (once you understand what's
going on) but I have some misgivings about the push toward IP cores.

~~~
wcunning
Yeah, I'm not a huge fan of IP cores, but they have their place. The biggest
thing I'm a fan of is the push toward standard TCL methods for _everything_
Vivado does or touches. That's the difference between UCF and XDC. I'm also
under the impression, but don't have the experience to say for sure, that
there are supposed to be intermediate files for every design step and that the
command-line interface is better documented.

I am very impressed by the Zynq stuff. I'm particularly looking forward to
seeing people using it for accelerators and such.

