
Why FPGA manufacturers keep their bitstream formats secret - throwaway000002
http://www.megacz.com/thoughts/bitstream.secrecy.html
======
dkarchmer
It is not the bitstream per se that you want, but the full description of the
Device Database (defining every programmable switch in the FPGA).

Somebody asked why FPGA EDA tools are so large. Well, this is the number one
reason.

So, at the end of the day, the real reason why FPGA companies don't open
source their bitstream (and, as I said, the actual database) is simply that it
would be a major undertaking for them to document it in any way that would
actually make it possible for the community to use. An FPGA is NOT a
processor, so it is not as easy to document as an instruction set.

So, very hard to do, and not enough of a business justification to do so
(combined with old-school management that doesn't really understand the value
of open source). That is it.

BTW, it would actually be relatively doable to document the basic logic cell,
but the problem is that in today's modern FPGAs, the logic portion is a
relatively small portion (when considering complexity) compared to the very
complex I/O interfaces.

I think the best you can hope for (and what I believe both X and A are moving
towards) is a more flexible tool flow and heavy use of technologies like
Partial Reconfiguration, which should allow you to build lots of tools and
simply use the FPGA tools (mostly P&R and Timing Analysis) as "smaller" black
boxes, while allowing open source or third parties to build higher-level
system integration tools (which, IMO, is what is most needed today).

~~~
hornd
While I never worked directly with dkarchmer, he was a pretty well respected
director at Altera. I value his opinion here highly (not to mention it makes
sense).

~~~
gozo
Not sure if I should laugh or cry over that career change though...

~~~
dkarchmer
A little of both :-)

I was one of the biggest champions of opening our SW and worked hard to create
open APIs to give access to the device database and timing engine, even where
there was no real business justification. The limitation was always what we
could make usable without shipping an Altera engineer with the SW. A fair
amount is available but unfortunately undocumented. If you look closely
enough, some of the features in Quartus are written in plain Tcl, which you
can reverse engineer.

~~~
cottonseed
Hi dkarchmer, thanks for your comments. I'm the author of arachne-pnr, the
open-source pnr tool for the Lattice iCE40. One of my hopes is that by
creating compelling open-source tools, it might be possible to change the
value proposition for FPGA vendors to get involved in opening up the chip
internals, although perhaps that's wildly optimistic. One of the hard parts is
to get a realistic foothold. I like to think we're making some progress with
the Icestorm project. I got a bug report from a user recently, and I quote,
"We would like to do what we can to help fix your tools because the workflow
is far superior." I know there's a world of difference between a big flagship
FPGA and the iCE40.

I still like the analogy with CPUs. If there were no gcc or LLVM and the
vendors all had their own compilers, there would be little incentive to open
up the ISA. In a world with gcc and LLVM, you're dead in the water if there
isn't a port.

I was a little surprised to hear a big part of the job is documentation. How
do the chip design teams communicate with the tool development teams? Or is
there a problem with releasing internal documentation?

------
jacquesm
So, who will set up a transparent FPGA manufacturing company with old tech
that will then prove all these companies wrong by capturing the market?

FPGA manufacturers keep their bitstreams secret because there is no real
demand for openness. The customers are happy with the product; they are
_applying_ the chips, and they work. Outside of the tinkerer world there
exists a vast realm of industry where access to the bitstream format would be
a nice-to-have but not a must. If it were, then a manufacturer would surely
take the lead in this and capture the market share of the others.

For tinkerers and open-source proponents this is of course a less-than-ideal
situation; we'd like to see that bitstream documented because then we could
build tools that operate on it and generate it. But for the vast majority of
customers (some exceptions do exist) this is not a big issue.

The most-heard complaint is that the toolchains are cumbersome, slow, and too
large, not that the bitstreams aren't open (funny though: those toolchains
would be fixable _if_ the bitstreams were open...).

~~~
nickpsecurity
It's not going to happen. I lost the article that clearly explains it, but let
me summarize. _Every company_ that tried to challenge FPGAs or the Big Two
went bankrupt or was acquired. The survivors are niche players who might be
acquired or go bankrupt at some point. Another one, Tabula, just announced
they'd be shutting down. This is an insanely tough market to get into and stay
in.

The article explained there were two reasons: I.P. and tooling. The short
version for I.P. is that there's all kinds of stuff integrated with FPGAs on
many nodes efficiently, and you'll always have less due to less cash. As far
as tooling, the vast majority of the Big Two's expenses are on R&D for EDA
tools like HLS and logic synthesis. That's the hard part that took them a
decade or more to get working well enough to be usable as an ASIC alternative
or easier to pick up for new HW engineers. You can't just duplicate FPGA HW:
you need EDA tools to map hardware designs efficiently and correctly to that
HW.

The I.P. problem is tolerable but the EDA problem isn't. There are academic
tools to draw on, and Mentor even has an FPGA tool that can be re-targeted.
Yet Xilinx and Altera give away most of their tooling free, with the best
stuff dirt cheap compared to six-digit EDA tools. A new contender must have an
FPGA, onboard I.P., and an EDA tool that's just as effective despite lacking
the years & hundreds of millions in R&D. Just ain't happening. So, the focus
has to be on niches Xilinx and Altera haven't conquered yet.

Now, I do have several business models I'm tweaking that might allow for a
new, OSS FPGA and toolchain with both non-profit and commercial strategies.
But you basically need big players, individually or in combination, willing to
lose millions each year to support R&D on it. There are also a few firms that
short-cut developments whose tools would help a lot. The effort could acquire
them. So, there are possibilities, but it will be a niche, unprofitable market
regardless.

~~~
throwaway000002
With respect to tooling, I believe the work on the Lattice iCE40 will be an
inflection point.

Consider: open tooling drives user device adoption, which in turn drives
tooling refinement. At some point, people working on the tooling are going to
have ideas that simply didn't happen, or didn't gain management support, on
the commercial side.

Now if this enables some advancement that suddenly makes iCE40 devices more
appealing in end products, then Xilinx and Altera are going to be on the
outside looking in. If, further, Lattice watches what's happening and develops
a hardware enhancement that accelerates/reduces power/etc. whatever
development has happened on the open tooling side, this will further entrench
their style of FPGA architecture.

For example, I am certain that the openness of Linux has essentially killed
the search for new "process models" on both the software and the hardware
side. (Think address space structure, and mmu design.)

However, if we are realistic, the web paradigm imposes very few architectural
requirements. Somebody, anybody, can re-architect the stack from the moment a
GET request hits a NIC to provide whatever the existing mass of cruft on x86
systems provides, probably at far less cost, with far more performance, and
far better efficiency.

The question then becomes, who are you doing it for? If it's for customers,
and you require them to use specialized tools, then they're always going to
whine about lock in. (There are ways to solve this problem, but this is
already getting rather long.) So the only hope is that you build an app that
leverages your infrastructure that no one can clone without a greater
investment in traditional tooling.

This is all just a long winded way of saying that I do believe that there is
something "out there" in having open FPGA tooling. The time is right. I see a
lot of future in SoCs with integrated FPGA fabric featuring open tooling.

Personally, I'd love to see something like Cortex-M7 core(s) + FPGA. Do your
general-purpose stuff on the microcontroller, and accelerate specific tasks
with dynamic reconfiguration of the fabric.

What is going to happen, however, is that Intel will release a massive,
overpriced Xeon with an inbuilt Altera FPGA, and only the likes of Google and
their ilk are going to be able to afford to do anything with it.

Here's hoping though. I have belief in the Chinese!

~~~
al2o3cr
"Consider: open tooling drives user device adoption"

Citation needed. Given that most of the device adoption that's already taken
place was with seriously-expensive closed-source tools that aren't exactly
paragons of UX quality, I think this is assuming a LOT.

My guess is that the quality of the tools (and I'm handwaving "open" into
"higher quality", which is not guaranteed) is a distant concern: device
capabilities, power envelope, $/10k, IP library, and many more things take
precedence.

~~~
nickpsecurity
"device capabilities, power envelope, $/10k, IP library, and many more take
precedence"

They do. There's been all kinds of open tooling and more open hardware. What
did most people buy and keep investing in? Intel, AMD, IBM, Microsoft, FPGA
Big Two, EDA Big Three, etc. Got the job done easily, reliably enough, and at
acceptable price/performance.

I call out the OSS crowd all the time on why they haven't adopted the GPL'd
Leon3 SPARC CPUs & open firmware if it means so much. It doesn't have X, it
costs Y, or too lazy to do Z. Always.

------
userbinator
Bitstream formats are probably the worst-kept secret in the industry. FPGAs
are inherently very regular in structure, so figuring out which bit goes where
is pretty trivial. Try to disclose what you find in public, however, and
they'll quickly send their lawyers after you...

From what I know, this is the only public effort that's remained up so far:

[http://www.clifford.at/icestorm/](http://www.clifford.at/icestorm/)
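
The regularity claim maps to a concrete reverse-engineering workflow: flip one
configuration choice in a design, rebuild, and diff the two bitstreams to
locate the controlling bit. A minimal sketch of the diff step (the example
bytes are made up, not a real bitstream):

```python
def diff_bits(a: bytes, b: bytes):
    """Return (byte_offset, bit_index) for every bit where a and b differ."""
    flips = []
    for offset, (x, y) in enumerate(zip(a, b)):
        delta = x ^ y                      # XOR marks the differing bits
        for bit in range(8):
            if delta & (1 << bit):
                flips.append((offset, bit))
    return flips

# Two toy "bitstreams" that differ in exactly one configuration bit:
base    = bytes([0x00, 0xA5, 0xFF, 0x10])
variant = bytes([0x00, 0xA5, 0xFD, 0x10])
print(diff_bits(base, variant))  # → [(2, 1)]
```

Repeating this over many one-change design pairs is roughly how projects like
Icestorm correlate bitstream bits with device features.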

~~~
rkangel
> FPGAs are inherently very regular in structure, so figuring out which bit
> goes where is pretty trivial.

That is ignoring all the complexity that makes modern FPGAs what they are:
three different variants of LUTs, on-board block RAMs, hard IP blocks, or even
a whole ARM subsystem. Even your quoted article says that the reverse
engineering was possible because "There are not many different kinds of tiles
or special function units".

The interfaces to all of these are complex and change regularly as they
release new variants.

You might, with the right team and after many, many months of effort, work out
most of what you need to program one of the simpler Altera/Xilinx devices. But
they're a moving target.

The best analogy I can think of is that every month ARM releases a processor
with some changes to the instruction set. Tracking that with an open-source
compiler, even if you DID have documentation, would be hard enough.

~~~
reisgabrieljoao
The whole ARM subsystem configuration (at least in Zynq-7000) has nothing to
do with the FPGA bitstream. It's all about configuring registers and can be
done in software (normally by a bootloader) or through JTAG.

~~~
planteen
The ARM peripherals need to be assigned to I/O pins which happens in the FPGA
(PL in Zynq terms). The ARM subsystem has bus connections to programmable
logic which happens entirely in the FPGA (e.g., you connect a 16550 UART to
the ARM via AXI bus, etc). I'd say Zynq is a far more complex bitstream than
just a vanilla FPGA.

~~~
reisgabrieljoao
>The ARM peripherals need to be assigned to I/O pins which happens in the FPGA
(PL in Zynq terms).

ARM "hard" peripherals can be assigned to MIO pins, which have nothing to do
with the PL, or to EMIO, which can be routed through the PL to the FPGA pins.
In both cases, the user must write to PS registers to configure internal muxes
and assign the peripheral to a given set of pins. In the latter case, you must
assign the ARM pin as an input in the PL and route it to a PL output, as in
any regular FPGA design.

>The ARM subsystem has bus connections to programmable logic which happens
entirely in the FPGA (e.g., you connect a 16550 UART to the ARM via AXI bus,
etc).

Yes, it has, but the interconnect subsystem is configured in software, not
through the bitstream.

> I'd say Zynq is a far more complex bitstream than just a vanilla FPGA.

Zynq's PL is a Xilinx 7 Series fabric; I wouldn't say the two are much
different in complexity. I'm not saying they're simple, though...

~~~
planteen
You're right. I was thinking those were PL features because I set them in
Vivado, but they are really just exported in the HDF/XML.

------
Qantourisc
I don't know if they realize this, but IMO all they accomplish is having fewer
people working with FPGAs. As a result, it's not a tool people will reach for
when it could be used.

~~~
coderdude
It's not the bitstream format that is stopping you; it's the comprehensive
tooling in this field, which is not open, that holds you back. Bitstream
formats are nothing. It's like not having access to GCC but complaining to
Intel about their microcode. There is so much more to the process of creating
something useful with an FPGA. A bitstream format isn't useful for much beyond
a single chip, in the same way that many CPUs in a family have different
microarchitectures.

The most useful tools in FPGA dev have little to do with the step of
translating the Verilog/VHDL into something the FPGA understands.

Just going to add this:

I had to install VMware (on OS X) and load an Ubuntu ISO just to play with
Xilinx's tooling. I dev on Windows, Ubuntu, and OS X, primarily Ubuntu. But
just a heads-up to anyone reading this who can positively affect this: if even
I am on a Mac, it's time to expand your offering.

~~~
sklogic
You need a synthesis tool - and there are open-source alternatives. You need a
cell library - a direct derivative of knowing the bitstream format. You need a
place-and-route tool - as soon as the format is known, such tools would appear
(as happened with the iCE40). You need timing analysis, probably the hardest
piece for the community to produce without the insider knowledge enjoyed by
the vendors.

All this stuff, including the libraries, can be pretty compact. There is no
justification whatsoever for the ISE and Vivado bloat. The essential tools
should be very compact. There are multiple copies of a MicroBlaze toolchain,
for example. WTF? I need exactly _none_. Zero. Nil. Stop bundling all the
crap, please! Put it in a separate package so everybody can happily ignore it.
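
To back the compactness claim with something concrete: the inner loop of a
placer fits on one page. Below is a toy greedy swap placer minimizing
half-perimeter wirelength; the cell and net names are invented for
illustration, and real P&R adds timing-driven costs, routing, and legality
checks on top of this skeleton.

```python
import random

def hpwl(placement, nets):
    # Half-perimeter wirelength: bounding box of each net's cells.
    total = 0
    for net in nets:
        xs = [placement[cell][0] for cell in net]
        ys = [placement[cell][1] for cell in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def place(cells, nets, grid=4, iters=500, seed=1):
    rng = random.Random(seed)
    sites = [(x, y) for x in range(grid) for y in range(grid)]
    placement = dict(zip(cells, rng.sample(sites, len(cells))))
    cost = hpwl(placement, nets)
    for _ in range(iters):
        a, b = rng.sample(cells, 2)       # try swapping two cells
        placement[a], placement[b] = placement[b], placement[a]
        new_cost = hpwl(placement, nets)
        if new_cost <= cost:
            cost = new_cost               # keep the improving swap
        else:
            placement[a], placement[b] = placement[b], placement[a]  # undo
    return placement, cost

cells = ["c0", "c1", "c2", "c3", "c4", "c5"]
nets = [["c0", "c1"], ["c1", "c2", "c3"], ["c3", "c4"], ["c4", "c5"]]
final, cost = place(cells, nets)
print(cost)
```

Greedy acceptance gets stuck in local minima; production placers use
simulated annealing or analytic methods, but the core loop is this small.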

~~~
nickpsecurity
"There is no justification whatsoever for the ISE and Vivado bloat. The
essential tools should be very compact. There are multiple copies of a
microblaze toolchain, for example. WTF? "

That they were trying to include or support most of their devices or use cases
in one distribution was my guess as to the reason for the bloat. Thanks for
confirming it. I'll add that they'd do even better writing it in a LISP- or
ML-like language with macros plus an efficient compiler. The code would be
readable, robust, efficient, and take no more space than necessary.

~~~
sklogic
They already wrote all of their GUI in nice and compact Tcl. Still such bloat.

Cadence is using Lisp extensively, and their distributions are also not very
small and are hard to maintain.

~~~
nickpsecurity
Didn't know about those, lol... We already covered that complex functionality
and everything-but-the-kitchen-sink bundling are the reason for much of the
bloat. With Cadence, it could be that plus legacy and/or coders that aren't
good enough (or deadlines). Who knows.

I just think they could be several times smaller if they only included
necessary functionality and added extras repo-style depending on one's device,
use case, and so on. BTW, is Lisp used for extensions over there or for the
whole application? And which LISP?

~~~
sklogic
They're using it for scripting, not for the core code, although there is a
_lot_ of Lisp code there.

[https://en.wikipedia.org/wiki/Cadence_SKILL](https://en.wikipedia.org/wiki/Cadence_SKILL)

And, yes, it's about time for the monolithic distros to die. People are
already spoiled by package managers; there is no justification for 10GB
downloads any more.

~~~
nickpsecurity
Oh, OK. That makes sense.

------
CamperBob2

       "Customers will damage their FPGAs with invalid 
       bitstreams, and blame us for selling unreliable 
       products."
    
       This used to be true when FPGAs had internal tristate 
       resources (so you could drive the same net from 
       competing sources); this is no longer the case for 
       modern devices. 
    

I agree that this is a bogus argument -- nobody would blame the chip vendor
for damage caused by a third-party bitstream -- but I don't see what it has to
do with tristate logic. The only thing keeping me from shorting out two (or
two thousand) ordinary CMOS drivers with an assign statement is the toolchain,
isn't it?

~~~
duskwuff
Some older FPGAs, such as the Virtex 2 Pro series, had internal buses driven
by tristate buffers. The responsibility was on the designer to avoid enabling
multiple drivers at a time.

This is no longer true of modern FPGAs. There is no way to create a signal
that is driven by multiple sources; the structure of the FPGA guarantees that
any net will have exactly one driver.
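
One way to picture that guarantee (a toy model, not a vendor schematic): in
mux-based routing, each net is fed by the output of a configuration-controlled
multiplexer, and a mux selects exactly one input by construction; no setting
of the config bits can connect two drivers at once. The signal names below are
invented for illustration.

```python
def routing_mux(config_bits, sources):
    """A configuration-controlled routing mux: the config bits form an
    address that selects exactly one source; no bit pattern selects two."""
    index = int("".join(str(b) for b in config_bits), 2)
    return sources[index]

# A net fed by a 4:1 mux; whatever the configuration says, one driver wins.
sources = ["lut_out", "carry_out", "bram_out", "const_0"]
print(routing_mux([1, 0], sources))  # → bram_out
```

Contrast this with a tristate bus, where each source has its own enable bit
and a bad configuration can turn on two enables simultaneously.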

~~~
michaelt

      the structure of the FPGA guarantees that any net
      will have exactly one driver.
    

Interesting - I didn't know that! Could you link to a schematic so we can
understand how the gates are connected to achieve that?

~~~
pjc50
If we had public schematics of FPGAs we wouldn't be having this discussion...

~~~
michaelt
I don't think I've asked for anything particularly unusual?

It's common for electronics manufacturers to release diagrams like this:
[http://i.imgur.com/k6o8mwO.png](http://i.imgur.com/k6o8mwO.png) detailed
enough to give you an idea of how the circuit behaves and why, but well short
of a complete internal schematic for the entire device.

Google "Xilinx FPGA cell"
[https://www.google.co.uk/search?tbm=isch&q=xilinx+fpga+cell](https://www.google.co.uk/search?tbm=isch&q=xilinx+fpga+cell)
and you'll find similar approximate diagrams exist for FPGA cells. That's the
detail level I'm interested in, and I think it's reasonable enough to believe
it would exist?

~~~
pjc50
That diagram shows tristates!

What I suspect is happening is that the good old tristate bus can't be turned
round fast enough for highspeed designs so some sort of mux network has been
built to replace it. The place to look is probably in the patents.

~~~
duskwuff
That diagram is from an AVR datasheet, not an FPGA. :)

------
badsock
Here's my official prediction: there's going to be a sea change in the FPGA
market in the next five to ten years. The reason FPGAs are so complex is that
more than half of the die space is given over to routing structures. This
means that to be even in the ballpark of ASIC performance, they have to have a
complicated mish-mash of logic tiles and all sorts of other special-purpose
tiles (like ARM cores, etc.).

The die-space issue is an artifact of using SRAM to store the configuration
(likewise, flash is also too big and cumbersome), not to mention the power
usage. However, the new non-volatile memories coming up (like memristors or
nano-RAM) are absolutely minuscule and will have a dramatic effect on the
competitiveness of FPGAs. All of a sudden you can have an extremely regular
structure, with no specialized tiles, with performance comparable to ASICs. In
fact, there's an argument that they will outperform ASICs: as die features
shrink into the realm of quantum effects, being able to function despite a
relatively high degree of errors (which FPGAs can route around with minimal
performance loss) will become a huge advantage.

Here's a paper that talks about it:

[https://www.ece.ucsb.edu/~strukov/papers/2006/Worldcomp2006....](https://www.ece.ucsb.edu/~strukov/papers/2006/Worldcomp2006.pdf)
[PDF]

The very regular structure of the iCE40 is cited as a reason for its choice as
a reverse-engineering target - this helps a great deal with the tooling issue.
If the above prediction is true, then FPGAs can potentially be extremely
regular, on par with ASIC performance, and potentially cheaper to produce
because of economies of scale. These factors will, I believe, change the
dynamics of the market to a point where an open-source FPGA is a viable option
for crowd-sourcing.

~~~
nickpsecurity
"The reason FPGAs are so complex is that more than half of the die space is
given over to the routing structures. "

That premise doesn't seem to be right. The routing adds overhead, but that's
not the real complexity. FPGAs have always been about trying to get more
performance and flexibility at the same time. So, instead of simple structures
and symmetry, FPGAs have all kinds of things on them, ranging from complex
logic units to MACs to processors to accelerators. This creates difficulty for
both OSS and commercial EDA tools in handling them efficiently.

Customers want this stuff, though, because it lets them get more done with
less or acceptable amounts of money. The open or simpler alternatives don't.
So, the market won't shift to them. If anything, as with OSes and smartphone
SoCs, the barrier to entry will only grow, with those accepting something open
or simple being a niche market.

Note: Lattice iCE40 is already serving a niche market. So, it fits my
assessment.

~~~
badsock
The argument is this: to an (admittedly over-simplified) approximation, an
ASIC and an FPGA have the same gates; it's just the interconnects that are
different. The lower the power and space efficiency of the interconnect, the
larger the incentive to put ASIC-style interconnected gates into little
islands in the FPGA fabric - hence the various complex tiles like MACs that
you mention.

When nonvolatile nanoscale memory becomes more widespread (the demand driven
by SSDs, but FPGAs get to benefit), the additional interconnect power costs
get close to zero, and when things get so small that nanowires are much
smaller than the smallest gate, the space requirements start being negligible
as well.

At that point there's no real advantage to having those complex tiles, because
the equivalent configured FPGA circuit is just as efficient, so you might as
well take advantage of the flexibility of a completely uniform fabric.

The point of view taken in the paper I linked above is that even LUTs are an
unnecessary optimization at that point.

I'm not saying all of this is an inevitability - but it's my bet.

~~~
nickpsecurity
"The argument is this: to an (admittedly over-simplified) approximation, an
ASIC and an FPGA have the same gates; it's just the interconnects that are
different."

No, they don't. FPGAs have LUTs that represent other logic gates in weird ways
for flexibility. They also traditionally have software-programmed macro-cells
like DSPs and MACs which are logic-programmed. And there's an interconnect and
power-saving tricks on top of that. Different enough that properly
synthesizing to FPGAs is a different subfield, with different techniques and
sometimes 5- to 6-digit software to do it well on heterogeneous tiles. I've
got simpler, free ones that can synthesize or optimize logic with primitive
gates.
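
The LUT point can be made concrete: a k-input LUT is just a 2^k-entry truth
table whose configuration bits are the table itself, which is why LUT mapping
is a different problem from gate-level optimization. A toy Python model (not
any vendor's actual LUT structure):

```python
def make_lut(truth_table):
    """A k-input LUT: the config bits ARE the truth table, so any
    k-input Boolean function can be programmed into the same cell."""
    def lut(*inputs):
        index = 0
        for bit in inputs:          # the inputs form the table address
            index = (index << 1) | bit
        return truth_table[index]
    return lut

# Program a 2-input LUT as XOR: table indexed by (a, b) = 00, 01, 10, 11.
xor = make_lut([0, 1, 1, 0])
print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # → [0, 1, 1, 0]
```

The same cell becomes AND, OR, or any other 2-input function by changing only
the four table bits; that reprogrammability is what the synthesis tools have
to target.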

"The lower the power and space efficiency of the interconnect, the larger the
incentive is to put ASIC-style interconnected gates into little islands in the
FPGA fabric - hence the various complex tiles like MACs that you mention."

"At that point there's no real advantage to having those complex tiles,
because the equivalent configured FPGA circuit is just as efficient, so you
might as well take advantage of the flexiblity of a completely uniform
fabric."

You seem to be looking at the technical aspect for the simplest and most
elegant solution. Bets on that usually fail, because what drives these markets
is consumer demand and what's good for business. Consumers want things faster,
cheaper, optimized for their use case, and so on. _That_, not interconnects or
whatever, prompted the creation of complex tiles that could accelerate their
workloads. The SoC, HPC, and cloud-server markets have been going in the same
direction with offload engines for the same reason.

The other end of the problem is the suppliers. They know they need to
differentiate to sell more chips. So, they create chips with different specs,
new types of LUTs, onboard I.P., accelerators, and so on. They, academics, and
startups then create tools to try to utilize them effectively. Consumer demand
for this sort of thing isn't going away and neither is the need to
differentiate with it. So, this pattern will remain regardless of technical
arguments, as it always has in every part of the computing industry.

So, we get to nanoscale memory, nanowires, nano-FPGAs... whatever fictional,
better tech you want. So, they get created. Now everyone has super-fast,
low-power chips with tons of logic. Parkinson's Law and the above pattern kick
in: companies need to differentiate, solution providers start using
exponentially more resources for their competitiveness, and users want ways to
run these cheaper/faster/whatever. So, they start adding I.P., using different
types of nano-blocks, getting clever with interconnects, and so on.

In other words, you've swapped out the physical components but changed nothing
that drove them to complexity in the first place. The drivers will still be
there in the age of nano-FPGAs. So, they'll still be complex, consumers'
appetites will still be insatiable, and EDA runtimes will be higher than ever.
This outcome is always the safe bet.

~~~
badsock
"No, they don't. FPGAs have LUTs that represent other logic gates in weird
ways for flexibility."

LUTs are composed of gates. By gates I mean literal transistors etched in the
silicon. They're the simplest form of optimization that FPGAs use to get
around the cost of re-routable interconnects.

However, you seem to be missing my point: complex tiles will no longer be an
advantage, for performance or for differentiation. We're reaching the bottom
of what silicon can do, and the rules of the game are going to change - the
future isn't going to be just smaller and faster versions of the same thing we
have now.

And while I agree that differentiation is a powerful driver, just as often as
not the ability to differentiate goes away. That's how we get commoditization.
Certainly there will still be the ability to differentiate based on the
various analog peripherals on the chip (GPIOs, DACs, etc.), but in terms of
the actual digital FPGA fabric, I think that will go away, and the benefits of
standardization and reduced SKUs (i.e., in terms of supply-chain management
and economies of scale) will start to win out.

And, certainly the market-driven complexity you talk about will always exist,
but it will be driven from the silicon side of it.

~~~
nickpsecurity
"And, certainly the market-driven complexity you talk about will always exist,
but it will be driven from the silicon side of it."

I really doubt that, given what markets have done so far. I hope you're right,
though. It would make things so much better for us. I won't bet on it, but I'd
like it to happen.

------
michaelt
I always assumed it was because, when you make an interface/API public (by
documenting it and offering it to your customers), you have a professional
obligation to keep it reasonably stable. People get understandably upset when
APIs for things like Twitter and Facebook keep changing under them, making
things that worked before stop working.

Making the bitstream format a public API would make it harder for them to
change/update/improve it without making themselves look like assholes for
breaking third party software.

~~~
nickpsecurity
"Making the bitstream format a public API would make it harder for them to
change/update/improve it without making themselves look like assholes for
breaking third party software."

That's an interesting point; I think you're the first person I've seen make
it. Customers' work mostly ends at the RTL level, so I don't see this as a
real issue. Especially if customers are told not to depend on the bitstream
staying the same from device to device.

------
c3534l
So then, if none of those things are true, why do FPGA manufacturers keep
their bitstream formats secret?

~~~
Aissen
Lawyers. Semiconductor/IP vendors are afraid of _anything_ weakening their IP
and patents. Most of the time the fear is unfounded, but they'd do anything to
protect their IP, even if it undermines the business.

~~~
StringyBob
Sadly true. Security by obscurity doesn't work for encryption, but a bit of
obfuscation is very good at fending off opportunistic patent trolls. I expect
the potential lawyer cost is more than any lost business (see also binary
blobs vs open source).

~~~
nickpsecurity
A hardware guru I know practically specialized in this: they were constantly
obfuscating products and R.E.'ing third-party stuff they depended on to make
sure it worked right. He also said his company, in Asia, refused to do
business in the U.S. due to patent suits. There was plenty of money to be made
elsewhere, with less legal liability beyond people cloning their products.
Hence his obfuscations.

------
davexunit
We really need a fully free FPGA toolchain. I am particularly interested in
liberating the Xilinx CSG324 chip that was chosen for the Novena. I have heard
that it is possible to fry the chip with a bad bitstream, so this baby step[0]
is all I've managed to accomplish. The fpgatools project is messy, but it's
the only project I know of that has figured out the bitstream format for a
closely related Xilinx FPGA model. I even wrote some Scheme code that could
produce the exact same bitstream as the example fpgatools program, using a
little domain-specific language... if only it could run on my FPGA.

Others note that the bitstream format is only a small piece of the puzzle, and
while I agree, I don't yet care about the HDLs and all of the tools built on
top. Just figuring out the bitstream format for more FPGAs would be a huge win
and would enable a free toolchain to begin to be written.

[0]
[https://github.com/davexunit/fpgatools/commit/06e95c379cefd9...](https://github.com/davexunit/fpgatools/commit/06e95c379cefd9c8e21dad44747109e9095cc5b5)

------
nickpsecurity
Screw the formats: build your own. There are all kinds of bright people doing
hardware development in college with top EDA tools and cheap prototyping
through e.g. MOSIS. One already did an FPGA architecture on 45nm:

[http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-43...](http://www.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-43.pdf)

Anyone wanting to see more projects, info on what developing open HW will take
w/ potential paths, and so on can follow my links here:

[https://news.ycombinator.com/item?id=10468898](https://news.ycombinator.com/item?id=10468898)

Anyway, we have most of what we need in terms of tooling. It's just going to
be a huge loss getting the initial I.P. built and ASIC-proven on each process
node. The best route is academics with professional support each doing a piece
of key I.P., using grants + discounts to build it, and then public-domaining
it. So, who knows a near-billionaire who wants to change the HW world for the
better? And who can fight patent suits?

------
loginusername
To summarize, there exists at least one person who wants to write their own
tools, or at least use open source tools.

For at least this one person, downloading closed source software from the chip
manufacturer is not satisfactory.

The comments also add that these downloads can be on the order of 8-20GB.

Has anyone ever wondered what is in those large binaries?

Does someone think the larger size somehow offers potentially more IP
protection?

Does the FPGA utilities world have anything like teensy_loader_cli? It's about
28k. Not limited to MS Windows. Works with both BSD and Linux.

~~~
nickpsecurity
They're huge because they have to support a lot of stuff from hardware devices
to synthesis to verification. Probably legacy issues in there too. It doesn't
help that every part of hardware development is Mega-Hard:

[http://fpgacomputing.blogspot.com/2008/08/megahard-corp-open...](http://fpgacomputing.blogspot.com/2008/08/megahard-corp-open-source-eda-as.html)

Each aspect of synthesis, equivalence checking, testing, etc. has whole MS and
PhD theses dedicated to it. I'm sure the result can be a lot smaller than
20GB, but it's still going to be incomprehensible to one person except in
pieces. And that person would have to be an expert on every aspect of hardware
development, from digital to analog to the wires that make up the gates. Like
with major OSes or software, you're always going to be taking someone else's
word that a huge chunk of it is safe. Might as well plan around that.

------
fuzzieozzie
It's simple economics --- lock in customers to buy your chips. Just like razor
blades and razors.

------
nraynaud
How far along are people in reversing the bitstreams? I'm thinking maybe I
should choose an FPGA, try to raise money on Kickstarter, and spend a year on
it.

~~~
plaes
The Lattice iCE40 is documented [1] and has fully open-source development
tools [2], [3]

[1]
[https://github.com/cliffordwolf/icestorm](https://github.com/cliffordwolf/icestorm)

[2] [http://www.clifford.at/yosys/](http://www.clifford.at/yosys/)

[3] [https://github.com/cseed/arachne-pnr](https://github.com/cseed/arachne-pnr)

------
sklogic
If they want to keep the bitstream format closed for some deranged IP reason -
OK, fine. Just document all the cells (they do it to an extent anyway) and
provide a readable format for a placed-and-routed layout.

This way the entire toolchain can be open source, keeping only the bitstream
packer closed. Lawyers are happy, users are happy - win-win.

~~~
sobkas
But then you could potentially change FPGA supplier by only replacing the
bitstream packer; why would a vendor allow that?

~~~
sklogic
Cell libraries would still be incompatible. Your only possible portable layer
is still RTL (with a lot of effort), exactly the same as with entirely closed
toolchains.

~~~
nickpsecurity
Good point. There's still a potential loss for them if that final synthesis
phase creates a performance or energy-usage advantage for them. I haven't seen
any experiments to find out. I do know that, outside of device
characteristics, they mainly compete on how well their EDA tools utilize their
devices.

