
Open Source Needs FPGAs; FPGAs Need an On-Ramp - heathjohns
https://www.blinklight.io/blog/2017-03-31/
======
BillBohan
I am currently waiting for my MATRIX Voice to arrive.

[https://www.indiegogo.com/projects/matrix-voice-open-source-...](https://www.indiegogo.com/projects/matrix-voice-open-source-voice-platform-for-all#/)

I have a Spartan 3AN dev board, another Spartan-6 board, and an Arty. I was
using the free version of Xilinx ISE to develop for the other boards until I
bought the Arty. It came with a one-year license for Vivado. I did not know
that activating the Vivado license locked me into developing only for the
Arty. ISE will no longer synthesize for any other target. I strongly dislike
the closed nature of their software licensing.

I am retired, but for my last 10 working years I wrote VHDL. I can kind of
read Verilog and understand what it does, but I don't know it well enough to
write it. The systems I worked on were for oil well logging. My circuits went
down a 16,800 ft. hole where it was 350° F and the pressure was over 7000 PSI.
Production quantities were typically less than 100. We used no bigger an FPGA
than was needed to keep power at a minimum as there was no way to dissipate
heat. Also the circuit boards were quite small since they needed to fit into a
housing less than 2" in diameter. Frequent design changes were needed but all
ran on the same boards.

I am currently working on a processor design that I call NISC: the set of all
opcodes is the null set. It's a single-instruction machine whose only
instruction is a move with two operands, a source address and a destination
address. I have considered putting the specifications and design on the
internet as open source but am not sure where. Would anybody be interested in
seeing it, and where do you think I should put it?

~~~
amelius
Curious, how would one implement adding two numbers on your NISC architecture?

Or is the idea that you move the operands to the inputs of an adder circuit,
and then move the result?

How would conditional control flow work?

~~~
bitexploder
[https://recon.cx/2015/slides/recon2015-14-christopher-domas-...](https://recon.cx/2015/slides/recon2015-14-christopher-domas-The-movfuscator.pdf)

Movfuscator

~~~
BillBohan
Thanks for this link. I will read it when I have time.

I currently have an accumulator at an address.

Moving to the next address ANDs with the accumulator.

Moving to the following address ORs with the accumulator.

Subsequent addresses XOR, ADD, ADC, SUB, SBB with the accumulator.

I have a location called Z and one called NZ which may be written. Reading
from either location returns what was written to Z if the Z flag is set,
otherwise both read what was written to NZ. Moving either to the PC effects a
conditional JUMP. Moving either to the relative register adds it to the PC
(relative conditional Jump). Moving either to the Indirect register pushes the
PC on the stack and writes to the PC (conditional call).

I envision the capability of using the accumulator as a floating point
register and having locations which perform floating point operations in a
similar manner. It could also be considered as a vector and there could be
locations which perform vector operations on it.
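The scheme described above can be modeled in software. Here is a toy Python simulator of such a move-only machine; all addresses and the memory map are invented for illustration, and the poster's actual design may differ:

```python
# Hypothetical memory map: ALU "ports" and control locations.
ACC, AND_, OR_, XOR_, ADD_ = 0x00, 0x01, 0x02, 0x03, 0x04
Z, NZ, PC = 0x10, 0x11, 0x12

class NiscMachine:
    """Single-instruction machine: every instruction is (src_addr, dst_addr)."""

    def __init__(self, program):
        self.program = program      # list of (src, dst) moves
        self.mem = {}               # general-purpose locations
        self.acc = 0
        self.z_val = self.nz_val = 0
        self.z_flag = False
        self.pc = 0

    def read(self, addr):
        if addr == ACC:
            return self.acc
        if addr in (Z, NZ):
            # Both locations return Z's value when the flag is set,
            # otherwise NZ's value -- so moving either to PC is a
            # conditional jump.
            return self.z_val if self.z_flag else self.nz_val
        return self.mem.get(addr, 0)

    def write(self, addr, val):
        if addr == ACC:
            self.acc = val
        elif addr == AND_:
            self.acc &= val
        elif addr == OR_:
            self.acc |= val
        elif addr == XOR_:
            self.acc ^= val
        elif addr == ADD_:
            self.acc = (self.acc + val) & 0xFFFF
        elif addr == Z:
            self.z_val = val
            return
        elif addr == NZ:
            self.nz_val = val
            return
        elif addr == PC:
            self.pc = val
            return
        else:
            self.mem[addr] = val
            return
        self.z_flag = (self.acc == 0)   # flag tracks accumulator writes

    def run(self, max_steps=100):
        while 0 <= self.pc < len(self.program) and max_steps > 0:
            src, dst = self.program[self.pc]
            self.pc += 1
            self.write(dst, self.read(src))
            max_steps -= 1
        return self.acc

# Adding two numbers: move the first operand into the accumulator,
# then move the second operand to the ADD port.
A, B = 0x20, 0x21
m = NiscMachine([(A, ACC), (B, ADD_)])
m.mem[A], m.mem[B] = 5, 7
print(m.run())  # -> 12
```

This also answers amelius's question above: addition is exactly "move one operand to the accumulator, move the other to the adder's address".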

------
vvanders
Something that's not really spoken to but is just as important is that FPGA
development isn't software development. You're specifying hardware and usually
that hardware doesn't just scale to new platforms.

Time will tell if open source can make the transition over to that domain.
I'm cautiously optimistic, but I think there's a larger cultural divide there
than the article calls out.

~~~
blackguardx
FPGAs are sexy to many people because they are exotic. Imagine a custom chip
designed to do whatever you want, and reconfigurable on the fly.

The problem is that many aspiring learners want to approach it like a software
problem. That can be a valid approach as long as they are willing to learn new
abstractions. Many are not and then get frustrated and blame the tool
ecosystem. Granted, the tool ecosystem does suck, but people (like me and
others) are doing real work with these tools. Even if you scoff at FPGA tool
quality, you have to remember that almost every ASIC out there was developed
with similar (maybe slightly better) quality tools.

There has to be flexibility on all sides. Tool vendors have to adapt to
changing times (more open ecosystems) and software developers branching out
into FPGAs have to be willing to learn how hardware works.

~~~
heathjohns
> The problem is that many aspiring learners want to approach it like a
> software problem.

That is precisely why I made Blinklight - it's an educational platform for
starting at the very bottom.

~~~
petra
Maybe it would be better to use the highest abstraction tools possible
(chisel? Maybe DSL's for hardware generation and verification) ?

Because compared to, say, software development or embedded systems
development, real chip design, and especially the tons of verification it
needs, is boring.

~~~
nickpsecurity
Look up Synflow's language for HLS. It's C based and open-source.

~~~
inlineint
I don't want to diminish what they and other similar projects do, and
actually I'm not really familiar with FPGA development, but I have one
thought: what if we don't need one more _language_, but rather something at a
different level of abstraction?

The first neural networks that ran on GPUs were written using low-level GPU
primitives [1]. This was a non-trivial process that required a lot of
low-level work, system programming skills, and time to implement new
architectures. But a group of researchers at the University of Montreal
developed Theano [2], a framework that lets you define computational graphs
in Python and then compile them to CUDA code that can be executed on a GPU.
Instead of spending resources on developing a new language, they put their
effort into thinking out useful abstractions and implementing a compiler that
works efficiently. It is also notable that they didn't include very
high-level abstractions in Theano either; libraries like Lasagne [3] and
Keras [4] introduced higher-level abstractions (neural network layers and
pluggable pre-implemented models) on top of Theano. It is safe to say that
Theano boosted Deep Learning research, making the programming of new neural
network architectures quicker and more accessible.

What if we actually need just the same thing for FPGAs? Just a Python library
that defines useful abstractions for building logic circuits, allows
constructing arbitrary graphs from them, and then compiles these graphs to
VHDL. Assuming the basic building blocks defined in Python are well defined
and tested, it would be easy to implement verification and some testing in
pure Python, and tooling like visualizing logic diagrams could be implemented
in pure Python too.
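To make the idea concrete, here is a minimal sketch of such a library: circuits as Python object graphs that can be evaluated in pure Python (for testing in CI) and also emitted as VHDL-flavoured expression text. All class and method names are invented for illustration; real projects like MyHDL are far more complete.

```python
class Sig:
    """A named input signal."""
    def __init__(self, name):
        self.name = name
    def eval(self, env):
        return env[self.name]
    def to_vhdl(self):
        return self.name

class Gate:
    """A two-input logic gate; the graph is built by composing these."""
    OPS = {"and": lambda a, b: a & b,
           "or":  lambda a, b: a | b,
           "xor": lambda a, b: a ^ b}

    def __init__(self, op, a, b):
        self.op, self.a, self.b = op, a, b
    def eval(self, env):
        # Simulate in pure Python -- usable from any test runner.
        return self.OPS[self.op](self.a.eval(env), self.b.eval(env))
    def to_vhdl(self):
        # Emit a VHDL-style expression for the same graph.
        return f"({self.a.to_vhdl()} {self.op} {self.b.to_vhdl()})"

# A half adder, built once, then both simulated and emitted.
a, b = Sig("a"), Sig("b")
total = Gate("xor", a, b)
carry = Gate("and", a, b)

assert total.eval({"a": 1, "b": 1}) == 0
assert carry.eval({"a": 1, "b": 1}) == 1
print("sum   <=", total.to_vhdl(), ";")   # sum   <= (a xor b) ;
print("carry <=", carry.to_vhdl(), ";")   # carry <= (a and b) ;
```

The same object graph serves simulation, testing, and code generation, which is the crux of the proposal.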

In addition to reducing development effort (because you don't need to design
and implement a new language), it would be easier for software programmers to
pick up: they would not need to learn new syntax, but could concentrate on
core concepts like gates, adders, and the graphs involving them. It wouldn't
be necessary to develop special-purpose editors, because it's just Python,
and again, because the tests for the resulting schemas could be written and
run in pure Python, it would be possible to use standard CI tools like Travis
for open source development.

Edit: It seems there is already a project, MyHDL [5], that does something
very close to what I described above.

[1]
[https://hal.inria.fr/inria-00112631/en/](https://hal.inria.fr/inria-00112631/en/)

[2] [https://github.com/Theano/Theano](https://github.com/Theano/Theano)

[3] [https://github.com/Lasagne/Lasagne](https://github.com/Lasagne/Lasagne)

[4] [https://github.com/fchollet/keras](https://github.com/fchollet/keras)

[5] [http://www.myhdl.org/](http://www.myhdl.org/)

~~~
mietek
Clash. [http://www.clash-lang.org](http://www.clash-lang.org)

~~~
eli_gottlieb
Ha! Looking at their website, I realized a toy project I just did for a hiring
process was actually designed to produce simulatable inputs to Clash!

------
wkoszek
Good place to start this would be here:

[https://github.com/wkoszek/freebsd_netfpga](https://github.com/wkoszek/freebsd_netfpga)

[https://wiki.freebsd.org/FPGA/](https://wiki.freebsd.org/FPGA/)

[https://www.freshports.org/devel/xc3sprog/](https://www.freshports.org/devel/xc3sprog/)

It's a FreeBSD NetFPGA driver for an older 1G card that I wrote during my
internship 8 years ago, plus a little ecosystem to make development more
functional. At the time I could program the FPGA from FreeBSD and synthesize
the code on FreeBSD.

What you describe is very hard, though. The speeds achieved by modern ASICs
are hard to compete with, because ASICs do a lot of advanced stuff with DMA,
interrupts, and checksum offloading. This might be doable with expensive
hardware, but nobody in the DIY community has the money for it.

Additionally, the tools for synthesis are proprietary, and everything
touching the FPGA is pretty much proprietary too. Looking at the date/author
of the synthesized bitstream is as far as I could get:

[https://github.com/wkoszek/libxbf](https://github.com/wkoszek/libxbf)

(It can only open a file and tell you where the bitstream starts, not what it
actually is.) So the road to complete freedom goes through the ASIC world, in
my opinion.

~~~
canada_dry
Great list.

I have Bunnie's awesome Novena, which came with an FPGA (as well as an SDR
add-on).

But, as the article says "FPGA development has always been an industrial
activity, dominated by brutalist, opaque, and proprietary tools."... I haven't
found the FPGA-FOR-DUMMIES guide to help me do anything with the thing.

~~~
wkoszek
Do you think there's a market for FPGA-based tutorials?

I think I have a good grasp of the stuff necessary for this, but I always
felt that it's a really niche business, even within the DIY/open-source
community.

If you want to start doing anything, I found it's best to look at the
popularity of the board. The S3E Starter Kit is by far the best, since most
people can afford it, and it's nicely supported. Any other board: case-by-case
basis.

~~~
canada_dry
Realistically no.

However, I doubt many folks predicted the wild popularity of the Arduino or
RPi either. All it takes is a few cool applications. The price point isn't a
huge barrier these days.

I keep hoping some post-grad will come up with a great DUMMIES like guide that
covers Hello World to signal processing.

~~~
heathjohns
Author here, that is precisely the point of Blinklight (the site this blog
post is from) :)

It starts from the absolute beginning, and the tools and tutorials are web-
based and integrated with each other:

[https://www.blinklight.io](https://www.blinklight.io)

The focus is from Hello World to personal computer, but I've built a DSP-based
guitar pedal in the past, and signal processing is really fun - I'd love to
create a parallel learning path from Hello World to DSP at some point.

------
fiziks_hckr
Andrew Zonenberg has a whole list of open projects for FPGAs on his wiki:

[https://github.com/azonenberg/openfpga/wiki](https://github.com/azonenberg/openfpga/wiki)

There has been a lot of exciting work in the same repo on the Silego lines
and on general foundations for open source FPGA toolchains for many of the
chip families out there.

I've been really enjoying Clifford's Project IceStorm - open source tools
which I've been using to develop/test on real hardware and build up some
Verilog chops:

[http://www.clifford.at/icestorm/](http://www.clifford.at/icestorm/)

I made a quickstart for it in a repo here for those interested in starting
some Verilog adventures : )

[https://github.com/gskielian/TEAM-VERILOG](https://github.com/gskielian/TEAM-VERILOG)

------
rubenfiszel
For anyone interested in developing applications for FPGAs in a high-level
DSL embedded in Scala, this project
([https://github.com/stanford-ppl/spatial-lang](https://github.com/stanford-ppl/spatial-lang))
from a Stanford lab might interest you.

Disclaimer: I am part of the lab.

~~~
spamizbad
I noticed there's a lot of RISC-V cores today implemented in Berkeley's Chisel
([https://chisel.eecs.berkeley.edu/](https://chisel.eecs.berkeley.edu/)). How
would you say they compare?

~~~
rubenfiszel
Actually, Chisel is one of our main codegen targets. We aim to be more
high-level than Chisel.

See here for a quick and very incomplete tour:
[http://spatial-lang.readthedocs.io/en/latest/tutorial.html](http://spatial-lang.readthedocs.io/en/latest/tutorial.html)

~~~
krapht
I was never able to convince my coworkers to program FPGAs in HLLs, just
because when you need to debug and simulate, you have to touch VHDL/Verilog
in order to communicate with the board and understand what your high-level
abstraction is compiling to (not to mention the great deal of work that
relies on tweaking and instantiating FPGA parameters like clock lines,
buffers, DSP cells, etc.). And that actually takes up the majority of the
design time... so to make a software analogy, why bother writing in Scala
when you have to verify and debug at the assembly level?

~~~
smaddox
Is that a serious question? Because most software development is now done in
high level languages.

With the proper abstractions, there's no need to debug the high level code at
the assembly level; you just need to debug the abstractions.

~~~
krapht
Maybe I should have made a different comparison: you can write a kernel driver
in Python, but maybe you shouldn't. Debugging your HLS compiler is not a
process full of joy and happiness.

------
zafka
I love FPGAs in a simplistic sort of way. I first started working with them
when I was in school (1994-96) and thought I was going to spend my life with
them. Other than some simulations with very expensive software that had been
donated by Motorola I never actually used FPGAs. But from when I first started
looking at them, I thought they would be the building blocks for an AI machine
that could be added on to forever. I still think so, but I thought that there
would be more visibility into the internal workings of the chips.

------
amboar
TimVideos is an open source project using FPGAs and Python for conference (and
other) video capture: [https://hdmi2usb.tv/home/](https://hdmi2usb.tv/home/)

So if you're interested in hardware, FPGAs or Python, it's a great opportunity
to get hacking! They are also part of GSoC this year, but it might be too late
to apply.

------
PhaseLockk
I'm not exactly clear on what the post is advocating. If it's saying that
there should be an open source implementation of an FPGA, then I just have to
say that I think there's no way that's happening anytime soon. There are way
too many hurdles.

If the argument is just that the open source community should leverage FPGAs
more as a means of creating more powerful "open source" hardware, and that
there should be more resources for people to learn how to write hardware, then
I guess I agree with that. But I don't think FPGAs will be the panacea the
author seems to think they will. FPGA implementations will always entail a
performance and/or efficiency hit compared to ASIC implementations, and I
think many people won't want to take that hit, limiting the number of users
who are willing to adopt the open source solutions.

~~~
heathjohns
Author here - I'm advocating the latter.

I agree with you to a point. However, I believe that the things that have
been with us for decades - sound cards, 2D graphics adaptors, network cards,
etc. - can be done in FPGAs, and should be.

The speed is there, and the power used by the southbridge and peripherals is
eclipsed by the processor and the screen backlight, so I don't think the power
consumption is worth worrying about (I'd be interested to see evidence to the
contrary, though).

Put another way: many of the foundational chips on the motherboard are no
longer performance-sensitive, so we shouldn't be paying a compatibility price
for them.

~~~
petra
Why not just choose a small set of "golden" chips, create high quality drivers
that abstract away incompatibilities if possible, and verify the heck out of
that ?

~~~
floatboth
That's… sort of happening with laptops. Pretty much any modern laptop with an
Intel CPU uses a small set of Intel chips for everything. My dmesg includes:

em0: <Intel(R) PRO/1000 Network Connection>

iwm0: <Intel(R) Dual Band Wireless AC 7260>

xhci0: <Intel Panther Point USB 3.0 controller>

ehci0: <Intel Lynx Point LP USB 2.0 controller USB>

ahci0: <Intel Lynx Point-LP AHCI SATA controller>

drmn0: <Intel Haswell (ULT GT2 mobile)>

Most laptops from the same generation use the exact same set of chips.

------
milesvp
Having read through the comments, I'm surprised that no one is talking about
the true benefit of a tech like FPGAs: they're field programmable, with many
reprogrammable in fewer cycles than a cache miss. This means that ultimately,
FPGAs have the potential to be an optimization flag in your favorite compiler
or JIT.

I see this as an inevitable convergence, especially given rumors of FPGA
transistors/mm^2 growing faster than CPU transistors/mm^2. What I can't tell
is how much of what people say about FPGAs is hearsay, and how much is simply
exaggerated. I know that every time I start to look into FPGAs I always feel
let down compared to their potential.

------
michaelmior
Since I didn't see any other good place to post feedback, I thought I would
point out that it's "sheer number" not "shear number."

Also, lowrisc.org is not accessible via HTTPS.

~~~
heathjohns
Thank you, fixed!

~~~
michaelmior
Also, now that I've had time to start going through things, I wanted to add:
great job! This seems like a pretty fantastic introduction so far. Although it
might be helpful to provide solutions/additional hints in case someone gets
stuck. (I'll admit I'm currently a little stuck at the "boss battle" in
chapter 1.)

~~~
heathjohns
Ah, sorry, I missed your comment! I'm currently working on getting a forum up
so that people can help each other through challenges. In the meantime, if
you're still stuck feel free to email me at heath@blinklight.io and I can
give you some hints :)

------
bsder
Personally, I'd rather have the ability to create a chip for $5K.

The CAD tools are stupidly expensive (>$100K) when a wafer run costs less
than $20K for a very old process nowadays.

~~~
smaddox
$5K wouldn't cover a single photomask, unless you're talking VERY OLD process.

Maybe there are direct-write (laser or ebeam) litho foundries... I don't know.

~~~
neurotech1
Multi-Project Wafer [0] services like MOSIS [1] reduce the costs
significantly. Not sure what the lowest practical budget for a project is, but
$5k would be in the ballpark. Access to student licenses for design software
is possible, too.

[0] [https://en.wikipedia.org/wiki/Multi-project_wafer_service](https://en.wikipedia.org/wiki/Multi-project_wafer_service)

[1] [https://www.mosis.com](https://www.mosis.com)

------
nickpsecurity
The trick is getting more EEs to spend their grants on open-source FPGAs and
tooling:

[https://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-...](https://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-43.html)

[http://opencircuitdesign.com/qflow/welcome.html](http://opencircuitdesign.com/qflow/welcome.html)

------
cushychicken
The author's example about a huge multitude of Ethernet drivers doesn't
really map well to the suggested solution (i.e. a big blank slate of FPGA
logic with a bunch of ADC/DAC/codec fabric on the chip periphery). The wire
side of Ethernet is really well specified, and tightly implemented. (Your
chip won't be 802.3 compliant if it's not!) That's a result of really good
specs on the IEEE's part. They put a TON of time into detailing how the
Ethernet physical layer talks chip-to-chip. The driver side implementation
for Ethernet, however, is basically the wild fucking West.

There are cases where the author's approach makes sense - software-defined
radio jumps to mind. Adding one more layer of abstraction to Ethernet drivers
isn't one of them. That seems like an area where software could learn from
hardware - namely, that good specs drive good implementations.

~~~
donlzx
AFAIK, the digital interfaces of the Ethernet physical layer (PHY) are pretty
much standardized (MII/RMII/GMII/RGMII). Most SoCs implement their own MAC
layer, but are usually compatible with off-the-shelf PHY chips. The function
and interface differences in proprietary MAC implementations are where the
complexity of the kernel driver comes from.

Suppose we also standardized the interface between the MAC layer and the
kernel and added an FPGA, open to real-time programming, inside the existing
PHY chip; a ton of optimization could be done. For example, DDoS packets
could be thrown away well before reaching kernel space.

I have previously done this with some low-cost FPGA kits: with less than 1K
lines of HDL code, it could extract only the wanted IP packets and pass them
directly to the application layer via the memory interface between the
embedded CPU and the FPGA logic.
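The kind of header match that filter performs can be sketched in software (Python here rather than HDL, purely to show the logic). Field offsets follow the standard Ethernet II / IPv4 layout; the whitelisted address is made up for the example:

```python
import struct

WANTED_DST = "10.0.0.5"  # hypothetical address the application cares about

def keep_frame(frame: bytes) -> bool:
    """Keep only IPv4 frames addressed to WANTED_DST; drop everything else."""
    if len(frame) < 34:                       # 14 B Ethernet + 20 B min IPv4
        return False
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    if ethertype != 0x0800:                   # not IPv4
        return False
    # IPv4 destination address sits at bytes 30..33 of the frame
    # (14 B Ethernet header + 16 B into the IPv4 header).
    dst = ".".join(str(b) for b in frame[30:34])
    return dst == WANTED_DST

# Example: a minimal frame with EtherType 0x0800 and destination 10.0.0.5.
frame = bytes(12) + b"\x08\x00" + bytes(16) + bytes([10, 0, 0, 5])
print(keep_frame(frame))  # -> True
```

In the FPGA version this same comparison is a few lines of combinational logic sitting on the receive path, which is why it fits in well under 1K lines of HDL.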

I think a programmable network interface is well worth the money if we care
enough for openness and efficiency.

~~~
cushychicken
You're right - MII and its derivatives are fairly standard. I agree that the
MAC -> CPU interface is the root of a lot of the complexity the author writes
about. However, that still sort of serves to illustrate my point: the
software that interfaces with the MAC is the chunk that introduces the most
variability. Standardize that and a lot of your problems are solved.

Integrating a bunch of analog-to-digital (and vice versa) fabric onto the
same die is not a tenable solution to this particular issue. I'll admit the
concept is mildly intriguing, but I think you'd quickly find that the silicon
demands for a DAC/ADC of the quality needed to emulate an arbitrary
high-speed waveform are a little out of reach, fab-wise. Too much space, not
enough drive speed.

------
JoeNatter
Hi there,

we started a Kickstarter campaign a few days ago. We would really appreciate
your expert feedback on our FPGA-related board. Link:
[http://kck.st/2orXGCv](http://kck.st/2orXGCv)

Our board connects a Raspberry Pi to the DE0 Nano. The FPGA can be programmed
and reconfigured by the Raspberry Pi. The overall goal of our company is to
make the entry to the FPGA world as easy as possible.

Motivation for our connector board: We try to avoid the proprietary tools as
much as possible.

My experience with FPGAs so far (these are only my opinions! please roast me):

CASE 1: Only FPGA, no processor: Too complex for bigger projects, because of
the missing high abstraction layer. Drawbacks: slow development process; to
make something useful you need a lot of stuff in your FPGA. Very boring for
beginners and, in my opinion, also for experts.

CASE 2: FPGA with soft-core processor: For my bachelor thesis I once used the
SCARTS soft-core processor from OpenCores on my DE0 Nano. Using an arbitrary
soft-core processor from OpenCores can't be done out of the box. You have to
be quite experienced working with FPGAs; for my thesis, for example, I had to
write my own SDRAM controller and add an additional pipeline stage to the
processor. Drawbacks: soft-core processors are quite slow; too hardcore for
beginners; simulation time makes you consume a lot of coffee.

CASE 3: FPGA with processor on chip: Advantages: high-speed interconnect
between FPGA and processor; fast processor. Disadvantages: being fully
dependent on the proprietary toolchains. In my opinion the "FPGA only" tools
suck, but the so-called "system builders/designers" drove me crazy.

CASE 4: FPGA with external processor: In my opinion this is by far the best
compromise. With some of my colleagues at the university I once made a
bitcoin hashing cluster with our student boards. We also had an Atmel
microcontroller and a PC for that project. We only needed two days to make a
fully working system, so FPGA programming can actually be very easy.

But if you combine the Raspberry Pi and the DE0 Nano it should be even easier.

With Raspberry Pi you have a clean and maintained Linux and with the DE0 Nano
you have a powerful and still quite cheap FPGA board.

Again I would really appreciate any feedback. What do you do with FPGAs and
how do you approach bigger FPGA projects?

Best regards, Joe

~~~
gluggymug
Cool project.

As someone from the FPGA industry, bigger projects just use your CASE 3 on a
board to start prototyping. A Zedboard or similar. Cheaper version could be
the Zybo. It's like $500 vs $200.

When you get to production stage, you whip up your own board.

What exactly is your complaint about the proprietary tools? You are using
Altera tools with the DE0 Nano.

~~~
JoeNatter
Thanks

Ok, interesting. A former employer of mine used CASE 4. The FPGA was
connected to the CPU via a memory interface. In general this option should be
more flexible: you can choose the exact processor and FPGA type you want. On
the other hand, CASE 3 may have components which are optimized for each
other. But for an industry product I would feel better positioned with CASE 4
if one of the components is discontinued.

About the complaint: it was a bit too emotional, because I had some specific
issues in mind that cost me a couple of forum searches and hours. These
included GUI bugs, some non-intuitive settings, and IP black boxes that
weren't working or were buggy. Not only Altera but also Lattice. The Lattice
FPGA contained a hard-IP SPI that didn't do anything. I posted on the forums;
a couple of users replied who had the same problem, but no reaction from
Lattice at all.

~~~
gluggymug
I think CASE 3 is going to be much faster due to the high-performance AXI
ports that Zynqs have.

~~~
JoeNatter
Yes. That's true

------
Ccecil
Well there are some things like [http://papilio.cc/](http://papilio.cc/)

I personally backed the Papilio DUO Kickstarter. I haven't done anything with
the board yet, but there has been some small stuff... and some pretty complex
projects too.

------
pryelluw
Are there any fpga open hardware implementations available?

~~~
buildbot
Actual chips, not as far as I know. There is a completely open source
toolchain for a small FPGA: the IceStorm project.

The VPR project at the University of Toronto has several architecture models
defined, but there is probably no way to turn these into a real chip.

FPGA vendors in general are extremely secretive about their designs.

(I'm a grad student working in the FPGA space.)

~~~
zafka
> FPGA vendors in general are extremely secretive about their designs.

That is the part that saddens me. I had always wanted to be able to play with
the internal programming. Back when I started they were not anywhere near as
big, and we almost had visibility.

~~~
buildbot
At least with VPR you can define/modify your own FPGA architecture file! They
are written in XML and parsed by the tool when doing place and route.

------
jecel
The course at that site is pretty interesting. There is an error that doesn't
affect any of the examples but might be a problem for future circuits: the
gates called OR are actually XOR. It took me a while to figure out because
the notation for "don't care" was not obvious to me (it is explained later in
the text, after table circuits are introduced).

I was able to build a circuit to test this by copying an OR to the display
block and hooking up the inputs to the buttons and the output to column 1. It
might be interesting to have some scheme to test sub-blocks directly.

------
tyingq
Aside from the acceleration angle, the maker community would also benefit from
a richer ecosystem around FPGAs, especially inexpensive ones.

They are very handy for input and output of signals where you need precise
timing.

Things like oscilloscopes, video cards for vintage displays, driving led
billboard modules, and so on. All of which either aren't possible, or aren't
optimal/scalable on either Arduino or Rpi boards.

There are also bits like drop-in clones of ATmega microcontrollers that run
on the FPGA, so you can leverage some of what you already know to interface
with it.

------
visarga
> FPGA development has always been an industrial activity, dominated by
> brutalist, opaque, and proprietary tools.

Brutalist tools? As in, made out of raw concrete (béton brut)?

~~~
duck2
Xilinx ISE does give the notion of raw concrete.

------
WheelsAtLarge
Question to all: is there a ycombinator for open source projects? Is it even
possible to have one without profit as a motivator?

~~~
dragonwriter
> Question to all: is there a ycombinator for open source projects.

I think it's called "YCombinator".

> Is it even possible to have one without profit as a motivator?

YC funds nonprofits, and open source, in any case, can (and often does) have a
profit motive, so either way it seems YC would potentially be an option for an
open source project.

------
patsplat
While tangential to the main point, it's weird to read about the failure of
the open source phone knowing about the Android Open Source Project. What's
missing from AOSP is Google Play Services aka proprietary web services.

Open source has been a huge hit on the client, it's the firewalled server that
has slowed the spread of free software.

------
jcoffland
The open source community needs open source HDL compilers. Until that
happens, OSS for FPGAs will continue to be slow going.

~~~
grp
Like clash [0]?

[0] [http://www.clash-lang.org/](http://www.clash-lang.org/) ?

~~~
jcoffland
No. Clash is a transpiler from Haskell to other HDLs. It still needs
proprietary compilers.

------
mindcrime
Nice coincidental timing... I literally just ordered a Lattice Ice-40 eval
board yesterday, and am hoping to get started using IceStorm to do some FPGA
design stuff. It's all brand new to me, so any good entry level stuff,
tutorials, docs, etc. on any of this stuff are very much appreciated.

~~~
heathjohns
Author here - have a look at the main site, it was made just for that purpose:
[https://www.blinklight.io](https://www.blinklight.io)

It's based around the iCE40/IceStorm, but it has its own specific "dev board".
I'm planning on making it more generic later on.

------
forkandwait
I am trying to teach myself CUPL on Atmel 16V8s as a cheap, super-basic way
to get my feet wet in the FPGA/CPLD/GAL world. I have a breadboard half wired
up to blink lights based on input, etc. I like older technology, but hope
that what I learn applies to more modern FPGAs.

------
Jhsto
Unrelated to the article content, but the webpage zoom is broken on mobile.

------
salesguy222
Do FPGAs have a future beyond "prototyping non-memory intensive algorithms for
eventual ASIC implementation"?

It seems to me that the scale-out + scale-up approach of x86 and GPUs remains
the most promising and profitable arena, aside from some very niche and very
particular applications?

I could and would like to be wrong. I'd go learn FPGA dev then if there were
things that needed accelerating and were personally lucrative to me :)

~~~
lvoudour
They grew beyond prototyping a long time ago. They are full-featured SoCs
with great flexibility that fill niches where CPUs, GPUs, and ASICs can't
compete on flexibility, power consumption, or development cost.

\- ASICs are too rigid and require high volumes to be profitable

\- GPUs are too power hungry

\- CPUs are not good for massively parallel processing

FPGAs are heavily used in industrial/military/aerospace applications.

~~~
vvanders
If you think GPUs are too power hungry then you're in for a shock with FPGAs.
Switching FPGAs are incredibly power hungry since they run on much larger
processes.

FWIW most modern FPGAs use discrete DSPs anyway so you're not really getting
the flexibility at that level.

~~~
PhaseLockk
> Switching FPGAs are incredibly power hungry since they run on much larger
> processes.

I don't think the comment about process is really true. From what I can tell,
Xilinx is only a few months behind the biggest SoC makers in its process
adoption, and is currently shipping 14nm parts. Not sure about Altera, but
they are on Intel's process, which is a bit ahead of the competitors anyway.

In terms of switching power, you definitely pay a penalty to have the
reconfigurability in hardware, but on the other hand you don't have all the
unused logic that you would on a GPU. I'd guess the comparative efficiency
depends on the specific problem and specific implementation, but I don't have
any numbers to back that up.

~~~
vvanders
It's a couple of things; process is a large part. You're also dealing with
4-LUTs instead of transistors, so you pay both in switching power and
leakage, since you can't get the same logic-to-transistor density that's
available on ASICs.

Also there's a ton of SRAM for the 4-LUT configuration, so you're paying
leakage costs there as well.

~~~
thesz
Tell me more about leakage.

NVidia only managed to get leakage right about a year and a half ago. Before
that, their gates leaked power all over the place.

The LUTs on Stratix are 6-input, 2-output, with specialized adders; they
aren't at all the 4-LUTs you are describing here.

All in all, there are places where FPGAs can beat ASICs. One example is
complex algorithms like, say, ticker correlations. These are done using
dedicated memory (thus they aren't all that CPU friendly - caches aren't
enough) and logic, and change often enough to make an ASIC moot.

Another example is parsing network traffic (deep packet inspection). The
algorithms in this field utilize memory in interesting ways (compute a lot of
different statistics for a packet and then compute the KL divergence between
a reference model and your result to see the actual packet type - histograms
created in a random manner and then scanned linearly, all in parallel). GPUs
and/or CPUs just do not have that functionality.

------
throwayedidqo
FPGA hardware is vendor-locked and incredibly proprietary. The current
business model sells chips at cost or below and makes it up with software
licensing.

This is a huge problem for open source adoption, and unless manufacturers
change the business model we will never see widespread use of FPGAs.

~~~
bcarlton0
As an FPGA designer, I couldn't disagree more with your description of their
business model. They make money on chips. That is indisputable.

My last 3 designs used the vendor's free software, except for some IP we
bought (IP=Intellectual property, a specific core). You might think of the IP
as a library you would buy as a software engineer.

~~~
Ductapemaster
bcarlton0, what area of business are you in? I'm a young engineer working with
FPGAs in the medical space and I'd love to hear about what else is out there.

~~~
bcarlton0
Ductapemaster, I have done designs in networking (e.g. packet processing and
10 Gb Ethernet), wireless (e.g. baseband part of a modem), glue logic, and
other more specialized areas. FPGAs have a wide variety of uses other than
ASIC prototyping and small glue logic.

