
How FPGAs work, and why you'll buy one (2013) - mpweiher
http://yosefk.com/blog/how-fpgas-work-and-why-youll-buy-one.html
======
davexunit
The major issue with FPGAs today is that there is no free toolchain for
compiling the bitstreams that run on them.

The only project I know of to make any real progress on fixing this is
fpgatools. [0] It supports a single model of the Xilinx Spartan-6 series, the
XC6SLX9. I know almost nothing about FPGAs, but seeing as how almost no one is
working on this problem, I figured I'd try to add support for the XC6SLX45,
the model used on the Novena motherboard. So far I've added the C code to
represent all of the pins. [1] Unfortunately, I haven't heard anything from
the maintainer about how to proceed further.

fpgatools is only a single piece of the puzzle, though. It provides libraries
to build the low-level bitstreams, but we'd still need Verilog/VHDL
implementations that use it, as well as replacements for the proprietary "soft
cores" that most people are using. The amount of work required is
intimidating. Not having an HDL readily available did motivate me to have some
fun rewriting the example C programs in Scheme, though. [2]

Does anyone know about other such efforts to free FPGAs? I'd be very grateful
to hear about them.

[0] [https://github.com/Wolfgang-Spraul/fpgatools](https://github.com/Wolfgang-Spraul/fpgatools)

[1] [https://github.com/Wolfgang-Spraul/fpgatools/pull/8](https://github.com/Wolfgang-Spraul/fpgatools/pull/8)

[2] [https://gitorious.org/davexunit/guile-fpga](https://gitorious.org/davexunit/guile-fpga)

~~~
alain94040
I disagree strongly. The major issue with FPGAs today is that they cost money
(because they are chips). They cost even more than your CPU. So you can't
treat them like software. To put it more simply: everyone has access to a
CPU; less than 1% have access to an FPGA. Fix that* and the tooling will
follow.

* one way to make FPGAs "free" is to incorporate some FPGA blocks in an Intel CPU. Then it will feel free.

~~~
davexunit
Intel CPUs are anything but free. They contain nonfree microcode and other
nastiness, so they would be unsuitable for free hardware projects.

~~~
pogden
I think the grandparent is referring to 'free as in beer.' It doesn't cost
anything to tinker with programming because almost everyone already has a
computer they use for other things. Not so with FPGAs.

------
viccuad
I think a lot of people posting here don't realize that programming an FPGA
means configuring how the gates are set up on the FPGA, not software
programming.

VHDL/Verilog etc. are hardware description languages, not software languages
a la C/Python/Java. You should have enough hardware design knowledge (at the
RT level) to be able to sketch your HW design on a piece of paper; then you
are ready to write VHDL/Verilog and put it onto the FPGA.

~~~
beeworker
While this is sort of true, this attitude contributes to the problem another
commenter mentioned about how working with FPGAs is like taking a trip to the
70s. There's no reason you can't use a full-fledged programming language like
Python to specify your hardware (via a strict, synthesizable HDL subset at
the RT level), then use the full language's power to test it off-FPGA, and
use all the software tooling around your full language to make the whole
experience as pleasant as possible. In fact, that's the approach of MyHDL:
[http://www.myhdl.org/](http://www.myhdl.org/)
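
For a flavor of that style, here's a minimal sketch (assuming MyHDL's
Signal/intbv/always/toVerilog API; illustrative, not authoritative):

    from myhdl import Signal, intbv, always, toVerilog

    def counter(clk, rst, count):
        # Synthesizable subset: one clocked process describing one register.
        @always(clk.posedge)
        def logic():
            if rst:
                count.next = 0
            else:
                count.next = (count + 1) % 256  # 8-bit wraparound
        return logic

    # The same Python description converts to Verilog for the vendor tools,
    # and can be driven by ordinary Python unit tests off-FPGA first.
    clk, rst = Signal(bool(0)), Signal(bool(0))
    count = Signal(intbv(0)[8:])
    toVerilog(counter, clk, rst, count)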

~~~
aprdm
Yup, I ran an experiment with it ->
[http://www.eetimes.com/author.asp?doc_id=1323837](http://www.eetimes.com/author.asp?doc_id=1323837)

I really like MyHDL!

------
lmeyerov
Our experience with GPU cloud computing at Graphistry should be pretty
representative. We spend a lot of effort getting subsecond interactivity in
funny C dialects (OpenCL/CUDA). To push latencies down further, we can gang
together a few GPUs and reuse most of the code. Eventually, however, data communication
costs get too high, so FPGAs would be the next step. That is certainly doable:
OpenCL -> FPGA compilers are a thing!

So the question is who has already been going down that pathway, and I'll
leave that as an exercise for the reader :) My guess: it'll stay in private
hands for a while, and after a few years, as FPGAs get put into public clouds
and everyone goes down the same pathway, a lot of people will be using FPGAs,
even if indirectly.

~~~
aprdm
With OpenCL + FPGA, what you basically do is instantiate a GPU-like
architecture inside the FPGA and then execute the OpenCL algorithm on that.

I think that's cool, but you aren't getting better throughput than you would
get with a GPU.

The better solution would be to build a custom, optimised control/data path
for the FPGA architecture you are targeting (in VHDL/Verilog).

~~~
DrHoppenheimer
That's not how it works, actually. The compiler generates the dataflow
directly in hardware.

------
Igglyboo
The trouble I had with FPGAs is how radically different languages like Verilog
and VHDL are compared to languages like Java, Python, or C++.

Writing code that is concurrent by default was a massive paradigm shift, since
all I had been exposed to at that point were procedural languages.

~~~
nulldozer
That's because hardware description languages are not programming languages.
This point was constantly emphasized in my digital systems courses. You do not
describe a sequence of instructions using an HDL, you describe the layout of a
circuit with registers, wires, and logic blocks. You have to consider the
physics of the device to avoid violating timing constraints or driving a
signal from two different sources, etc.
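
To make that concrete, here is a small illustrative sketch (in Python via
MyHDL, to stay close to the tools mentioned elsewhere in this thread): the two
blocks below are not steps that run one after the other, they describe two
pieces of hardware that exist and operate simultaneously.

    from myhdl import always, always_comb

    def shift_and_flag(clk, din, shreg, full):
        @always(clk.posedge)
        def shifter():
            # Sequential logic: a 4-bit shift register updated each clock edge.
            shreg.next = ((shreg << 1) | din) % 16

        @always_comb
        def flagger():
            # Combinational logic: a wire recomputed continuously,
            # not a statement that "runs after" the shifter.
            full.next = (shreg == 15)

        return shifter, flagger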

~~~
elihu
Hardware description languages are not imperative programming languages (even
if they might superficially resemble them). That doesn't mean they aren't
programming languages at all -- they just belong to a different category of
language.

(I suspect that there's a connection between hardware design and functional
programming, but I don't have enough hardware design experience to know how
closely the two are related.)

~~~
tel
Conal Elliott, Haskeller of FRP fame, was recently working for a company where
he was developing a compiler from functional programs into categorical
semantics, which actually maps quite neatly onto the layout of a circuit. The
company went under, unfortunately, so I'm not sure of the status of his
research, but some remarks are available on his blog [0] [1] [2].

Further, there is the notion of a Generalized Arrow [3], a useful advanced
functional programming technique that is quite nice for implementing FRP.
These are fairly obviously "wiring diagrams", but they have been shown to be
in correspondence with a more conventional "lambda calculus"-like syntax.

[0] [http://conal.net/blog/posts/haskell-to-hardware-via-cccs](http://conal.net/blog/posts/haskell-to-hardware-via-cccs)

[1] [http://conal.net/blog/posts/overloading-lambda](http://conal.net/blog/posts/overloading-lambda)

[2] [http://conal.net/blog/posts/optimizing-cccs](http://conal.net/blog/posts/optimizing-cccs)

[3] [http://www.megacz.com/berkeley/garrows/](http://www.megacz.com/berkeley/garrows/)

~~~
cfsc
You have CλaSH [0], based on Haskell and developed at the University of
Twente, which has the same goal.

[0] [http://www.clash-lang.org/](http://www.clash-lang.org/)

------
varelse
Right now, FPGAs deliver slightly better perf/W than GPUs but significantly
inferior FP32 performance. That said, they do shine for simple embarrassingly
parallel tasks, assuming the task does not require FP32 operations. A good
example is bitcoin mining.

That changes this year with the release of Altera's Arria 10, which comes with
1.3 TFLOPS of hardened FP32 math units. The other gamechanger is the ability
to write kernels in OpenCL rather than VHDL.

However, the build time for OpenCL on FPGAs is still many hours, compared to
tens of seconds for GPUs, so my guess is that GPUs will remain the development
platform for FPGA code for the foreseeable future, with deployment to the FPGA
happening perhaps daily.

Drifting a little off-topic: the SDKs and libraries for all the accelerators
should be free. That's what makes me such a CUDA fanboy. I'm looking at you,
Intel. Charging for your Xeon Phi/CPU OpenCL compiler? Bzzzzt, wrong...

------
nkurz
_A sequel is in the making, titled "Why you won't buy an FPGA"_

Did the followup article ever get written? I didn't find anything with a
search for the proposed title on either his site or Google.

~~~
solve
The reason is that GPUs gained more general-purpose capabilities. GPUs are the
new FPGA, for all practical purposes.

~~~
_yosefk
Author here.

I didn't write the follow-up yet because it requires a lot of research and I'm
busy :-)

If you ask me, GPUs are anything but "the new FPGA". GPUs are the least
efficient hardware accelerator out there, but also the most accessible to the
largest number of programmers. FPGAs are _much_ more efficient than GPUs on
DSP workloads and GPUs are useless for I/O while FPGAs are a godsend. On the
other hand, FPGAs have a ton of problems GPUs don't have. The two do not look
similar to anyone who cares about accelerators, any more than snow and ice
look similar to someone living in a place where they get to see both... though
the two might seem similar to people from hot places where neither is common
(or perhaps if they saw one but never the other).

~~~
solve
But are they the most efficient in cost-per-computation? For all the major
data crunchers I'm familiar with, doing either finance or scientific
calculations, that's the only metric they cared about.

The only place I can see the cost-per-computation metric not mattering is in
space satellites. Am I way off?

~~~
_yosefk
Do you get more throughput per dollar with FPGAs relative to GPUs? Most
certainly, except for floating point stuff, especially double precision.
(Finance would care much less than scientific computing, and I think FPGAs are
way more prominent there.)

~~~
solve
You sure? The ones I'm personally familiar with are investment banks that have
hundreds of thousands of computers doing machine learning modeling. They ran
the costs, and found GPUs to be far more cost effective.

~~~
_yosefk
Machine learning software will tend to use floating point, hence the result
IMO. In HFT for instance I'd expect things to be the opposite.

------
jychang
I am currently just another college student, but I have some experience in
this field, having interned at Xilinx (which is the largest FPGA maker right
now, iirc, although Altera may have taken the crown).

I don't think your typical college CS student, and by extension the average
programmer, would be interested in using FPGAs right now. This isn't an issue
of performance, or cost, or lack of use cases - FPGAs are quite fast now and
can certainly be extended to new niches that are currently CPU-bound. The
issue isn't related to any of the advantages stated in this article; it's that
the FPGA toolchain is still mired in the 1970s, a world dominated by the EE
ecosystem that modern CS sprouted from.

Building things for an FPGA simply - for lack of a better word - sucks. There
is a lack of beginner tutorials and of free (non-proprietary) implementation
tools. Free example code and libraries are severely lacking. There are few
ways to share code, like GitHub, and much weaker community help.

Richard Stallman may be overzealous, but his impact on programming is striking
if you compare it to what could have been - just look at the world of
electrical engineers. Working on FPGAs now is crippling when you are used to
coding in the 21st century. Imagine a world of programming without the GCC
compiler, with few libraries to build on, without GitHub, without
Stackoverflow. And THAT is the reason why FPGA adoption is low.

There are currently over 400,000 questions on Stackoverflow tagged "python",
and over 40,000 questions tagged "mongodb". [1][2] In contrast, there are
fewer than 2,000 questions for the Verilog language, and fewer than 500 for
Xilinx - even fewer for Altera. [3][4]

I have helped organize several hackathons at my university, the largest of
which drew over 2,000 people from across the USA. Random hacks that innovate
are encouraged, and plenty of other obscure platforms get used, yet very few
people use FPGAs. This is indicative of how difficult it is for typical
programmers to dive into FPGAs, and it will be a hard problem to solve.

[1]
[http://stackoverflow.com/questions/tagged/python](http://stackoverflow.com/questions/tagged/python)

[2]
[http://stackoverflow.com/questions/tagged/mongodb](http://stackoverflow.com/questions/tagged/mongodb)

[3]
[http://stackoverflow.com/questions/tagged/verilog](http://stackoverflow.com/questions/tagged/verilog)

[4]
[http://stackoverflow.com/questions/tagged/xilinx](http://stackoverflow.com/questions/tagged/xilinx)

~~~
vonmoltke
One of the biggest problems is how insular and impenetrable that community is.
The FPGA community, and to a certain extent the entire semiconductor industry,
seems to have a prescribed path for engineers.

First, you get an electrical or computer engineering degree with a couple
internships at semiconductor companies. Then, you get a junior role at one of
those companies where you are mentored by senior engineers, who pass on the
black arts of chip design, tool usage, Verilog/VHDL quirks, and such. You
slowly move up the ranks, either at your first company or another
semiconductor company, until you are the senior engineer mentoring new grads.
Then you retire.

Any deviation from this path and you're screwed. You don't come into this
clique from the outside, and they won't let you back in if you move too far
away. It has resulted in a self-reinforcing feedback loop: this attitude is
bolstered by how odd, archaic, and inaccessible the tooling is, but it also
serves to keep different approaches out and to keep the tooling odd, archaic,
and inaccessible.

Disclaimer: I'm a disgruntled EE and systems software developer who gets a
steady stream of pings for web and NLP dev jobs but can't get the time of day
from a hardware company.

~~~
gluggymug
I was one of those semiconductor guys for about 15 years but now I am in the
retirement phase. I worked for Motorola, Freescale, Canon and Qualcomm. Did
the FPGA stuff at Canon.

Believe it or not, they let people in all the time. And they let you back in.
I got my Qualcomm gig after ~2 years of being out of tech entirely.

I reckon it's because of that crappy tooling thing you mentioned. Years later,
nothing has really changed, hence I was able to jump back in pretty easily.
Things haven't really changed in 10 years, IMO. Tool vendors just pump out the
same crap.

I wouldn't worry about working for a HW company. The job is nothing special.

The main point is the engineering aspect: you start with the problem, then
weigh up the pros and cons of each solution. Most problems require a
specialist in the domain, e.g. DSP, image processing, telecommunications -
stuff that requires fast computation at a low level. That can be a way for
outsiders to get in; they learn the HW stuff as they go.

Stuff like web and NLP (Natural Language Processing, I assume?) is a little
bit too high-level for most HW engineering work, unfortunately.

~~~
vonmoltke
> I wouldn't worry about working for a HW company. The job is nothing special.

It's more the nature of the work. I want to work on hardware, but not for a
defense contractor again. I'm reasonably good at the CS-ey stuff I'm doing
now, but I don't really like it. I like making physical devices do things.

> Stuff like web and NLP (Natural Language Processing I assume?) is a little
> bit too high level for most HW engineering work unfortunately.

Until four weeks ago I had never done any serious web dev, and NLP was just
the first escape path that presented itself when I desperately wanted out of a
defense contractor black hole. I am, fundamentally, a signal processing
engineer. Even when I was doing real-time radar code nobody wanted to talk to
me.

~~~
gluggymug
Radar is a bit niche. Embedded signal processing SW might be a good avenue.
Verification is another one.

Like any job, HW companies either want experienced people or grads to do the
crap work. You need a related skill-set to go for experienced positions.

FPGA hobbyists are not considered to be experienced.

------
catern
What I'm having trouble understanding (as just a regular, not-hardcore
programmer) is this: What end-user advantage will cause people to buy devices
that include FPGAs? What use are they to a consumer? What interesting programs
or devices can be created with an FPGA that can't be done otherwise?

Basically, if "everyone can easily create whatever custom objects they need!"
is the utopian vision for 3D printers, and "everyone can self-host their
services and protect their privacy and freedom!" is the utopian vision for a
home server[0], what is the utopian vision here?

[0] Such as one running [https://sandstorm.io/](https://sandstorm.io/)

~~~
davexunit
> What end-user advantage will cause people to buy devices that include FPGAs?

Well, the "consumer" (a term I do not like) would never even know there was an
FPGA in their device.

~~~
catern
Sure, but what would cause manufacturers to start putting those FPGAs in those
devices?

------
gchadwick
I think one issue here is that to achieve the great performance gains FPGAs
can provide, you do need to treat it like a proper HW design. C-to-RTL exists,
but to get the best out of it you're basically using C as syntactic sugar for
Verilog/VHDL.

So the overhead of programmability is higher than for a DSP/GPU, if you want
to gain over and above them.

I do wonder if there's a sweet spot in here somewhere: basically a sea of DSPs
or GPU cores (maybe better to call them stream processing units) with
programmable wiring, and some specialised ALUs/FP units mixed in that can also
use the wiring, allowing you to create special one-cycle fixed-function ops.

~~~
TorKlingberg
As a C programmer, I don't think C is suitable for such a fundamentally
parallel architecture as an FPGA. I would prefer a higher-level language,
maybe a declarative one, where it is up to the compiler to lay out the
parallel operations. Unfortunately, the engineering-culture gap from HDL to a
high-level language is much wider than the gap to C.

~~~
sklogic
Actually, it's easier and more natural to translate a (domain-specific) high-
level language into RTL than to try to fit in an alien, so-called "general-
purpose" C.

I'm using a language which translates into Verilog but features native support
for expressing things like FSMs, pipelines, buses, and FIFOs - and something
as simple as this makes HDL coding much simpler and much less error-prone than
conventional low-level RTL.
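
As a purely hypothetical illustration of the approach (not the commenter's
actual language): the FSM can be a declarative table in the host language,
with the Verilog case statement, state encoding, and register updates
generated mechanically.

    # Hypothetical sketch: a tiny FSM "DSL" as a Python table, emitted as Verilog.
    FSM = {
        "IDLE": [("start", "BUSY")],  # (condition signal, next state)
        "BUSY": [("done", "IDLE")],
    }

    def emit_fsm(table):
        enc = {s: i for i, s in enumerate(table)}  # pick a state encoding
        lines = ["always @(posedge clk) begin", "  case (state)"]
        for state, arcs in table.items():
            for cond, nxt in arcs:
                lines.append("    %d: if (%s) state <= %d;  // %s -> %s"
                             % (enc[state], cond, enc[nxt], state, nxt))
        lines += ["    default: state <= 0;", "  endcase", "end"]
        return "\n".join(lines)

    print(emit_fsm(FSM))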

~~~
azeirah
This is because C, as I understand it, is a language that best describes the
fundamental workings of processors, letting you work with a thin abstraction
layer over the CPU/memory architecture.

FPGAs don't have this traditional CPU/memory architecture, and as a result, C
fits them poorly.

~~~
sklogic
Exactly. C assumes too much about the hardware semantics: it's got a peculiar
memory model, sequential execution, and all that. There are tricks, of course,
that allow you to get out of the C box a little bit - e.g., using multiple
address spaces to simulate distinct memory and memory-like blocks - but it's
still unnatural and does not make hardware description any simpler than doing
it manually at the RT level.

------
TuringTest
Is there a possibility of using just-in-time compilers to port software to
FPGAs? I can see how that could be used to accelerate applications written in
bytecode languages (i.e. most everything on your mobile phone) to near-
assembly speeds.

~~~
kyboren
What jbangert said is the crux of the JITing problem: "compilation" for FPGAs
is not really at all like compiling software, and can be hugely
computationally expensive. On top of that, pretty much the entire synthesis,
mapping, and place and route flow is usually proprietary.

That said, there have been some efforts in this direction. Search the
literature for "warp processing".

~~~
TuringTest
Back in the day I learned about using VHDL to design circuits; is that the
kind of compilation also used for FPGAs?

That language paradigm is close to modern dataflow oriented programming
languages. It should be theoretically possible to compile an application
program written in that style and have the logical circuits in the FPGA be a
direct mapping of the business logic in the application.

~~~
kyboren
> Back in the day I learned about using VHDL to design circuits, is that the
> kind of compilation also used for FPGAs?

FPGAs can implement any digital circuit which fits, with some caveats. So yes,
VHDL is commonly used to describe designs for FPGAs. I'm not sure what you
mean by '[that] kind of compilation', though, as you never specify what kind
of compilation you mean.

Implementing a circuit on an FPGA involves many of the same, or similar, steps
and processes as implementing it on an ASIC (and obviously, does not involve
all the physical design stuff). If that's what you're asking.

> That language paradigm is close to modern dataflow oriented programming
> languages. It should be theoretically possible to compile an application
> program written in that style and have the logical circuits in the FPGA be a
> direct mapping of the business logic in the application.

Sure, of course it's theoretically possible; there's nothing you can do in
digital logic you cannot do on a UTM (space/memory constraints aside), and
vice-versa. This field is known as "High-Level Synthesis", and it's been
around for decades. It's only now becoming sort-of widely used for real-world
stuff.

But it's nothing like "directly translating" to logic for the FPGA ;). It's
probably easier for most programmers to design in such a language vs. an HDL,
but AFAIK that's not even close to the biggest challenge with doing HLS. I
think timing and resource allocation are much bigger problems (but I'm not
very informed about HLS!).

Look through the HLS literature if you're interested in what's involved - I'm
not an expert by any means. Also, if you're interested in alternative
programming languages for hardware design, check out Bluespec and Chisel.

------
jonaldomo
I think FPGAs are an untapped technology. I would love a book by Manning or
Pragmatic called "FPGAs in Action" or "101 FPGA Projects". I have no idea what
to do with one besides try to make a bitcoin miner.

~~~
anigbrowl
I have several FPGA-based devices, in hybrid digital-analog synthesizers.
FPGAs allow rapid reconfiguration of signal routing between analog components,
which deliver much nicer sound quality than DSPs for many requirements,
especially filtering. There are also Field Programmable Analog Arrays, but I
don't have any of those yet.

~~~
chillingeffect
I would love to play with those for synths/audio/effects! Any recommendations
on where to look? I check every few years, and products come and go. Just now
I found this, which looks intriguing! I'd love to see more about these!

[http://anadigm.com/apex-devkit.asp](http://anadigm.com/apex-devkit.asp)

~~~
diamondman
Check out the Milkymist [http://m-labs.hk/m1.html](http://m-labs.hk/m1.html)

------
fixxer
Suppose I wanted to get started with FPGAs for the purpose of computation (no
interest in control or actuation of sensors/devices). What would be the best
starting board for less than $300?

I'm not a student, but I can probably find one if your suggestion is a student
dev board.

~~~
charlesnw
I've recently purchased the Parallella board:

[http://www.parallella.org/](http://www.parallella.org/)

It's got USB/Ethernet/an ARM CPU, and has been incorporated into numerous
other projects. It's ~$100.00 USD.

~~~
sitkack
I second the Parallella: not only does it have an FPGA, it also has an ARM
core and a massively parallel CPU. There's so much to play with, and it still
runs Linux, so the workflow is super smooth.

------
mavis
Xilinx is working on tools to make FPGA development easier for traditional
software developers.

[http://www.xilinx.com/products/design-tools/sdx.html](http://www.xilinx.com/products/design-tools/sdx.html)

------
devonkim
I wrote a number of IP cores for FPGAs over 10 years ago, and the sad fact is
that the same problems and criticisms that led me to forget about using them
anywhere other than at some big company are still very relevant today.
Problems include:

* Hostile toolchains for independent developers/engineers, and a lack of
reasonably useful free tools: simulators, synthesis tools, the works. iVerilog
has been lame in my experience, for example.

* Expensive hardware that's artificially inflated in price due to lack of
commodity pressure (perhaps lack of competition in general).

* Toolchains that are unfriendly, with bad UX, for traditional software
developers. I read a Linux Journal article featuring a Python-like HDL -
MyHDL - and its traction is what I suspected it would be: irrelevant to anyone
but hobbyists, and hobbyists have basically zero input into this industry,
unlike in software. After working with different compiler vendors and doing
some research in the space (GPGPUs completely destroyed the market, as I
figured back then, because software developers' use cases tend to be very
narrow uses of FPGA/ASIC capabilities - parallel processing or custom FPUs for
boutique DSPs being typical), I've seen rather little progress compared to the
leaps in software productivity. Even the recent Scala-based Chisel language
seems a bit trite, more research-ware that won't be adopted by manufacturers,
probably because...

* Most FPGA- and ASIC-targeted use cases for HDLs are aimed at _hardware_
engineers first and foremost, not people who are software engineers first.
When people get specs on hardware, nobody really cares about the HDL code
written so much as whether the SOC has a lot of test benches; they typically
look at the block diagrams instead of the HDL, and things tend to be developed
as black boxes due to how IP cores work (per industry conventions).

* Compilers for hardware languages have had tons of issues for a long, long
time, because expressing bit math and signals (especially with the analog
extensions to VHDL and Verilog) has been really cumbersome in imperative-style
languages. Historically, we've gotten far more concise, legible results by
using conventions from Matlab and R than from C and Ada, but Verilog and VHDL
are just too damn dominant, and most experienced electrical engineers who are
hardware gurus really do not like learning a lot of new languages and
semantics, unlike how software developers typically behave.

However, I've always felt that hardware engineers and software engineers
should talk a lot more, and the ever-increasing gap has been a disappointment
to me. I'm totally on board with extensive testing for software, but at the
same time I understand that software is typically designed with the
expectation that things change frequently, while hardware is written to be
reliable first and to go through insanely rigorous testing, because you cannot
patch hardware. Most widely distributed IP cores have at least 2.5 times as
much test bench/harness code as code that actually does the work. This is
possible at least in part because hardware has much more rigorous
specifications than software typically does. Haskell's QuickCheck would be
totally helpful for how a lot of hardware is tested, because nowadays you
can't test all the combinations of signals on, say, a 128-bit bus in a
reasonable amount of time. So test benches are full of statistical functions
and are sometimes even generated with machine learning techniques to exercise
the use cases and states most likely to fail. All of this typically goes out
the window if you switch languages. That kind of attention to testing is
almost unheard of in software, in my experience.
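
For software folks, the closest analogue in Python is the hypothesis library
(a QuickCheck descendant). Here's a minimal sketch, assuming only a simple
behavioral model of the device under test: checking properties of a 128-bit
adder on randomly drawn operands, since enumerating all 2^256 input pairs is
hopeless.

    from hypothesis import given
    from hypothesis.strategies import integers

    BUS = 128
    word = integers(min_value=0, max_value=2**BUS - 1)

    def adder_model(a, b):
        # Behavioral model of the DUT: 128-bit addition with wraparound.
        return (a + b) % (2**BUS)

    @given(word, word)
    def test_adder_properties(a, b):
        assert adder_model(a, b) == adder_model(b, a)      # commutativity
        assert adder_model(a, 0) == a                      # identity
        assert adder_model(a, (2**BUS - a) % 2**BUS) == 0  # carry-out wraps

    test_adder_properties()  # hypothesis runs and shrinks many random cases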

A trivial-seeming example of how HDLs are hostile to software developers is
what the equivalent of a C switch statement does in an HDL when you omit the
default case (tremendous apologies, I haven't written Verilog in 10 years):

    
    
      // Combinational block. Note: 'input' and 'output' are reserved
      // words in Verilog, so the signals are renamed in, inB, out.
      always @(*) begin
        case (in)
          2'b10: out = in & inB;
          2'b01: out = in ^ inB;
        endcase
      end
    

vs.

    
    
      always @(*) begin
        case (in)
          2'b10: out = in & inB;
          2'b01: out = in ^ inB;
          default: out = 2'b00;  // covers the 2'b00 and 2'b11 selector values
        endcase
      end
    

Omitting the default case causes the synthesizer to infer a latch - in the
first version, out must hold its previous value whenever in is 2'b00 or 2'b11,
and holding a value requires storage - so the two are VERY different, and this
will punish software developers who like to take shortcuts (my Perl background
bit me real hard trying to take shortcuts in the Verilog compilers I used
before; I was even more surprised to see my C-style macros failing after only
2 levels of indirection when trying to make my cores more configurable,
forcing me to rewrite my cores often). Unlike in software, where something
like whether a switch statement becomes a jump table or a series of if
statements typically isn't make-or-break, in hardware the difference between a
single clock cycle being used up or not is massively important.

After basically yelling at my compiler and tools one time too many for doing
things that defied their documentation, waiting hours for place and route to
finish (it IS NP-hard, after all), and uncovering so many edge cases in FPGAs
that required clock trees and manual refinement of the resulting FPGA image
file, I realized that this just wasn't going to become a much better
experience, because these levels of concern are what hardware engineers are
responsible for tuning themselves, and you just can't abstract and delegate
them away.

I asked my boss back then why Intel shouldn't enter the market, and he
responded, "Because if it's commoditized, the entire industry will lose all
its margins - the market for FPGAs is primarily hardware engineers at large
companies." It took over 10 years for Intel to finally buy Altera, it seems...
and in the meantime we had an incredible number of things happen across the
spectrum while FPGAs kept practically the same user experience and fundamental
architecture (nothing like SMT happened in FPGA land, and no really
revolutionary compilers arrived to make writing hardware super easy).

There are a gazillion more reasons why we'll see more people using the
Raspberry Pi and Arduino to learn to tinker with hardware, and Xilinx, Altera,
etc. have always been the way they are now, so my decision to just never go
back to the fun world of FPGAs seems to have been justified, sadly.

------
pimlottc
(2013)

