
The reason is that GPUs gained more general-purpose capabilities. GPUs are the new FPGA, for all practical purposes.



Author here.

I haven't written the follow-up yet because it requires a lot of research and I'm busy :-)

If you ask me, GPUs are anything but "the new FPGA". GPUs are the least efficient hardware accelerator out there, but also the most accessible to the largest number of programmers. FPGAs are much more efficient than GPUs on DSP workloads, and while GPUs are useless for I/O, FPGAs are a godsend. On the other hand, FPGAs have a ton of problems GPUs don't have. The two do not look similar to someone who cares about accelerators, any more than snow and ice look similar to someone living in a place where they get to see both... though the two might seem similar to people from hot places where neither is common (or perhaps if they've seen one but never the other).


But are they the most efficient in cost-per-computation? For all the major data crunchers I'm familiar with, doing either finance or scientific calculations, that's the only metric they care about.

The only place I can see the cost-per-computation metric not mattering is in space satellites. Am I way off?


Do you get more throughput per dollar with FPGAs relative to GPUs? Most certainly, except for floating-point stuff, especially double precision. (Finance would care much less than scientific computing, and I think FPGAs are way more prominent there.)


You sure? The ones I'm personally familiar with are investment banks that have hundreds of thousands of computers doing machine learning modeling. They ran the costs and found GPUs to be far more cost-effective.


Machine learning software will tend to use floating point, hence the result, IMO. In HFT, for instance, I'd expect things to be the opposite.


For any sort of mobile device, energy usage per computation is an important metric. Hence you have chips with multiple low-power modes that trade off computational power against power efficiency.


But an ASIC is 100x better than an FPGA for energy per computation. Know of any mobile devices that have FPGAs in them now?


It's not fair to compare a programmable circuit with a fixed-function circuit though, because programmability is often a requirement.


Sure: http://www.eejournal.com/archives/articles/20131118-lattice/

But you're right - when programmability isn't important you'd rather have an ASIC.


FPGAs can respond to signals in nanoseconds and talk directly to different peripherals (integrated circuits, SPI, DDR3, SATA, HDMI - whatever). State machines can "branch" on every clock cycle.

GPUs... usually respond in milliseconds. They usually can't talk to anything except the host, across the PCIe bus. State machines and branches... uh, yeah, don't do those on a GPU!
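
To make the branching point concrete, here's a rough CUDA sketch (a hypothetical kernel, not from any real codebase): when threads within the same 32-thread warp take different sides of a branch, the hardware executes both sides one after the other, so divergent control flow costs roughly the sum of both paths.

    // Hypothetical CUDA kernel, for illustration only: adjacent threads take
    // different branch paths, so every warp diverges and runs both the if and
    // the else bodies serially instead of in parallel.
    __global__ void divergent_kernel(const int *in, int *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (i % 2 == 0)
            out[i] = in[i] * 3;   // even-indexed threads in the warp run this...
        else
            out[i] = in[i] - 7;   // ...then the odd-indexed ones run this
    }

An FPGA state machine, by contrast, just takes a different transition on the next clock edge; there's no warp to serialize.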

Maybe tile CPUs will give FPGAs a fight in the future in some market segments. Easy (well, easier than FPGA) programmability and potentially good I/O. The Transputer was so amazing back in the day, 30 years ago. Maybe Tilera and such will eventually succeed?

Anyways, FPGAs and GPUs are very different beasts.




