This part is incorrect. Unless you need functionality specific to workstation & server GPUs, it can be an order of magnitude cheaper to buy a high-end gaming GPU. Cryptominers normally use motherboards with 12+ PCIe slots, x1-to-x16 riser adapters, and consumer cards: https://www.youtube.com/watch?v=B6gB7GYkkiA
The same performance/dollar math applies to anyone else who needs to do GPU processing and doesn't need:
* double-precision math
* a high-bandwidth connection between the GPU and the host (or between GPUs)
* a setup that doesn't violate EULAs
* a warranty
In my testing with raytracers, there is around a 5-15x improvement in perf/$ with consumer cards.
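To make that concrete, here's the back-of-envelope version of that perf/$ math, using approximate spec-sheet FP32 throughput and launch prices (swap in whatever cards and street prices actually apply to you):

```python
# Rough perf/$ comparison for FP32-heavy workloads like raytracing.
# Figures are approximate spec-sheet TFLOPS and launch prices, not
# measurements; street prices will move these numbers around.
cards = {
    "GTX 1080 Ti (consumer)":  {"fp32_tflops": 11.3, "price_usd": 699},
    "Tesla V100 (datacenter)": {"fp32_tflops": 14.0, "price_usd": 9000},
}

for name, c in cards.items():
    gflops_per_dollar = c["fp32_tflops"] * 1000 / c["price_usd"]
    print(f"{name}: {gflops_per_dollar:.1f} GFLOPS/$")
```

With these rough numbers the consumer card comes out ~10x ahead on raw FP32 per dollar, squarely inside the 5-15x range above.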
This was a reference to what you're talking about.
Meanwhile an ASIC is useless outside of mining.
I'm sure there's still a demand for them, because it's still possible to turn a profit over the lifetime of the card; it's just not as high as that of the 1070.
I do believe Nvidia offers to sell at MSRP with per-household limits, yet even then those are not the cards many home builders prefer.
a) AMD to offer something mid-market for a low price and steal a load of market share back from Nvidia while Nvidia concentrates on the $900-plus market. I suppose AMD's CPUs with onboard graphics are sort of like this.
b) A new entrant?
The most expensive Tesla accelerator I see in this list has 16 GB, which sounds like a lot, but it's 'only' twice as much as the consumer-level GPU I have in my PC at home, and it seems like a pretty paltry amount for large-scale simulations. I'm thinking about volumetric modeling applications, for example, where storing a dense voxel grid as a volume texture would easily run over 30 GB even for quite modest grid size/resolution (e.g. 2K x 2K x 2K grid). It's exactly these kinds of things where a GPU accelerator could shine, without the need to implement sparse sampling solutions or acceleration data structures that would be complex and severely impact performance on GPU architectures.
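For a sense of scale, the dense-grid arithmetic is straightforward (one float32 per voxel assumed here; multi-channel fields only make it worse):

```python
# Memory footprint of a dense 2K x 2K x 2K voxel grid stored as a
# volume texture, assuming a single 32-bit float per voxel.
grid = 2048          # cells per axis
bytes_per_voxel = 4  # one float32 channel

total_bytes = grid ** 3 * bytes_per_voxel
print(f"{total_bytes / 1e9:.1f} GB ({total_bytes / 2**30:.1f} GiB)")
# -> 34.4 GB (32.0 GiB), already more than double a 16 GB Tesla
```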
1) Use desktop memory (DDR), which allows much more capacity per pin but is also much slower per pin
2) Develop completely new memory just for your application
3) Put some kind of buffering between the chip and the RAM, greatly increasing latency.
None of these is currently that appealing to the manufacturers. HMC was originally going to do (3) by doing (2), but HMC appears stillborn, as all the momentum went to HBM instead, and HBM cannot increase the memory per pin nearly as much as HMC could.
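Some ballpark numbers to illustrate the trade-off (all rough, generation-dependent spec-sheet figures; order-of-magnitude only):

```python
# Capacity-per-pin vs bandwidth-per-pin for common memory types.
# All figures are rough illustrative values, not exact numbers.
mem = {
    #                data pins, GB behind them, Gb/s per pin
    "DDR4 channel": (64,   64, 3.2),   # two hypothetical 32 GB DIMMs
    "GDDR5 chip":   (32,    1, 8.0),   # one 8 Gb part
    "HBM2 stack":   (1024,  8, 2.0),
}

for name, (pins, gb, gbps_pin) in mem.items():
    print(f"{name}: {gb / pins * 1024:6.1f} MB/pin, {gbps_pin} Gb/s/pin")
```

Desktop DDR buys orders of magnitude more capacity per pin at the lowest speed per pin; HBM goes the other way, spending a huge number of cheap on-interposer pins to get aggregate bandwidth.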
Agree about the complexity. However, adaptive grids can be a huge win in terms of performance. The GPU doesn't need to simulate the whole grid of 8G nodes; it simulates far fewer, and only in the areas of large gradients / higher frequencies of the simulated fields.
Here’s an old SpaceX presentation about their GPU simulations of rockets: https://www.youtube.com/watch?v=vYA0f6R5KAI
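In case it helps, here's a minimal sketch of the gradient-driven refinement idea (my own toy illustration, not SpaceX's actual method): start coarse and only mark cells for fine resolution where the field actually varies quickly.

```python
import numpy as np

def refine_mask(field: np.ndarray, threshold: float) -> np.ndarray:
    """Flag cells whose local gradient magnitude exceeds `threshold`."""
    gx, gy = np.gradient(field)
    return np.hypot(gx, gy) > threshold

# Hypothetical smooth field with one sharp feature (a ring at r = 0.5).
n = 512
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
field = np.tanh(50 * (x**2 + y**2 - 0.25))

mask = refine_mask(field, threshold=0.05)
print(f"cells needing fine resolution: {mask.sum()} / {n * n} "
      f"({100 * mask.mean():.1f}%)")
```

Only the cells near the sharp feature get flagged, which is exactly why the solver can skip the vast majority of a dense 8G-node grid.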
Pardon my ignorance, but with all the demand for graphics cards for cryptocurrency mining, why hasn't a viable ASIC become commonplace, so that miners stop grabbing up every available graphics card, leaving nothing for non-miners and driving up prices?
I'm hoping that mining moves more to ASICs and deep learning gets more desktop-based "tensor" options. This is a very strange market right now, with companies trying to appeal to everyone instead of specializing, and not really serving anyone well.
First, absolutely zero resale value. Once either the difficulty rises, or the value of the coin falls, to the point where you're spending more on power than you're generating in currency, your ASIC goes in the trash.
Second, the algorithm they use is baked in, so an ASIC built for Bitcoin can only be used for other SHA-256-based currencies. With a GPU, when you realise you're not cost-effective for BTC, you can try your hand at ETH instead.
Furthermore, in the 'alt' space (i.e. not-Bitcoin) there is a lot of competition, and you don't want to tie up large capital expenses in inflexible hardware. GPUs can mine whatever trendy coin is currently pumping, and miners switch between coins to maximize return.
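For anyone wondering what "the algorithm is baked in" means concretely: Bitcoin's proof of work is just double SHA-256 over an 80-byte block header, compared against a target. An ASIC hard-wires exactly that pipeline; a GPU swaps algorithms in software. A minimal sketch (the header and target below are dummy values):

```python
import hashlib

def btc_pow_ok(header80: bytes, target: int) -> bool:
    """Bitcoin-style check: double SHA-256, interpreted little-endian."""
    digest = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
    return int.from_bytes(digest, "little") < target

header = bytes(76) + (42).to_bytes(4, "little")  # dummy header + nonce 42
print(btc_pow_ok(header, target=1 << 230))
```

Ethash, Equihash, etc. are entirely different functions, so the SHA-256 silicon is dead weight for them, while a GPU just loads a different kernel.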
Pro tip: google "hacker news" + whatever you're interested in to find lots of high-quality comments about a topic on old articles. That is how I found the previous link, and it's what I do when looking into new technologies.
Given the CAPEX of the GPUs and the OPEX of the electricity, someone has probably worked out the rate of return for each of the cryptocurrencies.
If there's an expectation that the virtual currencies will soar in value, why wouldn't people buy all the cards they can, perhaps at two or three times the cost to mine?
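The back-of-envelope model is simple; every number below is a made-up placeholder just to show the shape of the calculation:

```python
# Toy rate-of-return model for a GPU mining rig. All inputs are
# hypothetical placeholders; plug in real hashrates, coin prices,
# and power tariffs from a profitability calculator.
capex_usd = 6 * 400            # six hypothetical GPUs at $400 each
power_kw = 6 * 0.15            # ~150 W per card under load
electricity_usd_per_kwh = 0.12
revenue_usd_per_day = 12.0     # from a mining profitability estimate

opex_per_day = power_kw * 24 * electricity_usd_per_kwh
profit_per_day = revenue_usd_per_day - opex_per_day
print(f"opex/day: ${opex_per_day:.2f}, profit/day: ${profit_per_day:.2f}")
if profit_per_day > 0:
    print(f"payback: {capex_usd / profit_per_day:.0f} days")
```

If you expect the coin's price to rise faster than difficulty, the revenue line goes up and paying a 2-3x markup on the cards can still pencil out.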