
GPUs today already routinely have 500+ cores and are doing quite well. There are very few real problems with these visions; they are realizing themselves as we speak.

Of course SIMD is not the same as 1K fully independent cores, but both are valid interpretations of parallel computing.

That kind of architecture only lends itself to a certain class of problems, but it's a good indication of what can be done to circumvent the interconnect limitations.

A typical GPU hides access to slow resources, such as global memory, by keeping enough other computation in flight to cover the latency.
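
Roughly, in CUDA terms (a minimal sketch, not from the comment above; sizes and names are illustrative): you launch far more threads than there are cores, and the hardware scheduler switches to another warp whenever one stalls on a memory load.

    // Each thread does a tiny read-modify-write; the global-memory latency
    // is hidden because many warps are resident per SM and the scheduler
    // runs whichever one has its data ready.
    #include <cuda_runtime.h>

    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];   // this thread waits on its own load;
                                      // other warps execute in the meantime
    }

    int main() {
        const int n = 1 << 24;                // 16M elements, illustrative
        float *x, *y;
        cudaMalloc(&x, n * sizeof(float));
        cudaMalloc(&y, n * sizeof(float));
        cudaMemset(x, 0, n * sizeof(float));  // contents don't matter here
        cudaMemset(y, 0, n * sizeof(float));

        // Oversubscribe the SMs: far more threads than cores, so there is
        // always a warp ready to run while others wait on memory.
        int block = 256;
        int grid  = (n + block - 1) / block;
        saxpy<<<grid, block>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        cudaFree(x);
        cudaFree(y);
        return 0;
    }

The point is that occupancy, not single-thread speed, is what keeps the ALUs busy.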




True, a dual-card GPU system can have 1000 cores today; however, those cores primarily do computation on a small (or procedurally generated) data set. That makes them great for simulating atomic explosions, CFD, and Bitcoin mining, but for systems which read data, do a computation on it, and then write new results, they don't have the I/O channels to support that sort of flow effectively.

Back when I was programming GPUs, one issue was that the channel bandwidth between the CPU's memory and the video card was insufficient, so textures had to be maximally compressed or procedurally generated or you would starve the texture processing units waiting for their next bit of texel data. I'm not sure whether that is still the case with x16 PCIe slots, however.
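
A rough way to see that imbalance, assuming a CUDA-capable card (the sizes and numbers are illustrative, not from the comment above): time the host-to-device copy separately from the kernel. For streaming workloads with trivial compute per byte moved, the copy over the bus usually dominates.

    // Sketch: compare PCIe transfer time with kernel time for a
    // read-compute-write style workload.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void scale(int n, float *d) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] *= 2.0f;          // trivial compute per byte moved
    }

    int main() {
        const int n = 1 << 26;            // 64M floats = 256 MB, illustrative
        float *h = (float *)calloc(n, sizeof(float));
        float *d;
        cudaMalloc(&d, n * sizeof(float));

        cudaEvent_t t0, t1, t2;
        cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

        cudaEventRecord(t0);
        cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice); // over PCIe
        cudaEventRecord(t1);
        scale<<<(n + 255) / 256, 256>>>(n, d);
        cudaEventRecord(t2);
        cudaEventSynchronize(t2);

        float copy_ms, kern_ms;
        cudaEventElapsedTime(&copy_ms, t0, t1);
        cudaEventElapsedTime(&kern_ms, t1, t2);
        printf("PCIe copy: %.1f ms, kernel: %.1f ms\n", copy_ms, kern_ms);

        free(h); cudaFree(d);
        return 0;
    }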

-----



