
> Is it really more power-efficient?

I would expect a huge difference for similar operations. A current-gen $300 GPU consumes 120 W doing 4.6 TFlops. A current-gen $300 CPU consumes 65 W doing 16 flops/cycle * 3.6 GHz * 8 cores ≈ 0.46 TFlops. GPU cores don't spend power minimizing latency (e.g. predicting branches), and they run at much lower frequencies.
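Rough back-of-the-envelope numbers, assuming "16 flops/cycle" means something like an 8-wide FMA counted as two flops per lane (a sketch in Python, not measured data):

    # Illustrative figures from the comment above; real parts vary.
    gpu_tflops = 4.6            # peak single-precision
    gpu_watts = 120.0

    cpu_flops_per_cycle = 16    # assumption: 8-wide FMA = 16 flops/cycle
    cpu_ghz = 3.6
    cpu_cores = 8
    cpu_tflops = cpu_flops_per_cycle * cpu_ghz * cpu_cores / 1000.0
    cpu_watts = 65.0

    print(f"CPU peak: {cpu_tflops:.2f} TFlops")                  # ~0.46
    print(f"GPU: {gpu_tflops * 1000 / gpu_watts:.0f} GFlops/W")  # ~38
    print(f"CPU: {cpu_tflops * 1000 / cpu_watts:.0f} GFlops/W")  # ~7

So even against peak CPU throughput, the GPU comes out roughly 5x better in flops per watt on these numbers.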

Also, some parts of the pipeline are implemented in fixed-function hardware (rasterizer, texture units, alpha blending, Z buffer). They don't cost any shader flops, and the hardware implementation is likely even more power-efficient than shader cores. E.g. if you want a gradient, load or compute per-vertex colors and the rasterizer will interpolate them across the triangle in hardware.
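For illustration, here's roughly what the rasterizer does for that gradient: interpolate per-vertex colors with barycentric weights. This is a plain-Python sketch of the math, not an actual graphics API:

    def barycentric(p, a, b, c):
        # Barycentric weights of point p w.r.t. triangle (a, b, c).
        (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
        denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
        w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
        w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
        return w0, w1, 1.0 - w0 - w1

    def interpolate_color(p, verts, colors):
        # Color at p = weighted sum of the per-vertex colors.
        w = barycentric(p, *verts)
        return tuple(sum(wi * ci for wi, ci in zip(w, channel))
                     for channel in zip(*colors))

    verts = [(0, 0), (10, 0), (0, 10)]
    colors = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # red, green, blue corners
    print(interpolate_color((3, 3), verts, colors))  # (0.4, 0.3, 0.3)

The fixed-function rasterizer does this per pixel (plus perspective correction) without touching the shader ALUs at all.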



