
While I appreciate the effort, I think this primer is lacking what most other shader tutorials also lack: any information on how to actually do something useful. Shaders are used because they are faster than the CPU. So why isn't everything done with shaders? Because of limitations. So what are those? Most GPU tutorials only include examples of the form

FragColor = <some algebraic expression containing x & y>

That's nice, but hardly useful. To do anything of worth, I would need data from the CPU. How do I do that? What are the most common bottlenecks? What are some ways around the limitation of working with one fragment at a time? Those are the sort of questions I would like to see answered in a primer. A sort of "GPU introduction for competent CPU devs" - any recommendations?
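
To be concrete: I know the standard answer is "uniforms and textures", and the furthest most tutorials go is something like the sketch below (GLSL 3.30 assumed; names like uTime and uTexture are just placeholders of mine), which still leaves the interesting questions about transfer costs and access patterns unanswered.

    #version 330 core

    in vec2 vTexCoord;           // interpolated from the vertex shader
    out vec4 FragColor;

    uniform float uTime;         // per-frame scalar, set from the CPU via glUniform1f
    uniform sampler2D uTexture;  // bulk data, uploaded once via glTexImage2D

    void main() {
        // per fragment: read the CPU-supplied texture and modulate it
        vec4 texel = texture(uTexture, vTexCoord);
        FragColor = vec4(texel.rgb * (0.5 + 0.5 * sin(uTime)), texel.a);
    }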




Bottlenecks and multi-sample output are complex subjects, and also uncommon for a beginner to deal with.

Nevertheless, gonna just pimp my own tutorials since they also cover practical implementations, like blurs for desktop and mobile, normal mapping for 2D games, vignettes, etc. https://github.com/mattdesl/lwjgl-basics/wiki/Shaders
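
For a taste, the blurs are along the lines of a two-pass separable Gaussian: a fragment shader like the sketch below, run once horizontally and once vertically (written from memory, not copied from the wiki; uDirection would be set from the CPU to (1/width, 0) for the horizontal pass and (0, 1/height) for the vertical one).

    #version 330 core

    in vec2 vTexCoord;
    out vec4 FragColor;

    uniform sampler2D uTexture;
    uniform vec2 uDirection;   // texel-sized step along the blur axis

    void main() {
        // 5-tap Gaussian; offsets/weights chosen so the weights sum to ~1.0
        vec4 sum = texture(uTexture, vTexCoord) * 0.2270270;
        sum += texture(uTexture, vTexCoord + uDirection * 1.3846154) * 0.3162162;
        sum += texture(uTexture, vTexCoord - uDirection * 1.3846154) * 0.3162162;
        sum += texture(uTexture, vTexCoord + uDirection * 3.2307692) * 0.0702703;
        sum += texture(uTexture, vTexCoord - uDirection * 3.2307692) * 0.0702703;
        FragColor = sum;
    }

Doing it in two passes keeps the work per fragment proportional to 2n taps instead of n² for an n-wide kernel, which is why every practical blur tutorial structures it that way.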



