"Vertex program" is a better term.
That brings us to "pixel shader." That's actually a good name for helping beginners learn the concept, but it's imprecise. OpenGL insists on calling it a "fragment program" because with certain forms of antialiasing, there are multiple "fragments" per pixel. "Program" is also a better name than "shader" because there are things you can do per-pixel other than change the color. For example, you could change the depth written to the Z-buffer, or you could cause the pixel to be skipped based on some criterion, like whether the texture color is pink.
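As a sketch of those two per-pixel tricks (the texture and variable names here are made up for illustration), a GLSL fragment program might look like:

```glsl
#version 330 core

uniform sampler2D tex;   // hypothetical texture supplied by the application
in vec2 uv;              // interpolated texture coordinate
out vec4 fragColor;

void main() {
    vec4 c = texture(tex, uv);
    // Skip this pixel entirely if its texture color is close to pink
    if (distance(c.rgb, vec3(1.0, 0.0, 1.0)) < 0.1)
        discard;
    fragColor = c;
    // Override the depth written to the Z-buffer (normally it's gl_FragCoord.z)
    gl_FragDepth = gl_FragCoord.z * 0.5;
}
```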
Anyway, it's just a tiny program that executes either per-vertex or per-pixel. For example, you could write a vertex program which moves each vertex in a sine wave pattern based on time. Or you could write a fragment program to change the color of each pixel from red to green and back based on time.
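Both of those examples are only a few lines of GLSL. A minimal sketch, assuming the application supplies a `time` uniform in seconds and an `mvp` matrix:

```glsl
#version 330 core
// Vertex program: displace each vertex along Y in a sine wave over time
uniform float time;   // seconds, supplied by the application
uniform mat4 mvp;     // model-view-projection matrix
in vec3 position;

void main() {
    vec3 p = position;
    p.y += 0.1 * sin(p.x * 10.0 + time);
    gl_Position = mvp * vec4(p, 1.0);
}
```

```glsl
#version 330 core
// Fragment program: fade every pixel between red and green over time
uniform float time;
out vec4 fragColor;

void main() {
    float t = 0.5 + 0.5 * sin(time);
    fragColor = vec4(1.0 - t, t, 0.0, 1.0);
}
```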
Then there are more advanced/recent concepts like a "geometry program," which lets you generate triangles based on vertices or edges.
Sometimes I wonder if it's overly complicated, or if the problem domain is just complicated. It took me years as a kid to finally grok this, but once I learned it, it turned out to be very simple. Honestly it wasn't until I got up enough courage to sit down with the OpenGL specs and read through them that everything clicked. They're dry reading but not difficult.
Edit: Is this seriously being downvoted? "How do you 'shade' a vertex? That implies color, when in fact vertex 'shading' is really about deforming the position of vertices. It has nothing to do with color!" is factually inaccurate because you can, and many games do, calculate color and lighting information per-vertex.
There used to be no pixel shaders, which is why vertex shaders had to be used to do shading. That was circa 2004 though. EDIT: This paragraph is incorrect, see comment below.
Also, the whole idea of "color" in a vertex shader is mistaken. There is only one color: the RGB that ultimately shows up on the screen. Until then, there are values which are passed from the main application source code to the vertex / pixel shaders, which then decide what to do with those values. Sometimes the pixel shader interprets them as a color, but they're really just floating point numbers. That may seem like a silly distinction, but again, if your mental model is incorrect as a newcomer then you're going to have a hard time.
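A tiny illustration of that point (all names here are invented): the vertex stage forwards three plain floats, and nothing about them is a "color" until the fragment stage decides to interpret them that way:

```glsl
#version 330 core
// Vertex stage: forward three plain floats to the rasterizer
uniform mat4 mvp;
in vec3 position;
in vec3 value;    // just numbers from the application, no color semantics
out vec3 v_value;

void main() {
    v_value = value;
    gl_Position = mvp * vec4(position, 1.0);
}
```

```glsl
#version 330 core
// Fragment stage: only here does anyone decide "value" means RGB
in vec3 v_value;
out vec4 fragColor;

void main() {
    fragColor = vec4(v_value, 1.0);
}
```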
Similarly, if you look at early, pre-shader PC graphics cards, like the Voodoo, these cards (kinda like the PlayStation) had nothing resembling a vertex shader, and had no knowledge of lighting calculations. They required the host CPU (acting as our "vertex shader") to provide them with completely lit and transformed vertices that they could then rasterize by naively interpolating and combining whatever color and texture coordinate values the host program had supplied to them.
Given this history, I contest the idea that it's clearly the pixel shader's responsibility to deal with all color and lighting calculations, and the vertex shader's responsibility is merely to geometrically transform vertices in space. I still agree that "shader" is a stupid and confusing term (I was learning about this stuff for the first time not long ago, and realizing that "shaders" were just GPU programs was an "aha! moment" for me too), I'm just disagreeing with the first paragraph of your original post.
^ it could do other stuff, like transform matrices to implement a basic matrix stack, but that run-on sentence was already running on long enough...
"The first shader-capable GPUs only supported pixel shading, but vertex shaders were quickly introduced once developers realized the power of shaders"
I actually very much disagree with your point about names. I write shaders when I have to; I'm not very good at it, but it feels fundamentally very broken, much as I picture the days of 16-bit x86 assembly (memory segmentation?).
There are arbitrary length, register, etc. limits per DX version; there's no clean cross-platform method of writing shaders; and the documentation is fragmented, vague, written for the wrong platform, or nonexistent.
Sure, the naming could be better, but page one of a decent textbook should set you straight. The other issues, not so much. Of course, that's more or less the price you pay for being on the bleeding edge of performance.
I write shaders when I have to; I'm not very good at it, but it feels fundamentally very broken, much as I picture the days of 16-bit x86 assembly (memory segmentation?).
This is exactly why I push people to try using OpenGL and avoid Direct3D. All of those problems are D3D problems, not shader problems.
GL has no arbitrary length limits, and it has extremely accurate and thorough documentation. If a program is too complex to execute properly on the given hardware, then it's executed properly in software. Some see that as a terrible thing, and sometimes it is, but in today's mega-GPU world it's becoming increasingly rare to write shader programs that are so complicated that they have to be emulated in software by the driver. Getting an accurate result seems much better.
The limits that you're referring to are artificial, because Microsoft mandates that if a pixel shader has more than N instructions then it shouldn't be allowed to compile, regardless of what the videocard is actually capable of doing. It's confusing, and the reason they decided to do it that way was for compatibility across a wide variety of hardware.
"A shader program consists of a vertex shader (VS), tessellation control shader (TS control), tessellation evaluation shader (TS eval), geometry shader (GS) and fragment shader (FS)."
You could instead say this, but it would be confusing:
A shader program consists of a vertex program, tessellation control program, tessellation evaluation program, geometry program and fragment program.
And it would even get more confusing if you drop the first shader:
"A program consists of a vertex program, tessellation control program, tessellation evaluation program, geometry program and fragment program."
So for the sake of it being easy to talk about, a (shader) program is the whole thing, whenever somebody talks about a "program" it's the whole assemblage. And when somebody talks about a shader, it means one of the programs tied to a stage.
It doesn't have to be this complicated. Humans just made it that way. I'm just trying to make sure everyone understands that there's nothing mysterious or even especially interesting about these terms. It's complicated like an internal combustion engine is complicated, not like math.
As I've gotten older it's become easier to think abstractly and accept that names sometimes have nothing to do with what things are. But when you're first starting out, it's natural to want to visualize everything you learn as what it sounds like.
Hm, not really. I learned more from messing with working demos and prototypes than studying theory. But YMMV.
It's absolutely true that the whole pipeline is very intimidating for newcomers, though.
I love that phrase... "the learning curve has a slope of 'wat'."
But even then, you could call them functions (which is practically a synonym for program in a sense, and also obviously recursive, in that functions call other functions).
Either would at least be less obviously 'wrong' in the sense GP meant it.
But I also think "kernel" is a stupid term for GPGPU programs, and definitely increases the barrier to learning what isn't really all that complicated of a thing (at least not at the intro level).
How do you shade a vertex? Read about Gouraud shading. The idea is to do the expensive lighting calculation only at the vertices, and interpolate in-between. The "vertex shader" would calculate the shading at the vertices.
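A rough sketch of Gouraud shading in GLSL (uniform and attribute names are assumptions): the vertex shader does the diffuse lighting calculation once per vertex, and the rasterizer interpolates the result across the triangle for free:

```glsl
#version 330 core
// Gouraud shading: compute diffuse lighting per vertex;
// the rasterizer interpolates v_shade across each triangle
uniform mat4 mvp;
uniform vec3 lightDir;  // assumed normalized, in model space
in vec3 position;
in vec3 normal;
out vec3 v_shade;

void main() {
    v_shade = vec3(max(dot(normalize(normal), lightDir), 0.0));
    gl_Position = mvp * vec4(position, 1.0);
}
```

The fragment stage then just outputs the interpolated `v_shade`, so the expensive math runs per-vertex instead of per-pixel.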
Obviously, you can use a vertex shader to do things other than shading, but "shader" is just a name, not a definition of all things this program can do.
Thank you. I have never understood the concept of a <thing>-shader; I just knew they were useful and somehow caused neat effects. Even having pasted some shader code into a program once (for a barrel distortion for my rift), I never thought much about them, likely because of the opacity of the name. Just ... magic.
Your description has just made the entire concept click. I'm now actually interested in learning about them, because it's such a simple, sensible idea. Thanks.
side-note: I wish the world had more one-sentence intuitive sum-ups of jargon-laden concepts, even if they are a little 'leaky' as abstractions. Just to give people disconnected from the topic a place to start thinking about it.
I think all the people defending the term are all quite invested in the area already, so it's obvious to them.
"Shader" was a term I was confused about at first when I was learning WebGL. In fact, in the WebGL world, it seems like one is expected to come from an OpenGL background and be familiar with whatever popular 3rd party libraries are en vogue. I never found a thorough guide that didn't presume one of those two things, so I built one:
(Sorry, also jumping on the self-promotion bandwagon.)
A year later, a new project comes along. It requires shaders. I don't remember how to use them and have to look it up again...
FragColor = <some algebraic expression containing x & y>
That's nice, but hardly useful. To do anything of worth, I would need data from the CPU. How do I do that? What are the most common bottlenecks? What are some ways around the limitation of working with one fragment at a time? Those are the sort of questions I would like to see answered in a primer. A sort of "GPU introduction for competent CPU devs" - any recommendations?
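For the narrow "how do I get data from the CPU" question: in GLSL the standard answers are uniforms (small values set per draw call, via `gl.uniform*` in WebGL) and textures (bulk per-texel data). A hedged sketch in WebGL2-flavored GLSL ES, with made-up uniform names:

```glsl
#version 300 es
precision highp float;
// Data from the CPU arrives as uniforms, set from JavaScript
// with calls like gl.uniform1f / gl.uniform2f before drawing
uniform float u_time;        // e.g. elapsed seconds
uniform vec2  u_resolution;  // canvas size in pixels
uniform sampler2D u_data;    // arbitrary per-texel data, not necessarily an image
out vec4 fragColor;

void main() {
    // Normalize this fragment's pixel coordinate to [0, 1]
    vec2 xy = gl_FragCoord.xy / u_resolution;
    fragColor = vec4(xy, 0.5 + 0.5 * sin(u_time), 1.0);
}
```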
Nevertheless, gonna just pimp my own tutorials since they also cover practical implementations, like blurs for desktop and mobile, normal mapping for 2D games, vignettes, etc.
Also gonna jump on the bandwagon with my own tutorials, using Java/LibGDX and mostly focusing on 2D applications.