Actually, the per-vertex color is interpolated across the triangle and passed into the pixel shader, which then decides what to do with it. The vertex shader itself has nothing to do with color beyond calculating an initial value and handing it down to the pixel shader. The pixel shader can be set up to interpret that value as anything: it could treat it as a grayscale color, invert it, use only one of the four channels, and so on.
There used to be no pixel shaders, which is why vertex shaders had to be used to do shading. That was circa 2004 though. EDIT: This paragraph is incorrect, see comment below.
Also, the whole idea of "color" in a vertex shader is mistaken. There is only one color: the RGB that ultimately shows up on the screen. Until then, there are values which are passed from the main application source code to the vertex / pixel shaders, which then decide what to do with those values. Sometimes the pixel shader interprets them as a color, but they're really just floating point numbers. That may seem like a silly distinction, but again, if your mental model is incorrect as a newcomer then you're going to have a hard time.
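To make that concrete, here's a rough sketch of what I mean (old-style GLSL 1.20 embedded as C string literals, the way a host program would carry it; the names aValue/vValue are made up for illustration). The application supplies plain floats, the vertex shader just passes them along, and the fragment ("pixel") shader alone decides how to interpret them:

```c
/* Hypothetical illustration: the app hands the vertex shader four floats,
 * the vertex shader passes them through, and only the fragment ("pixel")
 * shader decides what those numbers mean. */
static const char *vertex_src =
    "#version 120\n"
    "attribute vec4 aValue;   /* just four floats supplied by the app */\n"
    "varying   vec4 vValue;   /* interpolated across the triangle     */\n"
    "void main() {\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "    vValue = aValue;     /* no notion of 'color' anywhere here   */\n"
    "}\n";

static const char *fragment_src =
    "#version 120\n"
    "varying vec4 vValue;\n"
    "void main() {\n"
    "    /* Any of these interpretations is equally valid:            */\n"
    "    /*   gl_FragColor = vValue;                   straight color */\n"
    "    /*   gl_FragColor = vec4(1.0 - vValue.rgb, 1.0);   inverted  */\n"
    "    gl_FragColor = vec4(vec3(vValue.r), 1.0); /* one channel, as grayscale */\n"
    "}\n";
```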
If you go back to what is arguably the earliest hardware that at all resembled the modern 3D pipeline (it predates DirectX, and the DirectX pipeline was greatly inspired by it, according to some article I read on here a while back by one of the guys responsible for the DirectX project), the original PlayStation, it mostly consisted of two parts: a "geometry engine" embedded in the CPU, which acted like a cross between a SIMD unit and a modern vertex shader ^(it took as inputs a vertex, a light vector, and a transformation matrix, and produced a transformed vertex with an associated color according to a basic diffuse lighting calculation), and a "dumb" rasterizer chip (which knew nothing about 3D; it just took basic 2D/"screenspace" vertex coordinates, color values, and texture coordinates and naively combined them with no regard for perspective correctness). There was a clear distinction here: the "vertex shader" did the "smart" work of producing a color from a simple lighting model, and the "pixel shader" did absolutely nothing but interpolate whatever vertex colors and texture values it had been supplied, then combine them straightforwardly to produce a final pixel color.
Similarly, if you look at early, pre-shader PC graphics cards like the Voodoo, these cards (kinda like the PlayStation) had nothing resembling a vertex shader and no knowledge of lighting calculations. They required the host CPU (acting as our "vertex shader") to provide them with completely lit and transformed vertices, which they could then rasterize by naively interpolating and combining whatever color and texture coordinate values the host program had supplied.
Given this history, I contest the idea that it's clearly the pixel shader's responsibility to deal with all color and lighting calculations, and the vertex shader's responsibility is merely to geometrically transform vertices in space. I still agree that "shader" is a stupid and confusing term (I was learning about this stuff for the first time not long ago, and realizing that "shaders" were just GPU programs was an "aha! moment" for me too), I'm just disagreeing with the first paragraph of your original post.
^ it could do other stuff, like transform matrices to implement a basic matrix stack, but that run-on sentence was already running on long enough...
Thanks for the interesting background. I am surprised to learn that the PS1 didn't use perspective-correct texturing. Textured rectangular surfaces like walls look terrible without it.
"The first shader-capable GPUs only supported pixel shading, but vertex shaders were quickly introduced once developers realized the power of shaders"
I actually very much disagree with your point about names. I write shaders when I have to, and I'm not very good at it, but the whole thing feels fundamentally broken, much like how I picture the days of 16-bit x86 assembly (memory segmentation?).

There are arbitrary length, register, etc. limits per DX version; there's no clean cross-platform method of writing shaders; and the documentation is fragmented, vague, written for the wrong platform, or nonexistent.
Sure, the naming could be better, but page one of a decent textbook should set you straight. The other issues, not so much. Of course, that's more or less the price you pay for being on the bleeding edge of performance.
Oh! You're right, I was mixed up about the history. Sadly my mixup will probably detract from my other, non-history points as well. Thank you for correcting me.
"I write shaders when I have to, and I'm not very good at it, but the whole thing feels fundamentally broken, much like how I picture the days of 16-bit x86 assembly (memory segmentation?).

"There are arbitrary length, register, etc. limits per DX version; there's no clean cross-platform method of writing shaders; and the documentation is fragmented, vague, written for the wrong platform, or nonexistent."
This is exactly why I push people to try using OpenGL and avoid Direct3D. All of those problems are D3D problems, not shader problems.
GL has no arbitrary length limits, and it has extremely accurate and thorough documentation. If a program is too complex to execute on the given hardware, the driver falls back to executing it in software. Some see that as a terrible thing, and sometimes it is, but in today's mega-GPU world it's becoming increasingly rare to write a shader so complicated that it has to be emulated in software by the driver. Getting an accurate result seems much better.
The limits are hardware limits. DX version is common shorthand for rough hardware generation. If you develop against the same shader model versions in GLSL, you'll have exactly the same limits.
Each video card has a different limit. If you develop against GLSL, you get the limit of whatever video card you're actually using, which can be substantially different from what DX would have you believe it is. GPUs are capable of more than Microsoft would have you believe.

The limits that you're referring to are artificial: Microsoft mandates that if a pixel shader has more than N instructions it shouldn't be allowed to compile, regardless of what the video card is actually capable of doing. It's confusing, and they did it that way for the sake of compatibility across a wide variety of hardware.
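If you want to see what your particular card and driver actually allow, GL will just tell you. Here's a quick sketch (it assumes a current GL 2.0+ context and headers, e.g. via GLEW, are already set up; note these are resource limits, since core GL doesn't publish a DX-style instruction-count table):

```c
#include <stdio.h>
#include <GL/glew.h>   /* or your platform's GL 2.0+ headers */

/* Print a few of the per-card limits the driver reports for the installed
 * GPU -- these come from the hardware/driver, not from a shader-model table.
 * Must be called with a current GL context. */
static void print_gl_limits(void)
{
    GLint n;

    glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &n);
    printf("max vertex attribs:              %d\n", n);

    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &n);
    printf("max vertex uniform components:   %d\n", n);

    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &n);
    printf("max fragment uniform components: %d\n", n);

    glGetIntegerv(GL_MAX_VARYING_FLOATS, &n);
    printf("max varying floats:              %d\n", n);

    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &n);
    printf("max texture image units:         %d\n", n);
}
```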