Rendering pipelines work in linear RGB, which has the same rgb(1,1,1) and rgb(0,0,0) endpoints as sRGB, but with a gamma of 1.0.
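For concreteness, here's the standard piecewise sRGB transfer curve in C (a generic sketch, not tied to any particular engine). Note that both directions map 0 to 0 and 1 to 1; only the curve between the endpoints differs:

    #include <math.h>

    /* Decode an sRGB-encoded channel value in [0,1] to linear light. */
    float srgb_to_linear(float c) {
        return (c <= 0.04045f) ? c / 12.92f
                               : powf((c + 0.055f) / 1.055f, 2.4f);
    }

    /* Encode a linear channel value in [0,1] back to sRGB. */
    float linear_to_srgb(float c) {
        return (c <= 0.0031308f) ? c * 12.92f
                                 : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
    }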
When loading images into the GPU, you flag them as either sRGB or linear. When sampling from a shader, the GPU converts sRGB textures to linear (this is a hardware feature, because many images are authored in sRGB space) and leaves textures flagged as linear untouched.
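A minimal OpenGL sketch of what that flag looks like in practice (Vulkan and D3D expose the same choice through their format enums); w, h, and pixels are placeholders for your image data:

    #include <GL/gl.h>

    GLuint upload_texture(int w, int h, const void *pixels, int is_srgb) {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* The "flag" is the internal format: GL_SRGB8_ALPHA8 makes the
           sampler decode to linear on fetch; GL_RGBA8 returns texels as-is.
           Use sRGB for albedo/color maps, linear for normals, roughness, etc. */
        glTexImage2D(GL_TEXTURE_2D, 0,
                     is_srgb ? GL_SRGB8_ALPHA8 : GL_RGBA8,
                     w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        return tex;
    }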
Rendering is done in linear RGB space in properly coded game engines, and the final output is generally also linear (with some post-processing magic to stay within bounds). The final swap converts it to sRGB or whatever your display profile is set to; this is handled by the display driver as well.
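In OpenGL, for example, the hardware-side encode on write looks roughly like this, assuming the windowing layer gave you an sRGB-capable default framebuffer:

    /* Assuming the default framebuffer was created sRGB-capable (e.g. via
       SDL_GL_SetAttribute(SDL_GL_FRAMEBUFFER_SRGB_CAPABLE, 1) or your
       windowing layer's equivalent), this makes the hardware encode linear
       shader output to sRGB on write, so presenting needs no manual pow(). */
    glEnable(GL_FRAMEBUFFER_SRGB);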
You're slightly confused, I think. We operate and perform lighting in linear color spaces, but the choice of white point and primaries offers a few degrees of freedom. LDR renderers might opt to stay within linear sRGB primaries, but it's easy to clip against the gamut boundary when doing HDR lighting. A color transform is needed to go to Rec. 2020 or wider.
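As a sketch, that transform is just a 3x3 re-primary matrix applied to linear values. The coefficients below are the commonly published BT.709-to-BT.2020 ones, rounded to four decimals (double-check against a reference if you need exact precision):

    typedef struct { float r, g, b; } rgb_t;

    /* Re-primary a linear sRGB/Rec.709 color to linear Rec.2020. */
    rgb_t rec709_to_rec2020(rgb_t c) {
        rgb_t o;
        o.r = 0.6274f * c.r + 0.3293f * c.g + 0.0433f * c.b;
        o.g = 0.0691f * c.r + 0.9195f * c.g + 0.0114f * c.b;
        o.b = 0.0164f * c.r + 0.0880f * c.g + 0.8956f * c.b;
        return o;
    }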
Possibly, I might be. (Hence the post-processing magic regarding white points and clipping.) Wasn't the final color space transform handled by the driver? I don't recall having to do any manual color transforms to present a linear RGB render target correctly on an sRGB display, at least. I haven't had the chance to experiment with wide-gamut displays yet, unfortunately, so I might be missing something.
Fortunately I mostly just deal with embedded graphics that do their technically incorrect blending math in unmanaged sRGB/display space. (:
Ok so, the final display gamut, initial source color gamut, and working gamut are three separate spaces. Effectively, the gamut is defined by where your primaries sit: they fix the position of the unit RGB cube within a larger color volume. These three spaces can be completely different, and the only thing to ensure is that you are consistent about transforming from one space to the next. It's possible to do operations that take you outside the final gamut, after which you clip or do some other operation. There are no hard and fast rules, although some approaches will certainly look much better depending on lighting conditions (color temperature) and so on.
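A hard clip is the bluntest of those operations; sketched in C against the display gamut's unit cube (softer gamut mapping, e.g. desaturating toward the neutral axis, is the usual alternative):

    #include <math.h>

    typedef struct { float r, g, b; } rgb_t;

    /* Hard clip to the display gamut's unit cube: the simplest possible
       "other operation" once a transform has left you out of gamut. */
    rgb_t clip_to_display_gamut(rgb_t c) {
        c.r = fminf(fmaxf(c.r, 0.0f), 1.0f);
        c.g = fminf(fmaxf(c.g, 0.0f), 1.0f);
        c.b = fminf(fmaxf(c.b, 0.0f), 1.0f);
        return c;
    }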