The software's "main editor" uses a very traditional QGraphicsScene with a QGLWidget - no QML / QtQuick (QtQuick is a very good tool for a lot of use cases, but not the right tool for this one particular "traditional big desktop app" job imho).
The RHI-using part is for creating custom visuals (think applying various shaders to video & camera inputs for VJing), so I wrote my own scene / render graph on top of it, which renders into separate windows through QRhi.
It was mostly written at my Ballmer peak during last year's Christmas / New Year's Eve though :-) so the code quality is lacking a fair bit.
There's a graph of nodes. The graph is walked from every output node (screen surface) to create a matching "rendered node" (pretty much a render pass). For every node "model", a node "renderer" is created, which holds the uniforms, the GPU buffer (QRhiBuffer) & texture (QRhiTexture) handles, etc., and a QRhiGraphicsPipeline (the state the GPU must be in to render that node: vertex input layout, shader stages, culling, blending, etc.).
Then every time the output node renders, due to vsync or whatever, the associated chain of nodes (render passes) is executed in order.
I recommend looking at the RHI manual tests in the Qt sources, they show the usage in a very straightforward manner: