One example is chess positions. I render them in Unity and take screenshots to determine whether my ML architecture and loss function can read the board from just an image.
Well, I know you could definitely render the same thing in Blender, and it would probably be faster, since Blender is actually designed for rendering images and animations, whereas Unity is primarily a game engine.
As far as generating them procedurally, I'm not sure. I'm assuming you're using 3D models of the chess pieces, and while it is possible to do procedural animation with the Animation Nodes add-on, I haven't used it much myself, so I'm not sure whether it will fit your use case.
Blender also has a great Python API. AFAIK it exposes pretty much every function, so the sky is the limit really. I don't think it would be too much effort to create a Blender file that, on startup, executes a script that generates a random chess board arrangement, inserts the corresponding models, and sets up the camera.
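Something along these lines might work as a starting point. This is just a sketch, assuming the .blend file already contains one template mesh per piece type with names like "white_pawn" and "black_queen" (those names, the square size, and the output path are all made up), and that the camera is already positioned in the file:

```python
# Run inside Blender (e.g. via --python or a registered startup script).
# Assumes piece template objects exist in the .blend with the names below.
import random
import sys

import bpy

SQUARE_SIZE = 1.0  # board square width in Blender units (assumption)
PIECE_NAMES = [
    "white_pawn", "white_knight", "white_bishop", "white_rook",
    "white_queen", "white_king",
    "black_pawn", "black_knight", "black_bishop", "black_rook",
    "black_queen", "black_king",
]

def place_random_position(num_pieces=16):
    """Copy random piece templates onto random, non-overlapping squares."""
    squares = random.sample(
        [(f, r) for f in range(8) for r in range(8)], num_pieces
    )
    for file, rank in squares:
        template = bpy.data.objects[random.choice(PIECE_NAMES)]
        piece = template.copy()  # object copy; mesh data is shared with the template
        piece.location = (file * SQUARE_SIZE, rank * SQUARE_SIZE, 0.0)
        bpy.context.collection.objects.link(piece)

place_random_position()

# Output path: anything after "--" on the command line is passed through to the
# script, so an external driver can give each render a unique name.
extra = sys.argv[sys.argv.index("--") + 1:] if "--" in sys.argv else []
bpy.context.scene.render.filepath = extra[0] if extra else "//renders/position"
bpy.ops.render.render(write_still=True)
```

You'd probably also want to randomize the camera and lighting a bit so the network doesn't overfit to one viewpoint, but that's the basic shape of it.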
Blender can even render from the command line, so you could probably just set up a loop that keeps re-running the same file, each time with a freshly randomized board, to generate a whole batch of positions.
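For example, a small driver script could invoke Blender headlessly in a loop. This assumes "blender" is on your PATH and reuses the hypothetical file names from the sketch above; the index after "--" becomes the output path the script reads:

```python
# Batch-generate positions by running Blender in background mode repeatedly.
import subprocess

for i in range(1000):
    subprocess.run(
        [
            "blender",
            "--background", "chessboard.blend",   # run without opening the UI
            "--python", "randomize_board.py",     # the randomize-and-render script
            "--", f"renders/position_{i:05d}",    # passed through to the script via sys.argv
        ],
        check=True,
    )
```

Each invocation starts from the saved .blend, so you don't have to worry about pieces from previous runs piling up in the scene.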