My favorite elegant and trivial algorithm has always been merge sort as it looks in Lisp/Scheme. My favorite messy algorithm is simulated annealing, for its intuitive sledgehammer approach. In graphics I like some off-screen rendering methods that are elegantly simple in their brute-force use of the whole frame buffer as a lookup table.
For object picking (determining what the user clicked on) in a complex visualization, it is often easiest to draw all objects with a simple color-mapping renderer which follows the same occlusion rules as your visualization but renders each object as a solid blob in a distinct color. You draw the scene, look at the pixel color under the mouse, and use the color as an index into the table of objects.
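Here is a minimal C++ sketch of that color-as-index encoding (illustrative, not the original post's code). It assumes a 24-bit RGB pick buffer, so up to 2^24 objects get unique colors; the actual readback of the pixel under the mouse would come from whatever your graphics API provides (e.g. glReadPixels in OpenGL).

    // Sketch: pack an object index into an RGB triple and recover it again.
    // Assumes an 8-bit-per-channel pick buffer; names are illustrative.
    #include <cstdint>
    #include <cstdio>

    // Pack an object index into the flat color the object is drawn with.
    void indexToColor(uint32_t index, uint8_t rgb[3]) {
        rgb[0] = (index >> 16) & 0xFF;
        rgb[1] = (index >> 8)  & 0xFF;
        rgb[2] =  index        & 0xFF;
    }

    // Recover the object index from the pixel color read back under the mouse.
    uint32_t colorToIndex(const uint8_t rgb[3]) {
        return (uint32_t(rgb[0]) << 16) | (uint32_t(rgb[1]) << 8) | rgb[2];
    }

    int main() {
        uint8_t rgb[3];
        indexToColor(70000, rgb);   // object 70000 gets drawn in this color
        std::printf("picked object %u\n", colorToIndex(rgb));   // -> 70000
    }

The pick pass renders every object with its indexToColor() color and no lighting or anti-aliasing, so the value you read back decodes exactly.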
For ray-cast volume rendering, you have a problem somewhat like object picking, but you have to solve it simultaneously for every pixel in the rendered scene. You need to determine where each pixel's viewing ray enters and exits your volumetric data grid, so you can run a sampling loop that integrates the 3D scalar field values along that ray. When your grid has a simple cube/box shape, you can render a polygonized box under the viewing perspective and trivially color-map it so every point on the box's surface has an RGB value encoding its XYZ volume coordinates. Your volumetric pixel shader, running on the GPU, can then independently look up these XYZ positions from screen-sized buffers to get the start and end positions, as 3D texture coordinates, for each screen pixel's ray integration loop.
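To make the per-pixel work concrete, here is a CPU-side C++ sketch of what one shader invocation would do (illustrative, not the original author's code). It assumes the box's front and back faces have already been rasterized into two screen-sized position buffers giving each pixel's ray entry and exit points in normalized volume coordinates, and it uses a plain averaging integral with nearest-neighbor sampling where a real shader would use 3D texture fetches and opacity compositing.

    // Sketch of one pixel's ray integration, given its entry/exit points
    // in [0,1]^3 volume coordinates (as decoded from the color-mapped box).
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Nearest-neighbor sample of the scalar field at a volume-space position.
    float sampleVolume(const std::vector<float>& volume, int nx, int ny, int nz,
                       Vec3 p) {
        int i = std::min(nx - 1, std::max(0, int(p.x * nx)));
        int j = std::min(ny - 1, std::max(0, int(p.y * ny)));
        int k = std::min(nz - 1, std::max(0, int(p.z * nz)));
        return volume[(std::size_t(k) * ny + j) * nx + i];
    }

    // Integrate the scalar field along one pixel's ray between its entry and
    // exit points, as the pixel shader would with 3D texture lookups.
    float integrateRay(const std::vector<float>& volume, int nx, int ny, int nz,
                       Vec3 start, Vec3 end, int steps) {
        Vec3 d{(end.x - start.x) / steps, (end.y - start.y) / steps,
               (end.z - start.z) / steps};
        float sum = 0.0f;
        for (int s = 0; s < steps; ++s) {
            Vec3 p{start.x + d.x * s, start.y + d.y * s, start.z + d.z * s};
            sum += sampleVolume(volume, nx, ny, nz, p);
        }
        return sum / steps;   // simple average; real renderers composite opacity
    }

    int main() {
        // Tiny 8^3 constant volume; any ray integral should come out as 1.
        int n = 8;
        std::vector<float> volume(std::size_t(n) * n * n, 1.0f);
        Vec3 rayStart{0.1f, 0.2f, 0.0f};  // would come from the front-face buffer
        Vec3 rayEnd{0.8f, 0.6f, 1.0f};    // would come from the back-face buffer
        std::printf("%f\n", integrateRay(volume, n, n, n, rayStart, rayEnd, 64));
    }

On the GPU this loop runs once per screen pixel, with rayStart and rayEnd read from the two color-coded position buffers instead of being hard-coded.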