How might the final rendering look if it used some of the standard triangle shading techniques? Treat the sample points as vertices in a mesh, assign colors to those vertices based on the sampled values, then interpolate colors for the points in between using something like Gouraud or Phong shading (without the lighting). That might produce a satisfying result with fewer samples.
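A minimal sketch of that idea, Gouraud-style interpolation without lighting: colors live at triangle vertices and are blended with barycentric coordinates for every interior point. The vertex positions, colors, and query point below are toy values, not anything from the article:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return u, v, 1.0 - u - v

def shade(p, verts, colors):
    """Blend per-vertex RGB colors at point p (Gouraud shading, no lighting)."""
    u, v, w = barycentric(p, *verts)
    return tuple(u * ca + v * cb + w * cc
                 for ca, cb, cc in zip(*colors))

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]  # red, green, blue corners
print(shade((0.25, 0.25), verts, colors))
```

Run over a Delaunay triangulation of the sample points, this fills every pixel from far fewer samples than per-pixel sampling would need.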
I wonder if this could be used as an image resizing mechanism? Take a large number of samples, then render the resulting image using those samples and a smaller or larger size. Or, generalizing further: turn the image into samples and associated colors, apply a transform to the sample coordinates, then render.
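The generalization could be sketched like this: decompose the image into (coordinate, color) samples in normalized space, then re-render at any size by looking up the nearest sample for each output pixel. Everything here (the 2x2 toy "image", nearest-neighbor lookup, the function name) is illustrative, not from the article, and a real implementation would triangulate and interpolate rather than brute-force the nearest sample:

```python
def resize_via_samples(pixels, w, h, new_w, new_h):
    """Resize by sampling pixels into normalized coords, then re-rendering."""
    # Turn the image into (normalized coordinate, color) samples.
    samples = [((x / w, y / h), pixels[y][x])
               for y in range(h) for x in range(w)]
    # Render at the new size: each output pixel takes the nearest sample.
    out = []
    for y in range(new_h):
        row = []
        for x in range(new_w):
            px, py = x / new_w, y / new_h
            _, color = min(samples,
                           key=lambda s: (s[0][0] - px) ** 2
                                       + (s[0][1] - py) ** 2)
            row.append(color)
        out.append(row)
    return out

img = [[0, 64], [128, 255]]            # 2x2 grayscale toy image
big = resize_via_samples(img, 2, 2, 4, 4)
```

Swapping the identity coordinate pass-through for an arbitrary transform of the sample coordinates gives the more general version: warps, rotations, and so on all become the same render step.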
This also reminds me quite a bit of the algorithm used in http://research.microsoft.com/en-us/um/people/kopf/pixelart/... (for which, sadly, code is not available). I wonder if some of the techniques from there could improve the quality of the results with fewer samples?
What you are talking about is barycentric interpolation, which is what this is doing.
There are already image resizing algorithms that use triangulation (and a technique called DDT, data-dependent triangulation), so the answer to your question is yes: it is absolutely a valid idea.