Very impressive! How might the final rendering look if it used some of the standard triangle shading techniques? Treat the sample points as coordinates in a mesh, assign colors to those coordinates based on what you sampled, then interpolate colors for the points between those coordinates using something like Gouraud or Phong shading (without the lighting). That might produce a satisfying result with fewer samples.

I wonder if this could be used as an image resizing mechanism: take a large number of samples, then render the resulting image from those samples at a smaller or larger size. Or, generalizing further: turn the image into samples and associated colors, apply a transform to the sample coordinates, then render.

This also reminds me quite a bit of the algorithm used in http://research.microsoft.com/en-us/um/people/kopf/pixelart/... (for which, sadly, code is not available). I wonder if some of the techniques from there could improve the quality of the results with fewer samples?
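That generalization is easy to sketch. The helper below is hypothetical (not from the article); it assumes samples are stored as ((x, y), color) pairs and that the same triangulation-based renderer is run afterward on the transformed samples:

```python
def transform_samples(samples, f):
    """Apply a coordinate transform f to every sample point, keeping colors.

    samples: list of ((x, y), color) pairs
    f: function (x, y) -> (x', y'), e.g. a uniform scale for resizing
    """
    return [(f(x, y), color) for (x, y), color in samples]

samples = [((0.0, 0.0), (255, 0, 0)), ((10.0, 4.0), (0, 255, 0))]

# Double the image size: scale every sample coordinate by 2, then re-render.
doubled = transform_samples(samples, lambda x, y: (2 * x, 2 * y))
print(doubled[1][0])  # (20.0, 8.0)
```

Any invertible warp (rotation, perspective, lens distortion) would slot in the same way, since the renderer only sees points and colors.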

 That's exactly what it does, no? (Standard triangle shading technique, interpolating colors across the mesh, Gouraud shading without the lighting.) Phong shading (interpolating normal vectors) wouldn't make sense, as the mesh has no normals.
 It isn't obvious from the article that the color interpolation used here matches Gouraud.
 Phong is a lighting model. What you are talking about is barycentric interpolation, which is what this is doing. There are already image resizing algorithms that use triangulation (e.g. data-dependent triangulation, DDT), so the answer to your question is yes, it is absolutely a valid idea.
 "Phong" refers to two distinct but related things: The Phong reflection model (Ambient + Diffuse + Specular) and Phong shading (normal vector interpolation).
 Correct (I oversimplified). Neither really applies here: the interpolation is based on barycentric coordinates, which is probably the part of Phong shading being referred to.
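 For reference, barycentric color interpolation (i.e. Gouraud shading with the lighting stripped out) is only a few lines. This is a generic sketch, not the article's code; the function names are made up:

```python
def barycentric(p, a, b, c):
    """Barycentric weights (u, v, w) of point p in triangle (a, b, c),
    solving p = u*a + v*b + w*c with u + v + w = 1 via the usual 2x2 system."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return u, v, 1.0 - u - v

def shade(p, tri, colors):
    """Interior color at p: the barycentric-weighted blend of vertex colors."""
    w = barycentric(p, *tri)
    return tuple(sum(wi * ci for wi, ci in zip(w, channel))
                 for channel in zip(*colors))

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
colors = ((255, 0, 0), (0, 255, 0), (0, 0, 255))  # red, green, blue vertices

# The centroid blends all three vertices equally, giving roughly (85, 85, 85).
print(shade((1 / 3, 1 / 3), tri, colors))
```

 Evaluating this per pixel inside each triangle of the mesh reproduces the smooth gradients seen in the renders; Phong shading would instead interpolate normals with the same weights and light each pixel.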
