I wonder which is better, actually. On the one hand, a lot of people will swear that what their internal pattern matcher produces is reality (although people with training tend to know that you can't put that much stock in it). So maybe the output of a computer will feel a little less real to laymen.
But I kind of doubt it. I have a feeling that, for example, it's going to be difficult to explain to a jury that the image the computer spat out from eight pixels on a security feed is not reliable, and that any resemblance they see to the defendant is simply not relevant. If an artist took those eight pixels and drew a picture, they'd be laughed out of the room. If a computer does it, people primed by shows like CSI might think it's actually valid.
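For a sense of scale, here's a rough back-of-the-envelope sketch. The eight-pixel figure is from the example above; the output resolution and bit depths are my own assumptions. The point is just that the "enhanced" image contains vastly more information than the input, and everything beyond the input has to come from the model's training prior rather than from the footage:

```python
# Back-of-the-envelope comparison (illustrative numbers, not from the original post):
# how much information do eight pixels carry versus a typical "enhanced" face image?

INPUT_PIXELS = 8        # the hypothetical security-feed crop
OUTPUT_SIDE = 256       # assumed size of the "enhanced" output image
CHANNELS = 3            # RGB
BITS_PER_CHANNEL = 8

input_bits = INPUT_PIXELS * CHANNELS * BITS_PER_CHANNEL
output_bits = OUTPUT_SIDE * OUTPUT_SIDE * CHANNELS * BITS_PER_CHANNEL

print(f"input:  {input_bits} bits")        # 192 bits
print(f"output: {output_bits} bits")       # 1,572,864 bits
print(f"fraction supplied by the evidence: {input_bits / output_bits:.4%}")
# -> roughly 0.01%; the remaining ~99.99% is filled in by the model
```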
Maybe you can appeal to people's common sense and show them the original input. A crafty defense might show alternative "enhancements" based on non-face training sets to drive the point home. But in the end, we're probably going to need to ban this kind of technology as evidence to avoid confusing jurors.