
DeepFaceDrawing Generates Photorealistic Portraits from Freehand Sketches - Yuqing7
https://syncedreview.com/2020/06/04/deepfacedrawing-generates-photorealistic-portraits-from-freehand-sketches/
======
XorNot
So this sort of thing seems like a big deal for indie video game devs.

One of the big problems at the moment is that photorealistic texturing
basically requires finding and mapping a real life actor, which is all sorts
of expensive.

This looks like a way to get past that in a super-efficient way: just mock up
what "look" you're going for with a character, hit render and out pops all the
data needed for a texture (I suspect we're not too many iterations away from
also getting bodies, meshes and animation skeletons).

This would be a huge field-leveller, letting smaller studios catch up on HD
content production.

~~~
fxtentacle
There's a huge copyright issue waiting for you when you use this kind of tech
outside of research.

The way these AIs work is that they have memorized aspects of celebrity
photos and then recombine them as needed. That means if your sketch looks in
any way like a celebrity from the training data set, then the AI will likely
reuse parts of those photos, which would make your generated texture a derived
work, meaning you'd have to pay royalties to the celebrities in the dataset.

~~~
londons_explore
No court has yet decided on this.

Previous research has shown that if you search the entire training dataset for
the image most similar to one generated by a generative model like this, the
images actually look pretty visually different. A non-expert would say the
images are similar, but neither copied the other.

------
bencollier49
I have one question about this. I'm sure it's completely explainable and
honest, but it comes across as suspicious.

In the image labelled "Illustration of the model’s deep learning framework
architecture", the input face has a strange line drawn underneath the chin. It
seems like an odd thing for a human artist to put in, and makes the person
look like they have a double chin.

Yet in the output shown at the end of the pipeline, it appears as a shadow. I
didn't go into the article suspicious, but this immediately made me wonder if
for some of these sketches, a face-to-line-drawing network was used for some
sort of reverse process.

The image does appear in a part of the article discussing their learning
methods, though, so I'm probably missing something important. But given that
they "are working to release their code", it doesn't really help with
confidence.

~~~
djannzjkzxn
Adding a line where you want there to be a shadow in the output seems like
something you could learn from trial and error when messing with a model. It
somewhat weakens the accomplishment of the paper if the sketches aren’t drawn
by naive users, but it’s a lot more defensible than generating the input like
you suggest.

~~~
bencollier49
Agreed. It just looks a bit strange and doesn't help to instil confidence in
the paper. My first guess would be that they've used a reverser for the
learning process somehow. As it's a preprint, hopefully comments like this
will help them to strengthen the paper and release their code!

------
aetherspawn
Wow, this is cooked.

In a few years, deep learning is going to make any sort of development of real
skill feel as archaic as assembler. Learn guitar? What's the point. The little
magic black box soon-to-be-smaller-than-your-smartphone can make just about
any song based on minimal inputs (i.e. a beat-boxed backing track). You'll be
able to generate unique and stylized paintings of your relatives and pets in
seconds. Probably you'll be able to generate printable 3D objects from
descriptions. Engineers will be able to sketch parts from one perspective and
have the details automatically fleshed-out from best-practices learned across
millions of similar parts.

You'll never get away with an illegal U-turn ever again because the city will
pull footage from peoples' internet-of-crap dashcams and the machine learning
algorithms will comb the feeds and send fines directly to your mailbox with
basically no human intervention.

~~~
OkGoDoIt
Those mostly sound like good things to me. I’m sure people who enjoy music or
art will still do it regardless. The people who might enjoy it but don’t quite
have the talent will have more powerful tools to help them express their
creativity. And as for the engineering, that seems like an absolutely good
thing, as long as we don’t get too complacent and overly trust the machines to
do it for us without double checking.

~~~
aetherspawn
Not saying it's a bad thing at all, it's just going to be very different from
the world we know.

------
Kaibeezy
Reminds me of this one (used to be called PaintsChainer) where the input is
line drawings, output is colored-in using anime palettes. User can override
the color choices. Very satisfying, especially on drawings by kids.

[https://petalica-paint.pixiv.dev/index_en.html](https://petalica-paint.pixiv.dev/index_en.html)

~~~
cptskippy
That to me looks waaay more impressive than the DeepFaceDrawing.

------
totetsu
This reminds me of edges2cats
[https://affinelayer.com/pixsrv/](https://affinelayer.com/pixsrv/)

------
kleiba
Goodbye, modeling industry!

This is extremely impressive. Extrapolating to possible applications, I could
imagine that techniques like this could one day become invaluable tools, say,
for asset creators in the game industry. The video speeds up the process, but
of course in a couple of years, this will be actual real-time performance.
Extended to more than just human portraits, this could be a fantastic design
tool.

------
dpau
Could police sketch artists use this as a tool?

~~~
nullc
For extra fun hook it up to the DMV photo database so that every returned
image always looks just like an actual person in the area.

Your conviction rate will go through the roof when the artist's sketches are a
dead ringer for the suspect!

~~~
dwighttk
A) people commit crimes in places they don’t live

B) even so dmv photos are notoriously bad

~~~
nullc
> A) people commit crimes in places they don’t live

No problem! Conviction/case-closure stats don't care if the person you
convicted was local or not.

> B) even so dmv photos are notoriously bad

They're often unattractive but they usually identify people pretty well when
they're not too old to do so.

------
Firadeoclus
This is certainly interesting research, but as an artist I think I'd be hugely
frustrated by the amount of non-local change which seems to happen in the
video. A fair number of small pen strokes seem to affect a large part of the
generated face.

For example, take the difference between 2:20 and 2:27 in the video. The upper
half of the drawing hasn't changed, but the generated image has a lot more
hair and different ears. While the technology looks impressive as it is, it
seems to me that it would be better to leave areas the artist has barely
defined as blurred rather than flickering between various high resolution
features that all roughly match the sketch equally well.

~~~
tw000001
That's most likely a training data issue.

The whole thing works on statistical priors: if I have feature a at location
x, there's a 90% chance I should have feature b at location y. So if the
majority of people with beards in my dataset were also, say, wearing sunglasses, then
naturally if I freehand draw a beard the net will probably output sunglasses
even if I don't change the eyes!

The solution is to ensure that you sample the full data space that you wish to
reproduce (not trivial). Neural nets do seem to interpolate but this is super
high dimensional space so it's not always intuitive...there are many orders of
magnitude more directions in which to move to get from point A to point B.
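To make that concrete, here's a toy sketch (entirely hypothetical data, not the paper's model or dataset) of how a spurious correlation in training data becomes a conditional prior that a generator would happily absorb:

```python
import random

random.seed(0)

# Hypothetical dataset of binary face features. By construction,
# beards co-occur with sunglasses 90% of the time.
dataset = []
for _ in range(10000):
    beard = random.random() < 0.5
    sunglasses = random.random() < (0.9 if beard else 0.1)
    dataset.append({"beard": beard, "sunglasses": sunglasses})

def p_sunglasses_given_beard(data):
    """Empirical P(sunglasses | beard): the kind of prior a generator absorbs."""
    with_beard = [d for d in data if d["beard"]]
    return sum(d["sunglasses"] for d in with_beard) / len(with_beard)

print(f"P(sunglasses | beard) = {p_sunglasses_given_beard(dataset):.2f}")
```

A model fit to data like this will tend to add sunglasses whenever the sketch contains a beard, even if the eyes were never touched, which is exactly the kind of non-local change the parent comment describes.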

------
tanilama
There seems to be a good use case for video game asset generation without
copyright fear.

------
dhosek
I'd kind of like to see what it does with assorted cartoon character pictures.
What does it do with, say, Charlie Brown or Calvin?

~~~
bencollier49
Or Garfield.

------
totetsu
Combining this with Deep Fake Videos
[https://news.ycombinator.com/item?id=23423372](https://news.ycombinator.com/item?id=23423372)
[https://myvoiceyourface.com/video/?id=255d940d00](https://myvoiceyourface.com/video/?id=255d940d00)

------
econcon
When will accent correction come to market so non-native speakers (who have
difficulty in pronouncing some words) are able to make quality voice overs for
YouTube and podcasts?

Looking at the virtual agents, it seems they are able to understand very
crappy English (in all my attempts). How far are we from correcting it?

~~~
antoncohen
I wouldn't be too concerned with getting accents perfect (unless you are an
actor). There are so many extremely diverse native English accents, like
Western American, Boston American, Received Pronunciation, Scouse, Scottish,
Irish, Welsh, Australian, etc. I think different pronunciations are accepted.

And the magical accent corrector wouldn't fix bad grammar.

~~~
notahacker
Yep, I can easily believe it'll fix the pitch, but not the phrasing or the
pauses, or avoid the odd glaring error. And the uncanny valley effect might
actually be considerably worse than the traces of native languages English
people are used to hearing from ESL speakers.

I'm reminded of a former flatmate whose father chose not to raise her as
bilingual in the mistaken belief a second language would impair her learning.
Instead, when she chose to learn Spanish as an adult anyway, she picked up
the slang and pronunciation of her Colombian relatives, but never quite
reached native fluency. She pointed out that the drawback of having a
local-sounding accent and name, instead of being an obvious foreigner, was that
everybody who met her assumed her misunderstandings, pauses, or the odd really
ungrammatical phrase meant she was an unusually stupid Colombian.

------
jondwillis
Is this open sourced anywhere? I didn’t see it in the article

~~~
ryannevius
From the article: "They are working on releasing the source code soon."

~~~
bencollier49
The article shouldn't have been accepted. Research papers relying on code without
any verifiability ought to be more of an issue.

------
michaelbrave
About a year ago I was explaining to a friend of mine how I thought that
increases in artistic AI combined with procedural generation could change art
forever.

He immediately thought "we should use it to make porn without actors, that
would make money." This seems closer to possible every day.

------
bsenftner
The real question: if one is trying to draw a likeness of a real person, is it
possible to get this software to generate a decent likeness because the
drawing projects that likeness, or are these "drawings" really just weak
pseudo-random number generators and the software presents a realistic face
regardless?

------
cmdshiftf4
Given that this was created by Chinese people, in Hong Kong, why are all their
examples white/hispanic people?

~~~
grawprog
Of all the possible things you could choose to comment on about this product,
why did you feel like this was the most relevant for discussion?

~~~
cmdshiftf4
I just noticed it and found it interesting.

Why is it interesting? Well, for one, it raises an eyebrow as to the
motivations behind either the technology or the promotional material
produced for it.

I'd find it similarly interesting if, for example, an entirely Russian team
produced deepfake tech and produced promotional material for it entirely
consisting of black people.

Especially in an era where we already acknowledge the prevalence of nation
state cyber psyops / propaganda / manufactured news and "facts".

------
Daub
This is very cool indeed. I can imagine applications in the game industry,
advertising, etc. However... all results are frontal, with no three-quarter or
profile views at all, which heavily restricts its use. If the user were allowed
to input both a profile and a front view, then all would be fine.

------
app4soft
Evolution of face reconstruction from an eyewitness's memory of a face:

0. Portrait painted by a trained artist in consultation with a witness;

1. "Identikit";

2. "PhotoFIT";

3. "DeepFaceDrawing" - [0] powered by ML & AI (which are trained on [1] &
[2]?).

------
tartoran
I find it interesting and yet am unsettled by how AI engineering is offsetting
human creativity and artistry. And what's with this fetish for deepfakes? Are
we pursuing this, or do we do it because we can?

------
jonplackett
How do the faint lines work? I mean the grey ones that are already there when
the user starts drawing in black. Are they just a guide or are they being
used, and if so, in what way?

~~~
nullc
I think they're some kind of back projection from the nearby space of the
image. Without them, you'd be more likely to draw features in the wrong place
because you and the computer have a different idea of the intended scale, and
you'd end up with monster faces.

~~~
jonplackett
Aha! That makes sense. Thanks

------
grizzles
Anyone know projects that have done this in reverse? I've seen a really good
one but I've lost track of it.

~~~
nedimm
Maybe this one [https://youtu.be/6LmQS4DJl6c](https://youtu.be/6LmQS4DJl6c)

------
jonplackett
IDEA/REQUEST (if anyone has this running)

Trace over some famous cartoon characters and see what it outputs.

------
voiper1
Wow, watched the video. I'd love to use the sketching program. Looks like fun!

~~~
LoSboccacc
Yeah, a lot of these "AI does the thing" projects look extremely interesting
to play with, especially to see what happens when you poke the model with
weird data (say, no eyes, no hair, two noses, etc.), but most of them are
completely inaccessible to non-researchers. Even when the code/model is
released, building the solution requires so much effort and expertise that
it's impossible to play with.

This is why I loved the pix2pix cat drawing demo
[https://affinelayer.com/pixsrv/](https://affinelayer.com/pixsrv/) so much. I
hoped it would mark a turning point for demos, but alas it's still unique.

~~~
voiper1
Very cool!! Some of them don't quite look real once I've messed with them (bad
drawing skills), but it's very impressive, especially across such a diverse
set of objects (each has its own model, but still).

------
Priem19
Someone please draw a penis and share the results.

------
Polylactic_acid
Is there a demo we can try?

~~~
pentestercrab
[http://geometrylearning.com/DeepFaceDrawing/](http://geometrylearning.com/DeepFaceDrawing/)
says "[Coming Soon]" for the code.

------
jvalencia
I find it a bit disturbing that there is a lack of racial diversity in the
models. Especially given what is going on nationally in the US.

I can imagine it's much easier to train on one type of face, but this could
lead to bias later.

~~~
junga
I don't know why you are downvoted. To me this is a very valid point. Does
anybody care to give some arguments for downvoting?

