
Image-to-image translation in PyTorch - tcam1982
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
======
simlevesque
The live demo is super fun (I really hope this does not kill it):
[https://affinelayer.com/pixsrv/](https://affinelayer.com/pixsrv/)

~~~
logicallee
Thanks for the link - you're right, it _was_ fun!

------
tonmoy
Serious question: can graphics rendering in games be replaced with a
technology like this? The developer would train a model to convert a simple
low-poly-count 3D image into a very beautiful-looking image, and the gamer's
PC would use those weights on the fly to make the game look beautiful. No need
for expensive GPUs (as long as NN predictions are cheap).
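For what it's worth, the idea boils down to one forward pass per rendered frame. A minimal PyTorch sketch of that loop, where the tiny `netG` is a hypothetical stand-in for a real pix2pix-style generator (the real U-Net/ResNet generators are far deeper, which is exactly why per-frame inference is expensive):

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in for a pix2pix-style generator. A real generator
# is a much deeper U-Net / ResNet, so real timings would be far slower.
netG = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
netG.eval()

frame = torch.rand(1, 3, 256, 256)  # one "low-poly" rendered frame
with torch.no_grad():                # inference only, no gradients
    start = time.time()
    pretty = netG(frame)             # translated ("beautified") frame
    elapsed = time.time() - start

print(pretty.shape, f"{elapsed * 1000:.1f} ms")
```

Even this toy net shows the shape of the problem: the translation has to fit inside a per-frame budget of ~16 ms for 60 fps, which current full-size generators don't.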

~~~
Houshalter
>No need for expensive GPUs

These techniques require expensive GPUs. Even with top-of-the-line hardware it
takes a few seconds per frame. Perhaps something like this could be used to
assist animators and modelers though.

~~~
jacquesm
But you'd only have to render the level _once_; after that it's all textures
to be slapped on polygons. And those 'expensive GPUs' are pretty much already
there in that context, since they were designed for games to begin with.

In fact, let's stimulate this to make sure GPUs get even faster and have more
memory.

GP is wrong that GPUs won't be needed, but that doesn't make it a bad idea as
such.

~~~
hwillis
>But you'd only have to render the level once, after that it's all textures to
be slapped on polygons.

No, then you'd have textures on low-resolution polygons. That's using the
method during production, not in the renderer. If you want this method to
improve image rendering, it needs to happen as part of the render, i.e. every
frame.

------
billy2201
There is an implementation in TensorFlow too:
[https://github.com/vanhuyz/CycleGAN-TensorFlow](https://github.com/vanhuyz/CycleGAN-TensorFlow)

------
alphapapa
This is almost scary if you think about it, because this is in its infancy. In
5, 10, 15 years, will any photo be able to be authenticated? Will this be
doable in real-time at high resolution, on live video feeds? Will we be able
to believe anything but our own eyes?

~~~
jorgemf
Infancy? Convolutional neural networks are about 20 years old. It is only
recently, once we could run bigger models on more data on GPUs, that they
became mainstream and lots of people started building things with them. So it
is more like in its maturity. It is now a field growing slowly but steadily.
Don't expect big things, but small increments every year (which, by the way,
are awesome stuff).

------
rcarmo
The cat demo is hilarious. But kind of spooky if you consider that cats are
still recognisable as cats :)

[https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/raw/master/imgs/edges2cats.jpg](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/raw/master/imgs/edges2cats.jpg)

------
WhitneyLand
The only thing not exciting about watching the AI renaissance unfold, one that
may eventually break the Turing barrier, is that the research demos are shit
for quality.

I admit it's exciting to zoom in on that little zebra video, but the
RealVideo-era sizing makes it hard to characterize artifacts.

Some papers include dozens of images at 32x32 and I wonder if people really
stop to look at them.

A small price to pay I guess. Most first worldish problem ever?

------
r0muald
As always, I would prefer more focus on training your own models rather than
running prebaked ones.
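For anyone who does want to train their own, the heart of CycleGAN is the cycle-consistency loss from the paper: translate A→B→A and penalize the round trip with L1. A minimal PyTorch sketch, where the single `Conv2d` layers `G` and `F` are hypothetical stand-ins for the repo's real generator networks:

```python
import torch
import torch.nn as nn

# Hypothetical tiny "generators" standing in for the real CycleGAN nets.
G = nn.Conv2d(3, 3, 3, padding=1)  # maps domain A -> domain B
F = nn.Conv2d(3, 3, 3, padding=1)  # maps domain B -> domain A

l1 = nn.L1Loss()
real_a = torch.randn(1, 3, 64, 64)  # unpaired sample from domain A
real_b = torch.randn(1, 3, 64, 64)  # unpaired sample from domain B

fake_b = G(real_a)  # A -> B
rec_a = F(fake_b)   # B -> A (should reconstruct real_a)
fake_a = F(real_b)  # B -> A
rec_b = G(fake_a)   # A -> B (should reconstruct real_b)

# Cycle consistency: F(G(a)) should be close to a, and G(F(b)) close to b.
cycle_loss = l1(rec_a, real_a) + l1(rec_b, real_b)
cycle_loss.backward()  # gradients flow into both generators
```

The full training loop adds adversarial losses from two discriminators and an optimizer step, but the cycle term above is what makes unpaired translation work at all.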

------
tombert
This is crazy...I wonder how long it'll take for this to be a feature in
something like After Effects.

------
kirpekar
Wow - impressive!

