Neural Rendering: How Low Can You Go in Terms of Input? (unite.ai)
66 points by Hard_Space on May 13, 2021 | 10 comments



The linked video of applying a neural net to GTA V for photorealism is really impressive if you haven't seen it.

https://www.youtube.com/watch?v=P1IcaBn3ej0


I like the look of the model when it's trained with the Vistas dataset. Much more saturation. https://youtu.be/yLLhMkctfBY?t=4314


The approach that would be most interesting to me is building out really high-quality assets along with engine-friendly low-quality versions, then using a slower but high-quality ray tracing setup to render out extremely well-labeled training sets. This would potentially give you detailed aesthetic control over the end results.
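
Roughly what I have in mind, as a minimal sketch (PyTorch, random tensors standing in for the real paired renders, every name here made up):

    # Train an image-to-image net on pairs of (fast engine render,
    # slow offline ray-traced render) of the same frame.
    import torch
    import torch.nn as nn

    class EnhanceNet(nn.Module):
        """Tiny fully-convolutional net: low-quality RGB in, high-quality RGB out."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    model = EnhanceNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()

    for step in range(100):
        lowq = torch.rand(4, 3, 128, 128)    # engine-friendly render (stand-in)
        highq = torch.rand(4, 3, 128, 128)   # offline ray-traced render of the same frame (stand-in)
        loss = loss_fn(model(lowq), highq)   # supervise against the offline "ground truth"
        opt.zero_grad()
        loss.backward()
        opt.step()

Because you control both the labels and the low-quality inputs, the aesthetic target is whatever your offline renderer produces.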


That's a neat idea. Using AI to approximate a slow process in real time is already done in other areas; I think I saw something for cloth simulation where they trained the network on a high-resolution simulation and then used its output for real-time gameplay. But maybe you don't get as nice a look as from a real-world video source; even the best rendering algorithm and assets might feel a bit "fake". And I wouldn't want to be the person adding details to a model just to train the AI, only to have the model thrown away after training.
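
The cloth thing, as I remember it, was basically supervised regression: train offline against the expensive simulation, then do one cheap forward pass per frame at runtime. Very rough sketch, shapes made up and random tensors standing in for real sim data:

    import torch
    import torch.nn as nn

    coarse_dim = 200 * 3    # e.g. 200 coarse cloth vertices (x, y, z)
    fine_dim = 5000 * 3     # e.g. 5000 fine vertices from the offline high-res sim

    surrogate = nn.Sequential(
        nn.Linear(coarse_dim, 1024), nn.ReLU(),
        nn.Linear(1024, 1024), nn.ReLU(),
        nn.Linear(1024, fine_dim),
    )
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-4)

    # Offline: fit the net to frames exported from the high-resolution simulation.
    for step in range(100):
        coarse = torch.rand(8, coarse_dim)   # coarse sim state (stand-in)
        fine = torch.rand(8, fine_dim)       # matching high-res result (stand-in)
        loss = nn.functional.mse_loss(surrogate(coarse), fine)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # At runtime: one cheap forward pass per frame instead of the full simulation.
    with torch.no_grad():
        fine_vertices = surrogate(torch.rand(1, coarse_dim))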


I think one possible problem with this is that you don't really have the world space in the traditional sense, which makes things like convincing reflections impossible. So those probably have to be rendered by the conventional pipeline and then augmented with the NN.
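
Something like feeding the network the conventionally rendered frame plus whatever buffers the engine already has (depth, normals, etc.) as extra channels, and letting it predict a correction on top. Illustrative sketch only, not necessarily what the paper does:

    import torch
    import torch.nn as nn

    frame = torch.rand(1, 3, 256, 256)     # conventionally rendered RGB (stand-in)
    depth = torch.rand(1, 1, 256, 256)     # engine depth buffer (stand-in)
    normals = torch.rand(1, 3, 256, 256)   # engine normal buffer (stand-in)

    x = torch.cat([frame, depth, normals], dim=1)   # 3 + 1 + 3 = 7 input channels

    augment = nn.Sequential(
        nn.Conv2d(7, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 3, 3, padding=1),
    )

    enhanced = frame + augment(x)   # predict a residual on top of the engine's frame

That way the NN never has to invent geometry it can't see; the conventional pipeline still supplies the world-space information.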


This neural rendering is really, really cool. Things like the GTA5 demo, this (https://www.youtube.com/watch?v=miLIwQ7yPkA), and pix2pix. The general concept - dumb sketch to real photo or artistic masterpiece - is the most impressive thing I've seen this past year. Seriously.

Best case scenario (from a technology perspective), it means people can make games with the most bare-bones, low-end graphics, plus neural enhancement, and they would look better than top-of-the-line AAA games today. Someone with no experience could follow a basic tutorial and in 5 minutes be creating hyper-realistic landscapes, cities, characters, etc. from lazy sketches and clunky 3D shapes. Anyone can be a "talented" artist.

Which is actually really bad for real talented artists and raises serious ethical issues. That's why the best case from a technology perspective would be catastrophic if we had this technology today. But it's the best case, too good to be true; we're not going to have it anytime soon.

Right now we have tools like pix2pix that turn decent-quality sketches into uncanny-valley products. If you squint they look realistic, but they're also obviously AI-generated. And overfitted: you can see what the network was trained on through the output, and you simply can't create anything too far from the training data. That's probably what we can expect in the near future.
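
For context, the kind of model I mean is roughly this shape; a toy, untrained stand-in for a pix2pix-style generator, not the real architecture:

    import torch
    import torch.nn as nn

    class TinyGenerator(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: downsample the sketch
            self.enc = nn.Sequential(
                nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            )
            # Decoder: upsample back to an RGB image
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, sketch):
            return self.dec(self.enc(sketch))

    sketch = torch.rand(1, 1, 256, 256)    # grayscale line drawing (stand-in)
    fake_photo = TinyGenerator()(sketch)   # the real thing trains this with an adversarial + L1 loss

The overfitting shows up exactly because a generator like this can only map onto the distribution it was trained on.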

But even that is really impressive. And I actually see a lot of practical uses for it. You can make art using these images; it will be obviously AI-generated art, but people won't really care.


>Which is actually really bad for real talented artists and raises serious ethical issues.

This is raised every time a spectacular ML demo emerges. The generic answer is that the real value in artistic work is conceptualization; execution is secondary. This method doesn't conceptualize anything, and you probably need strong AI for that. Artists will just use the new tool, simple as that.


I would love to see this incorporated into a HUD in actual cars to help with unsafe driving conditions, for example clearly outlining cars and roads at night or in rainy/snowy weather.


All of this AI stuff is wizardry and magic.


>"How low can you go?"

Let's see what pops out when Intel points it at XKCD



