Credit to Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi for their arXiv paper on Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. I suggest you give it a read if you’re interested in this kind of thing.
Guide if ya wanna try it yourself:
Credit to kingdomakrillic for their amazing work!
(also, you don’t need an NVIDIA card for this; just go into test.py and change "device = torch.device('cuda')" to "device = torch.device('cpu')". No AMD/Intel GPU support though)
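If you’d rather not hand-edit test.py every time you move between machines, a tiny helper like this picks the device for you. This is my own sketch, not code from the ESRGAN repo; `pick_device` is a hypothetical name:

```python
# Hypothetical helper (not part of the ESRGAN repo): returns 'cuda'
# when PyTorch is installed with a working CUDA build and an NVIDIA
# GPU is visible, otherwise falls back to 'cpu'.
def pick_device():
    try:
        import torch
        if torch.cuda.is_available():
            return 'cuda'
    except ImportError:
        # PyTorch isn't installed at all; 'cpu' is still a sane answer
        pass
    return 'cpu'

print(pick_device())
```

In test.py you’d then write something like `device = torch.device(pick_device())` instead of hardcoding 'cuda'.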
(if you want to try this, make sure you have the image at its native resolution!)
(for the best results, don’t use compressed shit off the internet)
NVIDIA has their own Generative Adversarial Network but you have to sign up to use it as it is still in beta: https://developer.nvidia.com/gwmt
Please read through the whole thread! I won’t be updating this OP with newer images and instead I’ll be posting newer stuff within the thread! There’s some cool stuff below!
Long story short, Enhanced Super-Resolution Generative Adversarial Network, or ESRGAN, is an upscaling method that is capable of generating realistic textures during single image super-resolution. Basically it’s a machine learning technique that uses a generative adversarial network to upres smaller images. By doing it over several passes, it will usually produce an image with more fidelity than methods such as SRCNN and SRGAN. In fact, ESRGAN is based on SRGAN. The difference between the two is that ESRGAN improves on SRGAN’s network architecture, adversarial loss and perceptual loss. Furthermore, ESRGAN drops batch normalization from the generator and builds it out of Residual-in-Residual Dense Blocks, which helps cut down on artifacts.
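To make the "improved adversarial loss" bit concrete: ESRGAN uses a relativistic average discriminator, which asks whether a real image looks *more realistic than* the average fake instead of scoring each image in isolation. Here’s a minimal NumPy sketch of that discriminator loss (my own illustration of the idea, not code from the paper or repo):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_avg_d_loss(real_logits, fake_logits):
    """Relativistic average discriminator loss (the idea ESRGAN uses).

    Instead of D(x) = sigmoid(C(x)), the discriminator predicts
    sigmoid(C(real) - mean(C(fake))): "is this real image more
    realistic than the average fake?" (and vice versa for fakes).
    """
    d_real = sigmoid(real_logits - fake_logits.mean())
    d_fake = sigmoid(fake_logits - real_logits.mean())
    return -(np.log(d_real).mean() + np.log(1.0 - d_fake).mean())

# If the discriminator cleanly separates real from fake, the loss is tiny;
# if it can't tell them apart at all, the loss sits at -2*log(0.5) ~ 1.386.
confident = relativistic_avg_d_loss(np.array([5.0, 5.0]), np.array([-5.0, -5.0]))
confused = relativistic_avg_d_loss(np.array([0.0]), np.array([0.0]))
```

The generator’s adversarial loss is the same expression with the real/fake roles swapped, so the generator gets a gradient from real images too, not just from fakes.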
Obviously this isn’t going to make every image look amazing but it’s worth giving it a shot. There’s some genuinely great stuff out there.
ESRGAN has been used to improve the textures of older games such as Doom and Morrowind. In fact, there’s a DOOM texture pack that was released recently using this method.
For reference, Waifu2x uses deep convolutional neural networks as opposed to adversarial networks to upscale images to 2x the original size.
With NVIDIA’s GWMT tools and their own GAN, expect to see some crazier stuff down the line. Any game dev looking into super-resolution will definitely be using NVIDIA’s stuff. This is only the beginning!