Techniques For Increasing Image Quality Without Buying a Better GPU


Waifu2x

Single-Image Super-Resolution for Anime-Style Art using Deep Convolutional Neural Networks. It also supports photos.

Linux: https://github.com/nagadomi/waifu2x

Site (Has upload size limit): http://waifu2x.udp.jp/

Windows (use Chrome's translate feature on the site; the Waifu2x program itself has an English language setting): http://inatsuka.com/extra/koroshell/

Waifu2x was designed to increase the quality of anime images and to resize them, so depending on the style, the resizing and/or noise reduction will work to varying degrees. In practice, Waifu2x has been found to work surprisingly well on a variety of different styles.
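
For the Torch implementation linked above, a minimal command-line sketch (run from a nagadomi/waifu2x checkout; the file names here are placeholders) might look like this. The first command only upscales; the second also applies noise reduction:

th waifu2x.lua -m scale -i input.png -o output_2x.png

th waifu2x.lua -m noise_scale -noise_level 1 -i input.png -o output_2x_denoised.png

Check the repository's README for the exact set of flags supported by your version.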

Other implementations of Waifu2x exist, such as waifu2x-caffe (which only runs on Windows). Waifu2x-caffe supports CUDA and cuDNN, and lets you choose from several different super-resolution models.

Waifu2x-Caffe can be found here: https://github.com/lltcggie/waifu2x-caffe

Also see the Using Waifu2x On Neural Art wiki page for more details on how well each implementation performs on style-transfer artwork.


IrfanView

"IrfanView has the best sharpening algorithm I've seen. After you resize your image with waifu, open it in irfanview and hit SHIFT+S to sharpen". - Neural-Style User on Reddit.

Available for Windows: http://www.irfanview.com/
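
IrfanView's sharpening is applied through the GUI. If you want a scriptable alternative for batch processing, ImageMagick's unsharp mask is a different algorithm but serves a similar purpose; a rough sketch (file names are placeholders) would be:

convert upscaled.png -unsharp 0x1.0 sharpened.png

The 0x1.0 radius/sigma value is only a starting point; tune it per image.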


NIN Upres

  • content_image: the VGG-generated output image

  • style_image: the original style image (the one used to generate the VGG output)

  • keep the tv_weight as low as possible (I've used 0.00001)

  • This technique works best for traditional-art styles (like Van Gogh's Starry Night) where you can see some grunge/noise. For smooth styles (anime/sharp-edged), you're better off using waifu2x.

Source


Example of using NIN to increase the size and quality of a Places205-VGG image:

  1. First, create an image with neural-style using the Places205-VGG model.

  2. Depending on the GPU resources available to you, either convert the previously created image to a .jpg or leave it as a .png.

  3. Run the following command using the same style image that you originally used to create your content image. Make sure your content image is the one you created in step 1:

th neural_style.lua -style_image StyleImage.jpg -content_image ContentImage.jpg -output_image out.png -tv_weight 0.0001 -image_size 2500 -save_iter 50 -content_weight 10 -style_weight 1000 -num_iterations 1000 -model_file models/nin_imagenet_conv.caffemodel -proto_file models/train_val.prototxt -content_layers relu1,relu7,relu12 -style_layers relu1,relu3,relu5,relu7,relu9 -backend cudnn -cudnn_autotune -optimizer adam

Examples/Results:

Tubingen: https://imgur.com/a/ALzL7

Brad Pitt: https://imgur.com/a/Ws8x5

Escher Sphere: https://imgur.com/a/KS1mk

Stanford: https://imgur.com/a/M5jlz

Notes:

This has only been tested with Starry Night and the example images, on an Amazon g2.2xlarge spot instance with 4 GB of GPU memory, but it should work with any combination of models and settings.

I used the following command to generate the original images:

th neural_style.lua -style_image StyleImage.jpg -content_image ContentImage.jpg -output_image out.png -tv_weight 0.0001 -save_iter 50 -num_iterations 1000 -model_file models/snapshot_iter_765280.caffemodel -proto_file models/deploy_10.prototxt -backend cudnn -cudnn_autotune -optimizer adam

Download the Places205-VGG model here: http://places.csail.mit.edu/model/places205vgg.tar.gz
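
If you have not already downloaded the model, a rough sketch for fetching and extracting it into neural-style's models/ directory (using standard wget and tar; verify the extracted file names against the -model_file and -proto_file paths used in the commands above) is:

wget http://places.csail.mit.edu/model/places205vgg.tar.gz

tar -xzf places205vgg.tar.gz -C models/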


Adobe InDesign Tiling

  1. Dream a neural-style result image

  2. In Adobe InDesign, set up an overlapping grid and paste the result image into each box.

  3. Set the document size to match your grid box size and create a separate page for each box.

  4. Export each page as its own JPEG image using InDesign's export function.

  5. Dream each of the 12 tile images using the original style image. It's best to set up a loop so you don't have to wait around running each one (see the sketch after this list).

  6. Using the grid from step 2, create a new document the exact size of the whole grid. Drag each of your new result images into its slot in the grid and fit the image to the box.

  7. Use Gradient Feather and Basic Feather effects to blend the tile edges together.

  8. Output the final result image. Note: since my InDesign document was set up at the size of the original result, the exported resolution doesn't match the resolution now available, because each box contains more detail than the original 72 dpi. To compensate, increase the resolution in the export settings and you gain detail in the final output.
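
For step 5, a minimal bash loop sketch (assuming the exported tiles are named tile_1.jpg through tile_12.jpg and that the style image and neural-style flags are the ones you used for the original output; all names and settings here are placeholders):

for i in $(seq 1 12); do
  th neural_style.lua -style_image StyleImage.jpg -content_image tile_${i}.jpg -output_image tile_${i}_out.png -backend cudnn -cudnn_autotune -optimizer adam
done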

Source Screencap/Video Tutorial with notes

  • Normally when processing the tiles through neural-style, I save iterations at 50, 150, 250, and 500. I have found that, depending on the style and settings used, a tile will diverge from the original output's style at different rates. The content of the content image also affects the potential for divergence, so some content and style combinations may require tweaking the settings to produce satisfactory results. (ProGamerGov)

An automated version of this process, which uses free and open-source software, can be found here: https://github.com/0000sir/larger-neural-style

A fork of the larger-neural-style script can be found here: https://github.com/ProGamerGov/Neural-Tile


Upres and Tiling Naming Conventions:

Model Name + Optional Tile Amount + Tiled + Optional Different Model Used for Upres + Upres

Examples:

VGG19 Tiled Upres - The default VGG19 model was used for the original output and the Upres of each individual piece.

VGG19 Tiled NIN Upres - The default VGG19 model was used for the original output and the NIN model was used for the Upres of each individual piece.

VGG19 4x3 Tiled Upres - The default VGG19 model was used for the original output and the Upres of each individual piece. There were 4 tiles by 3 tiles used to create the final output.

VGG19 4x3 Tiled 1500-150 Upres - The default VGG19 model was used for the original output and the Upres of each individual piece. There were 4 tiles by 3 tiles used to create the final output. The original output was at iteration 1500 while the Upres output used for each tile was at iteration 150.