Zhou et al. 2018


page: 1 type: text-highlight created: 2021-01-01T06:39:53.933Z color: yellow Non-Stationary Texture Synthesis by Adversarial Expansion

page: 1 type: text-highlight created: 2021-01-01T06:40:12.651Z color: yellow YANG ZHOU∗, Shenzhen University and Huazhong University of Science & Technology; ZHEN ZHU and XIANG BAI, Huazhong University of Science and Technology; DANI LISCHINSKI, The Hebrew University of Jerusalem; DANIEL COHEN-OR, Shenzhen University and Tel Aviv University; HUI HUANG†, Shenzhen University

page: 1 type: text-highlight created: 2021-01-03T13:50:24.986Z color: yellow CCS Concepts: • Computing methodologies → Appearance and texture representations; Image manipulation; Texturing

page: 1 type: text-highlight created: 2021-01-03T13:52:06.965Z color: #9900EF In this paper, we propose a new approach for example-based non-stationary texture synthesis. Our approach uses a generative adversarial network (GAN), trained to double the spatial extent of texture blocks extracted from a specific texture exemplar.
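
Note: the "double the spatial extent" objective suggests a simple training-pair construction: each real target is a 2k×2k crop of the exemplar, and the generator's input is a k×k block contained in it. A minimal sketch in PyTorch follows; the block size, the centered-source choice, and the names `exemplar` and `make_training_pair` are illustrative assumptions, not the authors' code.

```python
import torch

def make_training_pair(exemplar: torch.Tensor, k: int = 128):
    """Sample a 2k x 2k target crop and a k x k source block inside it.

    exemplar: (C, H, W) image tensor.
    """
    _, H, W = exemplar.shape
    top = torch.randint(0, H - 2 * k + 1, (1,)).item()
    left = torch.randint(0, W - 2 * k + 1, (1,)).item()
    # Real 2k x 2k block: what the generator's output should look like.
    target = exemplar[:, top:top + 2 * k, left:left + 2 * k]
    # Generator input: taking the centered k x k sub-block is an assumption;
    # the paper pairs each source block with a larger block containing it.
    source = target[:, k // 2:k // 2 + k, k // 2:k // 2 + k]
    return source, target
```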

page: 1 type: text-highlight created: 2021-01-03T13:52:39.000Z color: green is highly effective for capturing large scale structures

page: 1 type: text-highlight created: 2021-01-03T13:53:23.067Z color: green convolutional generator is able to expand the size of the entire exemplar, as well as of any of its sub-blocks
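
Note: the reason a fully convolutional generator can expand inputs of any size is that it has no fully connected layers, so the same weights apply at every input resolution; only the 2× upsampling ratio is fixed. A hedged sketch of this property, using generic conv/transposed-conv layers rather than the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ExpandGenerator(nn.Module):
    """Toy fully convolutional generator that doubles spatial resolution."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            # Transposed conv with stride 2 doubles height and width.
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

g = ExpandGenerator()
print(g(torch.randn(1, 3, 128, 128)).shape)  # (1, 3, 256, 256): a sub-block
print(g(torch.randn(1, 3, 200, 300)).shape)  # (1, 3, 400, 600): any size works
```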

page: 11 type: text-highlight created: 2021-01-03T13:54:34.718Z color: yellow SUMMARY

page: 11 type: text-highlight created: 2021-01-03T13:54:39.153Z color: yellow We have presented an example-based texture synthesis method capable of expanding an exemplar texture, while faithfully preserving the global structures therein. This is achieved by training a generative adversarial network, whose generator learns how to expand small subwindows of the exemplar to the larger texture windows containing them. A variety of results demonstrate that, through such adversarial training, the generator is able to faithfully reproduce local patterns, as well as their global arrangements. Although a dedicated generator must be trained for each exemplar, once it is trained, synthesis is extremely fast, requiring only a single feed-forward pass through the generator network. The trained model is stable enough for repeated application, enabling the generation of diverse results of different sizes.
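
Note: one way to read the last sentence is that, because each pass doubles the spatial extent, a stable generator can be chained to reach 4×, 8×, and so on, with each step still a single fast feed-forward pass. A minimal sketch, assuming a trained model like the `ExpandGenerator` above; the names `expand` and `times` are illustrative, not from the paper.

```python
import torch

@torch.no_grad()
def expand(generator, exemplar: torch.Tensor, times: int = 2) -> torch.Tensor:
    """Expand a (1, C, H, W) texture by a factor of 2**times per side.

    Repeatedly feeds the output back into the generator, relying on the
    stability under repeated application that the summary describes.
    """
    out = exemplar
    for _ in range(times):
        out = generator(out)  # each iteration is one feed-forward pass
    return out
```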