Morotti, E., Merizzi, F., Evangelista, D., Cascarano, P. (2024). Inpainting with style: forcing style coherence to image inpainting with deep image prior. FRONTIERS IN COMPUTER SCIENCE, 6, 1-14 [10.3389/fcomp.2024.1478233].
Inpainting with style: forcing style coherence to image inpainting with deep image prior
Elena Morotti;Fabio Merizzi;Davide Evangelista;Pasquale Cascarano
2024
Abstract
In this paper, we combine the Deep Image Prior (DIP) framework with a Style Transfer (ST) technique to propose a novel approach, called DIP-ST, for image inpainting of artworks. We specifically tackle cases where the regions to fill in are large: part of the original painting is irremediably lost, and new content must be generated. In DIP-ST, a convolutional neural network processes the damaged image while a pre-trained VGG network enforces a style constraint, ensuring that the inpainted regions maintain stylistic coherence with the original artwork. We evaluate our method's performance in inpainting different artworks and compare DIP-ST to state-of-the-art techniques. Our method provides more reliable solutions with higher fidelity to the original images, as confirmed by better values of quality-assessment metrics. We also investigate the effectiveness of the style loss function in distinguishing between artistic styles; the results show that the style loss accurately measures artistic similarities and differences. Finally, despite the use of neural networks, DIP-ST does not require a dataset for training, making it particularly suitable for art restoration, where relevant datasets may be scarce.
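The style constraint described in the abstract follows the classic Gram-matrix formulation of neural style transfer, in which feature-map correlations from a pre-trained VGG network characterize a style. Below is a minimal, illustrative NumPy sketch of such a style loss; the actual DIP-ST method operates on VGG activations, which are stood in for here by arbitrary feature maps.

```python
import numpy as np

def gram_matrix(features):
    """Normalized Gram matrix of a feature map with shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feats_out, feats_ref):
    """Sum over layers of the squared Frobenius distance between Gram matrices."""
    return sum(
        float(np.sum((gram_matrix(a) - gram_matrix(b)) ** 2))
        for a, b in zip(feats_out, feats_ref)
    )

# Toy example: random arrays standing in for VGG activations at two layers.
rng = np.random.default_rng(0)
feats = [rng.standard_normal((4, 8, 8)) for _ in range(2)]
print(style_loss(feats, feats))  # identical features -> zero style loss
```

In DIP-ST this term is added to the usual masked reconstruction loss of Deep Image Prior, so the network is penalized when the generated content's feature correlations drift from those of the undamaged artwork.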
File | Size | Format
---|---|---
fcomp-06-1478233 (1).pdf | 4.67 MB | Adobe PDF

Open access. Type: publisher's version (PDF). License: Creative Commons Attribution (CC BY).
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.