Rozendo, G.B., Lumini, A., Roberto, G.F., Tosta, T.A.A., do Nascimento, M.Z., Neves, L.A. (2024). X-GAN: Generative Adversarial Networks Training Guided with Explainable Artificial Intelligence. Science and Technology Publications, Lda. doi: 10.5220/0012618400003690.
X-GAN: Generative Adversarial Networks Training Guided with Explainable Artificial Intelligence
Lumini, A.
2024
Abstract
Generative Adversarial Networks (GANs) create artificial images through adversarial training between a generator (G) and a discriminator (D) network. This training is grounded in game theory and aims to reach an equilibrium between the two networks. In practice, however, this equilibrium is rarely achieved, and D tends to overpower G. The problem arises because G is trained on only a single scalar value representing D's prediction, while D alone has access to the image features. To address this issue, we introduce a new approach that uses Explainable Artificial Intelligence (XAI) methods to guide the training of G. Our strategy identifies the critical image features learned by D and transfers this knowledge to G. We modify the loss function to propagate a matrix of XAI explanations instead of a single error value. Through quantitative analysis, we show that our approach enriches training and yields artificial images of higher quality and greater variability. For instance, on the MNIST dataset we obtained an increase of up to 37.8% in the quality of the artificial images, with up to 4.94% more variability compared to traditional methods.
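The core idea, propagating a matrix of explanations to G rather than a single scalar error, can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the paper's exact formulation: input saliency (|d loss / d x|) stands in for whichever XAI method is used, the (1 + e) gradient modulation is one plausible way to inject the explanation matrix, and G, D, z, and opt_G are hypothetical placeholder names.

import torch
import torch.nn.functional as F

def xai_guided_generator_step(G, D, z, opt_G):
    """One generator update in which a matrix of explanations from D,
    rather than a single scalar error, shapes the gradient reaching G.
    Sketch only: the saliency map and the (1 + e) modulation are
    illustrative assumptions, not the method's confirmed details."""
    fake = G(z)                                    # artificial images, (N, C, H, W)
    fake_leaf = fake.detach().requires_grad_(True)

    d_out = D(fake_leaf)                           # D's prediction (logits)
    loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

    # Traditional feedback: gradient of the scalar loss w.r.t. the images.
    grad = torch.autograd.grad(loss, fake_leaf)[0]

    # Simple XAI explanation: input saliency |d loss / d x|, scaled to [0, 1].
    expl = grad.abs()
    expl = expl / (expl.amax(dim=(1, 2, 3), keepdim=True) + 1e-8)

    # Propagate the explanation-weighted gradient matrix back into G.
    opt_G.zero_grad()
    fake.backward(grad * (1.0 + expl))
    opt_G.step()

In a traditional GAN step only loss.backward() reaches G, so every pixel of the feedback is scaled by the same scalar error; here the per-pixel explanation re-weights that signal, emphasizing the image regions D found most decisive.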


