Harutyunyan H., Achille A., Paolini G., Majumder O., Ravichandran A., Bhotika R., Soatto S. (2021). Estimating informativeness of samples with Smooth Unique Information. International Conference on Learning Representations (ICLR).

Estimating informativeness of samples with Smooth Unique Information

2021

Abstract

We define a notion of information that an individual sample provides to the training of a neural network, and we specialize it to measure both how much a sample informs the final weights and how much it informs the function computed by the weights. Though related, we show that these quantities have a qualitatively different behavior. We give efficient approximations of these quantities using a linearized network and demonstrate empirically that the approximation is accurate for real-world architectures, such as pre-trained ResNets. We apply these measures to several problems, such as dataset summarization, analysis of under-sampled classes, comparison of informativeness of different data sources, and detection of adversarial and corrupted examples. Our work generalizes existing frameworks but enjoys better computational properties for heavily over-parametrized models, which makes it possible to apply it to real-world networks.
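The abstract refers to efficient approximations of these information measures obtained by linearizing the network. As a minimal illustrative sketch only (the toy model, weight values, and finite-difference Jacobian below are our own assumptions, not the paper's implementation), a first-order Taylor expansion of a model around its pretrained weights w0 looks like:

```python
import numpy as np

# Sketch: linearize a model around pretrained weights w0, i.e.
#   f_lin(x; w) = f(x; w0) + J(x) @ (w - w0)
# where J(x) is the Jacobian of the output with respect to the weights.

def f(x, w):
    # tiny nonlinear model: one tanh unit followed by a linear readout
    return w[1] * np.tanh(w[0] * x)

def jacobian(x, w, eps=1e-6):
    # finite-difference Jacobian of f with respect to the weights
    J = np.zeros_like(w)
    for i in range(len(w)):
        wp = w.copy()
        wp[i] += eps
        J[i] = (f(x, wp) - f(x, w)) / eps
    return J

w0 = np.array([0.5, 2.0])   # hypothetical "pretrained" weights
x = 1.0
J = jacobian(x, w0)

def f_lin(x, w):
    return f(x, w0) + J @ (w - w0)

# Near w0 the linearized model closely tracks the true model; the gap is
# second-order in the weight perturbation.
w = w0 + 0.01
print(abs(f(x, w) - f_lin(x, w)))  # small
```

For a heavily over-parametrized network trained close to its initialization, this linear model in the weights is what makes quantities such as leave-one-out influence tractable to approximate.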
ICLR 2021 - 9th International Conference on Learning Representations
Harutyunyan H.; Achille A.; Paolini G.; Majumder O.; Ravichandran A.; Bhotika R.; Soatto S.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/943314

Citations
  • PMC: n/a
  • Scopus: 13
  • Web of Science: n/a