
Hypericons for interpretability: decoding abstract concepts in visual data / Delfina Sol Martinez Pandiani, Nicolas Lazzari, Marieke van Erp & Valentina Presutti. - In: INTERNATIONAL JOURNAL OF DIGITAL HUMANITIES. - ISSN 2524-7840. - ELETTRONICO. - 5:2-3(2023), pp. 451-490. [10.1007/s42803-023-00077-8]

Hypericons for interpretability: decoding abstract concepts in visual data

Delfina Sol Martinez Pandiani; Nicolas Lazzari; Valentina Presutti

2023

Abstract

In an era of information abundance and visual saturation, the need for resources to organise and access the vast expanse of visual data is paramount. Abstract concepts, such as comfort, power, or freedom, emerge as potent instruments to index and manage visual data, particularly in contexts like Cultural Heritage (CH). However, the variance and disparity in the visual signals that evoke a single abstract concept challenge conventional approaches to automatic visual management rooted in the Computer Vision (CV) field. This paper critically engages with the prevalent trend of automating high-level visual reasoning while placing exclusive reliance on visual signals, prominently featuring Convolutional Neural Networks (CNNs). We delve into this trend, scrutinising the knowledge sought by CNNs and the knowledge they ultimately encapsulate. In this endeavour, we accomplish three main objectives: (1) introduction of ARTstract, an extensive dataset encompassing cultural images that evoke specific abstract concepts; (2) presentation of baseline model performances on ARTstract to elucidate the intricate nuances of image classification based on abstract concepts; and, critically, (3) utilization of ARTstract as a case study to explore both traditional and non-traditional avenues of visual interpretability, a trajectory inspired by Offert and Bell (2021). To more comprehensively understand how CNNs assimilate and reflect cultural meanings, and to discern the echoes reverberating within these visions, we unveil SD-AM, a novel approach to explainability that collapses visuals into hypericon images through a fusion of feature visualization techniques and Stable Diffusion denoising. Overall, this study critically addresses abstract concept image classification's challenges within the CNN paradigm.
By embracing innovative methodologies and providing comprehensive analyses of explainability techniques, we make a substantial contribution to the broader discourse surrounding automatic high-level visual understanding, its interpretability, and the ensuing implications for comprehending culture within the digital era. Through our exploration, we illuminate the multifaceted trends, complexities, and opportunities that underlie the fusion of high-level visual reasoning and computer vision.
File in this record:

File: s42803-023-00077-8.pdf (Adobe PDF, 6.94 MB)
Access: open access
Type: publisher's version (PDF)
License: Open Access licence. Creative Commons Attribution (CC BY)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/956279