
Detection and Classification of Hysteroscopic Images Using Deep Learning / Raimondo, D; Raffone, A; Salucci, P; Raimondo, I; Capobianco, G; Galatolo, FA; Cimino, MGCA; Travaglino, A; Maletta, M; Ferla, S; Virgilio, A; Neola, D; Casadio, P; Seracchioli, R. - In: CANCERS. - ISSN 2072-6694. - ELETTRONICO. - 16:7(2024), pp. 1315.1-1315.10. [10.3390/cancers16071315]

Detection and Classification of Hysteroscopic Images Using Deep Learning

Raimondo, D; Raffone, A; Salucci, P; Raimondo, I; Capobianco, G; Galatolo, FA; Cimino, MGCA; Travaglino, A; Maletta, M; Ferla, S; Virgilio, A; Neola, D; Casadio, P; Seracchioli, R
2024

Abstract

Simple Summary: This article discusses the potential of deep learning (DL) models to aid the diagnosis of endometrial pathologies from hysteroscopic images. While hysteroscopy with endometrial biopsy is currently the gold standard for diagnosis, it relies heavily on the expertise of the gynecologist. The study aims to develop a DL model for the automated detection and classification of endometrial pathologies. Conducted as a monocentric observational retrospective cohort study, it reviewed records and videos of hysteroscopies from patients with confirmed intrauterine lesions. The DL model was trained on these images, with or without incorporating clinical factors. Results indicate that although the DL model showed promise, its diagnostic performance remained relatively low, even with the inclusion of clinical data. The best performance was achieved when clinical factors were included, with precision, recall, specificity, and F1 scores ranging from 80% to 90% for the classification task and from 85% to 93% for the identification task. Given that the inclusion of clinical data yielded only a slight improvement, further refinement of DL models is warranted for more accurate diagnosis of endometrial pathologies.

Background: Although hysteroscopy with endometrial biopsy is the gold standard in the diagnosis of endometrial pathology, the gynecologist's experience is crucial for a correct diagnosis. Deep learning (DL), as an artificial intelligence method, might help overcome this limitation. Unfortunately, only preliminary findings are available, and no studies have evaluated the performance of DL models in identifying intrauterine lesions or the possible benefit of including clinical factors in the model.

Aim: To develop a DL model as an automated tool for detecting and classifying endometrial pathologies from hysteroscopic images.

Methods: A monocentric observational retrospective cohort study was performed by reviewing clinical records, electronic databases, and stored videos of hysteroscopies from consecutive patients with pathologically confirmed intrauterine lesions at our Center from January 2021 to May 2021. The retrieved hysteroscopic images were used to build a DL model for the classification and identification of intracavitary uterine lesions, with or without the aid of clinical factors. Study outcomes were the diagnostic metrics of the DL model in the classification and identification of intracavitary uterine lesions, with and without the aid of clinical factors.

Results: We reviewed 1500 images from 266 patients: 186 patients had benign focal lesions, 25 had benign diffuse lesions, and 55 had preneoplastic/neoplastic lesions. For both tasks, the best performance was achieved with the aid of clinical factors: for the classification task, an overall precision of 80.11%, recall of 80.11%, specificity of 90.06%, F1 score of 80.11%, and accuracy of 86.74%; for the identification task, an overall detection of 85.82%, precision of 93.12%, recall of 91.63%, and F1 score of 92.37%.

Conclusion: Our DL model achieved a low diagnostic performance in the detection and classification of intracavitary uterine lesions from hysteroscopic images. Although the best diagnostic performance was obtained with the aid of clinical data, the improvement was slight.
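The abstract reports its results using the standard diagnostic metrics (precision, recall, specificity, F1 score, accuracy). As a minimal illustration of how these are defined, the sketch below computes them from confusion-matrix counts; the counts used are hypothetical and are not taken from the study's data.

```python
# Standard diagnostic metrics, computed from confusion-matrix counts:
# tp = true positives, fp = false positives, tn = true negatives,
# fn = false negatives. Illustrative only; not the study's data.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {
        "precision": precision,
        "recall": recall,
        "specificity": specificity,
        "f1": f1,
        "accuracy": accuracy,
    }

if __name__ == "__main__":
    # Hypothetical counts for a single lesion class
    metrics = diagnostic_metrics(tp=80, fp=20, tn=160, fn=20)
    for name, value in metrics.items():
        print(f"{name}: {value:.2%}")
```

Note that in a multi-class setting such as this study's three lesion categories, "overall" values of these metrics are typically obtained by averaging the per-class scores.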
Files in this record:

cancers-16-01315.pdf — publisher's version (PDF), open access, Creative Commons Attribution (CC BY) license, 643.93 kB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/969374
Citations:
  • PMC: 0
  • Scopus: 0
  • Web of Science: 0