VAE-MOTION: A deep generative model for cardiomyocyte contractility analysis for improving drug efficacy evaluation

Cascarano, Pasquale;
2025

Abstract

Deep learning has proven to be one of the most effective methods for analyzing biological images to extract parameters fundamental to studying physiological functions and pathological conditions. In particular, when coupled with time-lapse microscopy (TLM), deep learning is especially effective at studying behaviors involving temporal dynamics. However, TLM videos are often affected by experimental noise and setup limitations, which can lead to inaccurate and poorly reproducible results. Taking advantage of the variational and generative capabilities of Variational Autoencoders (VAEs), we propose VAE-MOTION, a deep learning-based model for the analysis of cardiac contractile dynamics. By incorporating a temporal encoder into its architecture, our model restores video quality by removing noise or increasing resolution, while simultaneously extracting accurate contraction-related signals from the latent space. The generation of synthetic videos allowed extensive training of VAE-MOTION, which was subsequently validated on real videos from two different cardiac tissue models: 2D monolayers and 3D microtissues. VAE-MOTION was compared to two gold-standard methods in extracting contraction parameters relevant to drug efficacy or toxicity studies, demonstrating its potential for analyzing temporal dynamics in a given phenomenon or process.
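The abstract describes a VAE whose temporal encoder maps video frames to a latent space from which both restored frames and a contraction-related signal are read off. The following is a minimal conceptual sketch of that idea only, not the authors' architecture: the dimensions, random linear "weights" standing in for trained networks, and the latent-norm signal are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 64-pixel frames, 8-dim latent.
FRAME_DIM, LATENT_DIM = 64, 8

# Random linear maps stand in for trained encoder/decoder networks.
W_mu = rng.normal(scale=0.1, size=(LATENT_DIM, FRAME_DIM))
W_logvar = rng.normal(scale=0.1, size=(LATENT_DIM, FRAME_DIM))
W_dec = rng.normal(scale=0.1, size=(FRAME_DIM, LATENT_DIM))

def encode(frame):
    """Map one frame to latent mean and log-variance (VAE encoder)."""
    return W_mu @ frame, W_logvar @ frame

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def decode(z):
    """Reconstruct (restore) a frame from its latent code."""
    return W_dec @ z

def latent_trace(video):
    """Encode each frame; the per-frame latent norm serves as a crude
    stand-in for a contraction-related signal read off the latent space."""
    mus = np.array([encode(frame)[0] for frame in video])
    return np.linalg.norm(mus, axis=1)

# Synthetic "video": a periodic intensity pattern mimicking beating tissue.
t = np.linspace(0, 4 * np.pi, 40)
video = np.outer(np.sin(t) ** 2, np.ones(FRAME_DIM))

signal = latent_trace(video)                              # one value per frame
restored = decode(reparameterize(*encode(video[0])))      # one restored frame
print(signal.shape, restored.shape)
```

In the paper, the encoder is a trained temporal network and the contraction signal is extracted from learned latent features; here the sketch only illustrates the per-frame encode/reparameterize/decode flow and how a 1-D time signal can be derived from latent codes.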
Curci, G., Casti, P., Sala, L., Brescia, M., Cascarano, P., D'Orazio, M., et al. (2025). VAE-MOTION: A deep generative model for cardiomyocyte contractility analysis for improving drug efficacy evaluation. EXPERT SYSTEMS WITH APPLICATIONS, Volume 299, Part C, 1-24 [10.1016/j.eswa.2025.130302].
Curci, Giorgia; Casti, Paola; Sala, Luca; Brescia, Marcella; Cascarano, Pasquale; D'Orazio, Michele; Filippi, Joanna; Antonelli, Gianni; Mencattini, A., et al.
Files in this product:

File: VAE-MOTION.pdf
Access: open access
Description: article
Type: Publisher's version (PDF) / Version of Record
License: Open Access License. Creative Commons Attribution (CC BY)
Size: 5.85 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/1027857
Citations
  • PMC: ND
  • Scopus: 0
  • Web of Science: 0