Automatic Implant Generation for Cranioplasty via Occupancy Networks

Mazzocchetti S.; Bevini M.; Badiali G.; Lisanti G.; Di Stefano L.; Salti S.
2024

Abstract

The design of patient-specific implants for cranioplasty surgery is time-consuming and challenging. Hence, the 2021 AutoImplant II challenge, consisting of the SkullBreak and SkullFix datasets, was organized to foster research on computer vision techniques aimed at automating the cranial implant design task. Data-driven methods working on Computed Tomography (CT) have emerged as a promising approach to this task. The best-performing methods turned out to rely on ensembles of Convolutional Neural Network (CNN) architectures that either process each CT slice separately or the entire voxelized volume through computationally demanding three-dimensional convolutions. More recently, a few methods have been designed to operate on different data representations, such as point clouds, to perform skull completion. Along this line, we investigate a novel solution for implant generation that deploys a conditioned occupancy network. Starting from the partial point cloud, we directly reconstruct the completed voxel grid by evaluating the learned occupancy function at the desired spatial resolution. Our approach can generate high-quality implants, achieving qualitative and quantitative results comparable to state-of-the-art methods on the SkullBreak and SkullFix datasets while requiring significantly fewer computational resources. The model trained on the SkullBreak dataset successfully generalizes to real craniotomies provided in the MUG500+ dataset.
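The abstract describes conditioning an occupancy network on the partial skull point cloud and then evaluating the learned occupancy function over a dense grid to recover the completed volume. The paper's actual architecture is not reproduced here; the following is a minimal PyTorch sketch of that general pipeline, where the class names, layer sizes, and the simple PointNet-style encoder are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ConditionedOccupancyNet(nn.Module):
    """Illustrative occupancy decoder conditioned on a latent code
    extracted from the partial skull point cloud (hypothetical design)."""

    def __init__(self, latent_dim=256, hidden=128):
        super().__init__()
        # PointNet-style encoder: shared per-point MLP followed by max pooling.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Occupancy decoder: query coordinate + latent code -> occupancy logit.
        self.decoder = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def encode(self, points):
        # points: (B, N, 3) partial point cloud -> (B, latent_dim) global code.
        return self.point_mlp(points).max(dim=1).values

    def forward(self, queries, latent):
        # queries: (B, Q, 3) coordinates at which occupancy is evaluated.
        latent = latent.unsqueeze(1).expand(-1, queries.shape[1], -1)
        return self.decoder(torch.cat([queries, latent], dim=-1)).squeeze(-1)


def reconstruct_voxel_grid(model, partial_points, resolution=128, chunk=65536):
    """Evaluate the learned occupancy function on a dense voxel grid."""
    device = partial_points.device
    latent = model.encode(partial_points)  # (1, latent_dim)
    # Build normalized grid coordinates in [-1, 1]^3 at the chosen resolution.
    axis = torch.linspace(-1.0, 1.0, resolution, device=device)
    grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
    coords = grid.reshape(1, -1, 3)
    occupancy = []
    with torch.no_grad():
        for start in range(0, coords.shape[1], chunk):
            logits = model(coords[:, start:start + chunk], latent)
            occupancy.append(torch.sigmoid(logits))
    # Threshold to obtain the completed binary volume; the implant would then
    # follow by subtracting the defective skull volume from this completion.
    occ = torch.cat(occupancy, dim=1) > 0.5
    return occ.reshape(resolution, resolution, resolution)
```

Because the decoder is queried point-wise, the output resolution is chosen at inference time and memory is bounded by the chunk size rather than by a full 3D convolutional volume, which is consistent with the reduced computational cost claimed in the abstract.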
Mazzocchetti S., Bevini M., Badiali G., Lisanti G., Di Stefano L., Salti S. (2024). Automatic Implant Generation for Cranioplasty via Occupancy Networks. IEEE ACCESS, 12, 95185-95195 [10.1109/ACCESS.2024.3425171].
Files in this record:

Automatic_Implant_Generation_for_Cranioplasty_via_Occupancy_Networks.pdf
Open access
Type: Publisher's version (PDF)
License: Open Access license. Creative Commons Attribution (CC BY)
Size: 4.21 MB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/984258