Augmenting mobile C-arm fluoroscopes via stereo-RGBD sensors for multimodal visualization

Habert, Severine; Meng, Ma; Kehl, Wadim; Wang, Xiang; Tombari, Federico; Fallavollita, Pascal; Navab, Nassir
2015

Abstract

Fusing intraoperative X-ray data with real-time video in a common reference frame is not trivial, since both modalities have to be acquired from the same viewpoint. The goal of this work is to design a flexible system comprising two RGBD sensors that can be attached to any mobile C-arm, with the objective of synthesizing projective color images from the X-ray source viewpoint. To achieve this, we calibrate the RGBD sensors to the X-ray source with a 3D calibration object. Then, we synthesize the projective color image from the X-ray viewpoint by applying a volumetric rendering method. Finally, the X-ray image is overlaid on the projective image without any further registration, offering a multimodal visualization of X-ray and color images. In this paper we present the different steps of development (i.e., hardware setup, calibration, and rendering algorithm) and discuss clinical applications for the new video-augmented C-arm. By placing X-ray markers on a patient's hand and a spine model, we show that the overlay accuracy between the X-ray image and the synthesized image is on average 1.7 mm.
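The pipeline described in the abstract (calibrate the RGBD sensors to the X-ray source, synthesize a color view from the X-ray viewpoint, then overlay the X-ray image) can be illustrated with a minimal sketch. This is not the paper's implementation: the paper applies a volumetric rendering method, whereas the sketch below simply splats back-projected RGBD points through a pinhole model. The intrinsics `K_rgbd` and `K_xray` and the extrinsics `R`, `t` (RGBD frame to X-ray source frame) are assumed to be already available from the 3D calibration object step, and all function names are hypothetical.

```python
# Minimal sketch, assuming calibrated intrinsics/extrinsics are given.
# The real system uses a volumetric rendering method; this stand-in splats
# back-projected RGBD points into the X-ray viewpoint with a z-buffer.
import numpy as np

def backproject_depth(depth, K):
    """Back-project a depth map (meters) to 3D points in the RGBD sensor frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def synthesize_xray_view(color, depth, K_rgbd, K_xray, R, t, out_shape):
    """Render the RGBD frame from the X-ray source viewpoint (nearest point wins)."""
    pts = backproject_depth(depth, K_rgbd)        # Nx3 points, RGBD frame
    pts_x = pts @ R.T + t                         # transform into X-ray source frame
    valid = (pts[:, 2] > 0) & (pts_x[:, 2] > 0)   # drop invalid depth / points behind camera
    pts_x, rgb = pts_x[valid], color.reshape(-1, 3)[valid]
    proj = pts_x @ K_xray.T                       # pinhole projection
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)
    H, W = out_shape
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z, rgb = u[inside], v[inside], pts_x[inside, 2], rgb[inside]
    out = np.zeros((H, W, 3), dtype=np.uint8)
    zbuf = np.full((H, W), np.inf)
    for ui, vi, zi, ci in zip(u, v, z, rgb):      # simple z-buffered splatting
        if zi < zbuf[vi, ui]:
            zbuf[vi, ui] = zi
            out[vi, ui] = ci
    return out

def overlay(xray_gray, synth_rgb, alpha=0.5):
    """Alpha-blend the X-ray image onto the synthesized color view; no further
    registration is needed because both share the X-ray source viewpoint."""
    xray_rgb = np.repeat(xray_gray[..., None], 3, axis=-1).astype(float)
    return (alpha * xray_rgb + (1 - alpha) * synth_rgb).astype(np.uint8)
```

In this simplified form, reported overlay accuracy would correspond to the mean Euclidean distance (in mm, via the known pixel spacing) between marker positions detected in the X-ray image and in the synthesized view.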
Proceedings of the 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2015), pp. 72-75
Habert, S., Meng, M., Kehl, W., Wang, X., Tombari, F., Fallavollita, P., & Navab, N. (2015). Augmenting mobile C-arm fluoroscopes via stereo-RGBD sensors for multimodal visualization. In Proceedings of the 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2015) (pp. 72-75). Institute of Electrical and Electronics Engineers Inc. doi:10.1109/ISMAR.2015.24.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/554034