Title: | Augmenting mobile C-arm fluoroscopes via stereo-RGBD sensors for multimodal visualization |
Author(s): | Habert, Severine; Meng, Ma; Kehl, Wadim; Wang, Xiang; Tombari, Federico; Fallavollita, Pascal; Navab, Nassir |
Unibo Author(s): | |
Year: | 2015 |
Book title: | Proceedings of the 2015 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2015 |
First page: | 72 |
Last page: | 75 |
Digital Object Identifier (DOI): | http://dx.doi.org/10.1109/ISMAR.2015.24 |
Abstract: | Fusing intraoperative X-ray data with real-time video in a common reference frame is not trivial, since both modalities have to be acquired from the same viewpoint. The goal of this work is to design a flexible system comprising two RGBD sensors that can be attached to any mobile C-arm, with the objective of synthesizing projective color images from the X-ray source viewpoint. To achieve this, we calibrate the RGBD sensors to the X-ray source with a 3D calibration object. Then, we synthesize the projective color image from the X-ray viewpoint by applying a volumetric rendering method. Finally, the X-ray image is overlaid on the projective image without any further registration, offering a multimodal visualization of X-ray and color images. In this paper we present the different steps of development (i.e. hardware setup, calibration, and rendering algorithm) and discuss clinical applications for the new video-augmented C-arm. By placing X-ray markers on a patient's hand and a spine model, we show that the overlay accuracy between the X-ray image and the synthesized image is on average 1.7 mm. |
Final status date: | 16-Jul-2016 |
Appears in types: | 4.01 Contribution in conference proceedings |