
Exploiting Distinctive Visual Landmark Maps in Pan-Tilt-Zoom Camera Networks

LISANTI, GIUSEPPE;
2010

Abstract

Pan–tilt–zoom (PTZ) camera networks play an important role in surveillance systems: they can direct attention to interesting events that occur in the scene. One method to achieve this behavior is a process known as sensor slaving: one or more master cameras monitor a wide area and track moving targets, providing positional information to one or more slave cameras. The slave camera can thus point towards the targets at high resolution. In this paper we describe a novel framework that exploits a PTZ camera network to relate, with high accuracy, the feet position of a person in the image of the master camera to the head position of the same person in the image of the slave camera. Each camera in the network can act as either a master or a slave, allowing the coverage of wide and geometrically complex areas with a relatively small number of sensors. The proposed framework does not require any known 3D location to be specified, and takes into account both zooming and target uncertainties. Quantitative results show good performance in target head localization, independently of the zoom factor of the slave camera. An example of a cooperative tracking approach that exploits the proposed framework is also presented.
DEL BIMBO, ALBERTO; DINI, FABRIZIO; LISANTI, GIUSEPPE; PERNICI, FEDERICO

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/654568

Citations
  • PMC: not available
  • Scopus: 37
  • Web of Science (ISI): 25