
Visual tracking in complex scenes: A location fusion mechanism based on the combination of multiple visual cognition flows

Liu, S; Huang, SC; Wang, S; Muhammad, K; Bellavista, P; Del Ser, J
2023

Abstract

In recent years, deep learning has revolutionized computer vision and has been widely used for monitoring in diverse visual scenes. However, in aspects such as complexity and explainability, deep learning is not always preferable to traditional machine-learning methods. Traditional visual tracking approaches have shown certain advantages in data collection efficiency, computing requirements, and power consumption, and they are generally easier to understand and explain than deep neural networks. At present, traditional feature-based techniques relying on correlation filtering (CF) have become common for understanding complex visual scenes. However, current CF algorithms use a single feature to describe and locate the target. A single feature cannot fully express changeable target appearances in a complex scene, which easily leads to inaccurate target locations in time-varying visual scenes. Moreover, owing to the complexity of surveillance scenes, monitoring algorithms can lose their target. The conventional template update strategy, which takes a frame at fixed intervals as the new template, may lead to unreliable feature extraction and low tracking accuracy. To overcome these issues, in this work we introduce an original location fusion mechanism based on multiple visual cognition processing streams to achieve real-time and efficient visual monitoring in complex scenes. First, we propose a multi-form visual cognitive information extraction process, which is applied periodically to extract multiple feature information flows of a target of interest. Subsequently, a cognitive information fusion process fuses the positioning results of the different visual cognitive information flows to achieve high-quality visual monitoring and positioning. Finally, a novel feature template memory storage and retrieval strategy is adopted: when the location result is unreliable, the target template is retrieved from memory to ensure robust and accurate tracking. In addition, we provide an extensive set of performance results showing that the proposed approach is more robust and has a lower computational cost than 36 state-of-the-art algorithms for visual tracking in complex scenes.
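The abstract describes the mechanism but gives no equations or code. As a rough, purely illustrative sketch (not the authors' implementation), the following Python fragment shows one plausible reading of the pipeline: several feature cues are each localized with a correlation filter, the per-cue results are fused with confidence weights, and a stored template is retrieved from memory when every cue is unreliable. All names (localize, fuse_locations), the PSR-based reliability measure, the threshold value, and the memory structure are assumptions introduced here for illustration.

```python
import numpy as np

def localize(feature_map, template):
    """Correlate one feature channel with its template (same 2D shape)
    and return the response-peak location plus a reliability score.

    NOTE: hypothetical sketch; the paper's actual CF formulation
    (regularization, windowing, scale handling) is not reproduced here."""
    # Cross-correlation computed in the frequency domain.
    response = np.real(np.fft.ifft2(np.fft.fft2(feature_map) *
                                    np.conj(np.fft.fft2(template))))
    peak_idx = np.argmax(response)
    peak = np.array(np.unravel_index(peak_idx, response.shape), dtype=float)
    # Peak-to-sidelobe ratio (PSR): a common, simple confidence measure.
    sidelobe = np.delete(response.ravel(), peak_idx)
    psr = (response.max() - sidelobe.mean()) / (sidelobe.std() + 1e-8)
    return peak, psr

def fuse_locations(feature_maps, templates, memory, psr_threshold=5.0):
    """Confidence-weighted fusion of per-cue localizations, with a
    fallback to the best template stored in memory when all cues fail.

    memory: list of (template, score) pairs accumulated over past frames
    (an assumed structure, standing in for the paper's memory strategy)."""
    locations, weights = [], []
    for fmap, tmpl in zip(feature_maps, templates):
        loc, psr = localize(fmap, tmpl)
        locations.append(loc)
        weights.append(psr)
    weights = np.array(weights)
    if weights.max() < psr_threshold:
        # Every cue is unreliable: retrieve the highest-scoring past
        # template from memory and re-localize with it.
        best_template = max(memory, key=lambda pair: pair[1])[0]
        loc, _ = localize(feature_maps[0], best_template)
        return loc
    weights = weights / weights.sum()
    return np.sum(np.stack(locations) * weights[:, None], axis=0)
```

In practice, each cue would be a different feature map (for example HOG, color statistics, or raw intensity) computed over the same search window, so that the fused location combines complementary descriptions of the target's appearance.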
Liu, S., Huang, S.C., Wang, S., Muhammad, K., Bellavista, P., Del Ser, J. (2023). Visual tracking in complex scenes: A location fusion mechanism based on the combination of multiple visual cognition flows. INFORMATION FUSION, 96, 281-296 [10.1016/j.inffus.2023.02.005].
Files in this record:
File: Revised Manuscript_INFFUS-D-22-00883_25 Jan 2023_Accepted(1).pdf
Embargo until 03/02/2025
Type: Postprint
License: Open Access license. Creative Commons Attribution - NonCommercial - NoDerivatives (CC BY-NC-ND)
Size: 2.58 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/952079
Citations
  • PubMed Central: not available
  • Scopus: 54
  • Web of Science (ISI): 43