Montori, F., Spallone, M., Bedogni, L. (2023). Texting and Driving Recognition leveraging the Front Camera of Smartphones. New York : IEEE [10.1109/CCNC51644.2023.10060838].

Texting and Driving Recognition leveraging the Front Camera of Smartphones

Montori, Federico
2023

Abstract

The recognition of texting while driving is an open problem in the literature and is crucial for safety in the automotive domain. It can enable new insurance policies and increase overall road safety. Many works in the literature leverage smartphone sensors for this purpose; however, these methods have been shown to require a considerable amount of time to perform recognition with sufficient confidence. In this paper we propose to leverage the smartphone front camera to perform image classification and recognize whether the subject is seated in the driver seat or in the passenger seat. We first applied standalone Convolutional Neural Networks with poor results, and then focused on object detection-based algorithms to detect the presence and position of discriminative objects (i.e., the seat belt and the car window). We then applied the model to short videos by classifying them frame by frame until reaching a satisfactory confidence. Results show that we reach around 90% accuracy within only a few seconds of video, demonstrating the applicability of our method in the real world.
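As a rough, purely illustrative sketch of the frame-by-frame aggregation described in the abstract (not the authors' implementation, which is not detailed here), the early-stopping loop over a short video could look like the following Python. The per-frame classifier, the running-average aggregation rule, and the 0.9 confidence threshold are all assumptions introduced for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class FramePrediction:
    """Per-frame output: probability that the phone holder is in the driver seat."""
    driver_prob: float

def classify_video(
    frames: Iterable,                                      # decoded video frames
    classify_frame: Callable[[object], FramePrediction],   # hypothetical per-frame model
    confidence_threshold: float = 0.9,                     # assumed stopping threshold
) -> str:
    """Classify a short video frame by frame, stopping early once the
    accumulated confidence for either class reaches the threshold."""
    total, n = 0.0, 0
    for frame in frames:
        total += classify_frame(frame).driver_prob
        n += 1
        avg = total / n                     # running average of driver probability
        if avg >= confidence_threshold:
            return "driver"
        if (1.0 - avg) >= confidence_threshold:
            return "passenger"
    # Threshold never reached: fall back to the majority decision.
    return "driver" if n and total / n >= 0.5 else "passenger"
```

In such a sketch, classify_frame would wrap the object detector that locates the discriminative objects (seat belt, car window) and maps their positions to a driver probability; the exact aggregation and threshold used in the paper may differ.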
2023
2023 IEEE 20th Annual Consumer Communications & Networking Conference (CCNC)
Pages: 1098-1103
Montori, Federico; Spallone, Marco; Bedogni, Luca
Files in this record:
CCNC_2023___Object_Dectection_Drive.pdf
Open access
Type: Postprint
License: License for free, open access
Size: 2.34 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11585/921851