Robot drivers: Learning to drive by trial & error

Bosello M.; Pau G.
2019

Abstract

Autonomous cars have been in the making for over 15 years. Skepticism has taken the place of the initial hype and enthusiasm. Current autonomous driving systems give no guarantee of 100% correctness and reliability, and users are not willing to take a chance on a car that cannot cope with all possible driving scenarios; robotic drivers are expected to be perfect. Major players such as Tesla and Waymo rely on highly detailed maps and very large amounts of sensor data in a race to build the ultimate robotic driver, one able to handle every possible driving scenario. This approach optimizes for safety but delays the dream of fully autonomous cars. In this paper we consider robot-drivers as teen-drivers: eager to learn how to drive, but prone to mistakes in the beginning. The question we investigate is: what if we allow autonomous cars to make mistakes, like young human drivers do? We explore reinforcement learning for small-size autonomous vehicles, fusing information from several sensors, including a camera, color sensors, and sonar sensors. The robot-drivers initially have no information about the driving scenarios; they learn from experience through a reward mechanism designed to quickly help our robot-teen acquire its driving skills.
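The abstract does not give implementation details, so the following is only a minimal sketch of a reward-driven, trial-and-error learner of the kind described: tabular Q-learning over a discretized sensor state, with a hypothetical reward that favors staying on the track (color sensors) and keeping clear of obstacles (sonar). The state encoding, constants, and reward weights are illustrative assumptions, not the authors' design.

```python
# Minimal sketch of a reward-driven trial-and-error driver (tabular Q-learning).
# All names, constants, and the reward shape are illustrative assumptions,
# not the implementation described in the paper.
import random
from collections import defaultdict

ACTIONS = ["left", "straight", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

# Q-values indexed by (discretized sensor state, action), default 0.
q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def reward(on_track: bool, sonar_distance_cm: float) -> float:
    """Hypothetical reward: stay on the track (color sensors),
    avoid getting close to obstacles (sonar)."""
    r = 1.0 if on_track else -10.0
    if sonar_distance_cm < 15.0:
        r -= 5.0
    return r

def choose_action(state) -> str:
    """Epsilon-greedy action selection over the current state."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def update(state, action, r: float, next_state) -> None:
    """Standard tabular Q-learning update."""
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (r + GAMMA * best_next - q_table[state][action])
```

In such a setup the car starts with an empty Q-table (no prior knowledge of the driving scenario) and improves purely by accumulating rewards from its own mistakes, which is the "robot-teen" learning behavior the abstract alludes to.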
2019
Proceedings - 2019 15th International Conference on Mobile Ad-Hoc and Sensor Networks, MSN 2019
pp. 284-290
Robot drivers: Learning to drive by trial & error / Bosello M.; Tse R.; Pau G. - ELECTRONIC. - (2019), pp. 284-290. (Paper presented at the 15th International Conference on Mobile Ad-Hoc and Sensor Networks, MSN 2019, held at the Shenzhen Base of the Hong Kong Polytechnic University (PolyU), China, 2019) [10.1109/MSN48538.2019.00061].
Bosello M.; Tse R.; Pau G.


Use this identifier to cite or link to this document: https://hdl.handle.net/11585/778708

Citations
  • PMC: n/a
  • Scopus: 2
  • Web of Science: 0