Paissan F., Cerutti G., Gottardi M., Farella E. (2019). People/Car Classification using an Ultra-Low-Power Smart Vision Sensor. New York, NY: Institute of Electrical and Electronics Engineers (IEEE). doi:10.1109/IWASI.2019.8791337.
People/Car Classification using an Ultra-Low-Power Smart Vision Sensor
Cerutti G.; Farella E.
2019
Abstract
Deploying the Internet of Things (IoT) in our cities will enable them to become smarter, thanks to the connection of everything everywhere, such as smart meters, street lighting, trash-bin sensors, and parking areas. However, a centralized architecture, in which all sensors and actuators send and receive data from the cloud, is not sustainable in terms of both the amount of data flooding from sensors to the cloud and the energy required to keep all these sensors alive. This is particularly true for vision sensors, where the amount of data to be handled and transmitted can be large, while the information we are actually interested in is far less "bulky" (e.g., a classification label or a feature). Data reduction is therefore desirable at the node level. This paper evaluates the use of a smart sensor, the FORENSOR sensor, which embeds motion detection in hardware, in a classification scenario. We achieve 87% accuracy, and we demonstrate the advantages of our sensor with respect to frame-difference-based ones. We discuss the chosen classification algorithm and present an estimate of the power consumption, showing that the overall system consumes less than 2 mW and is thus adequate for an IoT scenario.
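For context on the comparison mentioned above, the sketch below illustrates the conventional frame-difference baseline against which the hardware motion detector is evaluated: motion is flagged only when a sufficient fraction of pixels changes between consecutive frames. This is a minimal illustrative example; the function name and thresholds are assumptions for the sketch, not the paper's implementation or the FORENSOR pipeline.

```python
import numpy as np

def frame_difference_trigger(prev_frame, curr_frame, pixel_thresh=25, area_thresh=0.01):
    """Return True if enough pixels changed between two grayscale frames.

    prev_frame, curr_frame: 2-D uint8 arrays of equal shape.
    pixel_thresh: minimum per-pixel intensity change to count as "moving".
    area_thresh: fraction of the image that must change to trigger an event.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > pixel_thresh
    return moving.mean() > area_thresh

# Example: two synthetic 64x64 frames where a small bright patch appears.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy()
curr[10:20, 10:20] = 200                     # "object" enters the scene
print(frame_difference_trigger(prev, curr))  # True: ~2.4% of pixels changed
```

A baseline of this kind must read out and buffer full frames before deciding whether anything moved, which is exactly the per-node data and energy cost that an in-hardware motion detector avoids.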