Mendula, M., Miozzo, M., Dini, P. (2025). Reservoir Computing in Real-World Environments: Optimizing the Cost of Offline and Online Training. IEEE TechRxiv. doi:10.36227/techrxiv.174742037.78551676/v1.
Reservoir Computing in Real-World Environments: Optimizing the Cost of Offline and Online Training
Mendula, Matteo; Miozzo, M.; Dini, P.
2025
Abstract
The remarkable success of attention-based models in real-world applications has raised a crucial question for Reservoir Computing (RC): can its inherent computational efficiency compete with high-performance, yet energy-intensive, deep learning architectures? Can deep and modular RC neural networks address state-of-the-art challenges in Computer Vision and Natural Language Processing? In an effort to consolidate RC capabilities on more complex tasks, this paper presents a comprehensive cost analysis of the RC offline/online cycle. Our investigation identifies hyperparameter (HP) optimization as a major bottleneck in RC deployment, particularly for practitioners exploring RC capabilities and for those who want to maintain only user-level knowledge of the solution. To address this, we introduce an adaptive ϵ-greedy search exploration mechanism that significantly streamlines the offline optimization process while maintaining high accuracy. Furthermore, we extend existing RC frameworks to support online transfer learning and inference, enabling seamless, fast, and energy-efficient adaptation to real-world environments. By analyzing the impact of optimized HPs on performance, we demonstrate the viability of RC as a powerful and efficient alternative for many practical applications, including those on resource-constrained devices. Experimental results show that our solution reduces the time required for offline HP optimization by 70%, enabling energy savings of up to 88%. Moreover, in the online scenario, it achieves comparable accuracy while reducing memory usage by 66%.
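
To make the adaptive ϵ-greedy idea concrete, the sketch below shows a minimal, generic ϵ-greedy hyperparameter search in Python. It is an illustration of the general technique named in the abstract, not the authors' implementation: the function name, the linear ϵ decay schedule, and the example search space (spectral_radius, leak_rate, units) are all assumptions introduced here for illustration.

```python
# Illustrative sketch (not the paper's implementation) of an adaptive
# epsilon-greedy hyperparameter search: epsilon decays linearly over the
# budget, shifting from random exploration of the search space towards
# local exploitation around the best configuration found so far.
import random

def epsilon_greedy_hp_search(space, evaluate, budget=50,
                             eps_start=0.9, eps_end=0.05):
    """space: dict mapping HP name -> list of candidate values.
    evaluate: callable(config dict) -> validation score (higher is better)."""
    best_cfg, best_score = None, float("-inf")
    for t in range(budget):
        # Linear epsilon decay: explore broadly early, exploit later.
        eps = eps_start + (eps_end - eps_start) * t / max(budget - 1, 1)
        if best_cfg is None or random.random() < eps:
            # Explore: draw a fully random configuration from the grid.
            cfg = {k: random.choice(v) for k, v in space.items()}
        else:
            # Exploit: perturb a single hyperparameter of the current best.
            cfg = dict(best_cfg)
            k = random.choice(list(space))
            cfg[k] = random.choice(space[k])
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical usage on a toy reservoir-style search space:
space = {"spectral_radius": [0.7, 0.9, 0.99],
         "leak_rate": [0.1, 0.3, 0.5, 1.0],
         "units": [100, 300, 500]}
# best, score = epsilon_greedy_hp_search(space, my_validation_fn, budget=30)
```

The appeal of such a scheme, compared with an exhaustive grid search, is that the evaluation budget is fixed up front, which is consistent with the offline-time and energy reductions reported in the abstract.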


