Sun, G., Niyato, D., Wang, J., Fonseca, N., Bellavista, P., & Yeh, S. (2025). Guest Editorial: Applications of Large Language Models in Internet of Things. IEEE Internet of Things Magazine, 8(6), 14-16. doi: 10.1109/miot.2025.3598961
Guest Editorial: Applications of Large Language Models in Internet of Things
Bellavista, Paolo
2025
Abstract
Large Language Models (LLMs) are rapidly migrating from cloud-centric services to serve as the cognitive core of the Internet of Things (IoT). This evolution marks the second wave of innovation in the field, moving beyond initial feasibility studies to tackle the complex challenges of deploying robust, scalable, and truly autonomous systems. The first part of this special issue laid the groundwork, showcasing the foundational potential of LLMs in the IoT domain. Now, as the installed base of IoT endpoints is projected to generate zettabytes of multimodal data, the need for advanced, on-device intelligence has never been more critical. Concurrently, advancements such as parameter-efficient fine-tuning (PEFT), aggressive quantization, and powerful on-device neural accelerators are making the deployment of sophisticated models on resource-constrained hardware both technically and economically viable.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
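To make the "aggressive quantization" enabler mentioned in the abstract concrete, the following is a minimal illustrative sketch (not drawn from the editorial itself) of symmetric per-tensor int8 post-training quantization, the basic idea behind shrinking LLM weights for resource-constrained IoT hardware; the function names and the toy weight matrix are hypothetical.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map float weights to int8 in [-127, 127]."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from the int8 codes."""
    return q.astype(np.float32) * scale

# Toy stand-in for one LLM weight matrix.
w = np.array([[0.5, -1.2], [0.03, 0.9]], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# int8 storage is 4x smaller than float32; the price is bounded rounding error.
max_err = float(np.max(np.abs(w - w_hat)))
```

The worst-case reconstruction error of this scheme is half the scale step, which is why it degrades accuracy gracefully and is often paired with PEFT-style adapters to recover quality on-device.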


