Wen, J., Giunchi, D., Cascarano, P., Bovo, R., Ofek, E., Steed, A. (2026). Tuning Immersion and Performance with Adaptive Generative Music in VR. IEEE Transactions on Visualization and Computer Graphics, PP, 1-11 [10.1109/TVCG.2026.3679907].
Tuning Immersion and Performance with Adaptive Generative Music in VR
Cascarano P.;
2026
Abstract
Music in virtual environments has long been treated as a temporally evolving element, enhancing atmosphere and game pace but rarely considered as a performance-adaptive element. Recent advances in artificial intelligence (AI) and procedural audio make it possible to generate music that adapts in real time to player actions and system state. Yet, despite its potential, the behavioural impact of such adaptive generative soundtracks in head-mounted display-based virtual reality (VR) remains largely unexplored. To address this gap, we introduce a VR archery system that integrates Google MusicFXDJ with Ubiq-Genie to deliver continuous AI-generated adaptive music driven by gameplay events. In a within-subjects experiment (N = 22), participants completed trials with either a stylistically matched fixed soundtrack or an adaptive soundtrack that escalated tension across four phases as arrows depleted. Measures combined self-reported ratings of presence, focus, stress, and emotional impact with performance metrics of accuracy and aiming time. Results reveal that adaptive generative music not only heightens immersion and emotional salience but also modulates motor precision in an arousal-dependent inverted-U pattern: moderate musical tension improved accuracy and speed, whereas excessive tension impaired them. These findings establish AI-generated music as a powerful behavioural feedback modality in VR, opening pathways for training, rehabilitation, and next-generation immersive entertainment.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
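The abstract describes tension escalating across four phases as the player's arrows deplete. A minimal sketch of such a mapping might look like the following; the function names, thresholds, and prompt strings are illustrative assumptions, not the authors' implementation or the MusicFX DJ API.

```python
def tension_phase(arrows_left: int, arrows_total: int) -> int:
    """Map remaining arrows to one of four tension phases (1 = calm, 4 = max).

    Hypothetical sketch: splits the quiver into four equal bands and
    escalates the phase as arrows are used up.
    """
    if arrows_total <= 0:
        raise ValueError("arrows_total must be positive")
    fraction_used = 1.0 - arrows_left / arrows_total
    # Clamp so an empty quiver stays in the final (fourth) phase.
    return min(4, int(fraction_used * 4) + 1)


def music_prompt(phase: int) -> str:
    """Illustrative text prompts one might feed to a generative music service."""
    prompts = {
        1: "calm ambient, slow tempo",
        2: "building percussion, moderate tempo",
        3: "tense strings, faster tempo",
        4: "intense climax, driving rhythm",
    }
    return prompts[phase]
```

For example, with a 20-arrow quiver, a player with 10 arrows left would be in phase 3, and the game would steer the soundtrack toward the corresponding prompt.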


