Supplementary audio files for: Effect of laboratory conditions on the perception of virtual stages for music (2025)
Introduction
These audio files accompany the preprint by Accolti (2025), which presents a preliminary study on the effect of the acoustical conditions of three different rooms on the perception of virtual stages for music. The three laboratory rooms include:
- An anechoic room, which represents an ideal recording condition.
- A custom-made hearing booth 1 with insufficient sound absorption, likely representing the worst-case scenario.
- A custom-made hearing booth 2 with better, achievable absorption, serving as a compromise scenario.

The aim of the study is to assess how these environments affect the perception of virtual stages for music.
Description of the virtual stages
Two virtual stage configurations were simulated:

- A small stage: 12 m (width) × 10 m (depth) × 6 m (height)
- A large stage: 24 m (width) × 10 m (depth) × 12 m (height)
Both stages share a common audience area of 41.5 m (length) × 23 m (width) × 19 m (height); see the virtual room model in Accolti (2025).
The surface properties used were:

- Audience area: absorption coefficient = 0.80, scattering coefficient = 0.70
- Remaining surfaces: absorption coefficient = 0.20, scattering coefficient = 0.10
TABLE I: Main conditions of the three laboratory rooms
| Room | Width | Length | Height | α (absorption coefficient) |
|---|---|---|---|---|
| Anechoic room | 3.5 m | 4.5 m | 2.5 m | 0.99 |
| Hearing booth 1 | 2.0 m | 2.0 m | 2.0 m | 0.50 |
| Hearing booth 2 | 2.1 m | 3.0 m | 2.5 m | 0.97 |
Soundfield simulation
Simulations of both the virtual stages and the three laboratory rooms were carried out using Raven [Schröder & Vorländer, 2011], a software framework based on image-source and ray-tracing methods. Raven allows the direct sound to be skipped, making it possible to simulate only the reflections in a room without the direct sound reaching the listener.

A violist was placed at the centre of each virtual stage. The directivity of the sound source and the listener's head-related transfer function (HRTF) were modelled using public databases [Ackermann & Brinkmann, 2024; Brinkmann et al., 2017].

The anechoic recording used is the first 6 seconds of the third movement of Summer from Vivaldi's Four Seasons (RV 315), sourced from the Sorbonne University database [Thery & Katz, 2019].
Audio file naming convention
Each audio file is named using the format:
R<r><aa>_<t>.wav Where:
<r> = l for large or s for small concert hall model
<aa> = absorption coefficient of the laboratory room, as a percentage: 50, 97, 99, or empty
(empty) → no lab effect included
<t> = type of simulation:
v: default (anechoic rendering in the concert hall)
u: lab effect only (the simulated coloration due to the room)
T: combined (anechoic rendering + lab effect)
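The naming scheme above can be expressed as a short parser. This is an illustrative sketch only; the regular expression and the function name are not part of the original materials:

```python
import re

# Pattern for the naming convention R<r><aa>_<t>.wav:
#   <r>  = 'l' (large) or 's' (small) virtual stage
#   <aa> = lab absorption in % (50, 97, 99) or empty (no lab effect)
#   <t>  = 'v' (virtual stage only), 'u' (lab effect only), 'T' (combined)
FILENAME_RE = re.compile(
    r"^R(?P<stage>[ls])(?P<absorption>50|97|99)?_(?P<kind>[vuT])\.wav$"
)

def parse_name(name):
    """Return a dict describing the audio file, or None if the name doesn't match."""
    m = FILENAME_RE.match(name)
    if m is None:
        return None
    return {
        "stage": "large" if m.group("stage") == "l" else "small",
        "absorption": m.group("absorption"),  # None when no lab effect is included
        "kind": {"v": "virtual stage only",
                 "u": "lab effect only",
                 "T": "combined"}[m.group("kind")],
    }

print(parse_name("Rl99_T.wav"))
print(parse_name("Rs_v.wav"))
```

For example, `Rl99_T.wav` parses as the large stage combined with the anechoic-room lab effect (α = 0.99), while `Rs_v.wav` is the small stage with no lab effect.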
How to listen and compare
You can aurally compare the pure virtual stage simulation (e.g., Rl_v) with the versions colored by each laboratory room:
Rl99_T: Anechoic room
Rl50_T: Hearing booth 1
Rl97_T: Hearing booth 2
Alternatively, load Rl_v into a DAW and add the isolated coloration (Rl99_u, Rl50_u, Rl97_u) as a second track. You may use mute/unmute for A/B comparisons.
Similar comparisons can be made with the small virtual hall using:
Rs_v vs Rs99_T, Rs50_T, Rs97_T
Rs_v in one track and Rs99_u, Rs50_u, Rs97_u in another track
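The two-track DAW workflow above amounts to summing the dry rendering and the lab coloration sample by sample. A minimal sketch of that idea, using toy sample lists in place of the actual audio data (the function name and signals are hypothetical, not part of the released files):

```python
def mix(dry, coloration):
    """Sum two equal-length sample sequences, as a DAW does when both tracks play."""
    if len(dry) != len(coloration):
        raise ValueError("tracks must have the same length")
    return [a + b for a, b in zip(dry, coloration)]

# Toy signals standing in for Rl_v (dry rendering) and Rl99_u (lab coloration).
# Values are dyadic fractions so the sums are exact in floating point.
dry = [0.5, -0.25, 0.0, 0.125]
coloration = [0.125, 0.25, -0.5, 0.0]
print(mix(dry, coloration))  # → [0.625, 0.0, -0.5, 0.125]
```

Muting the coloration track in a DAW corresponds to dropping the second operand of the sum, which is what makes the A/B comparison between the pure and colored renderings straightforward.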
References
Accolti, E. (2025). Effect of laboratory conditions on the perception of virtual stages for music. arXiv preprint. https://arxiv.org/abs/2505.20552
Ackermann, D., & Brinkmann, F. (2024). A database with directivities of musical instruments. J. Audio Eng. Soc., 72(3).
Brinkmann, F., Lindau, A., Weinzierl, S., Van De Par, S., Müller-Trapet, M., Opdam, R., & Vorländer, M. (2017). A high resolution and full-spherical head-related transfer function database for different head-above-torso orientations. Journal of the Audio Engineering Society, 65(10), 841–848.
Schröder, D., & Vorländer, M. (2011). Raven: A real-time framework for the auralization of interactive virtual environments. In: Forum Acusticum, pp. 1541–1546.
Thery, D., & Katz, B. F. G. (2019). Anechoic audio and 3D-video content database of small ensemble performances for virtual concerts. In: International Congress on Acoustics (ICA 2019).