
A Front End for Adaptive Online Listening Tests

Johan Pauwels, Simon Dixon, Joshua D. Reiss
A number of tools to create online listening tests are currently available. They provide an integrated platform consisting of a user-facing front end and a back end to collect responses. These platforms provide an out-of-the-box solution for setting up static listening tests, where questions and audio stimuli remain unchanged and user-independent. In this paper, we detail the changes we made to the webMUSHRA platform to convert it into a front end for adaptive online listening tests. Some of the more advanced workflows that can be built around this front end include session management to resume listening tests, server-based sampling of stimuli to enforce a certain distribution over all participants, and follow-up questions based on previous responses. The back ends required for such workflows need a large amount of customisation based on the exact listening test specification, and are therefore deemed out of scope for this project. Consequently, the proposed front end is not meant as a replacement for the existing webMUSHRA platform, but as starting point to create custom listening tests. Nonetheless, a fair number of the proposed changes are also beneficial for the creation of static listening tests.
            
@inproceedings{2021_5,
  abstract = {A number of tools to create online listening tests are currently available. They provide an integrated platform consisting of a user-facing front end and a back end to collect responses. These platforms provide an out-of-the-box solution for setting up static listening tests, where questions and audio stimuli remain unchanged and user-independent. In this paper, we detail the changes we made to the webMUSHRA platform to convert it into a front end for adaptive online listening tests. Some of the more advanced workflows that can be built around this front end include session management to resume listening tests, server-based sampling of stimuli to enforce a certain distribution over all participants, and follow-up questions based on previous responses. The back ends required for such workflows need a large amount of customisation based on the exact listening test specification, and are therefore deemed out of scope for this project. Consequently, the proposed front end is not meant as a replacement for the existing webMUSHRA platform, but as a starting point to create custom listening tests. Nonetheless, a fair number of the proposed changes are also beneficial for the creation of static listening tests.},
  address = {Barcelona, Spain},
  author = {Pauwels, Johan and Dixon, Simon and Reiss, Joshua D.},
  booktitle = {Proceedings of the International Web Audio Conference},
  editor = {Joglar-Ongay, Luis and Serra, Xavier and Font, Frederic and Tovstogan, Philip and Stolfi, Ariane and Correya, Albin A. and Ramires, Antonio and Bogdanov, Dmitry and Faraldo, Angel and Favory, Xavier},
  month = {July},
  pages = {},
  publisher = {UPF},
  series = {WAC '21},
  title = {A Front End for Adaptive Online Listening Tests},
  year = {2021},
  ISSN = {2663-5844}
}