Real Time Synthesized Sound Effects Web Service

Thomas Vasallo, Adan L Benito
Sound effects are employed in the post-production process to create tension, atmosphere, and emotion, as well as to draw focus to desired aspects of a scene. Traditionally, sound designers must either source these sounds from commercially available libraries or record the audio themselves, and they are then usually required to manually manipulate these sources to accurately sonify the scene. This whole process demands time, planning, and effort from the sound designer. While traditional synthesis techniques have been adapted for incorporation into this process, the synthesisers employed are rarely designed for this particular purpose. The RTSFX (Real Time Sound Effect Synthesis) web-based platform offers a range of synthesis models tailored to recreate a spectrum of sound effects suited to this task. An exposed set of parameters allows a particular sound object to be fine-tuned and manipulated to match the desired characteristics, and a basic selection of post-processing options within the platform allows for a self-contained sound design process. The platform runs in the browser, where the sounds are generated in real time. A client-side architecture has been employed, allowing a more flexible workflow for sound designers: the platform is easily accessible to the user, requires no permanent local memory allocation, is not limited by server availability, and provides a low-latency experience. RTSFX relies on the standardised Web Audio API to establish a framework for synthesising effects. Different models can be built from a mixture of Web Audio API nodes, customised JavaScript processors, or Pure Data patches through the use of WebPD or Enzien Audio Heavy. A number of different approaches are used in the model design process, ranging from accurate representations of physical phenomena to more perceptually informed qualitative methods. RTSFX offers a centralised set of elements that may be utilised to build these designs. The platform is an ever-growing collection of synthesis models, with the scope to incorporate more complex techniques that analyse audio sources in order to generate models.
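The abstract does not include code, but as an illustration of the kind of model it describes, the sketch below assembles a simple wind-like sound effect purely from Web Audio API nodes: looped white noise shaped by a band-pass filter, with a low-frequency oscillator modulating the filter to mimic gusts. The node graph and the exposed parameter names (cutoff, gustRate, gustDepth) are illustrative assumptions, not taken from the RTSFX platform itself.

// Minimal sketch of a wind-like sound effect built from Web Audio API nodes.
// Node graph and parameter names are assumptions for illustration only.
const ctx = new AudioContext(); // may need ctx.resume() after a user gesture

// White-noise source: a looped buffer of random samples.
const seconds = 2;
const buffer = ctx.createBuffer(1, ctx.sampleRate * seconds, ctx.sampleRate);
const data = buffer.getChannelData(0);
for (let i = 0; i < data.length; i++) {
  data[i] = Math.random() * 2 - 1;
}
const noise = ctx.createBufferSource();
noise.buffer = buffer;
noise.loop = true;

// Band-pass filter shapes the noise into a wind-like spectrum.
const filter = ctx.createBiquadFilter();
filter.type = 'bandpass';
filter.frequency.value = 400;   // exposed "cutoff" parameter (Hz)
filter.Q.value = 1.5;

// A slow LFO modulates the filter frequency to mimic gusts.
const lfo = ctx.createOscillator();
lfo.frequency.value = 0.25;     // exposed "gustRate" parameter (Hz)
const lfoGain = ctx.createGain();
lfoGain.gain.value = 200;       // exposed "gustDepth" parameter (Hz swing)
lfo.connect(lfoGain).connect(filter.frequency);

// Output gain for overall level.
const out = ctx.createGain();
out.gain.value = 0.3;

noise.connect(filter).connect(out).connect(ctx.destination);
noise.start();
lfo.start();

Exposing filter.frequency, lfo.frequency, and lfoGain.gain as user-facing controls corresponds to the "exposed set of parameters" the abstract describes; since all synthesis happens in AudioNodes on the client, no server round-trip is involved.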
            
@inproceedings{2017_EA_79,
  address = {London, United Kingdom},
  author = {Vasallo, Thomas and Benito, Adan L},
  booktitle = {Proceedings of the International Web Audio Conference},
  editor = {Thalmann, Florian and Ewert, Sebastian},
  month = {August},
  publisher = {Queen Mary University of London},
  series = {WAC '17},
  title = {Real Time Synthesized Sound Effects Web Service},
  year = {2017},
  issn = {2663-5844}
}