Multi-Modal Web-Based Dashboards for Geo-Located Real-Time Monitoring

R. Michael Winters, Takahiko Tsuchiya, Lee W. Lerner, Jason Freeman
This paper describes ongoing research in the presentation of geo-located, real-time data using web-based audio and visualization technologies. Due to both the increasing number of devices and the diversity of information being accumulated in real time, there is a need for cohesive techniques to render this information in a usable and functional way for a variety of audiences. We situate web-sonification—sonification of web-based information using web-based technologies—as a particularly valuable avenue for display. When combined with visualizations, it can increase engagement and allow users to profit from the additional affordances of human hearing. This theme is developed in the description of two multi-modal dashboards designed for data in the context of the Internet of Things (IoT) and Smart Cities. In both cases, Web Audio provided the back-end for sonification, but a new API called DataToMusic (DTM) was used to make common sonification operations easier to implement. DTM provides a valuable framework for web-sonification, and we highlight its use in the two dashboards. Following our description of the implementations, the dashboards are compared and evaluated, contributing to general conclusions on the use of Web Audio for sonification and suggestions for future dashboards.
            
@inproceedings{2016_84,
  abstract = {This paper describes ongoing research in the presentation of geo-located, real-time data using web-based audio and visualization technologies. Due to both the increasing number of devices and the diversity of information being accumulated in real time, there is a need for cohesive techniques to render this information in a usable and functional way for a variety of audiences. We situate web-sonification—sonification of web-based information using web-based technologies—as a particularly valuable avenue for display. When combined with visualizations, it can increase engagement and allow users to profit from the additional affordances of human hearing. This theme is developed in the description of two multi-modal dashboards designed for data in the context of the Internet of Things (IoT) and Smart Cities. In both cases, Web Audio provided the back-end for sonification, but a new API called DataToMusic (DTM) was used to make common sonification operations easier to implement. DTM provides a valuable framework for web-sonification, and we highlight its use in the two dashboards. Following our description of the implementations, the dashboards are compared and evaluated, contributing to general conclusions on the use of Web Audio for sonification and suggestions for future dashboards.},
  address = {Atlanta, Georgia},
  author = {Winters, R. Michael and Tsuchiya, Takahiko and Lerner, Lee W. and Freeman, Jason},
  booktitle = {Proceedings of the International Web Audio Conference},
  editor = {Freeman, Jason and Lerch, Alexander and Paradis, Matthew},
  month = {April},
  publisher = {Georgia Tech},
  series = {WAC '16},
  title = {Multi-Modal Web-Based Dashboards for Geo-Located Real-Time Monitoring},
  year = {2016},
  issn = {2663-5844}
}