Towards a representation model and fog-based device orchestration for audio-centric pervasive storytelling
Published in: 2023 4th International Symposium on the Internet of Sounds, pp. 1 - 10
Main Authors:
Format: Conference Proceeding
Language: English
Published: IEEE, 26.10.2023
Summary: Internet of Things (IoT) devices, such as smart speakers and wearables, are increasingly accessible and part of people's daily lives. This opens up new possibilities for innovative storytelling experiences, enabling interactive and truly immersive forms of content consumption that go beyond conventional multimedia. In this context, the need for advances in the representation of pervasive storytelling is evident, and an audio-centric approach utilizing the Internet of Sounds (IoS) can potentially fit better into people's routines because of the widespread audio capabilities of IoT devices. This work proposes a conceptual model entitled A-Presto (Audio-centric PeRvasivE STOrytelling) that aims to realize stories in a pervasive way, adapting to the users' context and the available IoT devices. By modeling the specific domain of audio-centric pervasive storytelling at a high level of abstraction, the proposal transparently supports the variability typical of pervasive environments, such as changes in users' location, device connectivity, power availability, and proximity between users, among others. Supported by latency experiments using a cloud-based orchestrator prototype and local IoT devices, this work proposes a fog-based runtime engine capable of interpreting and orchestrating A-Presto storytelling instances with reduced latency.
DOI: 10.1109/IEEECONF59510.2023.10335450