Rendering sound and images together
Published in: 2013 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM), pp. 395 - 399
Main Authors:
Format: Conference Proceeding
Language: English
Published: IEEE, 01.08.2013
Summary: Computer generated images in cinema and games are rendered from detailed physical models of the scene, resulting in very natural looking (realistic) images as perceived by a human observer. Sound is most often rendered with limited or no reference to these models, so the rendered sound does not achieve the level of realism that the models could provide. In this paper we review methods used for sound mixing and rendering in cinema and games. Acoustic models were standardized in MPEG-4 but are not widely used. Modern cinema sound rendering uses new tools that are popular with cinema directors and producers but do not appear to reference a scene model. Game sound engines do use scene models for obstructions, but not for reverberation. For any new method to be successful, it must yield obviously better results with reasonable CPU load and fit into the existing workflow. A game engine solution is to use the MPEG-4 scene models augmented by adjustable perceptual parameters and convolution with measured reverberation tails. This solution requires a tool and library for assigning acoustic properties to a visual scene and frequency dependent acoustic distribution (radiation) patterns to sound sources.
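To make the proposed rendering step concrete, the sketch below (not code from the paper) illustrates the two pieces the summary calls for: a frequency dependent radiation pattern applied to a sound source, followed by convolution with a measured reverberation tail, with a wet/dry mix as the adjustable perceptual parameter. The band edges, gain exponents, function names, and the synthetic stand-in for a measured tail are all illustrative assumptions.

```python
# Minimal sketch of directional filtering plus convolution reverb.
# All names, band edges, and gain curves are hypothetical, not the
# paper's implementation.
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

# Hypothetical per-octave-band radiation pattern: higher bands
# narrow faster with emission angle, as is typical of real sources.
BAND_EDGES_HZ = [(125, 250), (250, 1000), (1000, 4000)]

def radiation_gains(angle_rad):
    """Toy frequency dependent radiation pattern (one gain per band)."""
    return [max(np.cos(angle_rad), 0.0) ** k for k in (0.5, 1.0, 2.0)]

def apply_radiation(dry, fs, angle_rad):
    """Split the dry signal into bands and weight each by its gain."""
    out = np.zeros_like(dry)
    for (lo, hi), g in zip(BAND_EDGES_HZ, radiation_gains(angle_rad)):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out += g * sosfilt(sos, dry)
    return out

def render_source(dry, fs, angle_rad, tail, wet_mix=0.3):
    """Directional filtering, then convolution with a measured tail."""
    direct = apply_radiation(dry, fs, angle_rad)
    wet = fftconvolve(direct, tail)[: len(direct)]
    # wet_mix is the adjustable perceptual parameter from the summary.
    return (1.0 - wet_mix) * direct + wet_mix * wet

if __name__ == "__main__":
    fs = 48000
    dry = np.random.default_rng(0).standard_normal(fs)  # 1 s test signal
    # Exponentially decaying noise as a stand-in for a measured tail.
    tail = np.exp(-np.linspace(0.0, 8.0, fs)) * \
        np.random.default_rng(1).standard_normal(fs)
    out = render_source(dry, fs, angle_rad=np.pi / 4, tail=tail)
```

In a real engine the tail would come from a measured room impulse response and the radiation pattern from per-source measurement data, with the scene model supplying the emission angle; the structure of the computation would stay the same.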
ISSN: 1555-5798, 2154-5952
DOI: 10.1109/PACRIM.2013.6625509