Tying Sounds into Animations
Most of the effect of any VR environment is smoke and mirrors: making the environment look, sound and feel as real as possible, in as few clock cycles as possible. One area that has long fallen short of this encompassing illusion is the tying of sounds into animation effects.
Typically, pre-canned sound files are played whenever the engine detects suitable object animation. For example, toss a small stone into a pond, and you hear a pre-recorded splash sound. Toss a larger stone in, and you get the same splash sound. Let a building slide in, and again, the identical splash sound.
The problem is, sounds are rarely the same twice. The sound of a tennis ball hitting water should be completely different to that of a piece of paper hitting water. Likewise, a flat sheet of paper should make a different sound to a scrunched-up ball of paper. Pre-recording all the sounds you could ever desire is, of course, an impossible task. It would take a large team decades to assemble even a subset, and the results would occupy a hundred terabytes or more of storage space.
What if there were another way? What if you could algorithmically generate the sound you required, in real time? You could analyze the shape and material of the colliding objects or zones, and calculate when and how to emit sounds.
It sounds like a fanciful dream, but that is exactly what Cornell University achieved at the start of June 2009.
The work by Doug James, associate professor of computer science, and graduate student Changxi Zheng was enough of a breakthrough to be selected for presentation at the 2009 ACM SIGGRAPH conference. By analyzing the interaction between water (or other fluids) and fast-moving objects of different shapes and densities, they were able to devise simple, fast algorithms based on the spread of bubble formation at the contact point. These bubbles, both air entering the water and water entering the air, analyzed as a particle swarm, are responsible for the sound production.
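The physical intuition behind bubble-based sound can be sketched in a few lines of code. The sketch below is not the Cornell researchers' algorithm; it is a minimal illustration built on the classical Minnaert resonance formula, which gives the ringing frequency of a spherical air bubble in water, plus an assumed exponential decay to shape each bubble's "plink". The decay rate and duration values are illustrative choices, not measured data.

```python
import math

def minnaert_frequency(radius_m, gamma=1.4, p0=101325.0, rho=998.0):
    """Resonant frequency (Hz) of a spherical air bubble in water.

    gamma: heat capacity ratio of air, p0: ambient pressure (Pa),
    rho: water density (kg/m^3). A 1 mm bubble rings near 3.3 kHz.
    """
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

def bubble_tone(radius_m, duration_s=0.05, sample_rate=44100, decay=80.0):
    """One bubble event as an exponentially decaying sinusoid.

    decay is an assumed damping constant (1/s); real bubbles damp
    through viscous, thermal and radiative losses.
    """
    f0 = minnaert_frequency(radius_m)
    samples = []
    for i in range(int(duration_s * sample_rate)):
        t = i / sample_rate
        samples.append(math.exp(-decay * t) * math.sin(2.0 * math.pi * f0 * t))
    return samples
```

Summing many such tones, with radii and start times driven by the simulated swarm of bubbles at the contact point, yields a splash that is different every time, because the bubble population is different every time.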
The direct result is that an object hitting water splashes, bubbles, sploshes, plops, drips, or splatters according to a dynamic analysis, instead of a sound file. The sound produced is unique to each event, and in sync with it.
The work does not end there. This is but the first step in a broader research program on sound synthesis supported by a $1.2 million grant from the Human Centered Computing Program of the US National Science Foundation (NSF).
The next stage is to analyze solid objects colliding, how this affects the atmosphere at the point of contact, and to develop algorithms that accurately replicate the sound of different materials, densities, and shapes colliding at different velocities. If, or when, that is accomplished, we will have a much more realistic kind of environment, where sounds are generated realistically by banging objects together, without the need for developers to spend resources compiling stock sound files. A win for both sides of the process.
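A common approach to this kind of solid-impact synthesis is modal synthesis: a struck object rings at a set of resonant modes, each a damped sinusoid, and the impact velocity scales how strongly the modes are excited. The sketch below is a generic illustration of that idea, not the Cornell team's method; the mode tables for "wood" and "metal" are made-up illustrative values, not measurements.

```python
import math

# Hypothetical mode tables: (frequency Hz, decay rate 1/s, relative amplitude).
# Metal modes decay slowly (long ring); wood modes decay fast (dull thud).
MATERIAL_MODES = {
    "wood":  [(220.0, 30.0, 1.0), (560.0, 45.0, 0.5), (1180.0, 70.0, 0.25)],
    "metal": [(440.0,  4.0, 1.0), (1320.0, 6.0, 0.6), (2750.0,  9.0, 0.4)],
}

def impact_sound(material, velocity, duration_s=0.5, sample_rate=44100):
    """Sum of damped sinusoidal modes, excited in proportion to impact velocity."""
    modes = MATERIAL_MODES[material]
    out = []
    for i in range(int(duration_s * sample_rate)):
        t = i / sample_rate
        out.append(sum(velocity * a * math.exp(-d * t)
                       * math.sin(2.0 * math.pi * f * t)
                       for f, d, a in modes))
    return out
```

Swapping the mode table swaps the material: the same collision event produces a thud for wood and a ring for metal, with no stock sound file involved. In a full system the modes would be derived from the object's shape and material properties rather than hand-tuned.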
The current methods still require hours of offline computation and work best on compact sound sources, the researchers noted, but they said further development should make possible the real-time performance needed for interactive virtual environments.