Monday, 20 July 2009

Dysnomia - Development Diary Part Six



This week, I'd like to talk a little about Dysnomia's audio "engine". It's not so much an engine as it is a simple class to control the XACT project behind the game. XACT (Cross-platform Audio Creation Tool) is Microsoft's high-level audio library and engine, released as part of DirectX.

The XNA community seems to be split between those that love XACT and those that prefer to use XNA's built-in audio calls. Because I started using XACT in XNA 1 when it was the only option, I'm reasonably comfortable with both the tool and the API calls in XNA, so it was my first choice when thinking about the audio in Dysnomia.

There were two tasks I needed the Dysnomia audio engine to perform, alongside the usual simple sound playing. The first was to have some sounds play positionally, according to the location of certain events on screen. Enemy footsteps, weapon hits, doors opening and so on can all occur at any position on the screen, as opposed to simply coming from the player. For this, I needed to set up the Attenuation RPC (Runtime Parameter Control) preset in XACT and use it in my audio class when playing the appropriate sounds.

First of all, I modified the "Distance" Cue variable, giving it a range of 0 to 1000. The range can represent anything you want it to, but in my case I've set it up to be the distance in pixels between the player and the sound emitter.
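
If you ever wanted to drive the variable by hand, instead of through the 3D PlayCue overload we'll use later in this post, it would look something like this. This is just a sketch - playerPos and emitterPos are stand-ins for whatever positions you have to hand, and EffectsSoundBank is the sound bank from my Audio class:

// Sketch only: setting the "Distance" cue variable manually, in pixels.
Cue cue = EffectsSoundBank.GetCue("door1");
float pixelDistance = Vector2.Distance(playerPos, emitterPos);
cue.SetVariable("Distance", MathHelper.Clamp(pixelDistance, 0.0f, 1000.0f));
cue.Play();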


Next, we set up an "Attenuation" RPC preset. As you can see in the screenshot, we are setting up the parameter to control the volume of the playing sound, using our Distance variable as input. The curve is set up to roll off the volume of the sound as the Distance variable increases, with a sharp rolloff at the end to completely mute the sound when Distance reaches the point where the player should no longer hear the sound.
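
To give you an idea of what the curve is doing, here's a rough code equivalent. This is purely illustrative - the real mapping lives on the RPC curve in the XACT project, and the specific numbers here are made up:

// Illustrative only - the real rolloff is authored on the RPC curve in XACT.
// Volume falls gently as Distance grows, then drops to silence once the
// emitter is beyond earshot of the player.
static float AttenuationInDb(float distance, float maxAudibleDistance)
{
    if (distance >= maxAudibleDistance)
        return -96.0f;                                // effectively muted
    return -24.0f * (distance / maxAudibleDistance);  // gentle rolloff to -24 dB
}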


Next up, we need to attach the Attenuation RPC to all sounds that need to be played positionally. Simply right-click the Attenuation preset and select "Attach/Detach Sounds". Select all necessary sounds and click "Attach" to move them to the attached column.


That's it for XACT. Now for some code. First of all, we need to set up an AudioEmitter and an AudioListener. Note that all code here comes from my Audio class, which is static simply because I'm using the same XACT project and calls for the entire game.

public static AudioEmitter aE = new AudioEmitter();
public static AudioListener aL = new AudioListener();

private static Vector3 aEVect = Vector3.Zero;
private static Vector3 aLVect = Vector3.Zero;

As you can see, I've set up a Vector3 for the Emitter and the Listener as well. We'll use these in the play method. First though, I'm initialising a Velocity for the Emitter. I do this in the LoadContent method of my Audio class:

public static void LoadContent(ContentManager content)
{
    ...
    aE.Velocity = new Vector3(30.0f, 30.0f, 30.0f);
    ...
}

You might have to play around with this to get the velocity just right. 30 seems to work for me. Next up, we define a method to play any sound positionally:

public static void Play3D(string EffectName)
{
    aL.Position = aLVect;
    aE.Position = aEVect;
    EffectsSoundBank.PlayCue(EffectName, aL, aE);
}

So, we're passing in only the name of the sound we wish to play. The method sets the position of the AudioListener and the AudioEmitter to the values of aLVect and aEVect, which we set up earlier. Lastly, it calls PlayCue on our SoundBank, using the positional override.
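
For completeness, non-positional sounds (the "usual simple sound playing" I mentioned earlier) just call PlayCue without the listener and emitter - something like this:

// Non-positional playback: no listener or emitter, XACT just plays the cue.
public static void Play(string EffectName)
{
    EffectsSoundBank.PlayCue(EffectName);
}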

Now for an example of the methods we call from the actual game classes:

public static void DoorOpen(ref Vector2 DoorPos)
{
    aEVect.X = DoorPos.X;
    aEVect.Y = DoorPos.Y;
    Play3D("door1");
}

So, this method is for playing the sound of a door opening. We pass in the location of the door in the game world, and copy the values to the X and Y positions of our Emitter Vector3.
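
On the game side, the call is then a one-liner - in the door's own code it might look like this (the Door class's Position field is an assumption on my part):

(in Door.Open, purely illustrative:)
Audio.Audio.DoorOpen(ref Position);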

Time out for a quick explanation of what's going on. In my game, I know the position of the player in the world. The AudioListener must be set to the player's position in the world, which I do on each update:

(in Player.Update:)
Audio.Audio.UpdateListenerPosition(ref Position);

(in Audio:)
public static void UpdateListenerPosition(ref Vector2 PlayerPos)
{
    aLVect.X = PlayerPos.X;
    aLVect.Y = PlayerPos.Y;
}

Every time we play a sound in the world, we know where the sound is coming from. A door, an enemy - they all have a position in the world, so we set the AudioEmitter's position to match. When PlayCue is called, XNA takes care of positioning the sound for us based on the player's position (AudioListener) and the sound's position (AudioEmitter). Our Attenuation RPC further affects the resulting sound by rolling off the volume according to the distance between Emitter and Listener. Simple? Don't worry - follow the steps and you'll get it eventually!
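
Every other positional sound in the game follows the same pattern as DoorOpen. As an illustration (the cue name here is just an example), an enemy footstep method would look like this:

public static void EnemyFootstep(ref Vector2 EnemyPos)
{
    // Same idea as DoorOpen: move the emitter to the sound's world position,
    // then let Play3D and the Attenuation RPC handle the rest.
    aEVect.X = EnemyPos.X;
    aEVect.Y = EnemyPos.Y;
    Play3D("footstep1");
}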

In my next diary post, I'll write about the second task I needed the engine to perform - crossfading audio tracks. For now, I leave you with another gameplay video.


