whjsoundandmusic

Cave & Swordsman: Close Combat Audio Implementation Practices




The aim of this project was to tackle several areas of audio design and implementation within one project. Firstly, it was a catalyst to get familiar with the spatial audio components of the Wwise Unreal integration, as well as the Wwise Reflect plugin and its use in controlling the parameters of spot reflectors. Secondly, it was an opportunity to experiment with hyper-real sound on combat animation montages and the blueprinting skills combat sound design requires, and finally to explore ways of creating more detailed ambient design.


I created two spatial audio volumes within the map, one for each of the two separate caverns. Two auxiliary buses with reverb effects were created in Wwise and used as the late reverb aux inputs on these volumes. The spatial audio volumes are set to use rooms, enabling the reproduction of a sound source emitting from a different room. As seen in the video below, an occlusion system is in place using the curve settings in Wwise in tandem with the integrated spatial audio: when there is occlusion between the sound source and the listener, the audio signal is filtered and reduced in volume depending on how much the source is occluded. A spatial audio portal was placed in the opening connecting the two rooms, allowing sounds to be heard through the portal.




Surface reflectors were also enabled on the spatial audio volumes, with an appropriate acoustic material applied to each side of the volume. Spot reflectors were also placed around the rooms, again with corresponding acoustic materials attached, to achieve a realistic reflection effect.




The sound of the waterfall is created from several layers, split into two blend containers: one containing more detailed, splashy waterfall sounds recorded up close, and the other containing more washed-out, roaring waterfall sounds.



These two blend containers are nested within a parent blend container in order to crossfade between the two selections of sounds. The blend between the containers is driven by the distance from the player pawn to the waterfall, as seen in the blueprint below. The float holding this value drives an RTPC within Wwise which, as well as defining the blend crossfade, also controls the volume, low pass and high pass filters for all the waterfall sounds.




The majority of combat is dependent on animation sequences triggered by 'skills' - namely fatality, stealth kill, combo and heavy swing. For example, in one of the fatality animations seen below, a large number of Wwise events are used to create a detailed yet exaggerated sound design for these skills, including footstep sounds, hyper-real foley sounds and non-diegetic sound effects.



In the video below, the audio system for the slow motion effect within these animation montages can be seen. To highlight this effect I created a 'hyper_real' state group with an on and an off state. When the state is set to on, pitch, filter and reverb changes are applied to different sound elements so that the intended 'hyper-real' sounds (i.e. skill foley and non-diegetic sounds) are emphasised, for instance by the change to the ambient sounds.



The image below shows the 'hyper_real' state being set during the fatality or stealth kill animations when the slow motion effect is called, as well as an example of the processing that takes place on the ambience bus when the state has been set.






As well as the 'hyper-real' slow motion state used in the fatality sequences, the 'X' key also triggers a bullet-time slow motion mode which has a similar but slightly different effect on the audio. Generally the filter amount is increased, and the selection of audio objects that are affected is also different. The beginning of the video below shows the transition from neither state being set into the slow motion state, and the end shows the transition from the slow motion state into the hyper-real state.




The combat sounds are implemented either through blueprint or through the animations. For instance, within the enemy character blueprint the pain vocal and sword contact damage sound events are posted when damage is applied to the character, whereas the block hit event is called from an AnimNotify on the 'block_hit' animation.



Both the footstep and sword drop sounds are controlled through switch containers assigned to a switch group holding switches for all the material types. Within the switch containers are several random containers holding the sounds of the footstep or sword drop on each surface type. These random containers are assigned to their respective switch through the contents editor.



Within the physics menu of the project settings in Unreal, all the surface material names are created; physical material objects are then created and assigned to them, and these physical materials are in turn assigned to the appropriate materials in Unreal. Two separate switch blueprints are attached as components to the player and enemy actors respectively. The blueprint draws a line trace between the actor and the ground, returning the surface type. This result is checked against the current surface type and, if they differ, the new surface type is set. This is cast to the necessary actor and the switch is set according to the newly set surface type.




The sword drop sounds consist of two random containers within a blend container for each surface type. These two random containers hold soft and hard sword drop sounds for that surface. The crossfade for each blend container is defined by the velocity at which the sword hits the ground. By caching the linear velocity a frame after it has been obtained, the variable holds the last known velocity; subtracting this from the current velocity gives the difference.




Music was added towards the end of the project as a secondary objective: to play with the idea of integrating music as an element of sound design, blurring their distinction or roles in subtle ways. This approach is best illustrated by the sounds used for the significant cluster of fireflies. These musical granular sounds are pitched in key with the music; they are referential to the sounds of the fireflies themselves, but in coordination with the music they act as a sound effect and a musical element at the same time.



To further place the music in the diegesis, the arpeggio part is spatialised within the main room, its transform positioned on the tree. The volume of the arpeggio is driven by an RTPC derived from the distance between the central tree and the player pawn. Spatialising this component of the music gives symbolic weight to the tree, which could be used in a narrative way.



