
City 16 Audio Implementation Practices

  • whjsoundandmusic
  • Dec 10, 2019
  • 5 min read

Updated: Dec 17, 2019

In the 'City16' project, the audio was implemented using Wwise audio middleware and its integration with Unreal Engine 4. This post covers the audio systems set up within Wwise, as well as the implementation and blueprinting methods used to program and manipulate the audio based on in-game parameters. The project also covers approaches to adaptive music, and experiments with methods to integrate the functionality and aesthetic of the music with the sound design.


The map is composed of two subway areas and a street-level area. The audio between these areas is separated using the Wwise integration's spatial audio objects, specifically the spatial audio volumes with the 'rooms' feature enabled. A portal is then placed in each opening where the stairs lead to street level, enabling audio bleed and occlusion between the 'rooms'.



Security tracking cameras are placed throughout the map. The blueprints for these cameras contain an alarm system function which is triggered when the player enters the camera's view cone. The audio system implemented for this function is made up of several parts: a number of scanning loops for when the camera is tracking and when it has been alerted, and a combination of sounds that indicate the camera has been triggered and reset. All of these Wwise events are posted within the blueprint, as seen in the blueprint below.
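
To illustrate the shape of this logic outside of the blueprint, here is a minimal C++ sketch using the Wwise SDK. The event and state names (Play_Camera_Scan, Play_Camera_Alarm, Camera_Trigger and so on) are hypothetical stand-ins for the project's actual names; in the project itself this logic lives in the camera blueprint.

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // Hypothetical event/state names; the project posts these from blueprint nodes.
    void OnPlayerEnteredViewCone(AkGameObjectID cameraObj)
    {
        // Swap the idle scanning loop for the alerted one and sound the alarm.
        AK::SoundEngine::PostEvent("Stop_Camera_Scan", cameraObj);
        AK::SoundEngine::PostEvent("Play_Camera_Alarm", cameraObj);
        // The Camera_Trigger state group also drives changes in the music (see later).
        AK::SoundEngine::SetState("Camera_Trigger", "On");
    }

    void OnCameraReset(AkGameObjectID cameraObj)
    {
        AK::SoundEngine::PostEvent("Play_Camera_Reset", cameraObj);
        AK::SoundEngine::PostEvent("Play_Camera_Scan", cameraObj);
        AK::SoundEngine::SetState("Camera_Trigger", "Off");
    }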



The pitch of both the un-triggered and triggered scanning sounds is manipulated based on the yaw rotation of the camera actor: the rotation value sets an RTPC which manipulates the pitch of the sound in time with the animation, according to the curve pictured below.
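
The runtime side of this is a single RTPC update per tick. A minimal sketch, assuming a Game Parameter named Camera_Yaw (the project's actual parameter name and curve are authored in Wwise):

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // Called each tick; yawDegrees comes from the camera actor's current rotation.
    void UpdateCameraScanPitch(AkGameObjectID cameraObj, float yawDegrees)
    {
        // The RTPC curve in Wwise maps this value onto the pitch of the scanning loops.
        AK::SoundEngine::SetRTPCValue("Camera_Yaw", yawDegrees, cameraObj);
    }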



Several approaches were taken to create slight variety between the camera sounds from one game object to the next. Firstly, for the un-triggered loop, a random container with multiple scanning loops was added to the event, introducing a different layer for each game object. Secondly, a seek action was used within every event that uses looping sounds, with the seek percentage randomised between 0 and 100% so that each loop starts from a different position. Finally, any incidental or un-looped sounds are also contained within random containers, so a different version of the sound occurs each time one of these events is triggered.
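
In the project the randomised seek is authored inside the Wwise event itself, but the behaviour can be sketched from code with the SDK's SeekOnEvent. Event name hypothetical as before:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>
    #include <cstdlib>

    void StartScanLoopAtRandomPosition(AkGameObjectID cameraObj)
    {
        AK::SoundEngine::PostEvent("Play_Camera_Scan", cameraObj);
        // Seek to a random point in the loop; 0.0f-1.0f corresponds to 0-100%.
        float percent = static_cast<float>(std::rand()) / static_cast<float>(RAND_MAX);
        AK::SoundEngine::SeekOnEvent("Play_Camera_Scan", cameraObj, percent);
    }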



Within the map there are also two small floating robots that patrol the street level, their paths driven by the level sequencer. The main body of the sound attached to these robots is made up of numerous loops that emulate the sound of them hovering or floating. The pitch of these hovering sounds is again controlled through an RTPC, set based on the speed at which the robot actor moves.
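
Because the robots are moved by the sequencer rather than by physics, their speed has to be derived from how far the actor travels each frame. A sketch, with Robot_Speed as a stand-in Game Parameter name:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // Derive speed from the distance moved since the last frame and feed the RTPC.
    void UpdateHoverPitch(AkGameObjectID robotObj, float distanceMovedThisFrame, float deltaSeconds)
    {
        float speed = (deltaSeconds > 0.0f) ? distanceMovedThisFrame / deltaSeconds : 0.0f;
        // An RTPC curve in Wwise maps Robot_Speed onto the pitch of the hover loops.
        AK::SoundEngine::SetRTPCValue("Robot_Speed", speed, robotObj);
    }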



The rest of the sounds for these actors are held within a blend container. The blend container's crossfade is driven by an RTPC controlled by the distance between the player and the robots: when the player is in close proximity, the more detailed 'vocal'-like sounds and scanning sounds are introduced; as the distance between the two actors grows, the blend moves over to a sparser combination of sounds, allowing the hovering loop to take precedence.
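
At runtime this again reduces to one RTPC update per frame; the crossfade itself is authored on the blend container. A sketch assuming a Game Parameter named Distance_To_Player:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>
    #include <cmath>

    // Compute the player-robot distance from world positions and feed the RTPC.
    void UpdateRobotBlend(AkGameObjectID robotObj,
                          float px, float py, float pz,  // player position
                          float rx, float ry, float rz)  // robot position
    {
        float dist = std::sqrt((px - rx) * (px - rx) +
                               (py - ry) * (py - ry) +
                               (pz - rz) * (pz - rz));
        // The blend container's crossfade is mapped to this parameter in Wwise.
        AK::SoundEngine::SetRTPCValue("Distance_To_Player", dist, robotObj);
    }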



To add more interest and depth to the sound design of these robots, a fly-by sound is posted within the blueprint. The event is only posted when the conditions for the branch are met: in this case, when the robot's speed exceeds a set value and the distance between that particular actor and the player is below a set threshold, as seen in the blueprint below (a continuation of the previous blueprint).
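
In code terms the branch is a simple compound condition. The threshold values below are placeholders rather than the project's tuned numbers:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // Placeholder thresholds; the actual values are tuned in the blueprint.
    const float kFlyBySpeedThreshold    = 600.0f;  // units per second
    const float kFlyByDistanceThreshold = 800.0f;  // world units

    void MaybePostFlyBy(AkGameObjectID robotObj, float speed, float distanceToPlayer)
    {
        // Fast AND close: only then is the fly-by sound posted.
        if (speed > kFlyBySpeedThreshold && distanceToPlayer < kFlyByDistanceThreshold)
        {
            AK::SoundEngine::PostEvent("Play_Robot_FlyBy", robotObj);
        }
    }

A real implementation would also gate this with a cooldown so the event is not re-posted every frame while both conditions hold.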



The ambience for the street level is constructed from a large quantity of ambient recordings and sound effects, varying between close and distant traffic, distant crowd and general ambient noise, wind sounds, and multiple incidental effects. To give the world a sense of dynamism and reality, all the incidental sound effects are held within various random containers, and the initial delay and trigger rates for all of these containers are randomised to varying degrees. This gives the overall street-level ambience a degree of unpredictability.



Another element used to give depth to the ambience is the way the sound of the flickering lights has been implemented. The volume and low-pass filtering of this loop are controlled by the intensity of the light attached to the actor.
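
The per-frame update is another single RTPC call. A sketch, assuming a hypothetical Game Parameter named Light_Intensity with curves authored on both volume and low-pass:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // Mirror the light's current intensity into Wwise each tick.
    void UpdateFlickerLoop(AkGameObjectID lightObj, float lightIntensity)
    {
        // In Wwise, Light_Intensity drives two RTPC curves on the flicker loop:
        // one on voice volume and one on the low-pass filter.
        AK::SoundEngine::SetRTPCValue("Light_Intensity", lightIntensity, lightObj);
    }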



The player footsteps were implemented using the same method as in the cave project (see the previous post for details). However, in this project the varying materials on the stairs became an element that I wanted to detail within the sound. Several material footstep random containers were created, and depending on the materials in the scene, a combination of these containers was added to the relevant switch in the Contents Editor. For instance, the stairs at the end of the video below are a combination of concrete-stair, metal and tile recordings; the levels of these various material sounds are fine-tuned to best replicate the sound of footsteps on the surface depicted.
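
On the runtime side, the per-surface selection comes down to setting a Switch before posting the footstep event. A sketch with hypothetical switch group and event names:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // Set the material switch from the surface under the player, then post the step.
    void PlayFootstep(AkGameObjectID playerObj, const char* surfaceMaterial)
    {
        // surfaceMaterial is e.g. "Concrete", "Metal" or "Tile"; mixed surfaces such
        // as the stairs are handled in Wwise by layering recordings within one switch entry.
        AK::SoundEngine::SetSwitch("Footstep_Material", surfaceMaterial, playerObj);
        AK::SoundEngine::PostEvent("Play_Footstep", playerObj);
    }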



The music was composed and implemented in such a way that the walkthrough video of the map is given a vague narrative, told by the music. The changes in music are controlled through two different state groups. The first is dictated by whether the player is in one of the subway areas or on street level, achieved through box triggers that set the state when the player pawn overlaps one of them.
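
A sketch of the overlap handlers, assuming a state group named Player_Area with Subway and Street states (the real names live in the project's Wwise session). States are global in Wwise, so no game object ID is needed:

    #include <AK/SoundEngine/Common/AkSoundEngine.h>

    // Called from the box triggers' begin-overlap events.
    void OnEnterSubway() { AK::SoundEngine::SetState("Player_Area", "Subway"); }
    void OnEnterStreet() { AK::SoundEngine::SetState("Player_Area", "Street"); }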



The second state group used is set based on whether any of the cameras within the map have been triggered (seen in the camera blueprint shown earlier); this is used to create changes within the music that match the aesthetic created when the cameras are triggered. Firstly, when a camera is triggered on street level, a textural layer is introduced, using the States tab within the interactive music segment to set the layer's volume based on the state.



The second instance uses this state in a similar way, introducing the distorted percussive elements contained on a music switch track; the sub-track playing is dictated by which state the Camera_Trigger state group has been set to.



To complement the tone of these elements, as well as integrate the music and sound design together, the bass drone within this section is side-chained to the alarm/siren sound, using the Wwise Meter plug-in on the siren and an RTPC, driven by the siren's level, on the music track. The siren occurs at a regular interval at the same tempo as this piece of music, so the process creates a pumping effect on the bass, ramping up the tension. When the camera is reset, the side-chain on the bass persists; this is achieved by creating a separate blueprint for this particular camera actor, in which the Camera_Trigger state is not set to 'off' when the camera's alarm system resets.

Upon leaving the second subway and re-emerging onto the street, the music is unchanged. I created a pad texture and Shepard tones whose filter amounts depend on the distance between the player and the robots, through the RTPCs set by the robot blueprint shown previously, emphasising the rising tension and adding a euphoric element when the robots become visible again. The Shepard tones are again placed on a music switch track...




 
 
 
