Band 1: For the Best: Drink Up (Mixing)

This is part of my final major project: click here to return to the main menu.

Once I was done recording the band, it was left to me to select the best takes from each instrument and put together a final mix. One of my revelations was that the top snare track was too resonant and didn't need to be in the mix, as the bottom snare track was present, giving the snare its "rattle" or "bite", while the overheads picked up the "body".

(In hindsight I could have taken this resonance out by applying masking tape to specific points on the drum head.)

I also panned the overhead tracks hard left and right respectively, and bussed all the drum tracks to a single stereo auxiliary input.

I also came to the conclusion that the bass player could not play the song in time all the way through, as some parts were so out of time they couldn't be used. Instead, I re-created the bass track from different sections, putting them in order of whichever progression was being played. Luckily for me, the bassist was playing the root notes of the main guitar progression, so it wasn't too difficult to work out.


I also chose to use the DI'd signal over the D112, as the rhythm guitars were distorted; having a cleaner bass signal meant it would be easier to pick out in the mix. I only made this choice once I had re-created the bass, so I could first hear both signals back "in time" to make a decision.

For the rhythm guitar I used two separate takes and put them onto separate tracks, allowing me to "stack" them on top of each other, creating a denser sound and, if panned, a larger stereo image. Meanwhile I had the lead guitar on two channels, and the clean lead guitar on another two, though I decided to only use the SM57-tracked clean, as it cut through the mix better, having fewer lower mids to muddy up the mix. I then bussed all the rhythm tracks together to a stereo auxiliary input.

For vocals, I had two main vocal tracks on two separate faders, while having a vocal harmony and a gang vocal on two others, though I had to copy and move the gang vocals to the right parts of the song; by keeping the gang vocals the same, they would be more consistent.

Once the tracks were selected, I went through the track and deleted the segments of background noise where an instrument or vocal wasn't playing, eliminating excess frequencies and giving the mix more clarity.














I then started addressing individual tracks, beginning by EQ-ing the kick drum: boosting the bandwidth around 75Hz by 5.7dB to bring out the low-end "boom", cutting the bandwidth around 400Hz by 6.8dB to take away some of the "boxiness" as well as make some room for the bass's fundamental frequencies to sit, and finally boosting the bandwidth around 4kHz by 7.6dB to accentuate the high-end beater "click" sound.
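These boost/cut moves can be sketched with the standard peaking-EQ biquad from the widely used Audio EQ Cookbook. The 75Hz/+5.7dB values are my kick settings from above; the 44.1kHz sample rate and Q of 1 are assumptions for illustration.

```python
import cmath, math

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """Biquad coefficients for a peaking EQ (Audio EQ Cookbook form)."""
    a = 10 ** (gain_db / 40)           # amplitude term
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    den = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    # normalise so the first denominator coefficient is 1
    return [bi / den[0] for bi in b], [di / den[0] for di in den]

def gain_at(fs, f, b, a):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(-2j * math.pi * f / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

b, a = peaking_eq_coeffs(44100, 75, 5.7)   # the kick-drum low-end boost
print(round(gain_at(44100, 75, b, a), 2))  # 5.7 — the full boost at the centre frequency
```

The same function with a negative `gain_db` gives the 400Hz cut; only the sign changes.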



Next I applied a High Pass Filter (referred to as HPF from here on) to the DI'd bass signal, at 74.7Hz with an 18dB/octave slope, to make some room for the kick drum's lower frequency range, so they both had room to breathe, without losing too much of the bass's lower register.
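An 18dB/octave slope corresponds to a 3rd-order filter (6dB/octave per order). A quick sketch of an ideal Butterworth-style high-pass magnitude response, using my 74.7Hz cutoff, shows the roll-off:

```python
import math

def hpf_gain_db(f, fc, order=3):
    """Ideal Butterworth high-pass magnitude in dB (6dB/octave per order)."""
    return -10 * math.log10(1 + (fc / f) ** (2 * order))

fc = 74.7  # the cutoff I used on the DI'd bass
# One octave apart, well below the cutoff, the level drops roughly 18dB:
drop = hpf_gain_db(fc / 4, fc) - hpf_gain_db(fc / 2, fc)
print(round(drop, 1))  # -18.0
```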



For the rhythm guitars I proceeded to pan the PG52-recorded tracks hard left and right, allowing the bass and drums to sit in the center, while having the two SM57 takes at 10 and 2 o'clock, finding a happy medium between the center and the sides. I then applied an HPF with an 18dB/octave slope at 144Hz, again freeing up room for the bass, while boosting the bandwidth around 607.8Hz and 3.03kHz by 4 and 5.2dB respectively to give the guitars a bit more "bite" by boosting those upper mid-range frequencies.



On the backing vocals I used a compressor, with a ratio of 15.2:1, a threshold of -23dB, an attack/release time (referred to as A/R from here on) of 10/80ms (milliseconds), and make-up gain of +14.5dB. The make-up gain was needed because the gang vocals were projected from across the room (a precaution to stop the mic overloading), while the ratio is set quite high to give one consistent level, with the A/R allowing the compressor to be always on without acting as a limiter.
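The gain computer inside a compressor works by reducing anything above the threshold by the ratio, then lifting everything with make-up gain. A minimal static sketch using the settings above (reading the ratio as 15.2:1; the attack/release envelopes are omitted):

```python
def compress_db(level_db, threshold_db=-23.0, ratio=15.2, makeup_db=14.5):
    """Static compression curve: output level for a given input level, in dB."""
    if level_db > threshold_db:
        # everything above the threshold is scaled down by the ratio
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return level_db + makeup_db

# A loud peak at -5dB gets squashed to just above the threshold...
print(round(compress_db(-5.0), 2))   # -7.32
# ...while quiet material below the threshold is only lifted by the make-up gain.
print(round(compress_db(-40.0), 2))  # -25.5
```

With a ratio that high, everything above -23dB comes out at nearly the same level, which is the "one consistent level" effect described above.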









This was then EQ-ed using an HPF with an 18dB/octave slope at 773.2Hz to make the vocals seem less throaty and more nasal, something the genre the band play in is known for.



Main vocal tracks 1 and 2 were compressed individually with the same settings; if I had bussed them together the whole thing would have been affected, whereas this way I had more control if I needed to adjust one. I used a 6.6:1 ratio to reduce the vocal dynamics, while having the make-up gain set quite high to give the vocal the volume to be heard over the rest of the mix. The A/R of 10/80ms means that the compressor smooths the vocal without cutting it off.











I then proceeded to add some automation to certain tracks, starting with the drum auxiliary input, where I used Tab to Transient just before the bridge section of the song to find the down beat, dropping the volume from -1.6dB until the drums became inaudible, before slowly rising to a maximum of +2.5dB by the start of the last chorus. This makes the song pause for a section, making the vocals stand out more, before slowly building to a crescendo as everything becomes audible again. There were also some sections where the clean guitar was too quiet in the mix, so I used volume automation in some segments to boost the level of the track, as can be seen below, with a few extra dB added so it can be heard.


I then added some of my own touches to the song to fill out certain areas and make it sound more professional, starting by adding some sweeping, low-pass-filtered (referred to as LPF from here on), flanged white noise at the start and end of the song, adding intensity to these segments. This was done by creating two new tracks, one being a mono audio channel and the other an auxiliary input. A signal generator is then added to the aux input, bussed to the input of the audio channel. This allows the white noise to be recorded internally onto the track.

The low pass was set at 120Hz with a 24dB/octave slope as the starting point, then I added the LPF frequency parameter to the automation list by clicking on the box icon underneath the AUTO setting. This let me set the sweep to last for the duration of the white noise, giving me control of how fast or slow I wanted it to go.

The overall idea of this was to make it sound like a plane passing overhead, with the LPF representing how close it would be, as higher frequencies don't travel as far. A flanger was then added with the depth set at 0.97ms so its modulation would be exaggerated but still subtle, while having the mix parameter and feedback controls set around 50%, allowing the original sweep to still be heard.









During the bridge, I added a synth part playing a slow descending melody. This fulfilled two purposes: the first being that it filled up the spectrum so the vocal wasn't the only thing standing out, and the second providing a different timbre to the guitars and bass the song was built around. However, the mix slowly got muddier as the rest of the instruments came in, so I added an HPF with a 24dB/octave slope at 539.7Hz to make it take up less space.

At the start of the song I decided to use a "telephoned" version of the bridge vocals to flesh out the intro, as it was just bass; plus, the lyrical content of the song made it appropriate, sounding like half a conversation.

This was done by cutting both the lower and higher frequencies while boosting the middle ones, an effect which I then printed on that section, meaning I didn't have to add any bypass automation after that segment was finished.

I also took the clean guitar from the intro section of the song and added it to the same place as the telephone vocals, but in reverse, making it sound more layered; the guitar's transients played backwards, with their slow attack, are reminiscent of violin-related instruments, while also sounding strange enough to catch the listener's ear.

Towards the end of the song, I decided to use a section of gang vocals I was originally going to put at the start, but with the additions listed above it would have been too crowded (pun not intended). By putting it at the end of the song, it catches the listener off guard and adds a comedic effect.

I also tried a technique known as a "vocal swell", though I'm sure it goes by other names, in which you take the word you want the swell on, copy it into an empty segment of track, select a bit of space afterwards and drench it in a plate reverb, with the effect printed onto the waveform.

After this you reverse the waveform, which in turn reverses the reverb so it "swells" up. It is then just a case of moving it up to the original word and crossfading the gap in between, as well as tapering the start of the swell so it comes out of nowhere. I used this on two of the "drink up" gang vocals and it works well with the track.
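The swell is essentially "reverb, then reverse": convolve the word with a decaying tail, flip the result, and the tail now builds up into the word instead of trailing after it. A toy numeric sketch (the numbers are made up; real audio would use a plate-reverb impulse response):

```python
import math

def convolve(signal, impulse):
    """Plain convolution: each input sample triggers a scaled copy of the tail."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

word = [1.0, 0.5, 0.25]                       # stand-in for the vocal waveform
tail = [math.exp(-t / 4) for t in range(12)]  # decaying reverb tail
wet = convolve(word, tail)                    # word followed by its reverb tail

swell = wet[::-1]  # reversing makes the tail build up INTO the word
# The loudest point now sits near the end, so the swell "comes out of nowhere":
print(swell.index(max(swell)) > len(swell) // 2)  # True
```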

The inspiration behind the technique came from a YouTube channel I'm subscribed to called The Recording Revolution, where the host gives tips and tutorials on everything from recording to mixing. The video I used as a guide for this technique is shown below:


Finally I added a master fader, and after a couple of listens through made minor adjustments to make sure nothing was peaking, setting the master fader at -6.2dB.




Band 2: Sound The Siren: Get Out: (Recording)

This is part of my final major project: click here to return to the main menu:

In addition to For the Best, I’ve also recorded another band from outside college; they are called Sound the Siren, a 4-piece hard rock/alternative metal group from Bournemouth. Though I hadn’t seen them live at this point, I had heard the name a couple of times, and from a couple of low quality live videos on their Facebook page I saw they had potential, plus I knew the lead singer through a friend of a friend.

Unlike For the Best, who came to me a week before they wanted to record and already had the song pretty much in a finished state, I wanted to take a much more hands-on role with Sound the Siren, taking the time in pre-production to make sure we got the song they wanted to record into the best form possible before we hit the studio.

In return for working with them, I asked a simple set of terms:

  • That I could use their work for my portfolio and my course
  • That the cost of the recording would be split between the five of us, working out cheaper for everyone
  • I would be allowed to keep the stems from the track to make a remix.

The band agreed and we began pre-production.

In pre-production, I met up with the band at their local practice space so I could hear how they sounded and see how they interacted with each other; they had an idea of two or three songs they wanted to record, as well as playing me through their set list in case any other songs jumped out at me.

After a couple of practice sessions, we decided on a song titled "Get Out", since it was a mid-paced, fairly heavy song which I felt would be the best representation of their sound, as well as being fairly straightforward to record as it followed a verse-chorus structure.

However, I proposed to the band that they should try to add some extra parts to the song to give it more character, since it had the potential to be so much more than it was at this point.

Through some improvisation and some suggestions from myself and his bandmates, James (Guitars) added some clean parts in the choruses, as well as a fully fledged solo towards the end of the song, making it sound like it is slowly building towards a climax, as well as giving him more options for live performance.

At the same time he could see that less is more, so after the first verse he decided to drop out so David (Bass) could carry the rhythm and stand out more.

With some of my ideas and Heather's (Vocals), we also added some extra vocal parts to the song, coming up with an additional bridge section, as well as repeating some lines so I could have them sound like a whispered echo. I also had a look over the lyrics and offered some changes, a word or so here and there, but nothing to make it sound different.

Once we had worked out the necessary changes we moved onto recording.

I decided to record the band mostly individually, and then mix the multi-track later on, as I felt this would give me the most control over the mix, plus it would make it easier to fit in all the extra parts.

My order of recording was drums first, using a DI'd rhythm guitar as a point of reference for Leon (Drums) to listen to, hopefully getting a better performance out of him, plus then having the option of re-amping the DI signal later on if I needed to.  (Part 1)

Then I would move on to bass, as David would have two points of reference from the DI'd guitar and the drums, followed by the distorted guitars, where I would take the DI'd guitar out so it wouldn't interfere and throw James off.  (Part 2)

Once this was all tracked, I would record vocals.  (Part 3)

I wanted to record them to a click track, but after a couple of practice takes it became clear that they were struggling to do so, so instead I got Leon and James to play to each other, and used that as the template for all the other parts to play over. Later on I would tempo-map the song to make it easier for me to edit.

Overall the recording took two sessions to complete, as James got a new amp in between recordings so redid the guitars, and Hev wasn't happy with her vocal performance, so again there were re-takes.

Part 1:

The drums were placed in the live room facing the diffusers on the wall, so all the immediate reflections wouldn't bounce back into the microphones, with a rug also placed underneath the drums to dampen any vibrations from the floor.

Leon and I also went round the kit and listened to the individual drums, applying masking tape to certain parts, especially the snare, since it was very resonant, which resulted in David's cigarette packet being used as a weight.

Channel No. Name: Microphone:
1. Kick Shure PG52, 4 inches away from the beater head, off axis, bottom right of pedal.
2. Snare Top Shure SM57, on the rim facing the center of the snare.
3. Snare Bottom Shure SM57, on the rim facing the center of the snare, flipped polarity.
4. OHL Samson C02, XY coincident pair, about a meter back from the drums, on the center line between the kick and the snare, a couple of inches higher than the cymbals.
5. OHR Samson C02, XY coincident pair, about a meter back from the drums, on the center line between the kick and the snare, a couple of inches higher than the cymbals.
6. Room Avantone CK6, a couple of meters back from the kit, at about 4 o'clock from the drummer's back.
7. DI Guitar N/A


The Shure PG52 was chosen due to its frequency response, as it has an exaggerated response between 50-100Hz, capturing the low-end "boomy" aspect of the kick (60-80Hz), as well as registering the "attack" of the beater head (4kHz), as the microphone's response is boosted around 5-6kHz.

I wanted to try a different microphone for recording the kick, as I had previously used an Audix D6; I chose the PG52 for its stronger high-end response, as Leon would be using a double kick pedal, and it makes a great alternative.

It was on axis to the beater head within the kick drum; its cardioid pick-up pattern makes the higher frequencies of the kick drum more pronounced when it's on axis, while moving it slightly off axis will produce a slightly darker tone.

Within the shell of the drum, it is isolated from the sounds of the rest of the kit, giving the recording more clarity in the mix.

Kick important frequencies: Boom (60-80Hz), Attack (4kHz)


The SM57's cardioid pick-up pattern and frequency response make it a great choice for close mic-ing; as mentioned before, a cardioid pick-up pattern picks up the higher frequencies directly in front of it more clearly, so with one on the rim facing towards the center of the resonant head, and one placed towards the beater head, the pair picks up both the body of the snare and the rattle below.

The roll-off in its frequency response from 100Hz is designed to counter the proximity effect, with a boost around 240Hz (roughly the body of the snare) and another between 4-6kHz (the snare wires) helping it sit in the spectrum.

Snare important frequencies: "Body" (240Hz), "Ringing Overtones" (6kHz)


For overheads I used Samson C02s, which are small-diaphragm, cardioid condenser microphones. When placed in an XY coincident pair configuration on a stereo bar, the two microphones are angled 90 degrees from each other.

This makes them phase coherent, producing a focused stereo image, with the center being quietest as both microphones are off axis there. I had these on a stand about a meter away, above the cymbals, to get a balanced-sounding kit, with the cymbals sounding crisper by having the microphones on axis.

The C02s are a good choice for use as overheads due to their frequency response gently rising from 2kHz onwards and levelling out at 9kHz, where both the cymbals and the hi-hat sit in the spectrum.

Overhead important frequencies: "Gong and Clunk" (100-300Hz), "Ringing Overtones" (1-6kHz), "Sizzle" (8-12kHz)


I wanted to experiment with a room microphone, as I felt it would allow me to utilise the ambience of the room, which when mixing I could have sit underneath the actual kit, plus use by itself creatively to get a more distant sound.

I chose the Avantone CK6 due to its fairly flat frequency response, giving an accurate representation of the room, with a slight dip around the 300Hz mark to take out some of the "boxiness" within the room; having it behind the drummer also results in a more distant sound, as the kick and snare are further away than the cymbals.

Part 2:

Channel No. Name Microphone:
1. Bass DI N/A
2. Bass D112 AKG D112, On Axis.

For bass I decided to take the DI signal and also mic up a combo amp, so I had a clean signal to use for possible re-amping and an overdriven tone to fit with the song.

The amp was placed in the live room on a rug, facing the diffusers in the same manner as the drums, which blocks some of the reflections into the microphone, as well as being raised on a stand to stop vibrations being picked up from the floor.

The AKG D112 was used to mic up the bass, placed on axis to the speaker cone to make the higher frequencies more accentuated. The D112 has a similar frequency response to the PG52, but I chose it due to its flatter frequency response between 400Hz and 1kHz, whereas the PG52 has a reduced response between those areas, which is roughly where the bass's fundamental frequencies sit.

After this I moved onto Guitars:

Channel No: Channel Name: Microphone:
1. Guitar (On Axis) Shure Sm57, On Axis
2. Guitar (Off Axis) Shure PG52, Off Axis bottom right hand corner.

As mentioned previously, I placed the guitar amp in the same place as the bass amp, on a rug, on a stand facing the diffusers, mic-ed up with an SM57 on axis and a PG52 off axis.

The 57 was used to pick up the higher, more trebly parts of the guitarist's tone (on axis), which was heavily distorted, while the PG52 was used because its frequency response is exaggerated in the high and low mids, giving the body of the tone while reducing the frequencies the lower parts of the bass sit in. Because it was off axis, these higher frequencies would be less prominent, giving a warmer tone. (Note: the picture is from the first session, as he recorded with the Blackstar amp the second time.)

Essentially I have taken the principle of how I would capture an acoustic guitar and applied it to an electric one, using the PG52's lower frequency response to pick up the lows, while the 57 picks up the highs.

I recorded both the lead and rhythm guitar a minimum of twice, so during the mixing stage I could have two pairs at different levels and panned differently, dropping out and reappearing to reinforce the other ones.

Part 3:



For vocals I chose to use the RODE NT2a, a large-diaphragm condenser, on its cardioid pick-up pattern setting. It has a fairly flat frequency response, which gets slightly more exaggerated from 2kHz onwards towards 6kHz.

I had the microphone set up in the same room all the other instruments were recorded in, with a pop shield to limit the sibilance being picked up. I chose this microphone due to Heather having a fairly high vocal range, allowing that range to be the most prominent thing in the mix.

The microphone was at mouth level, mainly to accentuate her head voice rather than the lower, throaty voice, making it sound more feminine.

Music Technology Research: The History of the Sampler

Task 1: Describe a piece of music technology and produce a time line detailing its key developments:

The piece of music technology I am going to research is the audio sampler. Although similar to the synthesiser, which generates its own sounds, the sampler instead triggers pre-existing sounds which are either recorded or loaded onto it and then played back. I will mainly be focusing on hardware samplers.


Mellotron (1963)


Prior to digital sampling, tape replay keyboards were employed by musicians, with the Mellotron being one of the most notable. It was an electro-mechanical, polyphonic tape replay device developed in Birmingham, England, and was popularised by acts such as The Beatles, The Moody Blues, and King Crimson.

Computer Music Melodian (1976)

Made by Harry Mendell in the mid seventies, it is known as the first commercially available digital sampler. It was monophonic, had a 12-bit A/D converter, and had a sample rate of up to 22kHz. It also had a feature which allowed it to be compatible with analogue synthesisers by syncing with their pitch.

Note the backwards compatibility, something which doesn't tend to happen much at the present day.

Fairlight CMI (1979)

The first polyphonic digital sampler, created by Peter Vogel and Kim Ryrie, was originally designed to create sound by modelling waveform parameters in real time. However, its processing power was incapable of accomplishing this, so they tried it with naturally recorded sound, which had a far more successful outcome.

It could run at both 8-bit and 16-bit depths, with 8-bit having a 16kHz sample rate, while 16-bit could sample at up to 100kHz.

E-mu SP-1200 (1987)


Created by E-mu Systems, Inc., this drum machine and sampler was revolutionary, as it was able to construct the bulk of a song by itself, making it a favourite of hip-hop producers at the time. It could use existing samples or could record 10 seconds of audio at a sample rate of 26.04kHz and a bit depth of 12. It also had a mono output plus MIDI in/out/thru.

Akai MPC60 (1988)


Released by Japanese company Akai in partnership with Roger Linn, the creator of the first programmable drum machine (the Linn Electronics LM-1), the Akai MPC60 was the first non-rack-mounted model by Akai to be sold. It was also one of the first samplers to feature touch-sensitive trigger pads, giving rise to the MPC design of samplers. The 16 velocity-sensitive pads could store 4 banks' worth of sounds (64 in total), with a sampling rate of 40kHz and a bit depth of 16, with the option of the data being stored in a 12-bit non-linear format to reduce noise.

Software Samplers: (1989-onwards)


Towards the start of the 90s, advances in processing power and memory capacity made it possible to run software samplers, with their interfaces modelled on their hardware counterparts. Even though hardware samplers are still used, software samplers tend to be bundled with DAWs (Digital Audio Workstations) as a VST or plug-in, which can be used in conjunction with other sound modules and effects.

Examples include Kontakt by Native Instruments, HAlion by Steinberg, and Emulator X by E-mu.

Task 2: Explain how the music technology works comparing and contrasting 2 points on the time line.

Samplers are usually controlled by an attached keyboard, or from an external MIDI source, with each trigger assigned to a different sound. When multiple samples are arranged across a keyboard, it is known as a keymap.


The example above shows that if G2 or A2 is pressed, it will trigger the sample "Violin G2"; however, if the A2 key is played, the sample is pitched up by one tone. This is known as keyboard tracking, and with it a single sample can be pitched up and down across a keyboard or MPC, making it possible to play chords and scales. You can also go one step further and map two samples to one key, each controlled by the velocity with which it is triggered, giving a more intense sound.
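Keyboard tracking boils down to resampling: each semitone away from the sample's root multiplies the playback rate by 2^(1/12). A small sketch (the MIDI-style note numbers are an assumption for illustration):

```python
def playback_rate(key, root_key):
    """Playback-rate multiplier to pitch a sample from its root up/down to `key`."""
    return 2 ** ((key - root_key) / 12)

G2, A2 = 43, 45  # MIDI note numbers; A2 is one whole tone (2 semitones) above G2
print(round(playback_rate(G2, G2), 4))  # 1.0    -> sample plays back as recorded
print(round(playback_rate(A2, G2), 4))  # 1.1225 -> played faster, one tone higher
```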

Polyphonic Samplers can play more than one sample at the same time, while if it is monophonic, it will only be able to play one note at a time, with the previously triggered note being cut off.

Samples can also be set to loop from a particular point within the audio file, so that when a key is pressed, once the sample is finished it will replay indefinitely from that point.

Figure 1: As an example of an old hardware sampler, I will be using the E-mu Emulator released in 1981.


In order to record your own samples into the Emulator, the sound has to come in as an analogue line-level or mic-level input, up to 2 seconds in duration, where it is then digitised via the analogue-to-digital converter, which samples it at a rate of 30kHz.

Once the sound has been recorded, it is held in RAM (128KB), where you can edit it until you are happy and save it to a floppy disk (storage capacity of 1.2MB), from which it can be reloaded at any time. Once loaded, the sample is keyboard-tracked, so pressing higher keys results in the sound being played back at a higher pitch and vice versa. It also featured a 4-octave split, allowing simultaneous control of two independent sounds.
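The numbers are consistent: assuming 8-bit (one byte per sample) storage, which is my assumption based on typical accounts of the original Emulator, two seconds at 30kHz fits comfortably in 128KB of RAM:

```python
SAMPLE_RATE = 30_000   # Hz, the Emulator's A/D rate
SECONDS = 2            # maximum sample length
BYTES_PER_SAMPLE = 1   # assuming 8-bit storage

needed_kb = SAMPLE_RATE * SECONDS * BYTES_PER_SAMPLE / 1024
print(round(needed_kb, 1))  # 58.6 (KB of audio data)
print(needed_kb <= 128)     # True: it fits in the 128KB of RAM
```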

It also came with 5 discs' worth of pre-recorded samples, which have to be inserted and loaded to use.

Once the key triggers the recorded sound, the sound is then decoded back to analogue and is heard through an attached speaker.

Below is a flow chart of its signal flow:


Figure 2: Fast-forward a couple of years to 1986, when Akai brought out the Akai S900. Unlike the E-mu Emulator, it had a sampling rate of 40kHz, giving it a larger bandwidth; a 12-bit depth, giving an increased dynamic range; and 750KB of RAM, letting it hold and process much more information than the E-mu Emulator. It also had 8 outputs, one for each of its different voices.
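The "larger bandwidth" and "increased dynamic range" can be quantified with two standard rules of thumb: the Nyquist bandwidth is half the sample rate, and the ideal dynamic range of quantisation is roughly 6dB per bit. The Emulator's 8-bit depth below is my assumption, based on typical accounts of the original model:

```python
def bandwidth_hz(sample_rate):
    return sample_rate / 2            # Nyquist limit

def dynamic_range_db(bits):
    return 6.02 * bits + 1.76         # ideal quantisation SNR

# Akai S900 (40kHz, 12-bit) vs E-mu Emulator (30kHz, assumed 8-bit):
print(bandwidth_hz(40_000), bandwidth_hz(30_000))                     # 20000.0 15000.0
print(round(dynamic_range_db(12), 1), round(dynamic_range_db(8), 1))  # 74.0 49.9
```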


The downside of the S900 is that it required you to plug in a MIDI input device, so it acted more as a sound module than a stand-alone piece of kit. The S900 is still being used by today's musicians, and at its time of release was probably more expensive than the Emulator, which had had 5 years for its price to drop.

Task 3: Comment critically on an area that has been affected by the development of your chosen music technology. This could be a genre of music, a performance technique or an artist's career, for example.

1980’s Hip Hop

The development of the digital sampler had a significant effect on the hip-hop genre in the 80s, where sampling other records via non-digital means, such as turntables, was a crucial part of how a song was created. A lot of hip-hop producers embraced this new technology, as it had a number of benefits over the conventional turntable-based method of the time.

It allowed them to edit the samples they were using for the first time; changing the pitch and length of a sample opened up a number of new creative possibilities. Its main benefit was that the edit was printed onto the sound, unlike a DJ who would have to manually drag the vinyl slower or faster to create the change in pitch and speed.

Looping became much easier with the advancement in technology as well, as it was simply a case of setting "loop points": when a sample was played back, it would repeat itself if the trigger was held down, unlike on a turntable, where two copies of the same song would have to be "cued" up and changed over at the right time in order to get a perfect loop.
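Loop points can be sketched as a tiny playback routine: play through the sample once, then keep jumping back to the loop start for as long as the trigger is held. The sample data and loop point here are made-up values for illustration:

```python
def play_looped(sample, loop_start, frames_held):
    """Yield `frames_held` output frames, looping from `loop_start` to the end."""
    out, pos = [], 0
    for _ in range(frames_held):
        out.append(sample[pos])
        pos += 1
        if pos == len(sample):   # reached the end: jump back to the loop point
            pos = loop_start
    return out

drum_hit = [9, 7, 5, 3, 2, 1]    # made-up sample data
print(play_looped(drum_hit, loop_start=2, frames_held=12))
# [9, 7, 5, 3, 2, 1, 5, 3, 2, 1, 5, 3]
```

The first pass plays the whole hit; after that, only the section from the loop point repeats, which is exactly the "repeat while the trigger is held" behaviour described above.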

Multiple samples could now be layered on top of each other, as well as being assigned to a single trigger, allowing for denser and more dynamic sounds in the mix, as well as freeing up roles for other performers to play other instruments or sing. This can be heard on Public Enemy's 1987 album Yo! Bum Rush the Show, which was produced by The Bomb Squad, whose production style features a sample-heavy aesthetic.

The Akai MPC60 was released in 1988, followed a year later by the Beastie Boys album Paul's Boutique, which utilised the emerging technology to create an album made up of a dense mix of samples, with the Dr. Dre-produced N.W.A debut Straight Outta Compton also exhibiting the sample-heavy approach.

The most popular samplers in the 80s were the sequencer-based models, such as the E-mu SP-1200 (often used alongside drum machines like the Roland TR-808), as they allowed producers to create the backbone of a song with a single piece of hardware; plus, as the decade went on, the amount of sampling time available increased, with more and more artists recording their own sounds and playing them back.

Modern Use and moving forward:

Outside of the 80s, the digital sampler has helped create musical aesthetics which can now be found in many genres beyond hip hop. During the 90s, sampling time continued to increase, allowing producers to sample larger sections of existing songs, which producers such as Puff Daddy used on The Notorious B.I.G.'s album Ready to Die, released in 1994, which contained whole sections of well-known songs.

This also led to remixed tracks appearing more frequently, flourishing alongside both the hip-hop scene and the growth of digital sampling. It became easier for artists to take parts from existing songs and put their own take on them, such as using the vocals of one song with other samples or newly added instrumental elements.

Taking influence from Newcleus' song "Jam on It", which features pitched-up samples, two decades later Roc-A-Fella producer Kanye West popularised the "chipmunk" technique, using higher-pitched samples over an original vocal take. Trap music also makes use of the opposite, having segments of audio played back at a lower pitch accompanying the original.

As the cost of samplers has come down and the technology has improved, looper pedals have become widely available, giving musicians the option to record themselves in real time and then layer multiple parts on top of each other. This uses similar technology to digital sampling, but literally repeats the segment which has been recorded, allowing a single musician to create a backing part without the need for a full band. Acts like Beardyman, KT Tunstall and Ed Sheeran have all found success using them creatively.


Film Soundtrack Components:

Task 1: Explain these film soundtrack components:


Music is used throughout film and other media in numerous different ways, most of which are laid out in Zofia Lissa’s Ästhetik der Filmmusik (1959: 115-256). [1.]

The 10 Most prominent uses are:

  1. Emphasis of Movement – underlines a specific movement, such as choral synths while flying.
  2. Emphasis of Real Sounds – exaggerates a real-life sound, e.g. something falling down accompanied by a loud bass drum.
  3. Representation of Location – provides a particular stereotype for a cultural or historical origin, e.g. lutes and drums for a medieval setting.
  4. Source Music – diegetic sound within the film, which happens within that world, e.g. a marching band at a parade.
  5. Comment – gives off a particular vibe for the scene.
  6. Expression of Actors' Emotions – used to exaggerate character emotions, e.g. music in a minor key when the main character is sad.
  7. Basis for Audience's Emotions – leads the audience into feeling a certain way, e.g. a build-up before a "jump scare".
  8. Symbol – music associated with a character, such as when a character is being spoken about but not present; at some point in the narrative it becomes intrinsically linked with that character.
  9. Anticipation of Subsequent Action – used as a cue that the mood of the scene is going to change.
  10. Enhancement and demarcation of the film's formal structure – music that progresses the film's narrative; it may show the passage of time or that a new section of the film is about to commence.

Robert L. Mott laid out nine of the most crucial components of how a sound is perceived, with any major change to one of them resulting in the sound giving off a different impression, or a new sound being created entirely. [2.]


Music Components:

  • Pitch:

– Lower frequencies give off notions of power, midrange frequencies give the sound its energy, and higher frequencies tend to imply presence, or how close we are to the sound’s origin.

  • Timbre:

– The “Tone” or “Colour” of the sound, which is made up of the unique balance between the fundamental frequency, harmonics and overtones.

  • Loudness:

– The intensity of the sound, which becomes meaningful when compared to something else. Humans are more sensitive to mid-range frequencies, so a mid-range sound will seem louder than a lower- or higher-pitched sound of the same intensity. Loudness is also a good representation of the viewer’s distance from the sound source.

  • Rhythm:

– How the sound relates to tempo and, in music, which notes are accented or exaggerated. If a sound is repeated at regular intervals, we perceive it as a pattern and listen out for it later on.

Sound Envelope Components:

  • Attack

– How fast or slowly the sound builds, with a fast attack creating a sense of immediacy, while a slow attack builds tension.

  • Sustain

– How much energy the sound holds before it decays, with a long sustain implying a sense of strength, and a short one implying the opposite.

  • Decay

– How long the sound takes to die away to silence. A long decay implies a very reverberant space, probably indoors, while little or no decay implies an outdoor environment.
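The envelope components above can be pictured as a simple amplitude shape over time. The sketch below is purely illustrative (sample counts and the linear ramps are my own assumptions, not Mott’s): a fast attack jumps straight to full level, a slow attack ramps up gradually.

```python
# Illustrative amplitude envelope: attack ramp, sustained hold, decay ramp.
# The linear shapes and sample counts are assumptions for demonstration only.

def envelope(attack, sustain, decay, peak=1.0):
    """Build an amplitude envelope as a list of per-sample levels:
    ramp up over `attack` samples, hold at `peak` for `sustain` samples,
    then ramp down to silence over `decay` samples."""
    up = [peak * (i + 1) / attack for i in range(attack)]
    hold = [peak] * sustain
    down = [peak * (decay - i - 1) / decay for i in range(decay)]
    return up + hold + down

fast = envelope(attack=1, sustain=4, decay=2)   # immediate, percussive
slow = envelope(attack=4, sustain=1, decay=2)   # a tension-building swell
print(fast)   # reaches full level on the very first sample
print(slow)   # creeps up to full level over four samples
```

Comparing `fast` and `slow` shows why the same sound can feel sudden or suspenseful purely from its attack.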

Record and Playback Components:

  • Speed

– By slowing a sound down you increase the sustain, but you also lower the pitch; this can give the impression that something is building in intensity, or create a dreamlike quality.
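The coupling between playback speed and pitch can be demonstrated numerically. In this sketch (an assumed example, using a naive sample-doubling method rather than any real tape or sampler mechanism), playing each sample twice doubles the duration, and the slowed 440 Hz tone lines up with a 220 Hz tone, one octave lower.

```python
# Sketch: half-speed playback doubles duration and halves pitch,
# because every cycle of the waveform now takes twice as long.
import math

RATE = 8000  # samples per second (assumed for the demo)

def sine(freq, n):
    """n samples of a sine tone at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def half_speed(samples):
    """Naive half-speed playback: each sample is played twice."""
    return [s for s in samples for _ in (0, 1)]

tone = sine(440, 800)        # an A4 tone
slowed = half_speed(tone)    # twice as long...
assert len(slowed) == 2 * len(tone)

# ...and an octave lower: the slowed signal matches a 220 Hz tone
expected = sine(220, 1600)
assert all(abs(slowed[2 * i] - expected[2 * i]) < 1e-9
           for i in range(len(tone)))
```

This is exactly the effect exploited by “chipmunk” soul samples (sped up, pitched up) and slowed trap vocals (slowed down, pitched down).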

Sound Effects:

Musical elements need to be backed up with non-musical sounds, which are added in post-production since they cannot be captured during filming.

This could take the form of something fictional that needs a new sound created for it because none exists, which falls under sound design.

Real-life sounds, meanwhile, have to be performed in post-production in sync with the footage, a process called Foley.

Dialogue and Spoken Word:

Like sound effects, it is very rare that a film will use the audio recorded during the live take in the final version; instead, the actors take it in turns to come into a studio and sync their lines to the picture, a process known as dubbing, or ADR (automated dialogue replacement).

Task 2: Identify examples in the provided film clip of Terminator 2 to illustrate and elaborate on your explanations of Task 1:

1. Emphasis on Movement:

01:51 John Connor walks through the trenches while patriotic music plays as he passes his men. It consists of pounding drums and orchestral elements, with the tempo of the drumming matching a military marching pace.

2. Emphasis on Real Sounds:

00:38 A machine’s foot crushes a human skull, creating a realistic cracking sound as it breaks, followed by a choral synth pad with a slow attack and release. The crack is loud and sudden and cuts through the atmospheric textures, with its pitch matching the camera’s proximity to the subject.

3. Representation of Location:

00:50 The camera shows a machine vehicle running over a mound of human skulls. In the background, overwhelming all the other sound effects, is an ominous bass tone, pulsating, rising and dropping in volume, representative of the bleak post-apocalyptic future.

4. Source Music:

05:53 The Terminator walks into a bar with country music playing on the radio at roughly conversation level. This sound can be heard within the film’s world and is diegetic; it helps set the scene without interfering with the sound effects and dialogue.


5. Comment:

01:05 During the fighting a human soldier is shot, accompanied by a scream and a minor synth chord stab. Due to the tonality of the chord, it is being used to comment that we should feel sad and melancholy.

6. Expression of Actors’ Emotions:

07:57 The Terminator walks out of the bar in his newly acquired clothes, with George Thorogood’s “Bad to the Bone” playing non-diegetically. Though as a machine he has no emotions, the song reflects and comments on the character’s emotionless persona.

7./8. Symbol/Basis for Audience’s Emotions:

06:30 Throughout the film we get the Terminator’s “motif” every time it enters an otherwise ambiguous scene; it consists of a sound similar to two metal pipes being beaten together in a regular rhythmic pattern, plus a sustained, foreboding bass synth.

9. Anticipation of Subsequent Action:

15:11 During a first-person shot of an unseen character sneaking up on a police officer, the motif is repeated, associating it with subsequent violence every time it plays.

10. Enhancement or Demarcation of the Film’s Formal Structure:

09:17 After the prologue and opening credits have finished, the Terminator theme starts playing, building in intensity. It consists of rhythmic drums with single metal-pipe hits ringing out, backed by choral and orchestral elements, and it slowly builds to a crescendo before the film moves to the next scene.

Sound Effects:

00:42 onwards: during the opening battle there are elements of both real and created sounds. These include explosions from the bombs (real) accompanied by laser fire (created), both varying in pitch and volume depending on their proximity to the camera (as evidenced at 01:34).

00:50 The sound of skulls cracking and breaking under the weight of the machines’ treads, and the machine aircraft noises at 01:43, would both use modified existing sounds plus synthesised elements to create an otherworldly, futuristic sound: familiar enough to recognise, but different enough to fit the mood.

Dialogue and Spoken Word:

00:22 Sarah Connor sets the scene by explaining what has happened for the future to end up like this, holding the viewer’s hand. This is non-diegetic sound, as the characters in this part of the film cannot hear it.

The other use of dialogue allows the film’s characters to interact with each other and drive the story’s narrative, and is, needless to say, diegetic, since everyone on screen is aware of it.

Task 3: Using the musical elements terms collated in class and the “12 functions of music” as your vocabulary, analyse the audio components’ influence on narrative in the film clip of Terminator 2.

1. Emphasis on Movement:

The pounding drums have a fast attack, giving a sense of immediacy, with their tempo accentuating the movement as it brings to mind an army march.

The lower bass element implies that he commands respect, as the leader of the human resistance and an important character, while the higher-pitched elements create a sense of presence, as if we are in a place of great importance.

2. Emphasis on Real Sounds:

When the Terminator crushes the human skull, the sound is loud and sudden, making a statement that the machine is superior to mankind, while the choral synth’s long sustain and release represents the soul of humanity being crushed out of existence. The sustained note continues into the next scene, where it builds suspense until it is revealed that there is not just one machine.

3. Representation of Location:

The scene features an almost siren-like bass sound pulsating at a fairly slow tempo, giving the impression of a looming threat. Its sweep from low to mid-range frequencies implies the machines are closing the distance.

4. Source Music:

The song playing on the radio is “Guitars, Cadillacs”, the typical kind of music to be heard in that environment, and it features the lyrics “Another tale about a naïve fool who came to Babylon, and found out that the pie didn’t taste so sweet.”

This also acts as foreshadowing, commenting on the bar’s patrons not getting what they expected when the Terminator walked through the door.


5. Comment:

The minor chord stab, due to its tonality, expresses to the audience that the machines are gaining the upper hand and that we should be shocked and appalled at the loss of human life.

If it were a major chord, the tonality would give us the feeling that we should be rooting for the machines, since up to this point we have seen the Terminator only as a stand-alone character.

An example of a major-key progression during a fight scene to emphasise who we should be rooting for can be seen in the opening of Indiana Jones and the Last Crusade: as Indy fights off the bad guys on top of the train, the heroic music becomes associated with him.

6. Expression of Actors’ Emotions:

“Bad to the Bone” features mid-tempo pounding drums and guitars, a rock-and-roll tune associated with the lifestyle now being portrayed by the character. The lyrics “Said ‘leave this one alone’, she could tell right away that I was bad to the bone” act as a warning that he is not to be messed with.

7./8./9. Symbol/Basis for Audience’s Emotions/Anticipation of Subsequent Action:

The metal pipe hits represent the machine-like efficiency these killing machines operate at, with the rhythmic pattern acting as a taunt to their victims, while the lower bass note emphasises their power. When this sound plays, the audience knows that in the subsequent action someone is going to get hurt or killed.

10. Enhancement or Demarcation of the Film’s Formal Structure:

Much like the tempo of the pipe hits in the previous point, the drums in the credits imply the threat is getting closer as they increase in volume with each repeat. They also evoke ideas of war and combat; against the backdrop of the Terminator’s face wreathed in flame, we are told that this is the character to look out for in the coming scenes, as it is a threat to the heroes. The choral and orchestral elements add timbre to the credits, implying a big finale before the next section of the film starts.

Sound Effects: [3.]

As James Cameron, the director of the film, put it, the music had to

“Sound like it was injected with testosterone, it had to be inflated to unworldly possibilities.”

One example is the skull being crushed at the start, which is actually just a pistachio nut in a metal plate, sonically enhanced to sound more intimidating, implying the machines’ great strength.

Gary Rydstrom, the sound designer, also said that he liked adding voice-like elements to non-vocal sounds. This change in envelope and modulation works well in the opening scene, playing alongside the image of all the skeletons in the barren landscape and making it sound like the whispers of the dead.

Dialogue and Spoken Word:

All dialogue in feature films is re-dubbed once the scene is shot, as this makes it crisp and clear; the audio from the set would be filled with environmental noise and other sounds. With the re-dubbed recordings, the director can raise them if they are too quiet or modify them to sound unnatural, with the overall goal of emphasising certain parts for the audience’s benefit.





[4.] (Terminator 2 Clip Used)