ProSoundWeb Community

Author Topic: Introduction to Mixing-Through Part 10 - Latest Update 12-08  (Read 48135 times)

Phillip_Graham

  • Hero Member
  • Offline
  • Posts: 1584
Introduction to Mixing-Through Part 10 - Latest Update 12-08
« on: December 14, 2007, 06:01:41 PM »

Hello all,

I make no claims to be anything but a competent live mixer, and an out-of-practice one at the moment!  That said, I have always tried to approach mixing as systematically and scientifically as possible, and I hope that some of the rigor I needed to learn how to hear can translate into useful information for those learning the craft.

I will resist nearly every attempt to present advice that is extremely specific to an instrument or genre, as neither is terribly useful.  These are also NOT postings on how to use the routing or features of a mixing board, and I will not be discussing how to keep channels from clipping, or such things.  A basic competence from reading the manual of a relevant mixer should be considered a pre-requisite for this thread.

I post this in the lounge primarily because I believe that folks here are the ones most likely to benefit from a tutorial, as well as needing to shorten their learning curve as much as possible.

This first post, in what will likely be a several-part series, is a discussion of the process (really the mistakes) I have wrestled with since I first stepped behind a mixing board, and the evolution to a degree of competence, and (more importantly) confidence.
______________
My History

The first mixing board I ever remember sitting behind was a Yamaha 1604.  On my first ever gig I remember seeing a button labeled "pad," having no idea what it did, and, out of panic, pressing it in on every channel!  I also remember not understanding how the channels had to go through the subgroups first.  Needless to say there was very little sound at the beginning of that gig, and what did come out was pretty quiet (due to the pads!).  Thankfully that was an amateur theater show in a small auditorium when I was early in high school!

My two points of that anecdote are:
1. Everyone starts awkwardly.
2. Basic knowledge of how your gear functions is essential.

Next came the "get the volumes right" stage.  I merely threw up microphones and played with the faders until the balance felt about right.  The boards I was exposed to had little flexibility in equalization, and I was totally unsure what to do with the equalization they did have!

These events had what I would now call "instrument balance" in that I learned to try to get the vocals on top, make the bass sit in a pocket, etc.  What these events lacked was "spectral balance."  In other words, if an instrument was boomy or nasal, my solution was to turn it down, rather than try to balance it out with eq.

My next evolution was the "play with eq" phase.  At first I was scared of the eq!  I would only turn the knobs 3-5dB and then not be able to hear any change!  I was scared by the perception that I was somehow damaging the "purity" of the sounds.  Needless to say I had not spent any time in a recording studio at this point.

After learning to turn the eq knobs a little more, I then realized "I have no idea what frequency to put my equalization at!"  I was stuck here the longest.  It takes time and practice to learn where to start with eq.

Not knowing what ranges to equalize resulted in a very haphazard mixing process.  The order of cleaning up the channels, making mix adjustments, scanning the board during the show, etc. had no structure.

Another problem in this stage was the classic mistake of trying to make everything smooth and full-spectrum in my headphones, and then wondering why my mix was a complete and total wall of mess as soon as I paid attention to the house mix again.

Working in challenging circumstances forced me to learn how to mix faster.  That meant forcing myself to mix with some hierarchy, otherwise I would screw around with one channel for three songs and miss the big picture entirely!

I realized that without some semblance of speed I would never be able to get a good name on gigs.  So I forced myself to prioritize mixing activities, and became willing to radically alter, or remove, inputs that did not fit the mix picture.

Finally, I got to a point where I felt comfortable enough with how to start my mix to focus on the other problems of the stage or songs.  Backline bleed, bad instrument tone, bad arrangements, weak singers; the list is practically endless.

My desire from these posts is to help people speed through their version of my early steps and dive in deeply to the last stage of the mixing journey at a level of depth that will allow them to have real success with their gigs/bands/clients/company.

Let's begin!

Re: Introduction to Mixing-Table of Contents
« Reply #1 on: December 14, 2007, 06:28:25 PM »

It has become apparent that it will be well after the new year before I have time to flesh out this entire thread.  I wanted to give everyone a sense of the direction I plan on heading, so that everything retains a sense of purpose:

Part and Contents: Basic
1.  Universal Principles of Mixing
2.  Approaching the Mixing Console
3.  Line Check-Precursor to Mixing
4.  Introduction to Acoustics Related to Mixing
5.  Human Hearing and Mixing
6.  Both the Forest and the Trees
7.  Keeping Your Bearings Behind the Console
8.  How to "Tuck in the Corners" of Your Inputs
9.  Leave it in the Mix, or Take it Out?
10. Stage Sources-Micing and Control
11. Mixing Quiet (and Mixing Loud)
12. Post Mixing Console Etiquette

Parts and Contents: Advanced

13. Dynamics Processing-Beyond Auto Mode
14. Effective Multiple Micing
15. Ambient Effects
16. Mixing Motif-Genre, Environment, and Preparation

Beyond that, anything else will be audience generated.

Re: Introduction to Mixing-A Tutorial 1
« Reply #2 on: December 14, 2007, 07:26:24 PM »

Universal Principles of Mixing

1.  What is Mixing?--I believe the most basic question to ask is "what is mixing?" for any particular gig.  For a band in a small bar it may be "bring the vocal levels up to meet the backline;" for a corporate show it may be "create natural sounding speech amongst three presenters;" for an arena rock show it may be "make a Vox AC30 seem ten feet tall and twenty feet wide to this audience."  All are valid, clear goals that give you a state of mind to build a hierarchy from.

2.  Universal Principles--Are there any universal principles in mixing audio?  Some people will argue with me, I am sure, but I firmly believe that there are at least a few axioms:
A. Feedback is bad, and audiences remember it.
B. Vocals are THE most important thing in almost all reinforcement.
C. Isolation between sources will never be as high as desired.
D. It's too loud for at least one person in the audience.
E. Missing a cue, resulting in silence, is far worse than any minor eq or balance error.

3. Less Universal Principles--Are there more principles that are mostly, but not always, true?  Of course:
A. Close micing rules the day.  Feedback immunity and isolation require this on most gigs.  This will result in mix compromises for some sources.
B. Proximity effect is real on most sources.  Most microphones in live sound are cardioid in nature, and therefore exhibit a build-up of mids and low mids (due to their internal equalization) when placed near sources.
C. Stage sources will contribute to the sound in the house.  Only in large venues, with good instrument isolation and in-ear monitoring, does this not hold true.
D. Stage sources will differ tremendously in volume, timbre, and dynamics.  All of this variation contributes to the rough "live" character of the event.
E. The general public is usually very weak in being able to pick out sonic subtleties in your mixing, but will complain readily about big picture things (volume, genre, lack of PA coverage).

Now that I have laid out a few of the ground rules, as I see them, I will use part two to talk about a methodology for approaching the mix console, and starting to mix.

Re: Introduction to Mixing-A Tutorial 2
« Reply #3 on: December 15, 2007, 01:20:20 PM »

Approaching the Mixing Console

I hesitate to go into too much detail here, because a very large part of your approach to the mixing console on a gig is dictated by the nature of the event.  I can't assume you can influence the number, or location, of the channels for the show.  If you are the opener, or at a festival, these things may be completely beyond your control.

Additionally, I bring my own prejudices into this discussion.  I am left handed, and left eye dominant, which makes me opposite of most folks in some respects.  I scan naturally from left to right, and reach for things with my left hand first.  I also tend to listen more critically when cueing with my left ear.
_________________

Mixing a gig starts well before stepping up to the mixing console.  It may start when advancing the show, and downloading the manual to an unfamiliar console.  It may start on the stage setting up, and micing, some or all of the instruments.  It may start by talking to the festival system tech or patch guy.

For me it always starts with hearing protection.  I will wear earplugs (foamies or my Sensaphonics ER-15s) as much as possible: during load in, while on the stage setting up, during the openers, after the show loading out, while traveling/flying.  The goal is to maximize the rest I provide my ears.

I keep my earplugs clean, and occasionally use over-the-counter earwax removal medicine.  I also try to listen to at least two different songs I know well several times before the event to gauge my hearing for the day.  If I were on a tour I would try to listen to a room mix of the night before, primarily for vocal level.  I find that vocal levels are increasingly hard to judge as familiarity with the songs grows.

Now that the stage is set, and I am approaching the console, it's time to ask the "What is mixing?" question.  At this point I view the question as an audio version of the serenity prayer.

Good answers before I approach the console would be "make sure the teenage girls can hear the vocals on the single" or "survive on a console I have never used before" or "fight the noticeable rumble from the stage during the opener."  Unproductive answers would be "dream I was mixing on xyz instead" or "pray for better channel count/eq/routing."

The goal is to create a mindset to work successfully within the framework currently in front of me, remembering that a minor error in eq or level is much better than missing a cue, first word, or first beat.  I find that this helps calm me, and minimizes a very real personal tendency to panic on sub-par gigs/equipment.  It also tends to make me audience centric, and minimizes any disappointment in the room/system/local crew, etc.

After the "what is mixing" question its time to walk up to the board.  And here the mechanics of a methodology kick in.

What are the first four things I do when I stand at the console?
1.  Check to make sure it's on and functioning, including glancing at outboard PSUs and racks.
2.  Plug in my headphones.
3.  Check for all relevant channel/group/main mutes.
4.  Locate the solo/cue/pfl/afl of the moment.

A functioning console, proper mutes, cue, and headphones are crucial to a line check, which to my mind is the predecessor to any gig, whether or not there was a soundcheck.  It sets the baseline that things are functioning properly, and frees my mind to have confidence towards mixing, and not playing system tech (I default to system tech).  With a functioning console prepped for a line check, it's time to move forwards.

Part 3 talks about what constitutes an effective, and professional, line check.  I personally don't believe it involves letting the whole audience hear you eq each individual drum over the main PA, nor hear the monitor engineer hiss and "check, check" into every mic.

Re: Introduction to Mixing-A Tutorial 3
« Reply #4 on: December 16, 2007, 10:55:47 AM »

Line Check-Precursor to Mixing

Here in part three I want to discuss the mechanics of a successful line check, and then take a several section excursion into the basics of human hearing and acoustics.  I have found it extremely helpful to have a sense of how we hear, and how sound plays in the room, when it comes to learning how to use the tonal-shaping tools available.

I, like most mixers I have met, was not blessed with perfect pitch, and any frequency recognition I have gained has come with practice and repetition.  I have found that absence of practice results in losing the ability to identify the frequency ranges of tones quickly.  Any "cheat sheets" that can be internalized help speed the ability to get in the ballpark, frequency-wise.  That will be the motivation for the next couple sections.  Now, on to the line check.
______________

A line check's primary purpose, to my mind, is to ensure that all relevant signal inputs make it to the signal outputs.  Any more than this is going to take too much time, resulting in mistakes, missed channels, etc.

A line check is generally not a time to fiddle with eq, adjust attack or release times on compressors, and other such diversions.  If you can adjust such parameters quickly from memory on consistent sources, perhaps consider adjusting those parameters on one or two key channels (e.g. lead vocals).

I feel that a line check is essential, whether there was a soundcheck or not.  In the case of no soundcheck, it is an essential component of making noise come out of the pa.  In the case of a full sound check, it is a confidence booster that everything is still plugged in and working as before, and that no one has toyed with the console.  In the festival situation, the system tech/babysitter may offer to help you with this, and I suggest taking advantage of that resource.

In my methodology for a line check I try to split tasks between what requires active signal and what doesn't.  For instance, setting the gain trim of a channel requires the channel to have signal passing through it, but checking the bus routing of the channel does not require signal.  Checking whether a compressor is bypassed or not does not require signal, but checking whether the insert point and compressor are passing audio does require signal.

If there was a substantial soundcheck, simply looking at the channel input meters (or cueing the channels on boards without per-channel metering) while the band/tech/presenter are warming up and plugging in is sufficient.  On most better boards, a quick scan across the channel trims is all that is required.

If the board is analog, I will then scan up/down the channel strips to check that inserts are turned on/off, eq is turned on/off, and that the routing (LR, groups, VCAs, etc.) is correct.  On boards with detailed VCA assignment, or digital boards, this obviously is done in a different way.  The essence is to check for signal first, then check for routing.

If one has the luxury of moving input by input across the stage, and doing a more thorough scan of each channel while a tech/musician provides a source, then count yourself in a good situation.  I personally don't feel I can depend on this happening at line check in every situation, but it is preferable.

An aside on professionalism--if you do have the luxury of going down every input in a structured manner before the downbeat, please resist the temptation to turn this into a miniature soundcheck.  I personally feel that channels should not be routed to the mains system during a line check.  It subjects the audience to random noises, appears unprofessional to me, and generally detracts from the impact of your act's downbeat.  If you have channels that you feel can be successfully equalized in the absence of the rest of the mix instruments, simply commit those equalization settings to memory!  Any source that is truly SO independent of the mix, PA, and room (and there aren't that many) should be marked on a crib sheet and set by sight, or stored in the console memory.

I feel the time to systematically evaluate each input through the pa is at a soundcheck.  If there can be no soundcheck, then part of surviving in live audio is building a mix on the fly.  The two largest struggles for me in building mixes quickly have always been being too tweaky, and panicking that I missed something before the show started.  A comprehensive line check goes a long way towards alleviating any pre-mixing panic.
____________

A topic that could be discussed during this segment is the correct balance between levels of the trim gains, and positions of the board faders.  I will, however, hold off on that for several segments, as this is something better covered under general soundcheck and mixing.

Both of those will come after discussions on hearing and acoustics.

Re: Introduction to Mixing-A Tutorial 4
« Reply #5 on: December 21, 2007, 12:41:55 AM »

Introduction to Acoustics Related to Mixing

Acoustics is indeed a massive subject, one that extends well beyond the scope of this tutorial.  I certainly am not qualified to teach a high level course on the topic.

Part 4 endeavors to outline a few basic acoustics concepts that matter to the mixing world in a (hopefully) brief enough way that they can be retained as mental notes during mixing.

First, sound is a wave.  As nice as it is to think of sound as tennis balls bouncing off the walls, sound is really a pressure wave.

What does this mean practically?  It means that sound has a wavelength (the distance between successive pressure peaks).  Longer wavelengths travel much more easily around large objects.  That is why you can stand behind a pillar in a venue and lose all the high frequency information, but still feel the bass.  The bass is diffracting around the pillar and reaching you.  The mids and highs are either bouncing off the pillar (reflection) or being sucked up (absorption).

In light of this, an important question is: when do I need to worry about the wave behavior of sound?  The short answer is in the lower frequencies (below approximately 500hz).  At 500hz the wavelength of sound is a bit over two feet, and wavelengths that size and larger successfully bend around pillars, guitar amps, doorways, speaker stacks, etc.

The corollary to this is to realize that when the wavelengths of sound get short enough, they are generally not greatly influenced by the room acoustics.  Above approximately 6khz all sound that hits the walls of a venue is either reflected like a mirror, or absorbed entirely.  This can be different in some studio-type settings that have specific devices to spread out the sound (called diffusors).  But since most live sound gigs are performed in real rooms, with little or no acoustic care, the above rule of thumb is reasonable.
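To put numbers on the rules of thumb above, wavelength is just the speed of sound divided by frequency.  A minimal sketch (assuming c = 343 m/s, the speed of sound in air at about 20°C; the example frequencies are arbitrary):

```python
# Wavelength = speed of sound / frequency.
C = 343.0            # speed of sound in air, m/s (approx., room temperature)
M_TO_FT = 3.28084    # meters to feet

def wavelength_ft(freq_hz):
    """Wavelength in feet of a tone at freq_hz."""
    return (C / freq_hz) * M_TO_FT

for f in (50, 100, 250, 500, 2000, 6000):
    print(f"{f:>5}hz -> {wavelength_ft(f):6.2f} ft")
```

Anything whose wavelength is comparable to, or bigger than, the obstacle will diffract around it: the 50hz wave is over twenty feet long, while the 6khz wave is only a couple of inches.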

Another consequence of sound waves is what are called "room modes."  The title of "modes" comes from some esoteric mathematics called eigenmodes.  It is not necessary to understand the math to be able to understand the principles.  All you really need is a slinky.  Go find the nearest slinky before reading further:

Take your slinky and hold it at one end, move it up and down, and notice the variety of different waves you can produce in the slinky.  Now grab the slinky by both ends and try the same trick.  You will find that the slinky now wants to make:
1. One big wave
2. Two smaller waves half the size of the first
3. Three smaller waves of equal size
4. Four smaller waves (if you can move the slinky really fast)

So, what's happening here?  Why can't you make 1.3 waves or 2.5 waves show up in the slinky?  The reason is that as the wave travels up and down the slinky and reflects from the end, if the reflection does not match (i.e., is not "in phase" with) the incident wave, the two waves interfere and cancel out.  In the end the slinky will only support the waves that match each other at the reflection.  These are the eigenmodes of the slinky!

Notice that the waves in the slinky are fixed in both frequency and location!  This is a very important concept.  Room modes (same concept as the slinky) aren't just at specific frequencies, they show up at specific locations in the room!  Walk a few feet and the mode frequency and amplitude change.

In architectural acoustics there is a concept called the Schroeder frequency.  This frequency acts as a boundary line between where the room modes are dense and evenly spread in the space, and where they get sparse and spread out-and NOTICEABLE!  The Schroeder frequency for many rooms falls below 200hz.  The smaller the room, the higher it is-shorter room dimensions push the modes up to higher frequencies.  Room modes in a slinky are much more pronounced than in real rooms.  This is because your hands holding the ends of the slinky are a much stiffer boundary than the walls of a real room.  It turns out a consequence from the math is that the stiffer the walls, the more discrete the modes.  Rooms with heavy stone or concrete walls are generally going to have more obvious room mode distributions.
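Both ideas reduce to one-line formulas if you want to estimate them for a real room.  The slinky-style (axial) modes along one room dimension fall at f_n = n·c/(2L), and a commonly quoted approximation for the Schroeder frequency is f_s ≈ 2000·sqrt(T60/V).  A sketch with a hypothetical room (the 15 m length, 1.2 s reverb time, and 1500 m³ volume are made-up example values):

```python
import math

C = 343.0  # speed of sound in air, m/s

def axial_modes(length_m, count=4):
    """First few axial room-mode frequencies along one dimension:
    f_n = n * c / (2 * L) -- the same standing waves as the slinky."""
    return [n * C / (2.0 * length_m) for n in range(1, count + 1)]

def schroeder_freq(rt60_s, volume_m3):
    """Approximate Schroeder frequency: f_s ~ 2000 * sqrt(T60 / V)."""
    return 2000.0 * math.sqrt(rt60_s / volume_m3)

room_length = 15.0  # m (hypothetical)
print([round(f, 1) for f in axial_modes(room_length)])  # [11.4, 22.9, 34.3, 45.7]
print(round(schroeder_freq(1.2, 1500.0), 1))            # 56.6
```

Note that the Schroeder estimate for this room lands well below 200hz; below it, expect the position-dependent bass described above.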

Below the Schroeder frequency the room modes are going to play a role in your mixing.  They may give you an exaggerated, or depressed, sense of the bass in your mix.  They may make an instrument that is thin sound boomy, or vice versa.  It is critically important to walk around away from the mix position to get a sense of the real low frequency distribution in the room!


Where does all this technical detail leave us, and what does it mean for mixing?  Here are some practical results of the above:

1.  Room modes are a fact of life-you can't eq them out because they are different every few feet! The most you can hope for is to excite a really bad one less.  Subwoofer positioning can help control what modes you excite, but that is another post.

2.  High frequency eq should be for tone shaping-eq in the last octave and a half is essentially independent of the room acoustics, as you can't tremendously influence them with an eq.

3.  The midrange is very important-the midrange represents the transition between directional sound, and sound that wraps around everything.  Most speakers/guitar amps/etc. have decent directivity at 2khz, but at 500hz have very little directivity.  A speaker pointed at the audience is spitting right on them at 2khz, but at 500hz the sound is wrapping around the speaker, onto the stage, back wall, etc.  The same holds true with amps on stage.  The same holds true with stage monitors.  The highs go at the stage, but the lows and mids spill into the audience.

4.  Longer wavelengths can combine and reinforce each other-this is especially true for speaker stacks.  One speaker by itself will sound balanced, but when you place several together, their low-mid sound interacts with the other speakers in a constructive manner.  The result is a "haystack" of low-mid energy that leads to muddy sound and a lack of clarity.  Many speaker controllers for high quality speaker systems apply filtering in this range to reduce the "mud" from coupling of multiple speakers.  That is why these controllers have presets that depend on the number and arrangement of speaker boxes being used.

5.  Everything interacts with everything, room-wise, in the lows and mids-Sometimes this interaction is constructive, and sometimes it is destructive.  Because you can't magically defy physics and increase the directionality of speakers/amps/etc., you must be willing to include the spill from amps/monitors/back wall in your overall mix picture.  This can mean removal of instruments from the FOH mix, or drastic equalization to fill in "holes" in the spill.
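The "haystack" in point 4 can be put in rough numbers.  At wavelengths long compared to the spacing between boxes, sources sum roughly coherently (pressure addition); short-wavelength content from separated boxes sums closer to uncorrelated power.  A back-of-the-envelope sketch of the difference:

```python
import math

def coherent_gain_db(n):
    """Level increase for n identical, coherently coupling sources: 20*log10(n)."""
    return 20.0 * math.log10(n)

def incoherent_gain_db(n):
    """Level increase for n uncorrelated (power-summing) sources: 10*log10(n)."""
    return 10.0 * math.log10(n)

for n in (2, 4):
    print(n, round(coherent_gain_db(n), 1), round(incoherent_gain_db(n), 1))
# 2 boxes: +6.0 vs +3.0 dB; 4 boxes: +12.0 vs +6.0 dB
```

The gap between the two columns is a crude stand-in for the low-mid buildup: with four boxes the coupled lows can sit roughly 6dB hotter, relative to the highs, than a single box would suggest, which is what controller presets try to carve back out.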

Now on to part five, and a discussion of how we hear.  Then it's back to the task of mixing in part 6.

Re: Introduction to Mixing-A Tutorial 5
« Reply #6 on: January 16, 2008, 02:32:02 PM »

Human Hearing and Mixing

The way we hear is perhaps the least well understood part of this entire process.  It is certainly the part where I have the least science knowledge.  A lot of this post is as much an observation of my own hearing, and how to analyze your own hearing, as it is about the science of hearing.  I believe it is important for any mixer to have a sense of their own ears, both good and bad.  You need to learn, or unlearn, your own hearing to have a neutral baseline behind the console.

The first thing I have noticed about my own hearing is that my two ears are different.  I have several more dB of clarity above 5khz in my right ear than in my left.  I am also much more sensitive to the 1-2khz range in my left ear than my right.  My Sensaphonics custom ER-15 earplugs clearly tell me that my right ear canal is much smaller than my left, and my experience with in-ear monitors tells me that I have very small ears, and ear canals, relative to the general populace.

The next thing I have noticed about my hearing, and indeed about hearing in general, is that it is depressingly nonlinear.  My perception of lows/mids/highs is highly dependent on the volume of the sounds, and the time of exposure to those sounds.  Commercial CDs that are mastered to listening levels between about 80-90dBA are often depressingly shrill, bright, and "sizzly" at live sound concert volume levels.

Two major studies trying to characterize the nonlinearity of human hearing were Fletcher-Munson in 1933, and Robinson-Dadson in 1956.  The most recent extension of this work I am aware of is the ISO 226 standard from 2003.  The loudness contours of this standard are shown below:

[Attached image: ISO 226:2003 equal-loudness contours]

The way to interpret this graph is that the y (vertical) axis shows the sound pressure level required to produce an equivalent perceived volume.  For the 80 phon equal-loudness curve it takes about 80dB of sound at 3khz to intersect the curve.  At 100hz, however, it takes about 95dB!  This clearly shows our ears are much more sensitive in the 1khz-7khz regime than at lower frequencies, in terms of equal volume perception.

Also notice that as the total volume increases, the volume of low frequencies needed to perceive the same level decreases.  This is why a cd that sounds thin when played quietly gets fuller and thicker on the bottom end simply by turning up the volume.  This phenomenon is very noticeable in the studio mixing setting, where turning up the nearfield monitors a few dB often makes them "come alive" and makes the mix much more impressive.

Practically all of live sound mixing lies above the 80 phon loudness curve.  Unfortunately this means a relative lack of hard data in this regime for live sound mixers.  However, certain trends in terms of perceived loudness should be very obvious quickly from the above graph.

A very obvious question that arises is "why are my ears so sensitive from 1khz to 7khz?"  The answer lies partly in the geometry of our ears, and partly in the nature of the perception of human speech.  Your ear canal forms a natural resonator, and the resonance frequency is approximately 2700hz.  This coincides well with the range in which human consonant sounds are formed.
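The 2700hz figure is close to what a simple quarter-wave resonator model predicts.  Treating the ear canal as a tube closed at one end (the eardrum) and open at the other, the resonance is f = c/(4L).  A quick sanity check (the 2.5 cm canal length is a typical textbook figure; real ears measure somewhat lower than the idealized tube, partly because the eardrum is not a perfectly rigid termination):

```python
C = 343.0  # speed of sound in air, m/s

def quarter_wave_resonance_hz(length_m):
    """Resonance of a tube open at one end, closed at the other: f = c / (4L)."""
    return C / (4.0 * length_m)

print(round(quarter_wave_resonance_hz(0.025)))  # 3430 for an idealized 2.5 cm tube
```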

Speech (and singing) can be roughly split into two types of sounds.  The first are consonants, and the second are vowels.  Consonants carry the information of speech, and vowels carry the power.  Your ears' resonance is tuned to help pick up the information component of speech, so it makes sense that your ears would be most sensitive in this range.  Too much cutting equalization in this band can destroy the audience's ability to discern the meaning of the words being spoken or sung.

The vowel range traverses roughly 150hz to 1khz for most singing.  Somewhere in this range many singers, especially those with little formal training, or a specific accent, will often have a pretty specific sinus cavity resonance.  This is the "nasal" or "whiney" tone that is ascribed to singers, especially in rock music.  It is a fairly safe bet that this nasal cavity resonance lies in the octave between 400-800hz.  This resonance is often made even more prominent by the close proximity of the vocal mic to the singer's face.  Cutting equalization in this octave can reduce the nasal quality of a singer's voice.
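As an illustration of what such a cut looks like in filter terms, here is a peaking-EQ sketch using the widely published Audio EQ Cookbook (RBJ) biquad formulas.  The 600hz center, -6dB depth, and Q of 1.4 (roughly an octave wide) are hypothetical starting values, not a prescription for any particular singer:

```python
import cmath
import math

def peaking_eq_coeffs(f0, gain_db, q, fs=48000.0):
    """Biquad peaking-EQ coefficients (Audio EQ Cookbook formulas)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * A, -2.0 * math.cos(w0), 1.0 - alpha * A]
    a = [1.0 + alpha / A, -2.0 * math.cos(w0), 1.0 - alpha / A]
    return b, a

def magnitude_db(b, a, f, fs=48000.0):
    """Magnitude response of the biquad at frequency f, in dB."""
    z1 = cmath.exp(-2j * math.pi * f / fs)  # z^-1 evaluated on the unit circle
    h = (b[0] + b[1] * z1 + b[2] * z1 * z1) / (a[0] + a[1] * z1 + a[2] * z1 * z1)
    return 20.0 * math.log10(abs(h))

b, a = peaking_eq_coeffs(600.0, -6.0, 1.4)
print(round(magnitude_db(b, a, 600.0), 2))  # -6.0 at the center frequency
print(round(magnitude_db(b, a, 100.0), 1))  # back near 0 well away from 600hz
```

The point of the narrow-ish Q is the post's broader theme: take out the resonance without gutting the vowel power around it.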

While I am not familiar with the Asian languages, the Latin-based languages of the world have a good degree of uniformity in the nature of their speech content frequencies.  Keeping the above information in mind can help you mix effectively in a language in which you do not understand the words, simply by listening to the consonant/vowel balance.

Another thing I have noticed about my own hearing is that the more familiar I am with the words, the better I perceive them.  This can result in mixing the vocals progressively lower over time as familiarity with the songs increases.  While this remains sufficient for my personal vocal intelligibility, it can strand the audience who is less familiar with the material.

Now that we have spent a substantial amount of time discussing how our ears perceive level and speech, I now turn to a discussion of tone.  The classic thirty-one band equalizer has equal logarithmic (third-octave) spacing of its frequencies.  That means the upper sliders each influence a much larger swath of frequencies, in hertz, than the lower sliders.  This is also a fair analog for human hearing; at low frequencies we can generally readily distinguish between tones only a few hertz apart.  People can also do the same at higher frequencies, but usually only under laboratory conditions.  As a rule of thumb, as frequency increases, our ability to distinguish a specific tone from a tone at a nearby frequency decreases.  A little shelving "air" may be enough in the last octave, but low and mid frequencies usually demand more discriminating equalization.
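To make that concrete, the thirty-one third-octave centers (and the width in hertz each band covers) are easy to generate.  Base-2 spacing around 1khz is used here for simplicity; the familiar printed ISO labels are rounded versions of these numbers:

```python
# 31 third-octave band centers from ~20hz to ~20khz: f_n = 1000 * 2^(n/3).
centers = [1000.0 * 2.0 ** (n / 3.0) for n in range(-17, 14)]

# Width in hertz of each band (lower to upper edge, 1/6 octave either side).
widths = [f * (2.0 ** (1.0 / 6.0) - 2.0 ** (-1.0 / 6.0)) for f in centers]

print(len(centers))          # 31
print(round(centers[0], 1))  # 19.7 (nominally labeled "20hz")
print(round(widths[0], 1))   # only a few hertz covered by the bottom slider
print(round(widths[-1]))     # thousands of hertz covered by the top slider
```

The top slider spans more spectrum, in hertz, than the bottom fifteen sliders combined, even though every slider covers the same musical interval.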

This is compounded by the reality that the fundamental tones of almost all instruments lie between 100hz and 5khz.  Now this is not a universal rule, but it is often reflected practically behind the mixing board.  The typical high-end analog mixing console will have an adjustable highpass filter, a low shelf/parametric, two mid parametric eqs, and a high shelf/parametric.  Three or four of those five eq tools are commonly targeted at the midrange band between 100hz and 5khz!  Now, obviously many signals may need shaping above 5khz, but this shaping is of overtones, and not the raw notes/chords/tones.

I find a common mistake of starting mixers, and one that I made, was to assume that the frequency of a tone was much HIGHER than it actually was.  A musician may consider A440 on a piano a fairly high tone, but in reality 440hz is squarely in the midrange from a mixing and equalization perspective.  If you will recall the previous post, it is also in a range where most speakers have only moderate directivity control.  Taking this reminder into the mixing environment can improve the ability to quickly identify the problem range of frequencies.

A final point that needs to be covered on the nature of human hearing is what I call "threshold shift."  This is the activation of the muscles in the middle ear as a built-in compressor to protect our hearing from continuous loud sounds.  If you have ever been to a loud concert that started out unbearable, then became "glassy" and ok volume-wise, and then noticed that the world was REALLY quiet after the show, you have experienced threshold shift.

One thing that, unfortunately, often accompanies the "really quiet" phase is ringing in the ears.  Ringing in the ears comes from damage to the hair cells of the inner ear caused by excessive vibration.  When these cells are damaged, the brain falsely perceives that the tone range of that particular hair cell is sounding continuously.  Ringing in the ears is a clear indication that you have caused acute trauma to your inner ear!  Whether or not this trauma repairs itself is a matter of debate, but the damage has been done, at least temporarily.

Threshold shift typically takes a matter of minutes to set in.  For me it is about 5-10 minutes, and it typically releases between fifteen and thirty minutes after the exposure.  The louder the sound, and the longer the exposure, the longer I find the release time to be.

Threshold shift is the death of good mixing for me.  My ability to judge mix balance and quickly pick up problem frequencies greatly diminishes.  Today I threshold shift at about 97dBA slow.  If I am asked to mix above this level I typically have to alternate songs with my earplugs in, and then out, just to keep my ears out of threshold shift.  I suspect that I threshold shift at a lower level than most people, as I am relatively young, and have taken very good care of my ears.  Threshold shift is a big motivating factor for me to try to mix at moderate levels.
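For a rough feel of how fast "loud" adds up, here is the NIOSH recommended-exposure-limit model: 85dBA for 8 hours, with the allowed time halved for every 3dB increase.  To be clear, this is a hearing-damage guideline, not a predictor of when threshold shift sets in, but it puts the mid-90s dBA range in perspective:

```python
# NIOSH recommended exposure limit: 85 dBA for 8 hours, with a 3 dB
# exchange rate (each +3 dB halves the allowed exposure time).
# This is a damage-risk guideline, not a threshold-shift model.

def niosh_hours(level_dba):
    return 8.0 / (2 ** ((level_dba - 85.0) / 3.0))

for level in (85, 91, 97, 103):
    print(level, "dBA ->", round(niosh_hours(level) * 60, 1), "minutes")
```

At 97dBA the NIOSH allowance is only about 30 minutes per day, roughly one set's worth of music.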

I also suspect that threshold shift is one of the biggest problems between musicians and monitor engineers.  A brief soundcheck, short enough to make the monitors seem loud and clear, and not long enough to cause threshold shift, goes well for band and monitor engineer alike.  However, once the show is going, and the band has been subjected to high levels for a more extended period, threshold shift sets in, the sound from the wedges goes to mush in the muso's ears, and all they know to do is ask for more level!  This is the crux of so many soundcheck vs. show moments, and I feel it must be a major part of the underlying cause of the problems musicians experience during the show.  Food for thought as we head to part six...

Part six will be much shorter, and less theoretical in nature.  In part six I discuss the need to be able to evaluate both the full mix and the individual instruments, and some methodologies of practice for learning this art.
Re: Introduction to Mixing-A Tutorial 6
« Reply #7 on: January 19, 2008, 04:20:38 PM »

BOTH the Forest and the Trees

Most people who have worked for a regional sound company can relate to the experience of watching a band engineer obsess over one input for an entire show.  If it was a young skinny kid in Florida, it might have been me, lol.

Simply put, this does not work for mixing.  Not only is it overly tweaky given the ephemeral nature of live sound, it ignores the fundamental difference between how the live audience perceives the sound versus the person mixing it.  Very few people in your audience have any ability to pick apart your mix!  Most only know whether it is too loud or not, and whether it is "exciting," and a select group will notice whether or not they can hear all the instruments.  Mixing for the individual instrument is antithetical to what matters for your audience.  Their perceptions lie at the "big picture" level, and they are largely ignorant of, and unconcerned with, the steps necessary to get there!

I feel that learning to mix means stepping away from treating the PA as your own personal, private stereo system, and also stepping away from using it as a means to get the "most awesome {insert signal here}".  Ideally every signal would be most awesome, but that is often not the case, and the pursuit can't be allowed to short circuit the global mixing process.

This is the first point in this tutorial where I am going to branch into something that's somewhat abstract, and for which I can't give a definite recipe for success.  I can't tell you how I learned to pick out a voice in a crowd, but I know that I can do it, and I know that if I practice it, I get better at it.  Learning to focus on mix elements has the same vibe for me.

I personally suggest learning how to pick a mix apart before learning how to put it back together.  The opposite approach may work better for other people, but that is the direction I am approaching from here.

Pick out a good quality recording of a piece of music that you know well, and whose production you enjoy the sound of.  I would say that the genre does not matter terribly, but I feel it should relate to the types of music you find yourself mixing.  I also believe that it should include vocals, so no instrumental jazz please!

Place a particular track on repeat in headphones, or on a stereo whose tonal balance you trust.  Now try to focus on the vocals, but without processing the meaning of the words.  Focus until the words are more like notes from another instrument than carriers of a specific meaning.  I can't tell you exactly how to do this beyond saying to focus only on the ebb and flow of the tones, like humming the melody in your head to a song whose lyrics you can't clearly remember.  With practice I have found the meanings of the words can be separated from the sounds behind them.  Balancing the vocals in the mix then becomes easier.  You will start to notice the loud breaths between words (from compression), or the lack thereof (because of studio editing).  You will notice how the last couple of words in phrases often have several vocal overdubs, and that choruses may have different reverb than verses.

Next, focus on a percussive sound (e.g. hi-hat, snare, etc.).  Focus on how the beginning and end of the sound move, and also on the frequency balance.  Is the snare the bright "crack" of jazz, or the "thunk" of compressed modern rock?  When you hear the snare, what does it cover up in the mix?  Does it step on vocals, or other instruments?  Does it do this in a manner that drives the mix, or muddies it up?  If it is a dense rock mix, does the snare feel forced in, or does it have room to snap and decay?  Can you hear the release of the compressor on the decay?  Does the snare bloom up and drive the mix, or is there weird "trash" from the release time being too short?  I realize the words in this paragraph may not make complete sense to you, as it's very hard to describe some of these behaviors without hearing them directly, so worry less about my descriptors, and more about paying attention to whatever pops out at you.  You will start hearing things consistently, even if you can't put a name to them.

This process of deconstructive listening can be applied to any and all signals in a mix.  Steady practice makes it clearer, and faster.  I would suggest listening to the same song three or four times in a row in a session, then stopping, and trying again the next day with a different song.  Take it a little at a time, and it will become more automatic.

On the opposite end of the spectrum, it pays to listen to the whole mix picture.  What is the panning like?  Is everything in the center, or spread widely?  What background percussion is driving the song?  Shakers, tambourines, and the like are very common in pop.  Is the chorus actually louder than the verses?  What things come in and out of the mix across the song?  Is the song very bright or unbalanced tonally?  Are the bass instruments well placed and clean, or is the bottom end "tubby"?  Is the mix very dry, or does it have lots of effects?  Are the drums up on top, or tucked in back?  Is the vocal right in the middle of the mix, or riding on top?

The things in the last paragraph are what truly set the sonic palette of the song; they were the goals the mix engineer had in mind when he used compression, eq, and other fx to influence each instrument.  Some instruments may sound very "weird" or "wrong" when focused on individually, but actually fit well in the mix context.  Guitars or drums may seem small, dull, or thin at certain places in a song when heard in isolation, but they may have been trimmed back by the mix engineer to let the singer howl the emotion of the chorus, or to let the piano ambiance drive the song.

Sometimes the best way to make the mix seem bigger is to reduce the scope of each individual element in it.  There is only a fixed amount of sonic landscape to fill, and trimming something to the point of sounding "weak," "thin," or "dull" on its own may be just the medicine the whole mix needs.  If you can learn to pick out an individual instrument clearly, you can quickly appraise what about it is not enough, or too much, for the mix.  Then you can apply the needed change in eq, level, or other fx, and quickly return to the big picture.
------

Some of you are now probably pretty frustrated with me for shifting from something so technical to something much less concrete.  Let me give a practical example of something I did while mixing that will give you an idea of what I am talking about.  Several years ago I was mixing for a regular religious event.  The music was pop rock, the levels in the mid 90s, and the band pretty cooperative, so I had lots of freedom as a mixer.  The band leader played acoustic guitar, and a fair number of songs had acoustic guitar intros.

The "tree-level" view of his acoustic guitar was that it had a piezo pickup, with a LOT of picking noise at about 2.5khz.  It was also wonky around 400hz, and thin around 150hz.  My channel equalization was a bit of low boost, some cut around 400hz, and a pretty steep eq cut at 2.5khz.

With that as the tree level, here is the forest-level mix decision: The guitar introductions were usually simple chord progressions, or perhaps an arpeggiated chord, but they got the songs going.  Once the songs were in full swing, though, the acoustic was stuck strumming on top of the main melody, or otherwise not adding to the arrangement of the song.  I would intentionally let the band instruments overwhelm the acoustic guitar after they kicked in, letting the acoustic fall to the back of the mix.  But I would add BACK some of the picking noise at 2.5k, by reducing the eq cut!  The picking of the acoustic therefore remained present enough to give a sense that the guitar was still there, but it really wasn't a factor volume-wise, and so left room for the rest of the band.  If the song ending required the return of the solo acoustic, I simply reached up and re-cut the 2.5khz out of the acoustic.

That was one of my better moments as a mixer, and a good example of using a known weakness of an instrument to keep that instrument in play.  I had to be able to characterize the raw acoustic sound by itself to find the weakness, but I did not let fixing that weakness get in the way of the total mix picture, and even used it to my advantage.

Now on to Part 7, and how to keep everything straight during the show.

Re: Introduction to Mixing-A Tutorial 7
« Reply #8 on: January 19, 2008, 07:21:59 PM »

Keeping Track Behind the Console

Grabbing the wrong fader, one channel off from the one I intend, remains a struggle for me behind the live sound board.  I don't have large hands, and on many analog boards I can't easily control three or four faders at the same time.  Part 7 strikes very close to home for me.

The problem with trying to define a specific path around the board is that no two boards are laid out in exactly the same way, no two input lists are exactly the same, and there is no guarantee that a random input won't flake out at the worst time.

So rather than a specific path, I want to provide tips for workflow and for "anchoring" yourself in the heat of battle.

1. Place most touched channels near the master section-all the stuff you will fiddle with the most needs to be near the middle of the console.

2. Group inputs by type-guitars together, percussion together, vocals, etc.

3.  Group inputs by a personal or visual flow-I use left to right across the front of the stage (house perspective) within each section of inputs.

4.  Return effects to channels-having channel control over effect levels on the board will keep your head out of the FX rack.

5.  Anchor your non-dominant hand-I am left handed, and as much as possible my right hand stays within the boundaries of the console master section during a show.  If I am turning an aux or eq with my right hand, I still try to move it back to the master section when I am done.

6. Scan down a channel strip before grabbing anything-sometimes that extra half second will save you a lot of embarrassment.

7. Label your console as much as possible-I prefer to label all channels and aux sends.  I would love if consoles allowed labeling both the top and bottom of the channel strip.

8.  Don't Fiddle-If it's a rock show, don't fret if a rack tom is a little quieter than it should be.  If it is WAY too quiet, guess at a fader boost and leave it.

9.  Stop and listen to at least half a chorus every song-this may not be an option on the first two songs or so, but after that stop moving around enough to actually listen to the whole mix picture.

10.  If you lose your place, go back to the master section-ask yourself "is it too loud?" then "can I hear the vocals?" then "what about the guitars/piano/etc.?"  This will return you to a rhythm for mixing.

11.  Don't bury yourself in your headphones-if you need them at the beginning to solo instruments, I totally understand, and I do exactly the same, but they need to be around your neck before the middle of the set.

12.  A late addition, but a worthy one to me.  Don't bother with gates, especially for one-off events.  They are too picky to set effectively and quickly.  They have a place, but can be a major distraction from other parts of mixing.
-----

Now is the appropriate time to discuss the setting of levels and fader positions.  I firmly believe there is a correct way to do this, especially in the modern era of quiet mic preamps.  First, always mix with faders, never on trim pots.  Trim pots are left alone, unless they are clipping.

Second, set your trim pot levels with all faders on the board at unity.  Shoot for a mix that is close to correct when all the board faders are at unity.  Some inputs will obviously have less gain on the preamps than they might otherwise, but that is ok!  You may have to throw this method out in a festival setting, where the gains are going to have to be modified from the previous bands' inputs, but in general faders at unity is my preferred method.

I used to be someone who set every input right below clip on the trim pots at the soundcheck to maximize signal to noise ratio.  This sounds good in theory, but it is lousy in practice.  Everyone plays louder in the real set, and now you have a ton of clipping channels--OOPS!  Also, the taper of the faders on modern mixing boards gives them the most control around the unity gain mark.  Try running a quiet input super hot, and then mixing smoothly on the faders around -25dB; it's not a pleasant experience!
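The taper point is easy to see with numbers.  The fader law below is a made-up piecewise-linear approximation (every console's law differs, so treat the breakpoints as illustrative only), but it captures the typical shape: fine resolution around unity, coarse resolution far below it.

```python
# Illustration only: a hypothetical piecewise-linear fader law for a
# 100 mm fader with unity about 3/4 of the way up.  Real consoles
# differ, but the shape is similar: most of the dB range is crammed
# into the bottom of the throw.

# (position_mm, gain_dB) breakpoints -- invented for this sketch
TAPER = [(0, -90.0), (10, -60.0), (25, -40.0), (45, -25.0),
         (75, 0.0), (100, 10.0)]

def fader_db(pos_mm):
    # Linear interpolation between breakpoints.
    for (x0, y0), (x1, y1) in zip(TAPER, TAPER[1:]):
        if x0 <= pos_mm <= x1:
            return y0 + (y1 - y0) * (pos_mm - x0) / (x1 - x0)
    raise ValueError("position outside fader travel")

def resolution(pos_mm, step=1.0):
    # dB change produced by 1 mm of movement at this position.
    return fader_db(pos_mm + step) - fader_db(pos_mm)

print(round(resolution(75), 2))   # near unity: a fraction of a dB per mm
print(round(resolution(15), 2))   # down low: more than a dB per mm
```

Around unity a millimeter of travel moves the level by a fraction of a dB; in the bottom third of the throw the same millimeter jumps the level by more than a dB, which is why mixing down there feels so twitchy.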

Does the fader at unity method make the console noise a little more apparent?  The answer is of course yes, but I have never had it be a problem relative to the ambient noise of a properly set up sound system.  If a sound system has very loud "hiss" problems, then the problems more than likely lie elsewhere in the gain structure.

Also, the "faders at unity" method allows a known baseline on analog consoles if you work yourself into a corner.  Placing the fader back to unity returns your levels to what you had at the outset.

If you are regularly on a digital board, the faders-at-unity method loses some of its simplicity advantage, but it will still help with the fader taper behavior, as digital consoles mimic analog consoles in their fader laws.
-----

I can't guarantee my methods will give you the best rhythm here, but it is important to realize that you need a rhythm and structure behind the console, and to quickly piece together one based on your natural movement patterns.

In Part 8 we will zoom in on a channel strip, and talk about the larger picture of equalization of sources in the mix.
Re: Introduction to Mixing-A Tutorial 8
« Reply #9 on: January 20, 2008, 10:31:21 PM »

"Tuck in the Corners" of Your Mix

I started writing this tutorial with a commitment not to make it specific to any particular genre or type of event.  However, use of equalization demands at least some ability to identify the frequencies in question.  You are not going to read this post and immediately know what frequencies need to be equalized, but you should come away with an improved sense of how to equalize a source.

The most basic channel eq is a simple high/low tone control.  These are seen on guitar amps, low-end mixers, home stereos, etc.  These tone-control circuits are typically of the "Baxandall" implementation type.  Such basic tone controls are largely useless in the world of live sound.  They simply paint with too large a brush stroke.

Above this level of equalization are many different levels of eq amount and flexibility.  A good quality analog or digital board will typically have a low shelf eq, a high shelf eq, two mid range sweepable semiparametric/parametric eqs, and a high pass filter on each channel.  There are many variations on this theme, such as adjustable frequency high pass filters, and adjustable width (i.e. Q) sweepable filters.  Some boards let you pick whether the low and high equalizers are parametric or shelving.  Regardless, four equalization filters plus a high pass filter is fairly standard these days.  This is the level of equalization I am going to assume for the duration of this section.
----

Equalization is a difficult topic.  It requires a pretty fair amount of basic physics understanding to grasp what it does, and why it can be necessary.  I am going to try to minimize the discussion of that here.  A more detailed discussion of the science behind eq might be appropriate at the end of the mixing tutorial.

In short, signal sources have different output levels at different frequencies, and equalization can be used to change the relative balances in level between different frequency ranges of a source.
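As a concrete example of what one band of a channel parametric actually does, here is the widely published Bristow-Johnson "audio EQ cookbook" peaking biquad, evaluated in Python.  The settings (a -6dB cut at 400hz with a Q of 1.4, at a 48khz sample rate) are just an arbitrary illustration, not a recommendation:

```python
import cmath
import math

# One parametric ("peaking") EQ band, using the widely published
# Robert Bristow-Johnson audio-EQ-cookbook biquad coefficients.

def peaking_coeffs(f0, gain_db, q, fs=48000.0):
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def magnitude_db(b, a, f, fs=48000.0):
    # Evaluate the filter's magnitude response at frequency f.
    z = cmath.exp(-1j * 2 * math.pi * f / fs)  # z^-1 on the unit circle
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

b, a = peaking_coeffs(400.0, -6.0, 1.4)
print(round(magnitude_db(b, a, 400.0), 1))    # -> -6.0 (full cut at center)
print(round(magnitude_db(b, a, 4000.0), 1))   # nearly 0 a decade away
```

At the center frequency the cut is the full -6dB, while a decade away the response is essentially flat; that selectivity is exactly what simple Baxandall tone controls lack.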

{Insert link to page on different eq curves' response shape}

With the basic nature and types of equalizers thus defined, we now move on to basic principles of use:

1.  In general, equalization should be used to cut frequencies rather than boost them.  If radical equalization boosts seem to be demanded by a source, then other problems are likely in play.  A different choice of microphone, or of microphone placement, should be considered, unless the heat of the moment requires the boost.

2.  The human ear is very tolerant of deep and narrow equalization cuts.  A deep, narrow "notch" filter used to take care of an issue may be very audible to you, but this is usually because you heard the source PRIOR to the equalization.  These types of cuts are often necessary to prevent feedback on things like lapel microphones, and are largely transparent to the greater audience if done carefully.

3.  The EQ bypass button can be your friend!  When applying equalization it's easy to get lost in the details and miss the bigger sonic picture.  I usually bypass the eq (if the board has that option) to make sure my eq has really improved the sound over the raw source.

4.  Proximity effect is always a factor with cardioid microphones, and is one of the most commonly equalized problems.  Proximity effect is an increase in the level of frequencies below about 500hz for most microphones when they are placed close to the source.  For some sources (e.g. guitar) the proximity effect is part of the instrument's sound, but for things like vocals it adds a "boominess" or "honkiness" that takes away from the vocal tone.  Cuts in the 100-500hz range for proximity effect are common.

5.  Avoid excessive cuts to vocals in the 2k-4khz range.  This range contains almost all of the consonant energy necessary for speech intelligibility.  If you remove too much of this energy the ability to understand speech is destroyed.

6.  Eqs respond differently.  Some equalization circuits are configured to have a lot of response as soon as the knob is turned off center, and some require the knob to be turned farther.  Don't panic if it seems like you need to turn the knob pretty far.  This is merely a choice of the console designer, and being more aggressive with the knobs will usually get the desired results.

7.  Two narrow equalizations near each other can usually be successfully replaced by one broader eq.  An exception to this is equalization for feedback, since feedback usually occurs over a very narrow band of frequencies.

8.  It's ok to have most of your eq focused below 5khz!  Most of an instrument's tone is at mid and mid-high frequencies.  Gentle shaping of the top octave with a shelving filter is often all a source needs.

9.  Don't be afraid to roll off both the low end and high end of a source, if that is what is required.  My personal example for this is electric guitars.  I will use a low shelving filter on electrics in the 100-200hz octave (depending on genre) to make room for the snare, kick, and bass.  On the high end I will either high-shelf or low-pass the top end of electric guitars.  I HATE going to a show where the electric guitars get washed out in a haze of high-frequency "hhhhhhzzzzzzz/hissssss" above about 5k.  That hash ruins the top end of the mix for cymbals/vocals/piano/snare, and I find it very fatiguing on the ears.
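To put numbers on principles 2 and 7, the standard conversions from a filter's Q to its bandwidth are BW = f0/Q in hertz and N = (2/ln 2) * asinh(1/(2Q)) in octaves:

```python
import math

# How narrow is "narrow"?  Converting a filter's Q to bandwidth puts
# numbers on deep narrow notches vs. broad musical cuts.
# Standard relations: BW_hz = f0 / Q, and bandwidth in octaves
# N = (2 / ln 2) * asinh(1 / (2 * Q)).

def bandwidth_hz(f0, q):
    return f0 / q

def bandwidth_octaves(q):
    return (2 / math.log(2)) * math.asinh(1 / (2 * q))

# A surgical feedback notch vs. a broad musical cut, both at 1 khz:
print(round(bandwidth_hz(1000, 20), 1), round(bandwidth_octaves(20), 2))
print(round(bandwidth_hz(1000, 1), 1), round(bandwidth_octaves(1), 2))
```

A Q of 20 at 1khz spans only about 50hz, under a tenth of an octave, which is why a deep feedback notch can pass unnoticed; a Q of 1 covers almost an octave and a half, and is clearly a tone-shaping move.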
-----

{Insert a couple of practical eq anecdotes/suggestions}

So, I have this thing called a thesis that kinda needs to be written...  ETA on the remainder of the thread is late summer 2k8.  Sorry, folks!