
Introduction to Mixing-Through Part 10 - Latest Update 12-08


Phillip_Graham:
Introduction to Acoustics Related to Mixing

Acoustics is indeed a massive subject, one that extends well beyond the scope of this tutorial.  I certainly am not qualified to teach a high level course on the topic.

Part 4 endeavors to outline a few basic acoustics concepts that matter to the world of mixing in a (hopefully) brief enough way that they can be retained as mental notes during mixing.

First, sound is a wave-As nice as it is to think of sound as tennis balls bouncing off the walls, sound is really a pressure wave.

What does this mean practically?  It means that sound has a wavelength (the distance between successive pressure peaks).  Longer wavelengths travel much more easily around large objects.  That is why you can stand behind a pillar in a venue and lose all the high frequency information, but still feel the bass.  The bass is diffracting around the pillar and reaching you.  The mids and highs are either bouncing off the pillar (reflection) or being sucked up (absorption).

In light of this, an important question is: when do I need to worry about the wave behavior of sound?  The short answer is in the lower frequencies (below approximately 500hz).  At 500hz the wavelength of sound is a bit over two feet, and wavelengths that size and longer bend successfully around pillars, guitar amps, doorways, speaker stacks, etc.
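
To put rough numbers on this, wavelength is just the speed of sound divided by frequency.  Here is a quick sketch of that arithmetic (Python, assuming roughly 1130 feet per second for the speed of sound in air at room temperature):

# Rough wavelength calculator: wavelength = speed of sound / frequency.
# Assumes about 1130 ft/s (roughly 344 m/s) at room temperature; illustrative only.
SPEED_OF_SOUND_FT_S = 1130.0

def wavelength_ft(freq_hz):
    """Wavelength in feet for a frequency in hertz."""
    return SPEED_OF_SOUND_FT_S / freq_hz

for f in (50, 100, 250, 500, 1000, 5000, 10000):
    print(f"{f:>6} hz -> {wavelength_ft(f):6.2f} ft")

# 50hz is over 22 feet long and wraps around nearly anything on a stage;
# 500hz is a bit over two feet; 10khz is barely over an inch.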

The corollary to this is to realize that when the wavelengths of sound get short enough, they are generally not greatly influenced by the room acoustics.  Above approximately 6khz all sound that hits the walls of a venue is either reflected like a mirror, or absorbed entirely.  This can be different in some studio-type settings that have specific devices to spread out the sound (called diffusors).  But since most live sound gigs are performed in real rooms, with little or no acoustic care, the above rule of thumb is reasonable.

Another consequence of sound waves is what are called "room modes".  The term "modes" comes from some esoteric mathematics called eigenmodes.  It is not necessary to understand the math to be able to understand the principles.  All you really need is a slinky.  Go find the nearest slinky before reading further:

Take your slinky and hold it at one end, move it up and down, and notice the variety of different waves you can produce in the slinky.  Now grab the slinky by both ends and try the same trick.  You will find that the slinky now wants to make:
1. One big wave
2. Two smaller waves half the size of the first
3. Three smaller waves of equal size
4. Four smaller waves (if you can move the slinky really fast)

So, what's happening here?  Why can't you make 1.3 waves or 2.5 waves show up in the slinky?  The reason is that as the wave travels up and down the slinky and reflects from the end, if the reflection does not match (i.e. is not "in phase" with) the incident wave, the two waves interfere and cancel out.  In the end the slinky will only support the waves that match each other at the reflection.  These are the eigenmodes of the slinky!

Notice that the waves in the slinky are fixed in both frequency and location!  This is a very important concept.  Room modes (same concept as the slinky) aren't just at specific frequencies, they show up at specific locations in the room!  Walk a few feet and the mode frequency and amplitude change.

In architectural acoustics there is a concept called the Schroeder frequency.  This frequency acts as a boundary line between where the room modes are dense and evenly spread in the space, and where they get sparse and spread out-and NOTICEABLE!  The Schroeder frequency for many rooms falls below 200hz.  The smaller the room, the higher it is-smaller wavelengths equal higher frequencies.  Room modes in a slinky are much more pronounced than in real rooms.  This is because your hands holding the ends of the slinky are a much stiffer boundary than the walls of a real room.  It turns out a consequence of the math is that the stiffer the walls, the more discrete the modes.  Rooms with heavy stone or concrete walls are generally going to have more obvious room mode distributions.
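
If you want a ballpark for where that transition sits in a given room, the usual rule of thumb is a Schroeder frequency of about 2000 times the square root of (reverb time divided by room volume in cubic meters).  Here is a small sketch of that estimate-the example rooms are invented purely to show the trend:

# Rough Schroeder frequency estimate: f_s ~ 2000 * sqrt(RT60 / V)
# RT60 = reverberation time in seconds, V = room volume in cubic meters.
# The example rooms below are invented purely for illustration.
from math import sqrt

def schroeder_freq_hz(rt60_s, volume_m3):
    """Approximate boundary below which room modes are sparse and audible."""
    return 2000.0 * sqrt(rt60_s / volume_m3)

rooms = [
    ("small rehearsal room (~60 m^3, RT60 0.5 s)", 0.5, 60.0),
    ("mid-size club (~1000 m^3, RT60 1.2 s)", 1.2, 1000.0),
    ("large hall (~8000 m^3, RT60 1.8 s)", 1.8, 8000.0),
]

for name, rt60, vol in rooms:
    print(f"{name}: about {schroeder_freq_hz(rt60, vol):.0f} hz")

# The small room lands near 180hz, the hall near 30hz--the smaller and livelier
# the room, the further the problem modes reach up into the mix.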

Below the Schroeder frequency the room modes are going to play a role in your mixing.  They may give you an exaggerated, or depressed, sense of the bass in your mix.  They may make an instrument that is thin sound boomy, or vice versa.  It is critically important to walk around away from the mix position to get a sense of the real low frequency distribution in the room!

Where does all this technical detail leave us, and what does it mean for mixing?  Here are some practical results of the above:

1.  Room modes are a fact of life-you can't eq them out because they are different every few feet! The most you can hope for is to excite a really bad one less.  Subwoofer positioning can help control what modes you excite, but that is another post.

2.  High frequency eq should be for tone shaping-eq in the last octave and a half is essentially independent of the room acoustics, as you can't tremendously influence them with an eq.

3.  The midrange is very important-the midrange represents the transition between directional sound, and sound that wraps around everything.  Most speakers/guitar amps/etc. have decent directivity at 2khz, but at 500hz have very little directivity.  A speaker pointed at the audience is spitting right on them at 2khz, but at 500hz the sound is wrapping around the speaker, onto the stage, back wall, etc.  The same holds true with amps on stage.  The same holds true with stage monitors.  The highs go at the stage, but the lows and mids spill into the audience.

4.  Longer wavelengths can combine and reinforce each other-this is especially true for speaker stacks.  One speaker by itself will sound balanced, but when you place several together, their low-mid sound interacts with the sound of the other speakers in a constructive manner (there is a short sketch of this coupling after this list).  The result is a "haystack" of low-mid energy that leads to muddy sound and a lack of clarity.  Many speaker controllers for high quality speaker systems apply filtering in this range to reduce the "mud" from coupling of multiple speakers.  That is why these controllers have presets that depend on the number and arrangement of speaker boxes being used.

5.  Everything interacts with everything, room-wise, in the lows and mids-Sometimes this interaction is constructive, and sometimes it is destructive.  Because you can't magically defy physics and increase the directionality of speakers/amps/etc., you must be willing to include the spill from amps/monitors/back wall in your overall mix picture.  This can mean removing instruments from the FOH mix, or drastic equalization to fill in "holes" in the spill.
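
For point 4 above, a toy phasor sum makes the coupling easy to see (the numbers are invented, and this is not a model of any particular rig): when neighboring boxes add in phase you gain about 6dB per doubling, but when the arrival phases are scrambled you only gain about 3dB, and that difference is the low-mid "haystack".

# Toy illustration of point 4: coherent vs. scrambled summation of identical boxes.
import numpy as np

rng = np.random.default_rng(0)

def level_increase_db(n_sources, coherent, trials=2000):
    if coherent:
        # Long wavelengths: the boxes arrive essentially in phase, amplitudes add.
        return 20 * np.log10(n_sources)
    # Short wavelengths: path differences scramble the phase; average the power.
    phases = rng.uniform(0, 2 * np.pi, size=(trials, n_sources))
    amp = np.abs(np.exp(1j * phases).sum(axis=1))
    return 10 * np.log10(np.mean(amp ** 2))

for n in (2, 4):
    print(f"{n} boxes, low-mids (in phase):     +{level_increase_db(n, True):.1f} dB")
    print(f"{n} boxes, highs (scrambled phase): +{level_increase_db(n, False):.1f} dB")

# Stacking four boxes buys roughly 12 dB in the low-mids but only about 6 dB up
# top, which is the "haystack" the controller presets try to pull back out.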

Now on to part five, and a discussion of how we hear.  Then it's back to the task of mixing in part 6.

Phillip_Graham:
Human Hearing and Mixing

The way we hear is perhaps the least well understood part of this entire process.  It is certainly the part where I have the least science knowledge.  A lot of this post is as much an observation of my own hearing, and how to analyze your own hearing, as it is about the science of hearing.  I believe it is important for any mixer to have a sense of their own ears, both good and bad.  You need to learn, or unlearn, your own hearing to have a neutral baseline behind the console.

The first thing I have noticed about my own hearing is that my two ears are different.  I have several more dB of clarity above 5khz in my right ear than in my left.  I am also much more sensitive to the 1-2khz range in my left ear than my right.  My Sensaphonic custom ER-15 earplugs clearly tell me that my right ear canal is much smaller than my left, and my experience with in ear monitors tells me that I have very small ears, and ear canals, relative to the general populace.

The next thing I have noticed about my hearing, and indeed about hearing in general, is that it is depressingly nonlinear.  My perception of lows/mids/highs is highly dependent on the volume of the sounds, and the time of exposure to those sounds.  Commercial CDs that are mastered to listening levels between about 80-90dBA are often depressingly shrill, bright, and "sizzly" at live sound concert volume levels.

Two major studies trying to characterize the nonlinearity of human hearing were Fletcher-Munson in 1933, and Robinson-Dadson in 1956.  The most recent extension of this work I am aware of is the ISO 226 standard from 2003.  The loudness contours of this standard are shown below:

[Image: equal-loudness contours from ISO 226:2003]

The way to interpret this graph is that the y (vertical) axis shows the sound pressure level needed to produce an equivalent perceived volume.  For the 80 phon equal-loudness curve it takes about 80dB of sound at 1khz to intersect the curve.  At 100hz, however, it takes about 95dB!  This clearly shows our ears are much more sensitive in the 1khz-7khz regime than at lower frequencies, in terms of equal volume perception.

Also notice that as the total volume increases, the extra level the low frequencies need in order to be perceived as equally loud shrinks-the curves flatten out.  This is why a CD that sounds thin when played quietly gets fuller and thicker on the bottom end simply by turning up the volume.  This phenomenon is very noticeable in the studio mixing setting, where turning up the nearfield monitors a few dB often makes them "come alive" and makes the mix much more impressive.

Practically all of live sound mixing lies above the 80 phon loudness curve.  Unfortunately this means a relative lack of hard data in this regime for live sound mixers.  However, certain trends in terms of perceived loudness should be very obvious quickly from the above graph.

A very obvious question that arises is "why are my ears so sensitive from 1khz to 7khz?"  The answer lies partly in the geometry of our ears, and partly in the nature of the perception of human speech.  Your ear canal cavity forms a natural resonator, and the resonance frequency is approximately 2700hz.  This coincides well with the range in which human consonant sounds are formed.
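
That resonance is roughly the behavior of a tube open at one end and closed at the other (the eardrum), so a quarter-wave estimate gets you in the neighborhood.  A crude sketch, with canal lengths picked purely for illustration:

# Quarter-wave resonance of a tube open at one end, closed at the other
# (a crude stand-in for the ear canal): f = c / (4 * L).  Lengths are examples.
SPEED_OF_SOUND_M_S = 344.0

def quarter_wave_resonance_hz(length_m):
    return SPEED_OF_SOUND_M_S / (4.0 * length_m)

for length_cm in (2.5, 3.0, 3.2):
    f = quarter_wave_resonance_hz(length_cm / 100.0)
    print(f"{length_cm} cm canal -> about {f:.0f} hz")

# Canals in the 2.5-3.2 cm range land between roughly 2.7khz and 3.4khz,
# right where the natural resonance of the ear described above sits.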

Speech (and singing) can be roughly split into two types of sounds.  The first are consonants, and the second are vowels.  Consonants carry the information of speech, and vowels carry the power.  Your ears' resonance is tuned to help pick up the information component of speech, so it makes sense that your ears would be most sensitive in this range.  Too much cutting equalization in this band can destroy the audience's ability to discern the meaning of the words being spoken or sung.

The vowel range can run from about 150hz to 1khz for most singing.  Somewhere in this range many singers, especially those with little formal training, or a specific accent, will often have a pretty specific sinus cavity resonance.  This is the "nasal" or "whiney" tone that is ascribed to singers, especially in rock music.  It is a fairly safe bet that this nasal cavity resonance lies in the octave between 400-800hz.  This resonance is often made even more prominent by the close proximity of the vocal mic to the singer's face.  Cutting equalization in this octave can reduce the nasal quality of a singer's voice.

While I am not familiar with the Asian languages, the Latin-based languages of the world have a good degree of uniformity in the nature of their speech content frequencies.  Keeping the above information in mind can help you mix effectively in a language in which you do not understand the words, simply by listening to the consonant/vowel balance.

Another thing I have noticed about my own hearing is that the more familiar I am with the words, the better I perceive them.  This can result in mixing the vocals progressively lower over time as familiarity with the songs increases.  While this remains sufficient for my personal vocal intelligibility, it can strand the audience who is less familiar with the material.

Now that we have spent a substantial amount of time discussing how our ears perceive level and speech, I now turn to a discussion of tone.  The classic thirty-one band equalizer spaces its bands a third of an octave apart.  Because that spacing is logarithmic, the upper sliders each influence a much larger swath of frequencies, in hertz, than the lower ones.  This is also a fair analog for human hearing; at low frequencies we can generally readily distinguish between tones only a few hertz apart.  People can also do the same at higher frequencies, but usually only under laboratory conditions.  As a rule of thumb, as frequency increases, our ability to distinguish a specific tone from a tone at a nearby frequency decreases.  A little shelving "air" may be enough in the last octave, but low and mid frequencies usually demand more discriminating equalization.
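
To make that spacing concrete, here is a small sketch of the third-octave series a 31-band graphic uses (simplified center frequencies, not the exact rounded ISO labels):

# Why the upper sliders of a 31-band (third-octave) graphic EQ each cover far
# more hertz than the lower ones.  Centers below use the raw 2^(1/3) series
# starting at 20hz; real units label the rounded ISO preferred frequencies.

def third_octave_centers(start_hz=20.0, bands=31):
    return [start_hz * 2 ** (i / 3) for i in range(bands)]

def bandwidth_hz(center_hz):
    # A third-octave band spans from fc / 2^(1/6) up to fc * 2^(1/6).
    return center_hz * (2 ** (1 / 6) - 2 ** (-1 / 6))

centers = third_octave_centers()
print(f"31 bands running from {centers[0]:.0f} hz to {centers[-1]:.0f} hz")
for fc in (31.5, 125, 1000, 8000, 16000):
    print(f"the {fc:>7.1f} hz slider covers roughly {bandwidth_hz(fc):6.0f} hz")

# The 31.5hz slider grabs about 7hz of spectrum; the 16khz slider grabs about
# 3700hz--equal ratios, wildly unequal spans in hertz.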

This is compounded by the reality that the fundamental tones of almost all instruments lie between 100hz and 5khz.  Now this is not a universal rule, but it is often reflected practically when behind the mixing board.  The typical high-end analog mixing console will have an adjustable highpass filter, a low shelf/parametric, two mid parametric eqs, and a high shelf/parametric.  Three or four of those five eq implements are commonly targeted at the midrange band between 100hz and 5khz!  Now, obviously many signals may need shaping above 5khz, but this shaping is of overtones, and not the raw notes/chords/tone.

I find a common mistake of starting mixers, and one that I made, was to assume that the frequency of a tone was much HIGHER than it actually was.  A musician may consider A440 on a piano a fairly high note; in reality 440hz is squarely in the midrange from a mixing and equalization perspective.  If you will recall the previous post, it is also in a range where most speakers have only moderate directivity control.  Taking this reminder into the mixing environment can improve the ability to quickly identify the problem range of frequencies.

A final point that needs to be covered on the nature of human hearing is what I call "threshold shift."  This is the activation of the muscles in the middle ear as a built in compressor to protect our hearing from continuous loud sounds.  If you have ever been to a loud concert that started out unbearable, and then became "glassy" and ok volume wise, and then noticed that the world was REALLY quiet after the show, you have experienced threshold shift.

One thing that, unfortunately, often accompanies the "really quiet" phase is ringing in the ears.  Ringing in the ears comes from leaking calcium ducts in the ears, ducts that have been damaged by excessive vibration of the inner ear hairs.  When these ducts leak, the brain falsely perceives that the tone range of that particular hair is happening continuously.  Ringing in the ears is a clear indication that you have caused acute trauma to your inner ear!  Whether or not this trauma repairs itself is a matter of debate, but the damage has been done at least temporarily.

Threshold shift typically takes a matter of minutes to set in.  For me it is about 5-10 minutes, and it typically releases between fifteen and thirty minutes after the exposure.  The louder the sound and the longer the exposure, the longer I find the release time to be.

Threshold shift is the death of good mixing for me.  My ability to judge mix balance, and quickly pick up problem frequencies, greatly diminishes.  Today I threshold shift at about 97dBA slow.  If I am asked to mix above this level I typically have to alternate songs with my earplugs in, and then out, just to keep my ears out of threshold shift.  I suspect that I threshold shift at a lower level than most people, as I am relatively young, and have taken very good care of my ears.  Threshold shift is a big motivating factor for me to try to mix at moderate levels.

I also suspect that threshold shift is one of the biggest problems between musicians and monitor engineers.  A brief soundcheck, short enough to make the monitors seem loud and clear, and not long enough to cause threshold shift, goes well for band and monitor engineer.  However, once the show is going, and the band has been subjected to high levels for a more extended period, threshold shift sets in, the sound from the wedges goes to mush in the muso's ears, and all they know to do is ask for more level!  This is the crux of so many soundcheck vs. show moments that I feel it must be a major part of the underlying cause of the problems musicians experience during the show.  Food for thought as we head to part six...

Part six will be much shorter, and less theoretical in nature.  In part six I discuss the need to be able to evaluate both the full mix and the individual instruments, and methodologies of practice to learn this art.

Phillip_Graham:
BOTH the Forest and the Trees

Most people who have worked for a regional sound company can relate to the experience of watching a band engineer obsess over one input for an entire show.  If it was a young skinny kid in Florida it might have been me, lol.

Simply put, this does not work for mixing.  Not only is it overly tweaky given the ephemeral nature of live sound, it ignores the fundamental difference between how the live audience perceives the sound versus the person mixing it.  Very few people in your audience have any ability to pick apart your mix!  Most only know whether it is too loud or not, whether it is "exciting," and a select group will notice whether or not they can hear all the instruments.  Mixing for the individual instrument is antithetical to what matters for your audience.  Their perceptions lie on the "big picture" level, and they are largely ignorant of and unconcerned with the steps necessary to get there!

I feel that learning to mix means learning to step away from the PA as your own personal, private stereo system, and also stepping away from it as a means to get the "most awesome {insert signal here}".  Ideally all signals would be most awesome, but that is often not the case, and chasing it can't be allowed to short circuit the global mixing process.

This is the first point in this tutorial where I am going to branch into something that's somewhat abstract, and for which I can't give a definite recipe for success.  I can't tell you how I learned to pick out a voice in a crowd, but I know that I can do it, and I know that if I practice it, I get better at it.  Learning to focus on mix elements has the same vibe for me.

I personally suggest learning how to pick a mix apart before learning how to put it back together.  The opposite approach may work better for other people, but that is the direction I am approaching from here.

Pick out a good quality recording of a piece of music that you know well, and that you enjoy the sound of the production on.  I would say that the genre does not matter terribly, but I feel it should relate to the types of music you find yourself mixing.  I also believe that it should include vocals, so no instrumental jazz please  

Place a particular track on repeat in headphones, or on a stereo whose tonal balance you trust.  Now try to focus on the vocals, but without processing the meaning of the words.  Focus until the words are more like notes from another instrument than carriers of a specific meaning.  Here I can't tell you exactly how to do this beyond: focus only on the ebb and flow of the tones, like humming the melody in your head to a song where you can't remember the lyrics clearly.  With practice I have found the word meanings can be separated from the sounds behind the words.  Balancing the vocals in the mix then becomes easier.  You will start to notice the loud breaths between words (from compression), or the lack thereof (because of studio editing).  You will notice how the last couple words of phrases often have several vocal overdubs, and that choruses may have different reverb than verses.

Next, focus on a percussive sound (e.g. hi hat, snare, etc.)  Focus on how the beginning and end of the sound move, and also the frequency balance.  Is the snare the bright "crack" of jazz, or the "thunk" of compressed modern rock?  When you hear the snare, what does it cover up in the mix?  Does it step on vocals, or other instruments?  Does it do this in a manner that drives the mix, or muddies it up?  If it is a dense rock mix, does the snare feel forced in, or does it have room to snap and decay?  Can you hear the release of the compressor on the decay?  Does the snare bloom up and drive the mix, or is there weird "trash" from the release time being too short?  I realize the words in this paragraph may not make complete sense to you, as it's very hard to describe some of these behaviors without hearing them directly, so worry less about my descriptors, and more about paying attention to whatever pops out at you.  You will start hearing things consistently, even if you can't put a name to them.

This process of deconstructive listening can be applied to any and all signals in a mix.  Steady practice makes it clearer, and faster.  I would suggest listening to the same song three or four times in a row in a session, then stopping, and trying again the next day with a different song.  Take it a little at a time, and it will become more automatic.

On the opposite end of the spectrum, it pays to listen to the whole mix picture.  What is the panning like: is everything in the center, or spread widely?  What background percussion is there driving the song?  Shakers, tambourines, and the like are very common in pop.  Is the chorus actually louder than the verses?  What things come in and out of the mix across the song?  Is the song very bright or unbalanced tonally?  Are the bass instruments well placed and clean, or is the bottom end "tubby"?  Is the mix very dry, or does it have lots of effects?  Are the drums up on top, or tucked in back?  Is the vocal right in the middle of the mix, or riding on top?

The things in the last paragraph are what truly set the sonic palette of the song; they were the goals the mix engineer had in mind when he used compression, eq, and other fx to influence each instrument.  Some instruments may sound very "weird" or "wrong" when focused on individually, but then actually fit well in the mix context.  Guitars or drums may seem small, dull, or thin at certain places in a song in isolation, but they may have been trimmed back by the mix engineer to let the singer howl the emotion of the chorus, or let the piano ambiance drive the song.

Sometimes the best way to make the mix seem bigger is to reduce the scope of each individual element in the mix.  There is only a fixed amount of sonic landscape to fill, and trimming something to the point of being "weak," "thin," or "dull" may indeed be just the medicine the whole mix needs.  If you can learn to pick out an individual instrument clearly, you can quickly appraise what about it is not enough/too much for the mix.  Then you can apply the needed change in eq, level, or other fx, and quickly return to the big picture.
------

Some are now probably pretty frustrated with me, shifting from something so technical to something much less concrete.  Let me try to give a practical example of something that I did while mixing that will give you an idea of what I am talking about.  Several years ago I was mixing for a regular religious event.  The music was pop rock, the levels in the mid 90s, and the band pretty cooperative, so I had lots of freedom as a mixer.  The band leader played acoustic guitar, and a fair number of songs had acoustic guitar intros.

The "tree-level" view of his acoustic guitar was that it was a piezo pickup, and had a LOT of picking noise at about 2.5khz.  It was also wonky around 400hz, and thin about 150hz.  My channel equalization was a bit of low boost, some cut around 400hz, and a pretty steep eq cut at 2.5khz.

With that as the tree level, here is the forest level mix decision:  The guitar introductions were usually simple chord progressions, or perhaps an arpeggiated chord, but they got the songs going.  Once the songs were in full swing, though, the acoustic was stuck strumming and playing on top of the main melody, or otherwise not adding to the arrangement of the song.  I would intentionally let the band instruments overwhelm the acoustic guitar after they kicked in, letting the acoustic fall to the back of the mix.  But, I would add BACK some of the picking noise at 2.5k, by reducing the eq cut!  Therefore the picking of the acoustic was still in the mix enough to give a sense that the guitar was still there, but it really wasn't a factor volume-wise, which left room for the rest of the band.  If the song ending required the return of the solo acoustic, I simply reached up and recut the 2.5khz out of the acoustic.

That was one of my better moments as a mixer, and a good example of using a known weakness with an instrument to keep that instrument in play.  I had to be able to characterize the raw acoustic sound by itself to be able to find the weakness, but I did not let fixing that weakness get in the way of the total mix picture, and even used it to my advantage.

Now on to Part 7, and how to keep everything straight during the show.

Phillip_Graham:
Keeping Track Behind the Console

Grabbing the wrong fader, one channel off from the one I intend, remains a struggle for me behind the live sound board.  I don't have large hands, and on many analog boards I can't easily control three or four faders at the same time.  Part 7 strikes very close to home for me.

The problem with trying to define a specific path around the board is that no board is ever laid out in exactly the same way, no input list is exactly the same, and there is no guarantee that a random input won't flake out at the worst time, etc.

So rather than a specific path, I want to provide tips for workflow and "anchoring" yourself in the heat of battle.

1. Place most touched channels near the master section-all the stuff you will fiddle with the most needs to be near the middle of the console.

2. Group inputs by type-guitars together, percussion together, vocals, etc.

3.  Group inputs by a personal or visual flow-I use left to right across the front of the stage (house perspective) within each section of inputs.

4.  Return effects to channels-having channel control over effect levels on the board will keep your head out of the FX rack.

5.  Anchor your non dominant hand-I am left handed, and as much as possible my right hand stays within the boundaries of the console master section during a show.  If I am turning an aux or eq with my right hand, I still try to move it back to the master section when I am done.

6. Scan down a channel strip before grabbing anything-sometimes that extra half second will save you a lot of embarrassment.

7. Label your console as much as possible-I prefer to label all channels and aux sends.  I would love it if consoles allowed labeling both the top and bottom of the channel strip.

8.  Don't Fiddle-If it's a rock show, don't fret if a rack tom is a little quieter than it should be.  If it is WAY too quiet, guess at a fader boost and leave it.

9.  Stop and listen to at least half a chorus every song-this may not be an option on the first two songs or so, but after that stop moving around enough to actually listen to the whole mix picture.

10.  If you lose your place, go back to the master section-ask yourself "is it too loud?" then "can I hear the vocals?" then "what about the guitars/piano/etc.?"  This will return you to a rhythm for mixing.

11.  Don't bury yourself in your headphones-if you need them at the beginning to solo instruments, I totally understand, and I do exactly the same, but they need to be around your neck before the middle of the set.

12.  A late addition, but a worthy one to me.  Don't bother with gates, especially for one-off events.  They are too picky to set effectively and quickly.  They have a place, but can be a major distraction from other parts of mixing.
-----

Now is the appropriate time to discuss the setting of levels and fader positions.  I firmly believe there is a correct way to do this, especially in the modern era of quiet mic preamps.  First, always mix with faders, never on trim pots.  Trim pots are left alone, unless they are clipping.

Second, set your trim pot levels with all faders on the board at unity.  Shoot for a mix that is close to correct when all the board faders are at unity.  Some inputs will obviously have less gain on the preamps than they might otherwise, but that is ok!  You may have to throw this method out in a festival setting, where the gains are going to have to be modified from the previous bands' inputs, but in general faders at unity is my preferred method.

I used to be someone who set every input right below clip on the trim pots at the soundcheck to maximize signal to noise ratio.  This sounds good in theory, but it is lousy in practice.  Everyone plays louder in the real set, and now you have a ton of clipping channels--OOPS!  Also, the taper of the faders on modern mixing boards gives them the most control around the unity gain mark.  Try running a quiet input super hot, and then mixing smoothly on the faders around -25dB; it is not a pleasant experience!
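
To see why in plain numbers, here is a toy gain-staging sketch.  Every figure in it is invented for illustration, including the assumed +22dBu clip point, but it shows the tradeoff: both approaches send the same level to the mix bus, yet only the faders-at-unity channel has real headroom left when the band plays harder during the set.

# Toy gain-staging arithmetic for one channel; all numbers are invented.
CLIP_DBU = 22.0            # assumed preamp clip point
SOURCE_PEAK_DBU = -30.0    # assumed mic level during soundcheck

def channel(trim_gain_db, fader_db, label):
    post_trim = SOURCE_PEAK_DBU + trim_gain_db     # level after the preamp/trim
    to_bus = post_trim + fader_db                  # level sent to the mix bus
    headroom = CLIP_DBU - post_trim                # margin before the preamp clips
    print(f"{label}: bus level {to_bus:+.0f} dBu, preamp headroom {headroom:.0f} dB")

channel(50, -25, "trim right below clip, fader at -25")
channel(25, 0,   "moderate trim, fader at unity")

# Both hit the bus at -5 dBu, but the hot-trim channel has only 2 dB of headroom
# left--the first loud chorus of the real set puts it into clipping, while the
# unity-fader channel still has over 25 dB to spare.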

Does the fader at unity method make the console noise a little more apparent?  The answer is of course yes, but I have never had it be a problem relative to the ambient noise on a properly set up sound system.  If a sound system has very loud "hiss" problems, then the problems are more than likely elsewhere in the gain structure.

Also, the "faders at unity" method allows a known baseline on analog consoles if you work yourself into a corner.  Placing the fader back to unity returns your levels to what you had at the outset.

If you are regularly on a digital board, the faders at unity method loses some of its gains in simplicity, but it will still help with the fader taper behavior, as the digital consoles mimic analog consoles in their fader laws.
-----

I can't guarantee my methods will give you the best rhythm here, but it is important to realize that you need a rhythm and structure behind the console, and to quickly piece together one based on your natural movement patterns.

In Part 8 we will zoom in on a channel strip, and talk about the larger picture of equalization of sources in the mix.

Phillip_Graham:
"Tuck in the Corners" of Your Mix

I started writing this tutorial with a commitment not to make it specific to any particular genre or type of event.  However, use of equalization demands at least some ability to identify the frequencies in question.  You are not going to be able to read this post and immediately know what frequencies need equalizing, but you should come away with an improved sense of how to equalize a source.

The most basic channel eq is usually a simple high/low tone control.  These are seen on guitar amps, low end mixers, home stereos, etc.  These types of eq circuits are typically of the "Baxandall" implementation type.  These most basic tone controls are largely useless in the world of live sound.  They simply paint with too large a brush stroke.

Above this basic level there are many gradations of eq amount and flexibility.  A good quality analog or digital board will typically have a low shelf eq, a high shelf eq, two mid range sweepable semiparametric/parametric eqs, and a high pass filter on each channel.  There are many variations on this theme, such as adjustable frequency high pass filters, and adjustable width (i.e. Q) sweepable filters.  Some boards let you pick whether the low and high equalizers are parametric or shelving.  Regardless, four equalization filters plus a high pass filter is fairly standard these days.  This is the level of equalization I am going to assume for the duration of this section.
----

Equalization is a difficult topic.  It requires a pretty fair amount of basic physics understanding to grasp what it does, and why it can be necessary.  I am going to try to minimize the discussion of that here.  A more detailed discussion of the science behind eq might be appropriate at the end of the mixing tutorial.

In short, signal sources have different output levels at different frequencies, and equalization can be used to change the relative balances in level between different frequency ranges of a source.

{Insert link to page on different eq curves' response shape}
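
In lieu of that page, here is a rough sketch of how a single parametric ("peaking") band's curve is commonly computed, using the widely published RBJ Audio EQ Cookbook biquad form.  The 400hz, -6dB, Q = 1.4 settings are arbitrary examples, and this is not any particular console's implementation:

# One peaking EQ band via the RBJ Audio EQ Cookbook formulas (illustrative only).
import cmath, math

def peaking_biquad(f0_hz, gain_db, q, fs_hz=48000):
    """Biquad (b, a) coefficients for a single peaking/dip EQ band."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0_hz / fs_hz
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def magnitude_db(b, a, f_hz, fs_hz=48000):
    """Filter gain in dB at one frequency."""
    z = cmath.exp(-1j * 2 * math.pi * f_hz / fs_hz)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

# A 6 dB cut at 400hz with a fairly wide Q of 1.4:
b, a = peaking_biquad(400, -6.0, 1.4)
for f in (100, 200, 400, 800, 1600):
    print(f"{f:>5} hz: {magnitude_db(b, a, f):+5.1f} dB")

# Full depth lands at the center frequency and the cut shallows out an octave to
# either side--narrow or widen the Q and that skirt changes accordingly.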

With the basic nature and types of equalizers thus defined, we now move on to the basic principles of use:

1.  In general equalization should be used to cut frequencies, rather than boost.  If radical equalization boosts seem to be demanded of a source, then other problems are likely in play.  A different choice of microphone or microphone placement should be considered, unless the heat of the moment requires the boost.

2.  The human ear is very tolerant of deep and narrow equalization cuts.  A deep narrow "notch" filter to take care of an issue may be very audible to you, but this is usually because you heard the source PREVIOUS to the equalization.  These types of equalizations are often necessary to prevent feedback on things like lapel microphones, and are largely transparent to the greater audience if done carefully.

3.  The EQ bypass button can be your friend!  When applying equalization it's easy to get lost in the details, and miss the bigger sonic picture.  I usually try to bypass the eq (if the board has that option) to make sure my eq has really improved the sound over the raw source.

4.  Proximity effect is always a factor with cardioid microphones, and can be one of the most commonly equalized things.  Proximity effect results in an increase in level of frequencies below about 500hz for most microphones when they are placed close to the source.  For some sources (e.g. guitar) the proximity effect is part of the instrument's sound, but for things like vocals it adds "boominess" or "honkiness" that takes away from the vocal tone.  Cuts in the 100-500hz range for proximity effect are common.

5.  Avoid excessive cuts to vocals in the 2k-4khz range.  This range contains almost all of the consonant energy necessary for speech intelligibility.  If you remove too much of this energy the ability to understand speech is destroyed.

6.  Eqs respond differently.  Some equalization is configured to have a lot of response as soon as the knob is turned off center, and some equalization requires the knob to be turned farther.  Don't panic if it seems like you need to turn the knob pretty far.  This is merely a choice of the console designer, and being more aggressive with the knobs will usually get the desired results.

7.  Two narrow equalizations near each other can usually be successfully replaced by one broader eq.  An exception to this is equalization for feedback, since feedback usually occurs over a very narrow band of frequencies.

8.  It's ok to have most of your eq focused below 5khz!  Most of the instrument tone is at mid and mid-high frequencies.  Gentle shaping of the top octave with a shelving filter is often all a source needs.

9.  Don't be afraid to roll off both the low end and high end of a source, if that is what is required.  My personal example for this is electric guitars.  I will use a low shelving filter on electrics in the 100-200hz octave (depending on genre) to make room for snare, kick, and bass.  On the high end I will either high shelf, or low pass, the top end of electric guitars.  I HATE going to a show where the electric guitars get washed out in a haze of high frequency "hhhhhhzzzzzzz/hissssss" above about 5k.  That hash ruins the top end of the mix for cymbals/vocals/piano/snare, and I find it very fatiguing on the ears.
-----

{Insert a couple of practical eq anecdotes/suggestions}

So, I have this thing called a thesis that kinda needs written   ETA on remainder of the thread is late summer 2k8...  Sorry folks
