By Ernie Rideout
It feels great to finish writing a song, right? It feels even better when your band learns the song well and starts to sound good performing it. And it’s even more exciting to get in a studio and record your song! What could possibly be better?
Mixing your song, of course. Nothing makes you feel as much in control of your creative destiny as sitting in front of a mixing board — virtual or physical — placing sounds left, right, and center, and throwing faders up and down.
Yeah! That’s rock ’n’ roll production at its best!
Except for one or two things. Oddly enough, it turns out that those faders aren’t meant to be moved all over the place. In fact, it’s best to move them as little as possible; there are other ways to set the track levels, at least initially. And those pan knobs are handy for placing sounds around the sound stage, but there are other ways to get sounds to occupy their own little slice of the stereo field that are just as effective, and that should be used in conjunction with panning.
Here at Record U, we’re committed to showing you the best practices to adopt to make your recorded music sound as good as it possibly can. In this series of articles, we draw upon a number of tools that you can use to make your tunes rock, including:
Faders: Use channel faders to set relative track levels.
Panning: Separate tracks by placing them in the stereo field.
EQ: Give each track its own sonic space by shaping sounds with equalization.
Reverb: Give each track its own apparent distance from the listener by adjusting reverb levels.
Dynamics: Smooth out peaks, eliminate background noises, and bring up the level of less-audible phrases with compression, limiting, gating, and other processes.
In this article, we’re going to focus on two of these tools: gain staging (including setting levels with the faders) and panning. As with other tools we’ve discussed in this series, these have limitations, too:
- They cannot fix poorly recorded material.
- They cannot fix mistakes in the performance.
- Any change you make with these tools will affect changes you’ve made using other tools.
It’s really quite easy to get the most out of gain staging and panning, once you know how they’re intended to be used. As with all songwriting, recording, and mixing tools, you’re free to use them in ways they weren’t intended. In fact, feel free to use them in ways that no one has imagined before! But once you know how to use them properly, you can choose when to go off the rails and when to stay in the middle of the road.
Before we delve into levels, let’s back up a step and talk about how to avoid the pitfall of poorly recorded material that we at Record U keep warning you about.
Before your mixing board can help you to make your music sound better, your music has to sound good. That means each instrument and voice on every track must be recorded at a level that maximizes the music, minimizes noise, and avoids digital distortion.
If you’re a guitarist or other tube amp-oriented musician, you’re likely to use digital distortion every day, and love it. That’s modeled distortion, an effect designed to emulate the sound of overdriven analog tube amplifiers — which is a sound many musicians love.
The kind of digital distortion we don’t want is called clipping. Clipping occurs when a signal is overdriving a circuit, too, but in this case the circuit is in the analog-to-digital converters in your audio interface or recording device. These circuits don’t make nice sounds when they’re overdriven. They make really ugly noise — garbage, in fact. It can be constant, or it can last for a split-second, even just for the duration of a single sample. If that noise gets recorded into your song, you can’t get rid of it by any means, other than to simply delete the clipped sections. Or re-record the track.
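A toy model makes the damage clear. This Python sketch (an illustration only, not how any real converter is implemented) clamps samples to the converter’s range; once the peaks are sheared off, no later gain change can bring them back:

```python
import math

def record(sample: float) -> float:
    """Clamp a sample to the converter's range, as an overdriven ADC does."""
    return max(-1.0, min(1.0, sample))

# A sine wave driven 6 dB too hot (twice full scale):
hot = [2.0 * math.sin(2 * math.pi * t / 64) for t in range(64)]
clipped = [record(s) for s in hot]

# The tops of the waves are sheared flat (the "buzz haircut" look):
print(max(clipped))                    # never exceeds 1.0
# Turning the clipped track down later does NOT restore the peaks;
# it just gives you a quieter flat-topped wave:
quieter = [0.5 * s for s in clipped]
print(max(quieter))
```

Several consecutive samples sit pinned at exactly 1.0, which is what the shorn-off waveform in Fig. 5 is showing you.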
The way to avoid clipping is to pay close attention to the sound as it’s coming in to your recording device or software, and then act to eliminate it when it occurs. There are several things that can indicate a clipping problem:
1. Your audio interface may have input meters or clipping indicators; if these go into the red, you’ve got a problem. Clip indicators usually stay lit once they’ve been triggered, so you know even if you’ve overloaded your inputs only for a split second.
Fig. 1. Having multi-segment meters on an audio interface is handy, like these on the MOTU Traveler; if they look like this as you’re recording tracks, though, you probably have a serious clipping problem.
Fig. 2. Many audio interfaces have a simple LED to indicate clipping, as on the Line 6 Toneport KB37; here the left channel clip indicator is lit, indicating clipping has occurred. Bummer.
2. Your recording device or software probably has input meters; if the clipping indicator lights up on these, you’ve got a problem. In Reason, you have two separate input meters with clip indicators.
Fig. 3. In Reason, each track in the Sequencer window has an input meter. As with clip indicators on your recording hardware, these clip indicators stay lit even if you’ve gone over the limit for a split second — this input looks kind of pegged, though.
Fig. 4. The Transport Panel in Reason has a global input meter with a clip indicator as well.
3. The waveform display in your recording software’s track window can indicate clipping. If your waveforms look like they’ve gotten a buzz haircut, you may have a problem.
Fig. 5. If you see a waveform in a track that resembles the one on the left, you probably have a clipping problem. But if it’s this bad, you’ll probably hear it easily.
These are helpful indicators. But the best way to avoid clipping is to listen very carefully to your instruments before you record, or right after recording a sound check. Sometimes clipping can occur even though your input meters and audio waveforms all appear to be fine and operating within the boundaries of good audio. Other times you may see your clip indicators all lit up, but you might not be able to detect the clipping by ear; this can happen if the clipping lasted just for an instant. It’s worth soloing the track to see if you can locate the clipping; if you don’t find it now, it may turn up as a highly audible artifact when you’re farther along in your mixing process, like when you add EQ.
How do you crush clipping? If you detect clipping in a track during recording, eliminate it by doing one of the following:
- Adjust the level of the source. Lower the volume of the amplifier, turn down the volume control of the guitar or keyboard.
- If the tone you’re after requires a loud performance, then lower the levels of the input gain knobs on your audio interface until you’re getting a signal that is not clipping.
- Use the pad or attenuator function on your audio interface. Pads typically lower the signal by 10, 20, or 30 dB, which might be enough to let you keep your excellent loud tone while avoiding overloading the inputs. Usually the pad is a switch or a button on the front or back panel of the audio interface.
- Sometimes the overloading is at the microphone itself. In this case, if the mic has a pad, engage it by pushing the button or flipping the switch. This will usually get you a 10 dB reduction in signal.
- Sometimes the distortion comes from a buildup of lower frequencies that you may not need to make that particular instrument sound good. In this case, you can move the mic farther away from the source, which will bring down the level. If the mic has a highpass filter, engage it, and that will have a similar effect.
The reverse problem is just as bad: an audio signal that’s too quiet. The problem with a track that’s too soft is that the ratio between the loudest part of the music and the background noise of the room and circuitry of the gear isn’t very high. Later, when you’re running the track through the mixer, every stage that amplifies the track will also amplify the noise.
The fix for this is simpler: Move the mic closer to the source, turn the source up, turn off the pad and highpass filters on the mic, or turn up the gain controls on your audio interface.
The goal is to make the loud parts of each track as loud and clean as possible as you record, while avoiding clipping by any means. That doesn’t mean the music has to be loud; it just means that the loudest part of each track should get into the yellow part of the input meters.
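In digital terms, “how close to clipping” is measured in dB below full scale (dBFS), where 0 dBFS is the clipping point. Here’s a minimal sketch of the arithmetic a peak meter performs (illustrative values, not from any particular interface):

```python
import math

def peak_dbfs(samples) -> float:
    """Peak level in dB relative to full scale (0 dBFS = clipping point)."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak)

# A healthy take: loudest sample at half of full scale is about -6 dBFS,
# comfortably "in the yellow" with headroom to spare.
print(round(peak_dbfs([0.1, -0.5, 0.3]), 1))   # -6.0
# A take peaking at 0.05 is around -26 dBFS: too quiet, so every later
# gain stage will amplify the noise floor along with the music.
print(round(peak_dbfs([0.05, -0.02]), 1))      # -26.0
```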
Now that you’ve spent all that time making sure each track of your song is recorded properly, you’d think the next thing we’d tell you is to start adjusting the relative levels of your tracks by moving those gorgeous, big faders. They’re so important-looking, they practically scream, “Go on, push me up. Farther!”
Don’t touch those faders. Not yet. You heard me.
At this point in the mixing process, those big, beautiful faders are the last things you need. What you need is far more important: the gain knob or trim control. And it’s usually hidden nearly out of sight. It certainly is on many physical mixers, and it is on the mixer in Reason as well. Where the heck is it?
Fig. 6. The gain knob or trim control is often way at the top of each channel on your mixer. In Reason, it’s waaaaaaaay up there. Keep scrolling, you’ll find it.
This little dial is usually way up at the top of the channel strip. Why is it way the heck up there, if it’s so important?
It has to do with signal flow, for one thing. When you’re playing back your recorded tracks, the gain knob is the first stage the audio passes through, on its way through the dynamics, EQ, pan, insert effects, and channel fader stages.
You use the gain control to set the levels of your tracks, prior to adding dynamics, EQ, panning, or anything else. In fact, when setting up a mix, your first goal is to get a good static mix, using only the gain controls. A static mix is called that because the levels of all the track signals are static; they’re not being manipulated as the song plays, at least not yet.
Those beautiful big channel faders? They should all be lined up on 0, or unity gain. All tidy and shipshape.
Instead of using the faders, use the gain controls to make the level of each track sound relatively equal and register on the channel meter between -5 and -10 dB. Using your ears as well as the meters, decrease or increase the gain of each track until most of its material is hitting around -7 dB. You can do this by soloing tracks, listening to tracks in pairs or other groups, or making adjustments while you listen to all the tracks simultaneously.
The gain control target is -7 dB for a couple reasons. Most important, as you add EQ, dynamics, and insert effects to each track, the gain is likely to increase, or at least has the potential to increase. Starting at -7 dB gives each track room to grow as you mix. Even if you don’t add any processing that increases the gain, the tracks all combine to increase the gain at the main outputs, and starting at this level helps you avoid overloading at the outputs later.
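The headroom argument is easy to see with some back-of-the-envelope arithmetic (rule-of-thumb numbers, not a claim about any particular mixer):

```python
import math

def dbfs(peak: float) -> float:
    """Express a linear peak amplitude in dB relative to full scale."""
    return 20 * math.log10(peak)

# Eight tracks, each peaking around -7 dBFS (linear amplitude ~0.447):
track_peak = 10 ** (-7 / 20)

# Worst case, all eight peaks line up and simply add at the master bus:
summed = 8 * track_peak
print(round(dbfs(summed), 1))   # about +11.1 dBFS

# Typical case: uncorrelated tracks sum closer to sqrt(N) times one track:
typical = math.sqrt(8) * track_peak
print(round(dbfs(typical), 1))  # about +2.0 dBFS
```

Either way, the bus level grows as tracks pile up; had each track been sitting near 0 dBFS instead of -7, both figures would be 7 dB hotter, which is why the lower starting point buys you safety at the main outputs.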
Why shouldn’t you move the faders yet? After all, they sure look like they’re designed to set levels!
Hold on! The faders come in later in the mixing process, and we want them all to start at 0 for a couple of reasons. The scale that faders use to increase or decrease gain is logarithmic: most of the fader’s physical travel is devoted to the few dB around unity, while the bottom of the throw compresses a huge dB range into a short distance. In other words, if your fader is down low, it’s difficult to make useful adjustments to the gain of a track, since the resolution at that end of the scale is low. If the fader is at 0, you can make small adjustments and get just the amount of change you need to dial in your mix levels.
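A hypothetical fader taper illustrates the resolution problem (the exact curve varies by console; this one is invented purely for the example):

```python
def fader_db(position: float) -> float:
    """Map fader position (1.0 = top/unity gain, 0.0 = bottom) to dB.

    Invented taper for illustration: the top half of the throw covers
    0 to -10 dB; the bottom half covers -10 dB all the way down to -90.
    """
    if position >= 0.5:
        return -10 * (1.0 - position) / 0.5           # 0 .. -10 dB
    return -10 + (-80 * (0.5 - position) / 0.5)       # -10 .. -90 dB

# Near the top, 5% of the fader's travel moves the level by just 1 dB,
# which is easy, precise fine control:
print(round(fader_db(1.0) - fader_db(0.95), 2))   # 1.0
# Near the bottom, the same 5% of travel jumps the level by 8 dB:
print(round(fader_db(0.10) - fader_db(0.05), 2))  # 8.0
```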
The other reason is headroom. You always want to have room above the loudest parts of your music, in case there are loud transients and in case an adjustment made to EQ, dynamics, or effects pushes the track gain up. Plus, moving a fader up all the way can increase the noise of a track as much as the music; using EQ and dynamics on a noisy track can help maximize the music while minimizing the noise; the fader can stay at 0 until it’s needed later.
Once you have each track simmering along at -7 dB, you’re ready to move on to the other tools available for your mix: EQ, dynamics, effects, and panning. As you make changes using any of these tools, you may want to revise your static mix levels. And you should; just keep using the Gain control rather than the faders, until it’s time to begin crafting your final mix.
It’s more than a phase
As you’re checking the level of each track, you may find the little button next to the gain control useful: It’s the invert phase control. In Reason, this button says “INV,” and by engaging it, you reverse the phase of the signal in that channel. It’s good that this button is located right next to the gain knob, because it’s during these first steps that you might discover a couple of tracks that sound too quiet, softer than you remember the instrument being. Before you crank the gain knob up for those tracks, engage the INV button to invert the phase, and see if the track springs back to life.
Fig. 7. It’s small, but it comes in handy! The invert phase control can solve odd track volume problems caused by out-of-phase recording.
If so, it’s because that particular sound was captured by two or more mics, and because of the mic locations, they picked up the waveform at different points in its cycle. When played back together, these tracks cancel each other out, partially or sometimes entirely, since they play back “out of phase” waveforms. The INV button makes it easy to flip the phase of one of the tracks so that the waveforms are back in phase, and the tracks sound full again.
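The cancellation is easy to demonstrate numerically. This sketch sums two copies of a sine wave captured half a cycle apart, then flips the polarity of one of them, which is what the INV button does:

```python
import math

N = 64
# Two mics capture the same sine wave, the second one half a cycle late:
mic_a = [math.sin(2 * math.pi * t / N) for t in range(N)]
mic_b = [math.sin(2 * math.pi * t / N + math.pi) for t in range(N)]

# Summed as-is, the out-of-phase tracks cancel almost completely:
cancelled = max(abs(a + b) for a, b in zip(mic_a, mic_b))
# Invert the polarity of one track and the full signal comes back:
restored = max(abs(a - b) for a, b in zip(mic_a, mic_b))

print(round(cancelled, 6))  # essentially silence
print(round(restored, 2))   # 2.0 -- the two mics now reinforce each other
```

Real recordings are rarely a perfect half-cycle off, so you usually get partial cancellation (a thin, hollow sound) rather than total silence, but the fix is the same flip.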
All of the tools available to you for mixing your song are there to help you craft each track so that it serves the purpose you want it to for your song. For some tracks, this means creating a space just for that particular sound, so it can be heard clearly, without the interference of other sounds. For other tracks, that might mean making the sound blend in with others, to create a larger, aggregate sound.
Just as with EQ, dynamics, and effects, panning is one of the tools to help you achieve this. And just as when you use EQ, dynamics, or effects, any change you make in panning to a track can have a huge effect on the settings of the other tools.
Unlike the other tools, though, panning has just one control, and it only does one thing. How hard could it be to use?
As it happens, there are things that are good to know about how panning works, and there are some things that are good to avoid.
The word “panning” comes from panorama, and it means setting each sound in its place within the panorama of sound. It’s an apt metaphor, because it evokes a wide camera angle in a Western movie, or a theater stage. You get to be the director, telling the talent — in this case, the tracks — where to stand in the panorama: left, right, or center, and downstage or upstage.
Panning handles the left, right, and center directions. How can two-track music that comes out of stereo speakers have anything located in the center? It’s an interesting phenomenon: When the two stereo channels share audio information, our brains perceive a phantom center speaker between the two real speakers that transmits that sound. Sometimes it can seem to move slightly, even when you haven’t changed the panning of any sounds that are panned to the center. But usually it’s a strong perception, and it’s the basis of your first important decisions about where to place sounds in the stereo soundscape.
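The phantom center is also why pan pots typically dip each channel by a few dB at the center position. A common constant-power pan law looks like the sketch below; the exact curve is an assumption here, not necessarily the one Reason uses:

```python
import math

def constant_power_pan(pan: float):
    """Return (left_gain, right_gain) for pan in [-1.0 hard left .. +1.0 hard right].

    Constant-power law: the summed acoustic power of the two speakers
    stays the same at every pan position, so sounds don't get louder or
    softer as you sweep them across the stereo field.
    """
    angle = (pan + 1.0) * math.pi / 4   # 0 .. pi/2
    return math.cos(angle), math.sin(angle)

# Dead center: each speaker gets ~0.707 (-3 dB), and the brain fuses the
# two into one phantom source between the speakers.
left, right = constant_power_pan(0.0)
print(round(left, 3), round(right, 3))   # 0.707 0.707
print(round(left**2 + right**2, 3))      # total power stays at 1.0
```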
Once you’ve established the sounds you want to be in the center, you’ll walk straight into one of the great debates among producers and mixing engineers, which is where to place sounds to the left and right. The entire controversy can be summed up in this illustration courtesy of the amazing Rick Eberly:
Fig. 8. This is controversial? Whether the drums face forward or back? You’re kidding, right?
Couldn’t be more serious.
Well, more to the point, the debate is about whether you want your listeners to have the perspective of audience members or of performers. The choice is pretty much summed up in where you place the hi-hat, cymbals, and toms in your stereo soundscape. Hi-hat on the left, you’re thinking like a drummer. Hi-hat on the right, you’re going for the sound someone in the first row would hear.
You really don’t need to worry about running afoul of the entire community of mixing engineers. You can do whatever you want to make your music sound best to you. But it’s good to keep in mind the concept of listener perspective; being aware of where you want your audience to be (front row, back row, behind the stage, on the stage, inside the guitar cabinet, etc.) can help you craft the most effective mix.
Just as important as perspective is the related concept of balance. In fact, many mixing engineers and producers refer to the process of placing sounds in the stereo soundscape as “balancing,” rather than “panning.” Of course, they include level setting in this, too. But for now, let’s isolate the idea of “balance” and apply it to the placing of sounds in the stereo soundfield.
Here’s the idea. In a stereo mix of a song or melodic composition, the low frequency sounds serve as the foundation in the center, along with the main instrument or voice. On either side of this center foundation, place additional drums, percussion, chordal accompaniment, countermelodies, backing vocals, strumming guitars, or synthesizers. Each sound that gets placed to one side should be balanced by another sound of a similar function panned to a similar location in the opposite side of the stereo field.
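As a crude bit of bookkeeping, you can sanity-check a layout like this by treating pan positions as numbers from -1 (hard left) to +1 (hard right); when similar-function sounds mirror each other, the positions sum to roughly zero. All of the values below are hypothetical, just to illustrate the idea:

```python
# Hypothetical pan positions for a balanced mix in the spirit of Fig. 9
# (-1 = hard left, 0 = center, +1 = hard right); illustrative only.
pans = {
    "kick": 0.0, "snare": -0.2, "hi-hat": -0.4, "cymbal": 0.2, "tom": 0.4,
    "bass": 0.0, "lead gtr 1": -0.1, "lead gtr 2": 0.1,
    "horns": -0.5, "organ": 0.5, "shaker": -0.7, "tambourine": 0.7,
}

# Mirrored pairs (hi-hat/tom, snare/cymbal, horns/organ, shaker/tambourine)
# cancel out, so the left and right sides carry equal weight overall:
balance = sum(pans.values())
print(round(abs(balance), 2))  # 0.0
```

Levels and frequency content matter at least as much as pan position, so this is a starting point for your ears, not a substitute for them.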
Here’s one way this could look.
Fig. 9. There are many ways to diagram sounds in a mix. This simple method mimics the pan pot on the mixer channels in Reason. At the center: you (sorry about the nose). In front of you are the foundation sounds set to the center. To either side of you are the accompanying sounds that have been placed to balance each other. In this song, we’ve got drums, bass, two electric guitars playing a harmonized melody, organ, horns, and a couple of percussion instruments. Placing sounds that function in a similar way across the stereo field equally (snare and hi-hat — cymbal and tom; horns — organ; shaker — tambourine) make this mix sound balanced from left to right; when we get to setting levels, we might choose to reinforce this by matching levels between pairs.
And now you know where we stand on the perspective debate . . . at least for this clip.
Fig. 10. Here’s another approach to a mix. We’ve put the horns and organ in the center. This is still balanced, but this approach may not give us one critical thing we need from everything we do in a mix: a clear sonic space for each instrument. We’ll hear how this sounds in a bit.
Fig. 11. Here’s yet another balanced approach, this time putting the horns as far to the left and right as possible. Though valid, this also presents certain problems, as we’ll hear shortly.
Fig. 12. This diagram represents a mix that looks balanced, but when you listen to it, you’ll hear that it’s not balanced at all. The foundation instruments are not centered, for one thing, and this has a tremendous impact. For most studio recordings, this approach might be disconcerting to the listener. But if your instrumentation resembles a chamber ensemble or acoustic jazz group and you’re trying to evoke a particular relationship between the instruments, this could be just the approach your composition needs. We’ll see how it works out in the context of a rock tune a little later.
Set up a static mix
Let’s go through the process of setting up a static mix of a song, using the steps and techniques we’ve talked about to set the gain staging, the levels, and the balance. As it happens, the song we’ll work on is just like the hypothetical song on which we tried several different panning scenarios.
A couple of interesting things to know about this song. All the instruments come from the ID8, a sound module device in Reason, except for the lead guitars, which are from an M-Audio ProSessions sample library called Pop/Rock Guitar Toolbox. The drum pattern is from the Record Rhythm Supply Expansion, which is available at no charge in the Downloads section of the Propellerhead website.
The Rhythm Supply Expansion contains Reason files, each with a great selection of drum patterns and variations in a variety of styles, at a range of tempos. The really cool thing about the Expansion files is that they’re not just MIDI drum patterns; they include the ID8 device patches, too — just select “Copy Device and Track” from the Edit menu when in Sequencer view, then paste the track and ID8 into your song file. With the Rhythm Supply Expansion tracks, your ID8 becomes a very handy songwriting and demoing tool.
All right. We’ve completed our tracking session, and we’re happy with the performances. We’re satisfied that we have no clipping on any of the tracks, since we had no visual evidence of clipping during the tracking (e.g., clip LEDs or pegged meters on the audio interface, clipped waveforms in the Sequencer view) and we heard no evidence of clips or digital distortion when we listened carefully. Now let’s look at our mixer and see what we need to do to set the gain staging and levels.
Fig. 13. First things first, though: Bypass the default insert effects in the master channel to make sure you’re hearing the tracks as they really are (click on the Bypass button in the Master Inserts section).
Fig. 14. While the levels of each track seem to be in the ballpark, it’s clear that there is some disparity between the guitars (too loud, what a surprise) and the rest of the instruments. Quick! Which do you reach for, the faders or the gain knobs? Let’s collapse the mixer view by deselecting the dynamics, EQ, inserts, and effects sections of the mixer in the channel strip navigator on the far right of the mixer. Now we can see the Gain knobs and the faders at the same time. Still confused about which to reach for to begin adjusting these levels? Hint: Leave the faders at 0 until the final moments of your mixing session! Okay, that was a little more than a hint. Have a listen to the tracks at their initial levels:
Fig. 15. Using only the Gain knobs, we’ve adjusted the level of each track so that, a) we can hear each instrument equally, and b) the levels of each track centers around -7 dB on its corresponding meter. Even though we brought up the levels of the non-guitar tracks, the overall master level has come down, which is good because it gives us more headroom to work with as we add EQ, dynamics, and effects later. Ooh, and look at how cool those faders look, all centered at 0! Let’s hear the difference now that the levels are all in the ballpark:
Since a big part of this process is determining exactly where each drum and percussion sound will go, let’s take that Rhythm Supply Expansion stereo drum track and explode it so that each instrument has its own track. This is easy to do: Select the drum track in Sequencer view, open the Tool window (Window > Show Tool Window), click on Extract Notes to Lanes, select Explode, and click Move. Presto! All your drum instruments are now on their own lanes. (Watch out: your hi-hat is separated into two lanes, one containing the closed sound and the other the open sound. Keep that in mind, or combine them into a single track.) Copy each lane individually, and paste them into their own sequencer tracks. Now each drum instrument has its own track, and you can pan each sound exactly as you want.
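Conceptually, the Explode operation just groups notes by pitch. This sketch mimics the idea with invented General MIDI-style drum notes; it is not Reason’s actual implementation:

```python
from collections import defaultdict

# A one-bar drum pattern as (time_in_beats, midi_pitch) pairs.
# Pitches follow General MIDI drum conventions: 36 = kick, 38 = snare,
# 42 = closed hi-hat, 46 = open hi-hat (assumed for illustration).
notes = [
    (0.0, 36), (0.0, 42), (0.5, 42), (1.0, 38), (1.0, 42), (1.5, 46),
]

# "Explode": every distinct pitch gets its own lane.
lanes = defaultdict(list)
for time, pitch in notes:
    lanes[pitch].append(time)

for pitch, times in sorted(lanes.items()):
    print(pitch, times)
# Note that 42 (closed hat) and 46 (open hat) land on separate lanes,
# exactly the hi-hat split the paragraph above warns about.
```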
Let’s listen to the process of balancing. We’ll build the foundation of our mix first, starting with the drums, then adding the bass, then the lead instruments, which are the guitars. Let’s mute all tracks except the drums and then pan the drum tracks to the center:
Sounds like a lot of drums crammed into a small space. There is a shaker part that you can’t even hear, because it’s in the same rhythm as the hi-hat. Let’s pan them, as you see in Fig. 9. Hold on tight, we’re taking the performer’s perspective, rather than the audience’s:
That opens up the sound a great deal. You can hear the shaker part and the hi-hat clearly, since they’re now separated. Even a very small amount of stereo separation like this can make a huge difference in the audibility of each instrument. Now let’s add the bass, panned right up the center, since it’s one of our foundation sounds:
The bass and kick drum have a good tight sound. Now let’s un-mute the two lead guitar tracks. We’ll pan these a little to the left and right to give them some separation, but still have them sound clearly in the center:
So far, we’ve got a nice foundation working in the center. All the parts are clearly audible. Sure, there’s a lot of work we could do even at this stage with EQ, dynamics, and reverb to make it sound even better. Let’s resist that urge and take the time to get all the tracks balanced first. Now we’ll un-mute the organ and horn tracks. These two instruments play an intertwining countermelody to the lead guitars. They’re kind of important, so let’s see what they sound like panned to the center, as in Fig. 10:
Wow. There is a lot going on in this mix. The parts seem to compete with each other to the point where you might think the horn and organ parts are too much for the arrangement. Let’s try panning the horns and organ hard left and hard right — in other words, all the way to the left and right, respectively:
Well, we can hear the horns and organ clearly, along with all the other parts. So that’s good. But they sound like they’re not really part of the band; it sounds unnatural to have them panned so far away. Let’s try panning them as you see in Fig. 9:
Fig. 16. This screenshot shows our static mix, with all track levels adjusted, all sounds balanced, and all faders still at 0. Now we’re ready to clarify and blend the individual sounds further with EQ, dynamics, reverb, and level adjustments, which you can read about in the other articles here at Record U.
Wait! What about the balancing example that was out of balance, in Fig. 12? How does that sound? Check it out:
The big problem is that the foundation has been moved. In fact, it’s crumbled. The sound of this mix might resemble what you’d think a live performance would sound like if the performers were arranged across the stage just as they’re panned in this example. But in reality, the sound of that live performance would be better balanced than this example, since the room blends the sounds, and concertgoers would perceive a more balanced mix than you get when you try to emulate a stage setup with stereo balancing.
That’s not to say you shouldn’t take this approach to balancing if you feel your music calls for it. Just keep in mind what you consider to be the foundation of the composition, and make sure to build your mix from those sounds up.
And don’t touch those faders yet!
Based in the San Francisco Bay Area, Ernie Rideout is Editor at Large for Keyboard magazine, and is wr