Book Review: Mixing Secrets For the Small Studio

This book, written by producer Mike Senior, is fantastic:

If only Mixing Secrets For The Small Studio had existed 10 years ago, my music would have been impeccable! (Well, I like to think so.) This book is a magical tome for anyone who records or produces music on a budget. It’s packed with big reveals, and explains the science behind each mixing technique. Forget the accumulated hit-or-miss wisdom of the internet; after reading this book, I found that I could produce substantially better mixes immediately. That’s amazing. (My mixes still aren’t great, but I’m working on it!)

Here are my favorite takeaways from the 20 chapters. I’m writing this to lock these concepts in my head. I’m skimming lots of material, because there’s so much valuable information packed into this book that I can’t possibly recap all of it.

Part 1: Hearing and Listening

My Rokit KRK-8 studio monitors are distorting my mixes. They’re “ported,” meaning they have an open-air port cut into the body. The manufacturer claims that this helps the air move freely; in practice, the port creates noisy air turbulence and muddies the bass frequencies. That’s bad. The book suggests plugging the ports with socks; the sound might become clearer! Try it, listen carefully, trust your ears…

The goal of acoustically treating your studio is to tame reverberant frequencies, not to create an echoless space. Also, stereo is overrated; try mixing in mono. If this advice sounds heretical, consider the typical consumer’s listening environment: Computer speakers, stuffed in a bookcase, clear across the room. Car speakers, blaring inside a noisy, reflective space. Restaurant speakers, bolted to the ceiling in semi-random locations. Your mix should sound good in mono, because most people will never give it a proper stereo listening. And there will always be room reflections in any listening space, so don’t completely Nerf your room. Allow your walls to sweeten the sound.

Headphones are a special exception. The book cautions that some headphones have inadequate bass response (as do some studio monitors, of course). Music may sound oddly “clean” without the natural reflections of a room. Personally, I love modern headphones. I bought a pair of Beyerdynamic DT 770 headphones this summer. It was the single best musical purchase I’ve ever made. They have clear, deep bass response, which rattles my skull a bit. The treble is impressive, and it reaches very high into the range of human hearing. I can finally hear the full audio spectrum (unlike running my Rokit KRK-8’s through a Mackie mixer), so the headphones have been a godsend for me. I can’t imagine returning to my KRK-8’s in my oddly-shaped studio space.

The biggest takeaway from this section is learning how to use other songs as reference material. Create a workflow where you can rapidly switch between your mix and several reference tracks, to keep your ears fresh. Listen to how your mix compares with the professionals’. The author repeats this throughout the book: It is possible to create stunning mixes in a small studio environment. Checking your work against professional mixes is crucial.

Part 2: Mix Preparation

It’s okay to use pitch correction, rhythm correction (such as Ableton Live’s time markers), recreate the bassline on a synthesizer and mix it in, or rearrange the band’s song. Oh, yes indeed. They’re paying you to produce good music, and if that means fixing their mistakes, and using smoke and mirrors, then so be it!

Consider the overall flow of the song. Each instance of the chorus could have a different intensity. Perhaps each one gets (perceptibly) louder, except for the “drop chorus” near the end: play one chorus with some instruments cut out, then a final chorus with everything driving the song home.

You may need to “mult” recorded tracks, which means splitting them up into separate layers, with different processing for each layer. The rhythm guitar should subtly slip into the background when the vocalist is singing, for example. Now that I’m aware of multing, I can’t help but hear it used everywhere!

Part 3: Balance

Every layer will need a high-pass filter, probably no steeper than 18 dB/octave. Raise the filter frequency until the track feels like something is missing, then lower the frequency a bit. This liberates the precious low-frequency space, so the kick drum and bass won’t have to fight the other instruments in a bath of bass mud.
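
Out of curiosity, I sketched the idea in code. This is a minimal pure-Python sketch of my own (not a recipe from the book): a one-pole high-pass stage has a 6 dB/octave slope, so cascading three stages approximates the 18 dB/octave figure. The cutoff, test frequencies, and function names are my inventions.

```python
import math

def highpass_stage(x, cutoff_hz, sample_rate):
    """One-pole high-pass stage: a 6 dB/octave slope."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    y, prev_x, prev_y = [], 0.0, 0.0
    for s in x:
        prev_y = a * (prev_y + s - prev_x)
        prev_x = s
        y.append(prev_y)
    return y

def highpass_18db(x, cutoff_hz, sample_rate):
    """Cascade three 6 dB/octave stages for an ~18 dB/octave slope."""
    for _ in range(3):
        x = highpass_stage(x, cutoff_hz, sample_rate)
    return x

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

sr = 44100
t = [n / sr for n in range(sr)]                       # one second of audio
low = [math.sin(2 * math.pi * 60 * s) for s in t]     # 60 Hz rumble
high = [math.sin(2 * math.pi * 2000 * s) for s in t]  # 2 kHz content

fc = 300.0  # raise until something sounds missing, then back off a bit
print(rms(highpass_18db(low, fc, sr)))   # rumble is heavily attenuated
print(rms(highpass_18db(high, fc, sr)))  # 2 kHz passes nearly untouched
```

The 60 Hz rumble nearly vanishes while the 2 kHz content passes almost untouched, which is exactly the “free up the low end” effect.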

Now it’s time to balance the levels. Mute every track. Slowly unmute the tracks, one at a time. Start with the most important section; start with the most important instrument. Adjust each track’s volume so it fits with the mix, then don’t touch it afterwards. After you’ve leveled every track, your mix should already sound pretty good.

Compression is primarily used to stabilize the balance of individual layers. Some tracks will have a steady balance without compression, and that’s okay! Leave them alone. Other tracks may need one compressor to reduce momentary spikes in amplitude, another to even out the amplitudes of individual notes, and so on. Try parallel compression (aka “New York compression”): send a track through a heavy compressor on a second channel, then blend the squashed copy back in with the dry signal; this evens out the sound while preserving its original character.
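
Here’s parallel compression in miniature (my own toy code, not anything from the book): a crude instant-attack compressor squashes a copy of the signal, and the mix sums the dry and wet copies. The threshold, ratio, and blend values are arbitrary.

```python
def compress_hard(x, threshold, ratio):
    """Crude instant-attack compressor: above the threshold, gain is divided by `ratio`."""
    out = []
    for s in x:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def parallel_compress(dry, threshold=0.2, ratio=10.0, blend=0.5):
    """New York compression: sum the dry signal with a heavily squashed copy."""
    wet = compress_hard(dry, threshold, ratio)
    return [d + blend * w for d, w in zip(dry, wet)]

quiet, loud = 0.1, 1.0
dry = [quiet] * 100 + [loud] * 100   # a quiet passage, then a loud one
mix = parallel_compress(dry)
print(loud / quiet)        # 10-to-1 loud/quiet ratio in the dry signal
print(mix[150] / mix[50])  # noticeably smaller in the mix (about 7.6-to-1)
```

The quiet material is lifted relative to the peaks, yet the dry signal’s character rides on top untouched, which is the whole appeal of the technique.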

Expanders (and gates) are typically not essential. If they’re used, put them before compressors in your effects chain. If your drum beat needs more punch, try sending the track to a parallel channel, gating it (to isolate just the attacks), and applying EQ and a little distortion before mixing it back in.

Equalization (“EQ”) is a broad topic. Before delving into it, I’ll disagree with one point up front: The book claims that any EQ plugin is about as good as any other. Most modern DAWs are equipped with plugins that provide low-pass, high-pass, shelving, peaking, and notch filters. My opinion is that some EQ plugins sound substantially better than others. I have never enjoyed the sound of Ableton Live’s EQ 8; I feel that it sucks precious high-end frequencies and detail out of anything it touches. EQ 3 is better, although it’s less flexible, and its filter curves are steeper, which is not always what you want. Shameless product placement: My new favorite is Tone2’s FilterBank3.

To EQ the mix, mute every track, then unmute and EQ them in order of importance. You’re trying to avoid frequency masking, a phenomenon where layers obscure each other. Avoid graphic EQ, and try shelving filters before using peaks. Use low “Q” values, so your filters will have wide, smooth curves.

Bass instruments: Try adding high-end brightness to the bass guitar. Try cutting the kick drum around 400 Hz, creating a hole for the bass guitar to punch through.

If a track’s dynamics aren’t doing a good job, apply EQ before the dynamics. (It’s not uncommon to use both pre- and post-dynamics EQ, although this is more difficult to wrap one’s head around.)

Linear-phase EQ should generally be avoided, since it smears the clarity and definition of transients (momentary bursts of sound, such as drum attacks).

And, this chapter answered an old riddle for me: Why are we told to use EQ cuts, and avoid EQ boosts? Because when you apply EQ to a range of frequencies, that frequency range suffers phase alterations. This can rob sounds of their clarity. As you boost phase-altered frequencies, you’re emphasizing muddier, out-of-phase frequencies. So a high-shelf cut (with a gain after it) really is preferable to the “equivalent” low-shelf boost. In a multi-mic environment, EQ boosts can be catastrophic, since instruments will leak into neighboring microphones with slight delays (i.e. different phases!), and may induce bad comb filter effects. Thank you, Mike Senior. I salute you for your clear explanation.

This surprised me: distortion is often used as a mixing tool! EQ can only reshape frequencies that already exist, whereas distortion generates new harmonics, which can add brightness and sparkle to the high end. If your distortion plugin lacks a wet-dry control, the book suggests parallel processing: send the track to an effects channel and mix the distorted copy back in.
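
To convince myself, I tried a toy waveshaper (my own illustration, not from the book): a tanh curve gently clips a 220 Hz sine, and a single-bin DFT shows a 3rd harmonic (660 Hz) appearing where the clean tone had none.

```python
import math

sr = 8000
tone = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]
driven = [math.tanh(3.0 * s) for s in tone]   # gentle waveshaping distortion

def level_at(signal, freq):
    """Magnitude of one frequency component (single-bin DFT)."""
    re = sum(s * math.cos(2 * math.pi * freq * n / sr) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * n / sr) for n, s in enumerate(signal))
    return 2 * math.hypot(re, im) / len(signal)

print(level_at(tone, 660))    # pure sine: essentially no 3rd harmonic
print(level_at(driven, 660))  # distortion has manufactured one
```

That new 660 Hz content is energy no EQ boost could have conjured from the original sine, which is why distortion earns its place as a mixing tool.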

Multiband dynamics plugins have their uses, but are “mostly just an extension of what we’ve already covered in previous chapters.” If you’re clever, there are ways to achieve multiband dynamic effects without a dedicated plugin. For instance, if your bass guitar notes have uneven sustains, try extracting the lowest frequencies with a low-pass filter, compressing them, and mixing them back in. Watch out for a “hollow” quality, which indicates that phase artifacts are entering the mix.

Side-chained dynamics can be useful. The book explains “ducking,” which is akin to extreme side-chained compression: When the vocalist sings louder than a certain threshold, the guitar “ducks” by a fixed amount. The brilliant thing is, if you don’t have a dedicated ducking plugin, you can build one from a side-chained gate: Send the guitar through a gate (side-chained to the vocals), then invert the gate’s output. When the gate opens, the inverted signal mixes with the original guitar channel and reduces the overall amplitude. Amazing. You can get creative, too: Try adding a linear-phase high-pass filter to the gate’s output, so only the high frequencies duck out when the vocals are active. You get the idea.
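
Here’s the trick in toy code (my own sketch; the names and numbers are made up). The gate passes the guitar only while the vocal exceeds a threshold; subtracting a scaled copy of the gated signal dips the guitar by a fixed proportion whenever the vocalist is active.

```python
def gate(signal, sidechain, threshold):
    """Pass `signal` only while `sidechain` exceeds `threshold`; silence otherwise."""
    return [s if abs(k) > threshold else 0.0 for s, k in zip(signal, sidechain)]

def duck(guitar, vocal, threshold=0.1, depth=0.5):
    """Ducking built from a side-chained gate: gate the guitar on the vocal,
    invert the gated copy, scale it, and mix it back in. While the vocal is
    active, the inverted copy cancels part of the original guitar."""
    gated = gate(guitar, vocal, threshold)
    return [g - depth * d for g, d in zip(guitar, gated)]

guitar = [0.8] * 8
vocal = [0.0] * 4 + [0.9] * 4   # vocalist enters halfway through
ducked = duck(guitar, vocal)
print(ducked)   # guitar drops from 0.8 to 0.4 the moment the vocal enters
```

Swapping the subtraction for an addition would un-invert the copy and boost instead of duck, which is why the inversion step is the heart of the trick.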

Part 4: Sweetening to taste

Reverb, chorus, and delays cannot salvage a bad mix. However, they can sweeten a good mix, and make it great.

My big takeaway from the chapter on reverbs is that you can push instruments out of the foreground and into the background by sending them through a common “blend reverb.” The reverb should have about 10–20 ms of predelay. Don’t be surprised if good reverbs are CPU-intensive: the better ones simulate lots of reflections, and that requires processing power.

My big takeaway from the chapter on delays is that reverbs are overrated! Delays offer the same benefits as reverbs, but occupy less “space” in the mix.

The chapter on stereo enhancement is extensive, but the main point is that stereo width is more important than stereo panning. There are a number of ways to achieve stereo width effects using plugins (even rotary speaker plugins are discussed!). If the guitarist plays the same riff repeatedly, try hard-panning two instances of the riff, left and right. Try an M/S (mid/side) plugin, which splits a layer into the sum (mid) and the stereo difference (side); keep the mid in the center, and apply all the crazy effects you want to the side channel. Try adding some stereo “room tone,” which can be a recording of tape hiss, background noise in a room, or the crackling of a vinyl record. Try duplicating the vocals, pitch-shifting the copies a few cents apart, and panning them hard left & right.
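
The mid/side split itself is just sums and differences. A tiny sketch (my own, with made-up sample values): convert L/R to mid/side, scale the side channel to widen the image, and convert back.

```python
def to_mid_side(left, right):
    """Split stereo into mid (sum) and side (difference) channels."""
    mid = [(l + r) / 2 for l, r in zip(left, right)]
    side = [(l - r) / 2 for l, r in zip(left, right)]
    return mid, side

def from_mid_side(mid, side):
    """Reconstruct stereo: L = mid + side, R = mid - side."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

left = [0.5, 0.7, -0.2]
right = [0.3, 0.1, -0.6]
mid, side = to_mid_side(left, right)
wide = [s * 1.5 for s in side]        # widen by boosting the difference
wide_l, wide_r = from_mid_side(mid, wide)
```

Anything you do to the side channel (EQ, delay, outright mangling) leaves the mono sum untouched, which is why the mix still collapses to mono safely.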

Finally, there’s a chapter on the “endgame,” which is final compression, equalization, and automating individual notes & transients by hand. The big shocker for me was that a little hard clipping is acceptable. This is used in commercial music, and it can add a little sparkle to the sound. That’s wild.

Definitely, definitely reference your mix with other people’s tracks. Try sitting in silence for a minute, and hearing the song in your mind. Now play your mix; does it sound like what’s in your head?

The book recommends a myriad of useful plugins, including shareware and freeware! The pages are also sprinkled with creative advice from professional producers.

Really, if you produce music at home, you’re insane if you skip this book. It really is that enlightening. Get it! Use it! Everyone deserves to make high-quality music, and it can be done on the cheap with today’s technology.

4 thoughts on “Book Review: Mixing Secrets For the Small Studio”

  1. “This surprised me: distortion is often used as a mixing tool!”

    This surprises you? YOU? After all the drum loops we’ve stolen from Muslimgauze that sounded awesome mostly because they were rife with hard-clipping (and delay effects)?


  2. Oh sure, we’re all familiar with distortion used to color individual instruments (usually in a big way!) But I never thought about using distortion to enhance vocals, for instance. I figured that would mangle the sound beyond recognition, but a gentle touch of distortion can work wonders. I believe the harmonics of the human voice taper off around 5 kHz, so that leaves a broad range of treble frequencies that can be filled in using distortion. The More You Know™…

  3. Hey Zach. Awesome summary, of what looks like an awesome book. Since I use self-written code to mix and master songs, I don’t have sophisticated enough tools to do a lot of the stuff you describe here – side-chaining, adding effects to a difference channel, etc. – but this inspires me to build that stuff sometime. I just added it to my future projects list.

    I’ve been told professional-sounding EQs need to use FIR, not IIR filters, because IIRs will mess up the phase. Do you think that’s good advice? Writing an EQ will probably be one of my first tasks when I get serious about my mix environment; currently I only have a set of filters that are all IIR.

  4. Hey Scott! I’m not an EQ professional, but perhaps my advice can help…

    For starters, the book devotes entire chapters to IIR and FIR filters. Generally, electronic musicians don’t need to worry about “phase artifacts,” as they’re called. It’s true that IIR filters alter the phase of different frequencies. (Digital IIR filters use very short internal delays with feedback, essentially letting different frequencies pass through with different amplitudes and phase shifts. This is a VERY simplistic description; do not attempt to impress anyone with this analogy!)

    In a live studio recording, there may be many microphones capturing the audio at several locations in the room. Every mic hears every instrument (sometimes very quietly, but the leakage is still there!). And every instrument sits at a different distance from each microphone, which can cause phase artifacts. Imagine someone playing a Sine Wave Horn™, an imaginary brass instrument that emits a single sine wave. The mic closest to the horn records a nice, clear sine wave. However, the mic across the room records the sine wave with a small delay, out of phase with the other microphone’s signal. When the two signals are mixed together, the sine wave might cancel out!
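
    Here’s the Sine Wave Horn™ in a few lines of Python (my own toy numbers): delay the far mic’s copy by roughly half a period of a 440 Hz tone, sum the two “mics,” and the result nearly vanishes.

```python
import math

sr = 48000
freq = 440.0
delay_s = 1.0 / (2 * freq)      # half a period: worst-case phase offset
delay_n = round(delay_s * sr)   # ~55 samples of delay at 48 kHz

n = sr // 10   # a tenth of a second of audio
close_mic = [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]
far_mic = [math.sin(2 * math.pi * freq * (i - delay_n) / sr) for i in range(n)]

mixed = [a + b for a, b in zip(close_mic, far_mic)]
peak = max(abs(s) for s in mixed)
print(peak)   # far below 2.0: the two mics largely cancel each other
```

    With zero delay the peaks would add up to 2.0; a half-period delay drives the sum toward silence. Real instruments contain many frequencies, so the leakage cancels some and reinforces others: that’s the comb filtering Zach mentioned above.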

    Anyway, the book recommends EQ cuts (not boosts) when using IIR filters, which helps mitigate phase artifacts. Explanation: IIR filters alter the phase of the frequencies they change; therefore, use EQ cuts, because the frequencies with phase artifacts are also being reduced in the mix.

    FIR: The book advises that FIR filters are best used at the very end of the mix, as a “final” equalization of sorts. They impart twice the level of “smearing” that IIR filters do (according to my research), so they can rob drums of their punch, and do weird things with vocal sibilants, etc. My impression is that FIR filters are very accurate, almost transparent sounding, but they do rob mixes of their dynamic range, and tend to smooth things out (which is not always what you want!) Let your ears be your guide.

    IIR or FIR, there are filters designed for coloration, transparency, etc… Tiny changes in your signal processing (integer vs floating point numbers, rounding errors, dithering, subtle noise injection?, etc) contribute to how a filter “feels.” The trick is to find (or write) filters that you like, and use them effectively.
