So, I joined this forum to both learn as much as possible from the experienced and knowledgeable people around here and to impart anything I've picked up over the years if anyone at all can benefit from it. I'm creating this post in case someone who is starting on the music recording path can use it. If not, that's OK. I'm also very happy to take all comments and criticism of what I'm about to write and show, because I have much to learn.
I think the last five years have, for me, been the time when I've learned the most about my instrument, writing and making music and then ultimately recording. Though I'd recorded songs and parts many times before, I'd never sat on the other side of the studio window. In fact, I'd never so much as pointed a mouse at a DAW before late 2019.
First, the context:
In late 2019 I wanted to learn how to record at home. A friend of mine had given me his old interface about a year before, so I bought some microphones and cables and downloaded Audacity. I was confronted with what everyone goes through when they're new to recording software - a user interface that is impenetrable and scary. My experience in studios as a player meant I had an idea of workflow and setup, but I didn't know where to find the commands, etc. Frankly, there's no better way than to just play around and search the menus for what you need. Within a few days I could create a track, record to it and edit it to a clean, usable result. YouTube is a valuable resource: I learned how to avoid clipping, manage my gain staging, create width in basic mixes, etc.
The reason I was doing this was to record my band as well as to just learn more about this side of music (with the open-ended possibility of remote session work, which ultimately came to pass, especially during lockdown). Over the course of recording myself and my bandmates (vocals and guitar) I invested in a condenser mic, two outboard preamps (one dual-channel with tubes, one single-channel) and an upgraded interface. The problem with upgrading my interface was that the ASIO driver issue arose: Audacity can't ship ASIO support because Steinberg's SDK licence is incompatible with Audacity's open-source licence. I looked at my options and bought a licence for Reaper, which once again set me back a bit while I learned its workflow. I'm actually really glad I went the Reaper route - first, I'd always known Audacity would be training wheels for me, and its destructive editing ruled it out for "real" projects; second, Reaper is powerful. I'm not calling it the best out there, but it's powerful enough for my needs, and it's still incredibly cheap for what you get.
OK, so, tracking:
I'm going to use the first single from this album as a case study and look at how I recorded, edited and mixed it - there's just too much information to dredge up for the whole album (11 songs). Here's how it sounds:
Part I: Breaking it down by track grouping:
Vocals:
One track for backing vocals, imported into Reaper as a single WAV of about three backing-vocal tracks bounced from Audacity. I had balanced and panned them in Audacity before exporting, so I was pretty confident all I needed to do was set a level in the mix. Then I had a main vocal track which had been edited to clean it up. Ultimately, I decided to add a delay throw to the last word of each phrase in the verses. To do that, I set up a send to a track with delay on it, duplicated the main vocal to that track, muted the last word on the main track, and muted everything but that word on the duplicate, so only that word would hit the delay. I also applied some manual autotune here and there, with about 150 ms of pre-delay, to get things to where they should be.
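The delay throw itself is just a feedback delay landing on one isolated word. Here's a minimal sketch of that effect in Python with numpy - the delay time, feedback and repeat count are illustrative numbers, not my actual Reaper settings:

```python
import numpy as np

def delay_throw(word, sr=44100, delay_ms=150.0, feedback=0.4, repeats=4):
    """Apply a simple feedback delay to an isolated word (mono float array).

    Each repeat arrives delay_ms later and is scaled by another factor of
    'feedback', mimicking a delay throw on the last word of a phrase.
    """
    d = int(sr * delay_ms / 1000.0)          # delay time in samples
    out = np.zeros(len(word) + d * repeats)  # leave room for the echo tail
    out[:len(word)] += word                  # the dry word
    gain = 1.0
    for i in range(1, repeats + 1):
        gain *= feedback                     # each echo quieter than the last
        start = d * i
        out[start:start + len(word)] += gain * word
    return out

# one fake "word": 100 ms of quiet noise
sr = 44100
word = np.random.default_rng(0).standard_normal(sr // 10) * 0.1
wet = delay_throw(word, sr=sr)
```

In the DAW the same result comes from routing rather than arithmetic, of course - the muted-copy-plus-send trick above just decides which audio reaches the delay.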
Guitars:
Verse electrics: I applied a tremolo effect in the DAW, rather than at source, because the tremolo was actually an afterthought once I'd decided to get some organ onto the track (more on that below). Chorus acoustics were two tracks of two separate takes, panned a bit either side to create width. I added chorus electric-guitar stabs, with the other guitarist playing some arpeggiated runs in the choruses. In general, I played with the panning of all the guitars until the mix sounded balanced, always keeping an eye on my level meters to confirm. I'm happy with different parts on different sides - as long as the whole mix isn't lopsided.
All guitar tracks involved an SM57 in front of my combo amp, but I was lucky enough to have another cab with a different speaker in it, so I'd plug my amp into that for varied tones, or I'd use my little Vox MV50 through either speaker, and then mix to taste. I did sometimes use a condenser mic on guitar, but the SM57 did most of the duty. Nothing was tracked too loud, either, so all the driven tones came from pedals.
Bass:
I was sent WAVs, which I would duplicate (for all songs) and separate sonically using high-pass and low-pass EQ filters. This meant I'd land up with a "high bass" track and a "low bass" track to process as I wanted, with differing levels of compression and different EQ to get the bass to sit properly. I'm sure there are many ways of doing this, but I quite liked this approach because I found it conceptually simple. Bass down the centre.
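That high/low split is just a pair of complementary filters at the same crossover point. A minimal sketch with scipy's Butterworth filters - the 250 Hz crossover and filter order are illustrative, not the settings I used:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bass(x, sr=44100, crossover_hz=250.0, order=4):
    """Split a mono bass track into 'low bass' and 'high bass' signals.

    A low-pass and a high-pass Butterworth filter at the same crossover
    yield two tracks that can be compressed and EQ'd independently.
    """
    lo_sos = butter(order, crossover_hz, btype="lowpass", fs=sr, output="sos")
    hi_sos = butter(order, crossover_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(lo_sos, x), sosfilt(hi_sos, x)

# test signal: 60 Hz fundamental plus some 1 kHz string noise
sr = 44100
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)
low, high = split_bass(x, sr=sr)   # 60 Hz lands in 'low', 1 kHz in 'high'
```

In the DAW you'd do this with two track duplicates and an EQ on each, exactly as described above; the code just makes the filtering explicit.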
Percussion:
I bought an egg shaker and jingle stick before tracking and used them liberally on the album. They just add something, but the editing to be perfectly in time can be a pain in the ass.
Drums:
These were tracked at a studio elsewhere, once lockdown restrictions eased, so I could sit in as producer. Before sending me the WAVs, the engineer on that side added a gate to clean the tracks of noise and leakage, and a bit of compression so I'd have less work to do (for which I am thankful). I received snare, kick, hi-hat, left and right overheads, a room mic and a tom (only one tom was really used on this track because it's quite simple). I had zero experience mixing drums, so YouTube was my friend here, and it took time and a lot of listening to EQ them to taste. In terms of soundstaging, I panned according to a visual representation of the kit: I'm pretty sure I set the hi-hat and crash slightly to the right, with all the other kit components spreading out in small percentage increments to the left from there.
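Those small percentage pan increments map onto a pan law under the hood. A sketch of the common constant-power law, with purely illustrative pan positions (Reaper's actual pan-law options differ and are configurable):

```python
import numpy as np

def pan(mono, pos):
    """Constant-power pan: pos in [-1.0 (hard left), +1.0 (hard right)].

    Maps pos onto an angle in [0, pi/2] so that left^2 + right^2 is
    always 1 - perceived loudness stays roughly constant across the
    stereo field instead of dipping in the middle.
    """
    angle = (pos + 1.0) * np.pi / 4.0
    return mono * np.cos(angle), mono * np.sin(angle)

# e.g. hi-hat slightly right, a tom stepping out to the left
hat_l, hat_r = pan(np.ones(4), +0.15)
tom_l, tom_r = pan(np.ones(4), -0.30)
```

At centre (pos = 0) each side gets about 0.707 of the signal; hard left sends everything to the left channel and nothing to the right.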
Piano and Hammond organ:
When the track was finished it still needed something, so I got on the phone to a guy recommended to me and I told him what I wanted - he absolutely killed it. He sent me Roland piano in the verses, Hammond chorus parts and a track with some Hammond fills. I think I kept these pretty straight down the centre.
Guitars:
I set up a buss for all guitars so I could process them with some compression, EQ, etc to get them to hang together more.
Moar compression:
Then, because I'd seen a cool video of someone doing this, I set up a kind of preliminary compression buss for all instruments except drums. The sends reaching the master buss were therefore this compression buss running in parallel with the sends from the guitars, drums and percussion (I'd applied some light compression and EQ to the drumkit components individually). With everything then run through some more light mix compression on the master buss, this had the effect of gluing everything together more.
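The principle behind that parallel (sometimes called "New York") compression is simply dry signal plus a heavily squashed copy. Here's a deliberately crude sketch - a real buss compressor has attack/release smoothing and makeup gain, and the threshold, ratio and blend below are made-up numbers:

```python
import numpy as np

def compress(x, threshold=0.3, ratio=4.0):
    """Crude static compressor: reduce gain on samples above the threshold.

    No attack/release envelope here - the excess over the threshold is
    just divided by the ratio, sample by sample.
    """
    mag = np.abs(x)
    over = mag > threshold
    out = x.copy()
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

def parallel_bus(dry, wet_gain=0.5, **kw):
    """Parallel compression: the dry signal plus a compressed copy."""
    return dry + wet_gain * compress(dry, **kw)

x = np.array([0.1, 0.5, -0.9, 0.2])
y = parallel_bus(x)
```

Because the compressed copy keeps quiet material loud while taming peaks, blending it under the dry signal thickens the quiet parts without flattening the transients - which is why it "glues" a mix.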
When it came to processing, I had no formula and - something I consider a mistake - I didn't work from any reference tracks. I just mixed until I was happy, trying to let things breathe as much as possible and apply as little processing as possible. I believed that if you captured decent sounds right at the beginning, the mix could largely take care of itself, save for making things more interesting for the listener with panning and some automated movement. I mixed on headphones, which is why the mastering engineer sent all the tracks back to me and told me to boost the bottom end. I can't really recommend mixing this way: I ended up listening back and forth on car speakers, Bluetooth speakers and everything else I could find to locate a middle ground, and it was still a challenge.
Right at the end - in fact moments before I sent for mastering - I thought the breakdown in the middle of the song needed to be accentuated and made more interesting, so I applied a vinyl emulator plugin (on everything but the drumkit), using Tanerelle's Love from NGC 7318 (great song!) as a model.
Ultimately I guess it came out OK - this has been picked up by radio stations around SA and we're charting on one of them. Radio is a lottery and there are so many variables, so I don't take that as validation of my mixing skills at all. What I can say is I made mistakes, they irritate me when I listen back on the album, but I wouldn't trade the experience and the learning curve was simply massive. YouTube (and other resources), while helpful and full of content, can be a bewildering place full of conflicting opinions and advice. And so many guys are just punting plugins you probably don't need. The only real way to learn is to do it yourself, make mistakes and find ways to overcome them. When I watch videos of the famous mix engineers at work I realise just how much I still don't know, which is depressing and exciting at the same time. I'm keen to learn.
Want more? Part 2 is below!
Want to know what gear we used?