This post is a work in progress and will be updated over time.
Here’s my process. A lot of this comes from a video by the Atheist Nomads. I’ve never been formally trained in audio editing; I’ve learned everything by teaching myself from the internet.
Editing is a very personal process. You have to balance your time with the value you’re getting out of it. Audio quality is very important to me so I spend a lot of time editing and improving my editing process. But if audio quality is less important to you, save some time by doing less editing.
What’s the Point?
The first goal of editing your podcast is to make it more enjoyable to listen to. After editing, your audience should be able to easily listen to and understand your podcast.
The second goal of editing is to reduce your podcast’s impact on your audience’s resources. Those resources include time and storage space. After editing, your podcast should not waste time or space.
Good editing starts with a good recording. Record well: use one microphone per person, prevent mic bleed, and watch out for echoes in corners.
Noise is any audio that you don’t want your audience to hear. Sometimes it’s the background hum of the air conditioner, sometimes it’s breath sounds.
You want to reduce noise as much as you can. Any time the audience has to pull your intended audio out of some noise it takes mental effort. Spending mental effort makes listening to the podcast less enjoyable so we don’t want that.
I find the interface for Noise Reduction in Audacity confusing, so don’t feel bad if you find it confusing too. It involves two steps:
- Sample some noise.
  - Select a section of the recording that contains only noise
  - Effects > Noise Reduction
  - Hit the top button (“Get Noise Profile”)
- Remove the noise.
  - Select the whole area to remove the noise from
  - Effects > Noise Reduction
  - Hit OK
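For intuition, here’s a rough sketch of the spectral-gating idea behind this kind of two-step noise reduction. This is not Audacity’s actual algorithm — the frame size, threshold, and reduction amount are arbitrary choices for illustration — but it shows why you sample the noise first: that sample becomes a per-frequency noise profile, and any frequency that doesn’t rise above the profile gets turned down.

```python
import numpy as np

def spectral_gate(signal, noise_sample, frame=1024, reduction_db=12.0):
    """Crude spectral-gating noise reduction, similar in spirit to
    Audacity's two-step Noise Reduction effect (not its real algorithm)."""
    # Step 1 ("Get Noise Profile"): average magnitude spectrum of the noise.
    noise_frames = noise_sample[: len(noise_sample) // frame * frame].reshape(-1, frame)
    noise_profile = np.abs(np.fft.rfft(noise_frames, axis=1)).mean(axis=0)

    # Step 2: attenuate any frequency bin that doesn't clear the noise floor.
    gain = 10 ** (-reduction_db / 20)  # e.g. -12 dB -> ~0.25x amplitude
    out = np.zeros_like(signal)
    for i in range(len(signal) // frame):
        chunk = signal[i * frame:(i + 1) * frame]
        spec = np.fft.rfft(chunk)
        mask = np.where(np.abs(spec) > 2 * noise_profile, 1.0, gain)
        out[i * frame:(i + 1) * frame] = np.fft.irfft(spec * mask, n=frame)
    return out

# Synthetic demo: a 440 Hz tone buried in hiss.
rng = np.random.default_rng(0)
sr = 44100
t = np.arange(sr) / sr
noise = 0.05 * rng.standard_normal(sr)
voice = 0.5 * np.sin(2 * np.pi * 440 * t)
cleaned = spectral_gate(voice + noise, noise)
```

The two calls to `np.fft.rfft` mirror the two passes through the Noise Reduction dialog: one to learn the profile, one to apply it.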
Here are the settings that I use for Noise Reduction. I started by copying the values from the Atheist Nomads video linked above and then tweaking from there.
This is the portion that takes the longest. Listen to your entire recording, cut any audio you don’t want, and reorder any sections that don’t make sense. I’ve never gotten Audacity’s label track to work the way I expect, so reordering audio is confusing for me and I rarely do it. I do cut a ton of audio, though.
[Example of content edited audio]
Audio I look to cut:
- Any mouth sounds like loud breaths or the darn clicking sound I make with my tongue before I speak.
- Ums, uhs and ahs.
- False starts and stutters.
- Meta discussion such as asking someone to repeat a phrase for a better recording.
Cutting these will make your speakers sound more intelligent and professional, while also saving your listeners time.
[Example of desynced audio]
Do not desynchronize your recorded tracks. If each speaker has a local recording and the tracks drift out of time at all, it will be audible as an echo. Any cut you make to one track should be made to all the tracks. If you want to cut the audio from just one track while keeping the others, silence that portion of the track instead (Ctrl + L).
[Example of truncated silence]
Once you’ve done content editing, you’re done with the hard part. Select everything and then go to Effects > Truncate Silence. I truncate any silence down to 0.5 seconds. This is a bit of a sledgehammer solution, and sometimes I wish I had left more of a gap between sections, but Truncate Silence is very easy to run and it trims a lot of dead air from the podcast, saving your audience time.
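If you’re curious what Truncate Silence is doing under the hood, a naive version is easy to sketch. This is my own simplified imitation, not Audacity’s implementation, and the 0.01 amplitude threshold is an assumption:

```python
import numpy as np

def truncate_silence(samples, sr, threshold=0.01, max_gap=0.5):
    """Collapse any run of below-threshold samples down to max_gap seconds,
    a rough imitation of Audacity's Truncate Silence effect."""
    keep = int(max_gap * sr)  # samples of silence allowed to survive
    out, run = [], 0
    for s in samples:
        if abs(s) < threshold:
            run += 1
            if run <= keep:      # keep only the first max_gap of the run
                out.append(s)
        else:
            run = 0
            out.append(s)
    return np.array(out)

# Demo: 2 seconds of silence between two clicks gets cut to 0.5 s.
sr = 1000
audio = np.concatenate([np.ones(10), np.zeros(2 * sr), np.ones(10)])
trimmed = truncate_silence(audio, sr)
```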
Equalization (or “EQ”) is the process of boosting some frequency ranges and reducing others. EQ is where you can give yourself that radio announcer voice.
EQ is where I feel like my editing skills are weakest. If you have any tips or critiques, hit me up on twitter.
The best resource I’ve found for learning EQ is this interview with Rob Williams. The website hosting the vocal EQ cheatsheet mentioned in the interview is defunct, but you can find quite a few similar resources by searching online for “vocal EQ cheatsheet”.
Here are the settings that I use for EQ. If you’re just starting out, I recommend simply cutting the very low end and the very high end; those frequencies are outside the human vocal range anyway, so they’re just noise.
And here that is in XML if you want to import it:
<equalizationeffect> <curve name="Podcast Vocal5"> <point f="20.000000000000" d="-80.000000000000"/> <point f="49.237316986327" d="-33.107692718506"/> <point f="54.196034330446" d="-29.553844451904"/> <point f="88.033573501041" d="-6.923076629639"/> <point f="95.871851182279" d="-4.523078918457"/> <point f="108.957037410504" d="-1.938461303711"/> <point f="132.599316556226" d="2.445087432861"/> <point f="156.339334382973" d="2.445087432861"/> <point f="248.195108586157" d="-3.046241760254"/> <point f="505.708456346672" d="-2.771678924561"/> <point f="1016.395768252276" d="-0.300577163696"/> <point f="1971.410215909012" d="4.367052078247"/> <point f="5041.428276830616" d="4.916185379028"/> <point f="10132.490968285008" d="4.367052078247"/> <point f="14864.778932891884" d="-1.124279022217"/> <point f="23998.298441881070" d="-24.736995697021"/> <point f="23999.149205860322" d="-64.000000000000"/> </curve> </equalizationeffect>
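That “just cut the extremes” approach can also be sketched outside Audacity. Here’s a hypothetical example using scipy’s Butterworth filters; the 80 Hz and 14 kHz cutoffs are my assumptions for illustration, not values taken from the curve above:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def vocal_bandpass(samples, sr, low_hz=80, high_hz=14000):
    """Cut everything below low_hz and above high_hz -- the 'trim the
    extremes' EQ suggested above. Cutoffs are assumptions, not a rule."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass", fs=sr, output="sos")
    return sosfiltfilt(sos, samples)  # zero-phase: no timing smear

sr = 44100
t = np.arange(sr) / sr
hum = np.sin(2 * np.pi * 50 * t)     # mains hum, below the vocal range
voice = np.sin(2 * np.pi * 440 * t)  # an in-band tone standing in for speech
filtered = vocal_bandpass(hum + voice, sr)
```

After filtering, the 50 Hz hum is heavily attenuated while the in-band tone passes through essentially untouched.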
[Embed image of frequency analysis]
Audacity has a frequency analysis tool but I’ve never been able to get any usable information from it. I’ve tried to sweep through and reduce the spikes at certain frequencies and it just makes it sound worse.
Compression is an audio process that reduces the dynamic range of your podcast: it brings the loudest moments down toward the average while keeping your quiet parts quiet. For example, if everyone laughs at once and the volume jumps well above average, running compression will bring that moment back down toward the rest of the audio. This video helped me understand compression.
Compression is very important for reducing the difficulty of listening to your podcast. Have you ever watched a movie where the quiet parts were too quiet, but the loud parts were too loud? Without compression, your podcast may end up like that. Your audience will have to constantly adjust their volume to maintain a comfortable and clear output. That’s a lot of effort and you want to reduce the effort to listen to your podcast.
Before running compression I will usually run a limiter to reduce the damage from any egregious sections. For example, if someone claps their hands it will often spike the recording for a moment. A limiter puts a hard cap on the volume of a section, clipping the audio above that ceiling. The clipping adds a bit of distortion, but these segments are usually short and the distortion is usually minimal. I will usually run the limiter at -2 or -3 dB.
Then I run compression. Check “Compress based on Peaks”. I don’t entirely know what that checkbox does, but the results are better with it checked. The Audacity wiki says it applies upward compression instead of downward compression, but I don’t know why that would affect the result.
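As a rough mental model (not Audacity’s implementation), a limiter and a downward compressor can be sketched like this. The threshold, ratio, and window size are illustrative assumptions:

```python
import numpy as np

def limit(samples, ceiling_db=-3.0):
    """Hard limiter: clip anything above the ceiling."""
    ceiling = 10 ** (ceiling_db / 20)
    return np.clip(samples, -ceiling, ceiling)

def compress(samples, threshold_db=-20.0, ratio=3.0, window=1024):
    """Very simplified downward compressor: measure the peak per window and
    turn loud windows down through the ratio. Audacity's effect is more
    sophisticated (attack/decay envelopes, 'Compress based on Peaks', etc.)."""
    threshold = 10 ** (threshold_db / 20)
    out = samples.copy()
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        peak = np.max(np.abs(chunk))
        if peak > threshold:
            # Map the overshoot through the compression ratio.
            target = threshold * (peak / threshold) ** (1.0 / ratio)
            out[i:i + window] = chunk * (target / peak)
    return out

# A loud laugh (0.9) next to normal speech (0.1) gets pulled closer together.
rng = np.random.default_rng(1)
speech = 0.1 * rng.standard_normal(4096)
laugh = 0.9 * rng.standard_normal(4096)
result = compress(limit(np.concatenate([speech, laugh])))
```

Running the limiter first, as described above, means the compressor never sees the worst spikes.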
Normalize to -3 dB. This makes the audio fairly loud without getting too close to the maximum. My least favorite thing is when a good podcast is too quiet. I listen to podcasts while commuting, and if I can’t hear your podcast over passing cars with my phone volume all the way up, I get sad.
Normalization is actually not the most correct way to finalize audio. The most correct way would be to normalize loudness based on a value like LUFS. I haven’t gotten around to looking into how difficult that is to do in Audacity. Hit me up if you’ve got it figured out.
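Peak normalization itself is simple to sketch: scale the whole track so the loudest sample lands at the target level. Note this is peak normalization, not the LUFS loudness normalization mentioned above:

```python
import numpy as np

def normalize_peak(samples, target_db=-3.0):
    """Scale the track so its loudest sample sits at target_db dBFS."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples  # nothing to scale
    return samples * (10 ** (target_db / 20) / peak)

# A quiet sine wave gets scaled up to peak at -3 dBFS (~0.708).
quiet = 0.2 * np.sin(2 * np.pi * np.arange(1000) / 100)
loudened = normalize_peak(quiet)
```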
Export settings: mono, MP3, 86 kbps. Download some podcasts you like and take a look at their files for comparison.