In part 1 I detailed how I built the envelope detector that I will now use in my Unity compressor/limiter. To reiterate, the envelope detector extracts the amplitude contour of the audio that will be used by the compressor to determine when to compress the signal’s gain. The response of the compressor is determined by the attack time and the release time of the envelope, with higher values resulting in a smoother envelope, and hence, a gentler response in the compressor.
The compressor script is a MonoBehaviour component that can be attached to any GameObject. Here are the fields and corresponding inspector GUI:
public class Compressor : MonoBehaviour
{
    [AudioSlider("Threshold (dB)", -60f, 0f)]
    public float threshold = 0f;      // in dB
    [AudioSlider("Ratio (x:1)", 1f, 20f)]
    public float ratio = 1f;
    [AudioSlider("Knee", 0f, 1f)]
    public float knee = 0.2f;
    [AudioSlider("Pre-gain (dB)", -12f, 24f)]
    public float preGain = 0f;        // in dB, amplifies the audio signal prior to envelope detection
    [AudioSlider("Post-gain (dB)", -12f, 24f)]
    public float postGain = 0f;       // in dB, amplifies the audio signal after compression
    [AudioSlider("Attack time (ms)", 0f, 200f)]
    public float attackTime = 10f;    // in ms
    [AudioSlider("Release time (ms)", 10f, 3000f)]
    public float releaseTime = 50f;   // in ms
    [AudioSlider("Lookahead time (ms)", 0f, 200f)]
    public float lookaheadTime = 0f;  // in ms

    public ProcessType processType = ProcessType.Compressor;
    public DetectionMode detectMode = DetectionMode.Peak;

    private EnvelopeDetector[] m_EnvelopeDetector;
    private Delay m_LookaheadDelay;

    private delegate float SlopeCalculation (float ratio);
    private SlopeCalculation m_SlopeFunc;

    // Continued...
The two most important parameters of a compressor are the threshold and the ratio. When a signal exceeds the threshold, the compressor reduces its level by the given ratio. For example, if the threshold is -2 dB, the ratio is 4:1, and the compressor encounters a signal peak of +2 dB, the gain reduction will be 3 dB, bringing the signal down to -1 dB. The ratio effectively acts as a percentage: a 4:1 ratio means the overshoot above the threshold is reduced by 75% (1 – 1/4 = 0.75). The difference between the signal peak and the threshold (4 dB in this example) is scaled by this slope to arrive at the 3 dB reduction (4 × 0.75 = 3). When the ratio is ∞:1, the compressor becomes a limiter. The compressor’s output can be visualized by a plot of amplitude in vs. amplitude out:
When the ratio is ∞:1, the portion of the curve above the threshold becomes a flat horizontal line in the plot above, preventing any level from exceeding the threshold; this is exactly the behavior of a limiter. From these observations, we can derive the equations we need for the compressor.
compressor gain = slope * (threshold – envelope value) if envelope value >= threshold, otherwise 0
slope = 1 – (1 / ratio), or for limiting, slope = 1
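To make the arithmetic concrete, here is a small, self-contained sketch of these two equations in plain C#. The class and method names here are my own for illustration, not part of the component itself:

```csharp
using System;

static class CompressorMath
{
    // compressor gain (dB) = slope * (threshold - envelope), clamped so we never boost.
    public static float GainDb(float envelopeDb, float thresholdDb, float ratio)
    {
        float slope = 1f - (1f / ratio);   // for a limiter, slope = 1
        float gain = slope * (thresholdDb - envelopeDb);
        return Math.Min(0f, gain);         // below the threshold, no gain change
    }

    static void Main()
    {
        // The example from above: threshold -2 dB, ratio 4:1, peak +2 dB -> 3 dB of reduction.
        Console.WriteLine(GainDb(2f, -2f, 4f));   // -3
        Console.WriteLine(GainDb(-10f, -2f, 4f)); // 0 (below threshold, no compression)
    }
}
```

Note the gain comes out negative (a reduction) whenever the envelope exceeds the threshold, which is why the clamp only needs to guard the below-threshold case.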
All amplitude values are in dB for these equations. We saw both of these equations earlier in the example I gave, and both are pretty straightforward. These elements can now be combined to make up the compressor/limiter. The Awake method is called as soon as the component is initialized in the scene.
void Awake ()
{
    if (processType == ProcessType.Compressor) {
        m_SlopeFunc = CompressorSlope;
    } else if (processType == ProcessType.Limiter) {
        m_SlopeFunc = LimiterSlope;
    }

    // Convert from ms to s.
    attackTime /= 1000f;
    releaseTime /= 1000f;

    // Handle stereo max number of channels for now.
    m_EnvelopeDetector = new EnvelopeDetector[2];
    m_EnvelopeDetector[0] = new EnvelopeDetector(attackTime, releaseTime, detectMode, sampleRate);
    m_EnvelopeDetector[1] = new EnvelopeDetector(attackTime, releaseTime, detectMode, sampleRate);
}
Here is the full compressor/limiter code in Unity’s audio callback method, OnAudioFilterRead. When the component is attached to the GameObject that holds the AudioListener, the data array contains the audio signal just before it is sent to the system’s output.
void OnAudioFilterRead (float[] data, int numChannels)
{
    float postGainAmp = AudioUtil.dB2Amp(postGain);

    if (preGain != 0f) {
        float preGainAmp = AudioUtil.dB2Amp(preGain);
        for (int k = 0; k < data.Length; ++k) {
            data[k] *= preGainAmp;
        }
    }

    float[][] envelopeData = new float[numChannels][];

    if (numChannels == 2) {
        float[][] channels;
        AudioUtil.DeinterleaveBuffer(data, out channels, numChannels);
        m_EnvelopeDetector[0].GetEnvelope(channels[0], out envelopeData[0]);
        m_EnvelopeDetector[1].GetEnvelope(channels[1], out envelopeData[1]);

        // Combine the two channel envelopes by taking the maximum of each pair of values.
        for (int n = 0; n < envelopeData[0].Length; ++n) {
            envelopeData[0][n] = Mathf.Max(envelopeData[0][n], envelopeData[1][n]);
        }
    } else if (numChannels == 1) {
        m_EnvelopeDetector[0].GetEnvelope(data, out envelopeData[0]);
    } else {
        // Error...
    }

    m_Slope = m_SlopeFunc(ratio);

    for (int i = 0, j = 0; i < data.Length; i += numChannels, ++j) {
        m_Gain = m_Slope * (threshold - AudioUtil.Amp2dB(envelopeData[0][j]));
        m_Gain = Mathf.Min(0f, m_Gain);
        m_Gain = AudioUtil.dB2Amp(m_Gain);
        for (int chan = 0; chan < numChannels; ++chan) {
            data[i + chan] *= (m_Gain * postGainAmp);
        }
    }
}
And quickly, here is the helper method for deinterleaving a multichannel buffer:
public static void DeinterleaveBuffer (float[] source, out float[][] output, int sourceChannels)
{
    int channelLength = source.Length / sourceChannels;
    output = new float[sourceChannels][];

    for (int i = 0; i < sourceChannels; ++i) {
        output[i] = new float[channelLength];
        for (int j = 0; j < channelLength; ++j) {
            output[i][j] = source[j * sourceChannels + i];
        }
    }
}
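As a quick sanity check on what deinterleaving does: a stereo buffer arrives as alternating left/right samples, and the helper splits them into one contiguous array per channel. This standalone snippet duplicates the same logic (with a return value instead of an out parameter) so it can run outside the component:

```csharp
using System;

static class DeinterleaveDemo
{
    // Same indexing as DeinterleaveBuffer above, reproduced so the snippet runs on its own.
    public static float[][] Deinterleave(float[] source, int channels)
    {
        int channelLength = source.Length / channels;
        var output = new float[channels][];
        for (int i = 0; i < channels; ++i) {
            output[i] = new float[channelLength];
            for (int j = 0; j < channelLength; ++j) {
                output[i][j] = source[j * channels + i];
            }
        }
        return output;
    }

    static void Main()
    {
        // Interleaved stereo layout: L0, R0, L1, R1.
        float[] interleaved = { 0.1f, 0.9f, 0.2f, 0.8f };
        float[][] channels = Deinterleave(interleaved, 2);
        Console.WriteLine(string.Join(", ", channels[0])); // left:  0.1, 0.2
        Console.WriteLine(string.Join(", ", channels[1])); // right: 0.9, 0.8
    }
}
```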
First off, a few utility functions included in the component convert between linear amplitude and dB values; these appear throughout the function above. Pre-gain is applied to the audio signal prior to extracting the envelope. For multichannel audio, Unity unfortunately gives us an interleaved buffer, so it needs to be deinterleaved before being sent to the envelope detector (recall that the detector uses a recursive filter and thus has state variables; this could of course be handled differently in the envelope detector, but it’s simpler to work on single continuous data buffers).
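The AudioUtil conversions themselves aren't shown in this post, but they are just the standard 20·log10 mapping between decibels and linear amplitude. A minimal sketch might look like this (the class name and use of System.Math are my assumptions, not the component's actual code):

```csharp
using System;

static class AudioUtilSketch
{
    // Decibels to linear amplitude: amp = 10^(dB / 20).
    public static float dB2Amp(float dB) => (float)Math.Pow(10.0, dB / 20.0);

    // Linear amplitude to decibels: dB = 20 * log10(amp).
    public static float Amp2dB(float amp) => 20f * (float)Math.Log10(amp);

    static void Main()
    {
        Console.WriteLine(dB2Amp(0f));  // 1 (0 dB leaves the signal untouched)
        Console.WriteLine(dB2Amp(-6f)); // ~0.5 (a 6 dB cut roughly halves the amplitude)
        Console.WriteLine(Amp2dB(10f)); // 20
    }
}
```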
When working with multichannel audio, each channel has its own envelope. The channels could of course be processed separately, but that would disturb the relative levels between them. Instead, I take the maximum envelope value of the two and use that for the compressor. Another option would be to take their average.
I then calculate the slope value based on whether the component is set to compressor or limiter mode (via a function delegate). The final loop simply implements the equations given earlier, converting the dB gain value to linear amplitude before applying it to the audio signal along with the post-gain.
This completes the compressor/limiter component. However, there are two important elements missing: soft knee processing, and lookahead. From the plot earlier in the post, we see that once the signal reaches the threshold, the compressor kicks in rather abruptly. This point is called the knee of the compressor, and if we want this transition to happen more gently, we can interpolate within a zone around the threshold.
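One common way to realize that interpolation is to replace the corner with a quadratic segment over a knee width expressed in dB. This standalone sketch is my own assumption about how that could be done, not the component's final code (note that the component's knee field is normalized 0–1, whereas kneeDb here is a width in dB):

```csharp
using System;

static class SoftKneeSketch
{
    // Gain in dB with a quadratic soft knee of width kneeDb centered on the threshold.
    public static float GainDb(float inputDb, float thresholdDb, float ratio, float kneeDb)
    {
        float slope = 1f - (1f / ratio);
        float over = inputDb - thresholdDb;

        if (kneeDb > 0f && 2f * Math.Abs(over) <= kneeDb) {
            // Inside the knee zone: blend smoothly from "no compression" up to the full slope.
            float t = over + kneeDb / 2f;
            return -slope * t * t / (2f * kneeDb);
        }
        return over > 0f ? -slope * over : 0f; // hard-knee behavior outside the zone
    }

    static void Main()
    {
        // With no knee this matches the earlier example: threshold -2 dB, ratio 4:1, input +2 dB.
        Console.WriteLine(GainDb(2f, -2f, 4f, 0f));  // -3
        // With a 4 dB knee, a little compression already eases in at the threshold itself.
        Console.WriteLine(GainDb(-2f, -2f, 4f, 4f)); // -0.375
    }
}
```

The quadratic segment meets the hard-knee line exactly at both edges of the knee zone, so the transfer curve stays continuous.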
It’s common, especially in limiters, to have a lookahead feature that compensates for the inherent lag of the envelope detector. In other words, when the attack and release times are non-zero, the resulting envelope lags behind the audio signal as a result of the filtering. Because of this lag, the compressor/limiter will miss some of the very peaks it is supposed to attenuate. That’s where lookahead comes in. In truth, it’s a bit of a misnomer, because we obviously cannot see into the future of an audio signal, but we can delay the audio to achieve the same effect. This means we extract the envelope as normal, but delay the audio output so that the compressor’s gain value lines up with the audio peaks it is meant to attenuate.
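The Delay class referenced in the component's fields isn't shown in this post; conceptually it only needs to hold the signal back by the lookahead time converted to samples. A minimal circular-buffer sketch of that idea (my own assumption, not the post's implementation) might look like this:

```csharp
using System;

class DelaySketch
{
    private readonly float[] m_Buffer;
    private int m_Pos;

    // delaySamples = lookahead time in seconds * sample rate (must be >= 1).
    public DelaySketch(int delaySamples)
    {
        m_Buffer = new float[delaySamples];
    }

    // Returns the sample that entered the line delaySamples calls ago (zeros at first).
    public float Process(float input)
    {
        float output = m_Buffer[m_Pos];
        m_Buffer[m_Pos] = input;
        m_Pos = (m_Pos + 1) % m_Buffer.Length;
        return output;
    }

    static void Main()
    {
        var delay = new DelaySketch(2);
        Console.WriteLine(delay.Process(1f)); // 0
        Console.WriteLine(delay.Process(2f)); // 0
        Console.WriteLine(delay.Process(3f)); // 1 (the first input, two samples later)
    }
}
```

Running the audio through such a delay while feeding the envelope detector the undelayed signal gives the gain computation a head start equal to the lookahead time.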
I will be implementing these two remaining features in a future post.