Dynamics Processing: Compressor/Limiter, part 3

In part 1 of this series of posts, I went over creating an envelope detector that detects both peak amplitude and RMS values. In part 2, I used it to create a compressor/limiter. There were two common features missing from that compressor plug-in, however, that I will go over in this final part: soft knee and lookahead. Also, as I have stated in the previous parts, this effect is being created with Unity in mind, but the theory and code are easily adaptable to other uses.

Let's start with lookahead since it is very straightforward to implement. Lookahead is common in limiters and compressors because any non-zero attack/release times will cause the envelope to lag behind the audio due to the filtering, and as a result the attenuation won't line up with the part of the signal that produced the envelope. This can be fixed by delaying the audio output so that it lines up with the signal's envelope. The amount we delay the audio by is the lookahead time, so an extra field is needed in the compressor:

public class Compressor : MonoBehaviour
{
    [AudioSlider("Threshold (dB)", -60f, 0f)]
    public float threshold = 0f;		// in dB
    [AudioSlider("Ratio (x:1)", 1f, 20f)]
    public float ratio = 1f;
    [AudioSlider("Knee", 0f, 1f)]
    public float knee = 0.2f;
    [AudioSlider("Pre-gain (dB)", -12f, 24f)]
    public float preGain = 0f;			// in dB, amplifies the audio signal prior to envelope detection.
    [AudioSlider("Post-gain (dB)", -12f, 24f)]
    public float postGain = 0f;			// in dB, amplifies the audio signal after compression.
    [AudioSlider("Attack time (ms)", 0f, 200f)]
    public float attackTime = 10f;		// in ms
    [AudioSlider("Release time (ms)", 10f, 3000f)]
    public float releaseTime = 50f;		// in ms
    [AudioSlider("Lookahead time (ms)", 0, 200f)]
    public float lookaheadTime = 0f;	// in ms

    public ProcessType processType = ProcessType.Compressor;
    public DetectionMode detectMode = DetectionMode.Peak;

    private EnvelopeDetector[] m_EnvelopeDetector;
    private Delay m_LookaheadDelay;

    private delegate float SlopeCalculation (float ratio);
    private SlopeCalculation m_SlopeFunc;

    // Continued...

I won't actually go over implementing the delay itself since it is very straightforward (it's just a simple circular buffer delay line). The one thing I will say is that if you want the lookahead time to be modifiable in real time, the circular buffer needs to be allocated to the maximum length allowed by the lookahead time (in my case 200 ms), and you then track the actual position in the buffer separately, moving it according to the current delay time.
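Here is a minimal sketch of that idea (a hypothetical class, written in C++ here; the Delay component used above follows the same logic):

#include <algorithm>
#include <cstddef>
#include <vector>

class Delay
{
public:
    // Allocate the buffer once, at the maximum delay time we will ever allow.
    Delay(float maxTimeMs, int sampleRate)
        : m_Buf((size_t)(maxTimeMs * 0.001f * sampleRate) + 1, 0.0f),
          m_DelaySamples(0), m_Pos(0) {}

    // The delay time can change at run time, but never beyond the preallocated max.
    void SetDelayTime(float timeMs, int sampleRate)
    {
        size_t n = (size_t)(timeMs * 0.001f * sampleRate);
        m_DelaySamples = std::min(n, m_Buf.size() - 1);
    }

    float ProcessSample(float in)
    {
        m_Buf[m_Pos] = in;
        size_t readPos = (m_Pos + m_Buf.size() - m_DelaySamples) % m_Buf.size();
        m_Pos = (m_Pos + 1) % m_Buf.size();
        return m_Buf[readPos];   // the sample from m_DelaySamples ago
    }

private:
    std::vector<float> m_Buf;   // sized for the maximum lookahead (here, 200 ms)
    size_t m_DelaySamples;      // current delay, tracked separately from capacity
    size_t m_Pos;
};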

The delay comes after the envelope is extracted from the audio signal and before the compressor gain is applied:

void OnAudioFilterRead (float[] data, int numChannels)
{
    // Calculate pre-gain & extract envelope
    // ...

    // Delay the incoming signal for lookahead.
    if (lookaheadTime > 0f) {
        m_LookaheadDelay.SetDelayTime(lookaheadTime, sampleRate);
        m_LookaheadDelay.Process(data, numChannels);
    }

    // Apply compressor gain
    // ...
}

That's all there is to the lookahead, so now we turn our attention to the last feature. The knee of the compressor is the area around the threshold where compression kicks in. This can either be a hard knee (the compressor kicks in abruptly as soon as the threshold is reached) or a soft knee (compression is applied gradually over a region around the threshold, known as the knee width). Comparing the two in a plot illustrates the difference clearly.

Hard knee in black and soft knee in light blue (threshold is -24 dB).

There are two common ways of specifying the knee width. One is an absolute value in dB, and the other is a factor of the threshold, expressed as a value between 0 and 1. The latter is the one I've found to be most common, so it is what I use. In the diagram above, for example, the threshold is -24 dB, so a knee value of 1.0 results in a knee width of 24 dB around the threshold. Like the lookahead feature, a new field is required:

    [AudioSlider("Knee", 0f, 1f)]
    public float knee = 0.2f;

At the start of our process block (OnAudioFilterRead()), we set up for a possible soft knee compression:

float kneeWidth = threshold * knee * -1f; // Threshold is in dB and will always be either 0 or negative, so * by -1 to make positive.
float lowerKneeBound = threshold - (kneeWidth / 2f);
float upperKneeBound = threshold + (kneeWidth / 2f);

Still in the processing block, we calculate the compressor slope as normal according to the equation from part 2:

slope = 1 - (1 / ratio), for compression

slope = 1, for limiting

To calculate the actual soft knee, I will use linear interpolation. First I check if the knee width is > 0 for a soft knee. If it is, the slope value is scaled by the linear interpolation factor if the envelope value is within the knee bounds:

slope *= ((envValue - lowerKneeBound) / kneeWidth) * 0.5

The compressor gain is then determined using the same equation as before, except instead of calculating in relation to the threshold, we use the lower knee bound:

gain = slope * (lowerKneeBound - envValue)

The rest of the calculation is the same:

for (int i = 0, j = 0; i < data.Length; i+=numChannels, ++j) {
    envValue = AudioUtil.Amp2dB(envelopeData[0][j]);
    slope = m_SlopeFunc(ratio);

    if (kneeWidth > 0f && envValue > lowerKneeBound && envValue < upperKneeBound) { // Soft knee
        // Lerp the compressor slope value.
        // Slope is multiplied by 0.5 since the gain is calculated in relation to the lower knee bound for soft knee.
        // Otherwise, the interpolation's peak will be reached at the threshold instead of at the upper knee bound.
        slope *= ( ((envValue - lowerKneeBound) / kneeWidth) * 0.5f );
        gain = slope * (lowerKneeBound - envValue);
    } else { // Hard knee
        gain = slope * (threshold - envValue);
        gain = Mathf.Min(0f, gain);
    }

    gain = AudioUtil.dB2Amp(gain);

    for (int chan = 0; chan < numChannels; ++chan) {
        data[i+chan] *= (gain * postGainAmp);
    }
}

In order to verify that the soft knee is calculated correctly, it is best to plot the results. To do this I created a helper method that calculates the compressor values for a range of input values from -90 dB to 0 dB (a sketch of it follows the plot). Here is the plot of a compressor with a threshold of -12.5 dB, a 4:1 ratio, and a knee of 0.4:

Compressor with a threshold of -12.5 dB, 4:1 ratio, and knee of 0.4.
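Here is a sketch of such a helper (written as a standalone C++ program for brevity; the values match the plot above, and the math is the same as in the process block):

#include <algorithm>
#include <cstdio>

int main()
{
    const float threshold = -12.5f, ratio = 4.0f, knee = 0.4f;
    const float slopeBase = 1.0f - (1.0f / ratio);
    const float kneeWidth = threshold * knee * -1.0f;
    const float lowerKneeBound = threshold - (kneeWidth / 2.0f);
    const float upperKneeBound = threshold + (kneeWidth / 2.0f);

    // Sweep input levels from -90 dB to 0 dB and print input vs. output level.
    for (float envValue = -90.0f; envValue <= 0.0f; envValue += 0.5f) {
        float gain;
        if (kneeWidth > 0.0f && envValue > lowerKneeBound && envValue < upperKneeBound) {
            float slope = slopeBase * ((envValue - lowerKneeBound) / kneeWidth) * 0.5f;
            gain = slope * (lowerKneeBound - envValue);
        } else {
            gain = std::min(0.0f, slopeBase * (threshold - envValue));
        }
        printf("%.2f\t%.2f\n", envValue, envValue + gain);
    }
    return 0;
}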

Of course this also works when the compressor is in limiter mode, which will result in a gentler application of the limiting effect.

Compressor in limiter mode with a threshold of -18 dB, and knee of 0.6.

That concludes this series on building a compressor/limiter component.


AdVerb: Building a Reverb Plug-In Using Modulating Comb Filters

Some time ago, I began exploring the early reverb algorithms of Schroeder and Moorer, whose work dates back to the 1960s and 70s respectively. Their designs and theories still inform the making of algorithmic reverbs today. Recently I took it upon myself to continue experimenting with the Moorer design I left off with in an earlier post. This resulted in the complete reverb plug-in "AdVerb", which is available for free on the Downloads page. Let me share what went into designing and implementing this effect.

One of the foremost challenges in basing a reverb design on Schroeder or Moorer is that it tends to sound a little metallic: with the number of comb filters suggested, the echo density doesn't build up quickly or densely enough. The series of all-pass filters that comes after the comb filter section helps to diffuse the reverb tail, but I found that the delaying all-pass filters added a little metallic sound of their own. One obvious way of overcoming this is to add more comb filters (today's computers can certainly handle it). More importantly, however, the delay times of the comb filters need to be mutually prime so that their frequency responses don't overlap, which would otherwise result in increased beating in the reverb tail.

To arrive at the values for the 8 comb filters I'm using, I wrote a simple little script that calculated the greatest common divisor of each pair of delay times I chose and made sure that every result was 1. This required a little tweaking of the numbers; as you can imagine, finding 8 mutually prime values is not as easy as it sounds, especially when trying to keep the range between them minimal. It's not as important for the two all-pass filters to be mutually prime because they are in series, not in parallel like the comb filters.
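A sketch of that check (the delay times below are placeholders, not the actual values used in AdVerb):

#include <cstdio>

static int gcd(int a, int b)
{
    while (b != 0) { int t = a % b; a = b; b = t; }
    return a;
}

int main()
{
    // Candidate comb filter delay times in samples (placeholders; all prime here).
    const int delays[8] = { 1051, 1103, 1171, 1237, 1303, 1361, 1427, 1489 };

    // Every pair must have a greatest common divisor of 1 (mutually prime).
    for (int i = 0; i < 8; ++i)
        for (int j = i + 1; j < 8; ++j)
            if (gcd(delays[i], delays[j]) != 1)
                printf("%d and %d share a common factor\n", delays[i], delays[j]);
    return 0;
}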

I also discovered, after a number of tests, that the tap delay used to generate the early reflections (based on Moorer's design) was causing some problems in my sound. I'm still a bit unsure as to why, though it could be poorly chosen tap delay times or something to do with mixing, but it was enough that I decided to discard the tap delay network and just focus on comb filters and all-pass filters. It was then that I took an idea from Dattorro and Frenette, who both showed how modulated comb/all-pass filters can help smear the echo density and add warmth to the reverb. Dattorro is responsible for the well-known plate reverbs that use modulating all-pass filters in series.

The idea behind a modulated delay line is that an oscillator (usually a low-frequency sine wave) modulates the delay value according to a frequency rate and amplitude. This is actually the basis for chorusing and flanging effects. In a reverb, however, the modulation values need to be kept very small so that the chorusing effect is not audible.

I had fun experimenting with these modulated delay lines, and so I eventually decided to modulate one of the all-pass filters as well and give control of it to the user, which offers a great deal more fun and crazy ways to use this plug-in.  Let’s take a look at the modulated all-pass filter (the modulated comb filter is very similar).  We already know what an all-pass filter looks like, so here’s just the modulated delay line:

Modulated all-pass filter.

The oscillator modulates the read position in the delay line, and we interpolate around it to produce the actual output value. In code it looks like this:

double offset, read_offset, fraction, next;
size_t read_pos;

// Modulate the offset around half the delay length with the sine oscillator.
offset = (delay_length / 2.) * (1. + sin(phase) * depth);
phase += phase_incr;
if (phase > TWO_PI) phase -= TWO_PI;
if (offset > delay_length) offset = delay_length;

// The current write position (in samples) minus the offset gives the
// modulated read position, wrapped to the bounds of the delay buffer.
read_offset = ((size_t)delay_buffer->p - (size_t)delay_buffer->p_head) / sizeof(double) - offset;
if (read_offset < 0) {
    read_offset = read_offset + delay_length;
} else if (read_offset > delay_length) {
    read_offset = read_offset - delay_length;
}

// Split the read position into integer and fractional parts, then
// linearly interpolate between the two neighbouring samples.
read_pos = (size_t)read_offset;
fraction = read_offset - read_pos;
if (read_pos != delay_length - 1) {
    next = *(delay_buffer->p_head + read_pos + 1);
} else {
    next = *delay_buffer->p_head;
}

return *(delay_buffer->p_head + read_pos) + fraction * (next - *(delay_buffer->p_head + read_pos));

In case that looks a little daunting, we’ll step through the C code (apologies for the pointer arithmetic!).  At the top we calculate the offset using the delay length in samples as our base point.  The following lines are easily seen as incrementing and wrapping the phase of the oscillator as well as capping the offset to the delay length.

The next line calculates the current position in the buffer from the current position pointer, p, and the buffer head, p_head.  This is accomplished by casting the pointer addresses to integral values and dividing by the size of the data type of each buffer element.  The read_offset position will determine where in the delay buffer we read from, so it needs to be clamped to the buffer’s length as well.

The rest is simply linear interpolation (albeit with some pointer arithmetic: delay_buffer->p_head + read_pos + 1 is equivalent to delay_buffer[read_pos + 1]).  Once we have our modulated delay value, we can finish processing the all-pass filter:

delay_val = get_modulated_delay_value(allpass_filter);

// don't write the modulated delay_val into the buffer, only use it for the output sample
*delay_buffer->p = sample_in + (*delay_buffer->p * allpass_filter->g);
sample_out = delay_val - (allpass_filter->g * sample_in);

The final topology of the reverb is given below:

Topology of the AdVerb plug-in.

The pre-delay is implemented by a simple delay line, and the low-pass filters are of the one-pole IIR variety.  Putting the LPFs inside the comb filters’ feedback loops simulates the absorption of energy that sound undergoes as it comes in contact with surfaces and travels through air.  This factor can be controlled with a damping parameter in the plug-in.

The first-order moving-average filter is there for an extra bit of high frequency roll-off, and I chose it because this particular filter is an FIR type with linear phase, so it won't add further disturbance to the modulated samples entering it. The last (normal) all-pass filter in the series serves to add extra diffusion to the reverb tail.
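For reference, a sketch of that moving-average filter (first-order, a single state variable):

// y[n] = 0.5 * (x[n] + x[n-1]) -- FIR, hence linear phase; gentle high roll-off.
double moving_average(double in, double *prev_in)
{
    double out = 0.5 * (in + *prev_in);
    *prev_in = in;
    return out;
}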

Here are some short sound samples using a selection of presets included in the plug-in:

Piano, “Medium Room” preset

The preceding sample demonstrates a normal reverb setting.  Following are a few samples that demonstrate a couple of subtle and not-so-subtle effects:

Piano, “Make it Vintage” preset

Piano, “Bad Grammar” preset

Flute, “Shimmering Tail” preset

Feel free to get in touch regarding any questions or comments on "AdVerb".

Algorithmic Reverbs: The Moorer Design

And we’re back to talk about reverberation.  Previously I introduced the Schroeder reverb design that used four comb filters in parallel that then fed two all-pass filters in series.  This signal would then be mixed with the original dry audio to produce the output.  This design was one of the very first in the digital domain, yet still provides the foundation for much of the algorithmic reverbs used today.  James A. Moorer was one of the first to expand and improve upon Schroeder’s design in the late seventies and was able to implement some of the suggestions and theories put forth by Schroeder that would enhance digital reverb.

One of these was the use of a tapped delay line to simulate early reflections, which are of crucial importance in the perception of acoustic space, more so than the late reflections. The tapped delay line that forms the basis of the early reflections can contain delay times and a gain structure modelled on a measured acoustic space, like a concert hall for instance. In fact, Moorer did just that: in his article "About This Reverberation Business" in the Computer Music Journal, he offers up a 19-tap delay line that was taken from a geometric simulation of the Boston Symphony Hall. Here are those values put into an array for my implementation (I omit the first tap because it has a delay time of 0 with a gain of 1, which is just the original signal):

Tap delay time and gain values along with values for the comb filters and the LP filters
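As a sketch of how those values are used (the arrays stand in for Moorer's published table shown above; their contents are omitted here), the early reflections are formed by summing the gain-scaled taps read from a single delay line:

enum { NUM_TAPS = 18 };

// Fill these from Moorer's table (values omitted here).
static double tap_time[NUM_TAPS];  // tap delay times, in samples
static double tap_gain[NUM_TAPS];  // corresponding tap gains

double early_reflections(const double *delay_buf, int buf_len, int write_pos)
{
    double early = 0.0;
    for (int t = 0; t < NUM_TAPS; ++t) {
        int read_pos = write_pos - (int)tap_time[t];
        if (read_pos < 0) read_pos += buf_len;      // wrap within the delay line
        early += tap_gain[t] * delay_buf[read_pos]; // gain-scaled tap
    }
    return early;
}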

Another improvement Moorer made to his design was to include a simple first-order low-pass filter in the feedback loop of each of the six comb filters to simulate the absorption effects of air. He goes on to talk about the intensity of sound and its relation to the atmospheric conditions it travels through, such as humidity and temperature, as well as the frequency of the sound and the distance from the source. The values I came up with for the low-pass filters are experimental at this point, though they seem to work well. I'm not sure yet exactly how to approximate the cutoff frequencies of these filters based on the data Moorer presented about the energy loss sound undergoes as it travels, so more research will be needed in this area. However, I'm also fine with deriving my own values and adjusting them to fit my needs for an acceptable sound.

We may recall from before the simple algorithm that implements a comb filter; now, with a low-pass filter in the loop, it looks like this:

Comb Filter with a first-order IIR low-pass filter in the feedback loop
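In code, one tick of such a comb filter might look like this (a sketch with my own names; the LPF is the usual one-pole form y = (1 - d) * x + d * y_prev):

// One sample through a comb filter with a one-pole low-pass in the feedback
// loop. 'g' is the feedback gain, 'damping' is in [0, 1), and 'lp_state'
// holds the low-pass filter's previous output.
double comb_lpf_tick(double in, double *buf, int buf_len, int *pos,
                     double g, double damping, double *lp_state)
{
    double delayed = buf[*pos];                 // output of the delay line
    double fb = delayed * g;                    // apply the feedback gain first...
    *lp_state = fb * (1.0 - damping) + *lp_state * damping;  // ...then low-pass it
    buf[*pos] = in + *lp_state;                 // write input + filtered feedback
    if (++(*pos) >= buf_len) *pos = 0;          // wrap the circular buffer
    return delayed;
}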

A little more experimentation can be done here too in placing the low-pass filter at an optimal position in the loop. Here I am calculating the LP filter after the feedback gain is applied, though I've seen it applied to the original signal prior to entering the feedback loop as well. Placing the LP filter in a good spot could open up the possibility of controlling the brightness of the late reflections of the reverb in a meaningful way.

We now have a fairly complete picture of the Moorer design, illustrated below.

The Moorer Reverb Design

The last little detail has to do with the delay line in the late reflections network. This ensures that the late reflections arrive at the output just a little after the early reflections. With a multitude of values, from delay lengths and gains to how all these elements are mixed together, it's clear that reverb design is a combination of both science and art, and that is why it remains one of the foremost challenges in DSP.

Now it follows that we do some listening, so here are some audio samples of the Moorer Reverberator.  The values used are for the most part Moorer’s own, but as was discussed earlier, the frequency cutoffs of the LP filters are my own, as is the delay time of the delay line in the late reflections network.  As an extension of this I have been tweaking the values proposed by Moorer as well as looking into other ways to modify this design to perhaps come up with my own reverb unit, but I’m sticking pretty close to Moorer’s design for this little show-and-tell.

Guitar strum with 1.4 second delay time at 27% wet mix

Guitar strum with 2.4 second delay time at 40% wet mix

Original guitar strum recording

The effect of the LP filter is quite noticeable in comparison to the Schroeder reverberation applied to the same audio file in that particular blog posting. The overall effect on this soundfile is fairly subtle, but this is not necessarily a bad thing, as it adds just a little sense of acoustic space to the sound. The good thing about using this soundfile as a test is the long decay. It is often here that we can hear the faults in a digital reverberator, because the decay is otherwise masked in the more dense and active sections of audio. We need to be careful to avoid "pumping" or "puffing" sounds in the decay tail of a reverb, and this is sometimes the fault of the all-pass filter, as noted by Moorer. The benefit of using it in the late reverberation network is to diffuse the late echoes, but its effect on the phase of the signal can be disruptive if the values for delay time and gain are not carefully chosen. Moorer suggests a delay time of 6 ms with a gain value of around 0.7.

Piano riff with 1.6 second delay time at 24% wet mix

Piano riff with 3.6 second delay time at 50% wet mix

Original piano riff recording

With a more percussive sound like the piano or drums, we have to be careful to avoid creating a discernible echo in the early reflections, as this won't sound natural. At a lower mix setting and relatively short delay time, this doesn't seem to be too much of a problem in the above examples, but in the more extreme case of the 3.6 second delay, the reverb doesn't hold up. The decay feels unnatural and there is coloration on the sound. There are few reverbs, however, that adhere to the one-size-fits-all model, and perhaps the Moorer design is a little more applicable to shorter reverb lengths. But there is more experimentation to be done. More tweaking. Moorer did propose that additional filters could be inserted to further help shape the reverb decay and account for high frequency absorption and distance, and in experimenting with all the numbers in the equation, perhaps some really interesting things will happen.

Building a Comb Filter in Audio Units

Now as I am looking into and learning more about digital reverberation, including its implementation and theory, I decided to build a simple comb filter plug-in using Audio Units. Previously all the plug-in work I've done has been using VST, but I was anxious to learn another side of plug-in development, hence Apple's Audio Units. It is, truth be told, very similar to VST development in that you derive your plug-in as a subclass of Audio Units' AUEffectBase class, inheriting and overriding functions according to the needs of your effect. There are some notable differences, however, that are worth pointing out. In addition, I've made the plug-in available for download on the Downloads page.

The structure of an Audio Unit differs from VST in that within the main interface of the plug-in, a kernel object that is derived from AUKernelBase handles the actual DSP processing.  The outer interface as subclassed from AUEffectBase handles the view, parameters, and communication with the host.  What’s interesting about this method is that the Audio Unit automatically handles multichannel audio streams by initializing new kernels.  This means that the code you write within the Process() function of the kernel object is written as if to handle mono audio data.  When the plug-in detects stereo data it simply initializes another kernel to process the additional channel.  For n-to-n channel effects, this works well.  Naturally options are available for effects or instruments that require n-to-m channel output.
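In skeletal form, this structure looks something like the following (a sketch against the old CoreAudio SDK's AUEffectBase/AUKernelBase; the class and parameter names are my own):

class CombFilter : public AUEffectBase
{
public:
    CombFilter(AudioUnit component) : AUEffectBase(component)
    {
        // Keep this lightweight: set up parameters and presets only.
        CreateElements();
        SetParameter(kParam_DelayTime, kDefaultDelayTime);
    }

    // Called once per channel; each kernel processes one mono stream.
    virtual AUKernelBase *NewKernel() { return new CombFilterKernel(this); }

    class CombFilterKernel : public AUKernelBase
    {
    public:
        CombFilterKernel(AUEffectBase *inAudioUnit) : AUKernelBase(inAudioUnit)
        {
            // Heavy lifting goes here: allocate the delay buffer for this channel.
        }
        virtual void Process(const Float32 *inSourceP, Float32 *inDestP,
                             UInt32 inFramesToProcess, UInt32 inNumChannels,
                             bool &ioSilence);
        virtual void Reset();
    };
};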

Another benefit of this structure is the generally fast load times of Audio Unit plug-ins. The plug-in's constructor, invoked during its instantiation, should not contain any code that requires heavy lifting. Instead, this should be placed within the kernel's constructor and initialization, so that any heavy processing will only occur when the user is ready for it. Acquiring the delay buffer in the comb filter happens in the kernel's constructor, as indicated below, while the plug-in's constructor only sets up the initial parameter values and presets.

Comb Filter kernel constructor

Comb Filter base constructor

The parameters in Audio Units also differ from VST in that they are not forced to be floating point values that the programmer is responsible for mapping for display in the UI. Audio Units come with built-in categories for parameters, which allow you to declare minimum and maximum values in addition to a default value that is used when the plug-in instantiates.

Declaring parameters in GetParameterInfo()
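A sketch of one such declaration, following the usual AU SDK pattern (the parameter ID, name constant, and values here are mine, not necessarily those of the actual plug-in):

ComponentResult CombFilter::GetParameterInfo(AudioUnitScope inScope,
                                             AudioUnitParameterID inParameterID,
                                             AudioUnitParameterInfo &outParameterInfo)
{
    ComponentResult result = noErr;
    outParameterInfo.flags = kAudioUnitParameterFlag_IsReadable
                           | kAudioUnitParameterFlag_IsWritable;

    if (inScope == kAudioUnitScope_Global && inParameterID == kParam_DelayTime) {
        AUBase::FillInParameterName(outParameterInfo, kParamName_DelayTime, false);
        outParameterInfo.unit = kAudioUnitParameterUnit_Seconds;
        outParameterInfo.minValue = 0.001;      // declared range, no manual mapping
        outParameterInfo.maxValue = 1.0;        // also bounds the delay buffer size
        outParameterInfo.defaultValue = 0.25;   // used when the plug-in instantiates
    } else {
        result = kAudioUnitErr_InvalidParameter;
    }
    return result;
}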

Like VST, Audio Units contains a function called Reset() that is called whenever the user starts or stops playback.  This is where you would clear buffers or reset any variables needed to return the plug-in to an initialized state to avoid any clicks, pops, or artifacts when playback is resumed.

Performing clean-up in Reset()
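Something along these lines (a sketch; mDelayBuf and mPos are the members described below, and kMaxBufSize stands in for the buffer's maximum allocated size):

void CombFilter::CombFilterKernel::Reset()
{
    // Zero the delay line and rewind the cursor so resumed playback starts clean.
    for (UInt32 i = 0; i < kMaxBufSize; ++i)
        mDelayBuf[i] = 0.0f;
    mPos = 0;
}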

Because a comb filter is essentially a form of delay, a circular buffer (mDelayBuf) is used to hold the delayed audio samples. In real-time processing where the delay time can change, however, this has repercussions on the size of the buffer used, as it would normally be allocated to the exact number of samples needed to hold the data. Rather than deallocating and reallocating the delay buffer every time the delay time changes, I allocate the buffer to its maximum possible size, as given by the maximum value allowed for the delay time. As the delay time changes, I keep track of the buffer's effective size with the curBufSize variable, and it is this value that I use to wrap around the buffer's cursor position (mPos). This happens within the Process() function.

Comb Filter’s Process() function
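Reconstructed as a sketch (again with my own parameter names, and one simple choice of dry/wet mixing), the body follows the description above:

void CombFilter::CombFilterKernel::Process(const Float32 *inSourceP, Float32 *inDestP,
                                           UInt32 inFramesToProcess, UInt32 inNumChannels,
                                           bool &ioSilence)
{
    // Current delay time (s) -> size of the active region of the circular buffer.
    UInt32 curBufSize = (UInt32)(GetParameter(kParam_DelayTime) * GetSampleRate());
    if (mPos >= curBufSize) mPos = 0;   // keep the cursor inside the active region

    Float32 gain = GetParameter(kParam_Feedback);

    for (UInt32 i = 0; i < inFramesToProcess; ++i) {
        Float32 delayed = mDelayBuf[mPos];
        mDelayBuf[mPos] = inSourceP[i] + delayed * gain;  // feedback comb
        inDestP[i] = inSourceP[i] + delayed;              // mix delayed with dry
        if (++mPos >= curBufSize) mPos = 0;
    }
}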

Every time Process() is called (which is every time the host sends a new block of samples to the plug-in), it updates the current size of the buffer and checks that mPos does not exceed it. The unfortunate consequence of varying the delay time of an effect such as this is that it results in pops and artifacts when changed in real time. The reason is that when the delay time changes, samples are lost or skipped over, and the resulting non-contiguous samples cause artifacting. This could be remedied by implementing the comb filter as a variable delay, meaning that when the delay time changes in real time, interpolation is used to fill in the gaps. As it stands, however, the delay time is not practically suited for automation.

Yet another distinction of Audio Units is the requirement that a plug-in be validated before it is usable in a host. Audio Units are managed by OS X's Component Manager, and this is where hosts check for Audio Unit plug-ins. To validate an Audio Unit, a tool called "auval" is used. This method has both pros and cons. The testing procedure helps to ensure that a plug-in behaves well in a host: it shouldn't cause crashes or result in memory leaks. While I doubt this method is foolproof, it is definitely useful for making sure your plug-in is secure.

Correction: Audio Units no longer use the Component Manager in OS X 10.7+. Here is a technical note from Apple on adapting to the new AUPlugIn entry point.

The downside is that some hosts, especially Logic, can be really picky about which plug-ins they accept. I had problems loading the comb filter plug-in for the simple reason that version numbers didn't match (since I was going back and forth between debug and release versions), and so it failed Logic's validation process. To remedy this, I had to clear the plug-in from its location in /Library/Audio/Plug-Ins/Components and then, after reinstalling it, open the AU Manager in Logic to force it to check the new version. This got to be a little frustrating after having to add and remove versions of the plug-in for testing, especially since it passed successfully in auval. Fortunately it is all up and running now, though!

Comb Filter plug-in in Logic 8

Finally, I’ll end this post with some examples of me “monkey-ing” around with the plug-in in Logic 8, using some of the factory presets I built into it.

Comb Filter, metallic ring preset

Comb Filter, light delay preset

Comb Filter, wax comb preset

Shaking it up with Vibrato

Let’s start it off with some music:

Vibrato has always been an essential technique in making music feel more alive, rich, and full of expression.  Whether it is string, wind, or brass players in an orchestra, a singer, or a synthesized waveform in an old 8-bit NES game, vibrato ensures those long notes and phrases connect with us in a more meaningful way by giving them character and shape.

Unlike tremolo (which was the subject of the previous blog entry), vibrato modulates pitch, not amplitude, using an LFO. When generating a waveform using synthesis, this is a trivial matter, as we have direct access to the frequency component. But with prerecorded audio, the vibrato effect is achieved through the use of a modulated variable delay. To better understand this, let's start off by looking at a basic delay effect implemented in C++ code.

A simple delay works by creating a buffer with a length equal to the delay time (making sure to initialize it to contain all zeroes). As we process the audio buffer, we transfer each sample into the delay buffer while extracting values from the delay buffer and mixing them with the original audio. Since the delay buffer is initialized to all zeroes, the first pass through it does nothing to the original audio, but after that first pass the delay buffer contains the samples from the audio that will then be mixed in, creating the delay. Using a delay time of 0.5 seconds (which requires the delay buffer to contain 22050 samples, assuming a sample rate of 44.1 kHz) and a 'depth' of 45% or so, the following code would generate a single half-second slap-back delay, or echo, at 45% of the original amplitude:
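Here is a minimal sketch of that, assuming the audio has already been read into a std::vector<float> named buffer:

#include <vector>

void slapback(std::vector<float> &buffer, int sampleRate)
{
    const int delaySamples = sampleRate / 2;          // 0.5 s -> 22050 samples at 44.1 kHz
    const float depth = 0.45f;
    std::vector<float> delayBuf(delaySamples, 0.0f);  // initialized to all zeroes
    int pos = 0;

    for (size_t i = 0; i < buffer.size(); ++i) {
        float delayed = delayBuf[pos];       // zero during the first pass
        delayBuf[pos] = buffer[i];           // transfer the current sample in
        buffer[i] += depth * delayed;        // mix the echo at 45% amplitude
        if (++pos >= delaySamples) pos = 0;  // wrap the circular buffer
    }
}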

Adapting this code to create a vibrato effect isn't too complex, but it does require a few steps that might seem a bit hard to grasp at first. We need to create a variable delay, and this requires two pointers into our delay buffer: a writing pointer that proceeds sample by sample as in the basic delay above, and a reading pointer that is calculated in relation to the writing pointer and modulated by the LFO. The reading position will almost always fall between buffer positions, so interpolation is required to achieve more accurate output. With these points considered, the variable delay code becomes:
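Here is a sketch of that (names are mine, and it uses the corrected offset calculation that is derived below; delayTime is the base delay in seconds, e.g. 0.004):

#include <cmath>
#include <vector>

const double TWO_PI = 6.283185307179586;

void vibrato(std::vector<float> &buffer, int sampleRate,
             double rate, double depth, double delayTime)
{
    const double baseDelay = delayTime * sampleRate;  // delay we modulate "around"
    const int bufLen = (int)baseDelay + 2;
    std::vector<float> delayBuf(bufLen, 0.0f);
    int writePos = 0;
    double phase = 0.0;
    const double phaseIncr = TWO_PI * rate / sampleRate;

    for (size_t i = 0; i < buffer.size(); ++i) {
        // offset = (delay time/2 * (1 + sine wave(phase) * depth)) * sample rate
        double offset = (baseDelay / 2.0) * (1.0 + std::sin(phase) * depth);
        double readPos = writePos - offset;     // read pointer trails the write pointer
        if (readPos < 0.0) readPos += bufLen;

        int idx = (int)readPos;
        double frac = readPos - idx;            // fractional part of the read position
        float a = delayBuf[idx];
        float b = delayBuf[(idx + 1) % bufLen];
        float out = a + (float)frac * (b - a);  // linear interpolation

        delayBuf[writePos] = buffer[i];         // write pointer: store the dry sample
        buffer[i] = out;                        // output the pitch-modulated signal
        if (++writePos >= bufLen) writePos = 0;

        phase += phaseIncr;
        if (phase > TWO_PI) phase -= TWO_PI;
    }
}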

It was here that I first encountered a big roadblock in writing my vibrato effect.  Upon testing it on a number of soundfiles, I was getting a moderate amount of distortion, or sample noise, in my output.  Having already learned from similar challenges in writing the tremolo effect previously, I was fairly certain this was a new issue I had to tackle.  The test that led me to the source of the problem was using a constant delay time in the code above (no modulation by the sine wave) and that produced a clean output.  From here, I knew the problem had to lie in how I was calculating the offset using the sine wave modulator.  Originally I calculated it like this:

offset = (delay time * sine wave(phase)) * sample rate,

where the phase of the sine wave increments by the value of 2 * pi * freq / SR.  After doing some research (and hard thinking on the matter), it became clear that this was the wrong mathematical operation because multiplying the modulator with the delay time scales it; we want to move “around” it (i.e. vibrato fluctuates pitch by a small amount around a central pitch).  That eventually led me to come up with the following base equation:

offset = (delay time + sine wave(phase) * delay time) * sample rate.

This equation needs a couple more modifications since it isn’t modulating “around” the delay time yet, just adding to it.  A depth modifier needs to be included in here as well so that we can change the intensity of the vibrato effect (by modifying the magnitude of the sine wave).  The final equation then becomes:

offset = (delay time/2 + (sine wave(phase) * depth) * delay time/2) * sample rate,

which simplifies to:

offset = (delay time/2 * (1 + sine wave(phase) * depth)) * sample rate.

This finally created the expected output I was after!  It’s such a great feeling to solve logical programming challenges!  Here is an example of the output with a vibrato rate of 8.6Hz at 32% depth:

Terra’s theme with vibrato rate of 8.6Hz at 32% depth

One other important element to discuss is the actual delay time used to generate the vibrato effect. I experimented with many values before settling on a delay time of 0.004 seconds, which is the value that we "delay around" using the sine wave. I found that as the values got smaller than 0.004 seconds the sound of the effect degraded, and it actually resulted in some sample noise because the delay buffer became so small (nearing as few as only 30 samples). As the delay time increases, the pitch of the audio begins to vary so much that we lose almost all pitch in the original audio.

This is not necessarily a bad thing.  This opens up vibrato to be used as a sound effect rather than purely a musical expression tool.  By setting the delay time to 0.03 seconds for example, the vibrato effect generates an output not unlike a record-scratch or something resembling flanging (which is actually also achieved through the use of variable delay).  See if you can recognize the source music in this sample:

Vibrato effect at 9.0Hz and 75% depth

Of course a more subtle effect is often desired for musical purposes and this is controlled by the depth modifier.  Here is a sample of a more subtle vibrato effect (back to the delay time of 0.004 seconds):

Zelda with vibrato rate of 6.4Hz at a 13% depth

One final thing to mention in regards to applying the vibrato effect to prerecorded audio is that it can distort the sound somewhat when the audio is a fully realized composition. The vibrato is, of course, being applied to the entire file (i.e. every instrument, every sound). A more practical application would be to use vibrato on a single instrument source; a flute, for example (please excuse my horrible flute playing):

Flute with vibrato rate of 6.0Hz at a 40% depth

Last, but not least, it is important to consider the implementation and design of the code that applies the effect. I have continued to code these effects as C++ classes using object-oriented design, as it makes implementing them very easy and efficient. For example, calling the effect in the main loop of the program is as trivial as:
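In sketch form (SoundFile, Oscillator, and VariableDelay are stand-ins for my actual class names):

// Stand-in names; the oscillator class is the one from the tremolo effect.
Oscillator lfo(6.0, sampleRate);                 // 6 Hz sine LFO
VariableDelay vibrato(0.004, 0.40, sampleRate);  // 4 ms base delay, 40% depth

std::vector<float> buffer(blockSize);
while (infile.read(buffer) > 0) {      // 1. read sample data into 'buffer'
    vibrato.process(buffer, lfo);      // 2. apply the modulated variable delay
    outfile.write(buffer);             // 3. write the result to the output soundfile
}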

Here we can see that first we read sample data in from the soundfile and store it in 'buffer'. The 'buffer' is then passed, along with the LFO modulator, into the process that applies the variable delay (vibrato in this case), and the result is written to the output soundfile. The LFO modulator used for the vibrato is simply a new instance of the oscillator class I developed for the tremolo effect previously; I just initialize a new instance of it for use in the vibrato effect, and done!

This is an example of the benefits of object-oriented design and how adaptable it is.  We’ll be seeing much more of this to come as well.  For example, it would require a few trivial code changes to set up multi-tap delays, each with their own depth, and even to incorporate filters into the delays once I get into developing them.  And finally, allowing the use of envelopes to further shape these effects will be an important step to be taken in the future.  With so many tantalizing possibilities, there’s no stopping now!