
Building a Comb Filter in Audio Units

As I've been looking into and learning more about digital reverberation, including its theory and implementation, I decided to build a simple comb filter plug-in using Audio Units.  Previously all of my plug-in work has been in VST, but I was eager to learn another side of plug-in development, hence Apple's Audio Units.  Truth be told, it is very similar to VST development in that you derive your plug-in as a subclass of Audio Units' AUEffectBase class, overriding functions according to the needs of your effect.  There are some notable differences, however, that are worth pointing out.  In addition, I've made the plug-in available for download on the Downloads page.

The structure of an Audio Unit differs from VST in that, within the main interface of the plug-in, a kernel object derived from AUKernelBase handles the actual DSP processing.  The outer interface, subclassed from AUEffectBase, handles the view, parameters, and communication with the host.  What's interesting about this approach is that the Audio Unit automatically handles multichannel audio streams by initializing new kernels.  This means the code you write within the kernel's Process() function is written as if it were handling mono audio data.  When the plug-in detects stereo data, it simply initializes another kernel to process the additional channel.  For n-to-n channel effects this works well, and options are naturally available for effects or instruments that require n-to-m channel configurations.
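As a rough sketch of that structure (the class and member names here are illustrative, based on the classic AUEffectBase/AUKernelBase classes from Apple's AU SDK, not the plug-in's actual source), the outer class simply hands the SDK a new kernel whenever another channel needs processing:

class CombFilter : public AUEffectBase
{
public:
    CombFilter(AudioUnit component);

    // Called by the SDK once per channel, so each kernel's Process()
    // only ever sees a mono stream.
    virtual AUKernelBase *NewKernel() { return new CombFilterKernel(this); }

    class CombFilterKernel : public AUKernelBase
    {
    public:
        CombFilterKernel(AUEffectBase *inAudioUnit);
        virtual ~CombFilterKernel() { delete [] mDelayBuf; }

        virtual void Process(const Float32 *inSourceP, Float32 *inDestP,
                             UInt32 inFramesToProcess, UInt32 inNumChannels,
                             bool &ioSilence);
        virtual void Reset();

    private:
        Float32 *mDelayBuf;    // circular delay buffer (allocated at maximum size)
        long     mPos;         // current cursor position in the buffer
        long     curBufSize;   // portion of the buffer in use for the current delay time
        long     mMaxBufSize;  // total allocated size of the buffer
    };
};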

Another benefit of this structure is the generally fast load times of Audio Unit plug-ins.  The plug-in's constructor, invoked during instantiation, should not contain any code that requires heavy lifting.  Instead, that work belongs in the kernel's constructor, which runs at initialization, so any heavy processing only occurs when the user is ready for it.  Allocating the delay buffer in the comb filter happens in the kernel's constructor, as indicated below, while the plug-in's constructor only sets up the initial parameter values and presets.

Comb Filter kernel constructor
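(A minimal sketch of what that might look like; kMaxDelayTime is a hypothetical constant for the largest delay time the parameter allows.)

CombFilter::CombFilterKernel::CombFilterKernel(AUEffectBase *inAudioUnit)
    : AUKernelBase(inAudioUnit), mPos(0)
{
    // The heavy lifting: allocate the delay buffer once, at its maximum
    // possible size, so later changes to the delay time never reallocate it.
    mMaxBufSize = (long)(kMaxDelayTime * GetSampleRate());
    curBufSize  = mMaxBufSize;
    mDelayBuf   = new Float32[mMaxBufSize];
    memset(mDelayBuf, 0, mMaxBufSize * sizeof(Float32));
}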

Comb Filter base constructor
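(And a correspondingly lightweight base constructor, again only a sketch; the parameter IDs and default values are placeholders.)

CombFilter::CombFilter(AudioUnit component)
    : AUEffectBase(component)
{
    // Cheap setup only, so the plug-in instantiates quickly in the host.
    CreateElements();
    Globals()->UseIndexedParameters(kNumberOfParameters);
    SetParameter(kParam_DelayTime, 0.05);   // seconds (placeholder default)
    SetParameter(kParam_Gain,      0.5);    // linear gain (placeholder default)
    // ...the factory presets are registered here as well.
}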

Parameters in Audio Units also differ from VST in that they are not forced to be floating-point values that the programmer is responsible for mapping for display in the UI.  Audio Units come with built-in unit categories for parameters, which allow you to declare minimum and maximum values as well as a default value used when the plug-in is instantiated.

Declaring parameters in GetParameterInfo()
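(A sketch of how the delay-time parameter might be declared; the ID, name, and range here are placeholders rather than the plug-in's actual values.)

ComponentResult CombFilter::GetParameterInfo(AudioUnitScope inScope,
                                             AudioUnitParameterID inParameterID,
                                             AudioUnitParameterInfo &outParameterInfo)
{
    if (inScope != kAudioUnitScope_Global)
        return kAudioUnitErr_InvalidScope;

    outParameterInfo.flags = kAudioUnitParameterFlag_IsReadable
                           | kAudioUnitParameterFlag_IsWritable;

    switch (inParameterID) {
        case kParam_DelayTime:
            AUBase::FillInParameterName(outParameterInfo, CFSTR("Delay Time"), false);
            outParameterInfo.unit         = kAudioUnitParameterUnit_Seconds;
            outParameterInfo.minValue     = 0.001;
            outParameterInfo.maxValue     = 0.1;
            outParameterInfo.defaultValue = 0.05;
            break;
        default:
            return kAudioUnitErr_InvalidParameter;
    }
    return noErr;
}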

Like VST, Audio Units include a Reset() function that is called whenever the user starts or stops playback.  This is where you clear buffers or reset any variables needed to return the plug-in to an initialized state, avoiding clicks, pops, or artifacts when playback resumes.

Performing clean-up in Reset()
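(In the comb filter the clean-up amounts to little more than silencing the delay line and rewinding the cursor; a sketch.)

void CombFilter::CombFilterKernel::Reset()
{
    // Zero out the whole (maximum-sized) buffer so resumed playback starts clean.
    memset(mDelayBuf, 0, mMaxBufSize * sizeof(Float32));
    mPos = 0;
}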

Because a comb filter is essentially a form of delay, a circular buffer (mDelayBuf) is used to hold the delayed audio samples.  In real-time processing where the delay time can change, however, this has repercussions for the size of the buffer, which would normally be allocated to the exact number of samples needed to hold the data.  Rather than deallocating and reallocating the delay buffer every time the delay time changes (requiring repeated memory allocations), I allocate the buffer to its maximum possible size, as given by the maximum value allowed for the delay time.  As the delay time changes, I track the in-use size with the curBufSize variable, and it is this value that I use to wrap the buffer's cursor position (mPos).  This happens within the Process() function.

Comb Filter’s Process() function
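(A rough reconstruction of the processing; the parameter IDs and the feed-forward form of the comb, y[n] = x[n] + g * x[n - D], are assumptions on my part rather than a copy of the real listing.)

void CombFilter::CombFilterKernel::Process(const Float32 *inSourceP, Float32 *inDestP,
                                           UInt32 inFramesToProcess, UInt32 inNumChannels,
                                           bool &ioSilence)
{
    // Recompute the in-use portion of the (maximum-sized) delay buffer
    // from the current delay-time parameter, and keep mPos inside it.
    curBufSize = (long)(GetParameter(kParam_DelayTime) * GetSampleRate());
    if (curBufSize > mMaxBufSize) curBufSize = mMaxBufSize;
    if (mPos >= curBufSize) mPos = 0;

    Float32 gain = GetParameter(kParam_Gain);

    for (UInt32 i = 0; i < inFramesToProcess; ++i) {
        Float32 dry = inSourceP[i];
        inDestP[i]  = dry + gain * mDelayBuf[mPos];   // mix in the sample from one delay length ago
        mDelayBuf[mPos] = dry;                        // store the dry sample for the next pass
        if (++mPos >= curBufSize) mPos = 0;           // wrap the circular cursor
    }
}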

Every time Process() is called (which is every time the host sends a new block of samples to the plug-in), it updates the current size of the buffer and checks that mPos does not exceed it.  The unfortunate consequence of varying the delay time of an effect like this is that it pops and artifacts when changed in real time: when the delay time changes, samples are lost or skipped over, and those non-contiguous samples cause the artifacting.  This could be remedied by implementing the comb filter as a variable delay, so that when the delay time changes in real time, interpolation is used to fill in the gaps.  As it stands, however, the delay time is not practically suited for automation.

Yet another distinction with Audio Units is the requirement for validation before a plug-in is usable in a host.  Audio Units are managed by OS X's Component Manager, and this is where hosts check for Audio Unit plug-ins.  To validate an Audio Unit, a tool called "auval" is used.  This method has both pros and cons.  The testing procedure helps ensure a plug-in behaves well in a host: it shouldn't cause crashes or result in memory leaks.  While I doubt the method is foolproof, it is definitely useful for making sure your plug-in is solid.

Correction: Audio Units no longer use the Component Manager in OS X 10.7+. Here is a technical note from Apple on adapting to the new AUPlugIn entry point.

The downside is that some hosts, Logic especially, can be really picky about which plug-ins they accept.  I had problems loading the comb filter plug-in for the simple reason that version numbers didn't match (since I was going back and forth between debug and release builds), so it failed Logic's validation process.  To remedy this, I had to remove the plug-in from its location in /Library/Audio/Plug-Ins/Components and then, after reinstalling it, open the AU Manager in Logic to force it to check the new version.  This got a little frustrating after repeatedly adding and removing versions of the plug-in for testing, especially since it passed auval successfully.  Fortunately it is all up and running now, though!

Comb Filter plug-in in Logic 8

Finally, I’ll end this post with some examples of me “monkey-ing” around with the plug-in in Logic 8, using some of the factory presets I built into it.

Comb Filter, metallic ring preset

Comb Filter, light delay preset

Comb Filter, wax comb preset

Shaking it up with Vibrato

Let’s start it off with some music:

Vibrato has always been an essential technique in making music feel more alive, rich, and full of expression.  Whether it is string, wind, or brass players in an orchestra, a singer, or a synthesized waveform in an old 8-bit NES game, vibrato ensures those long notes and phrases connect with us in a more meaningful way by giving them character and shape.

Unlike tremolo (which was the subject of the previous blog entry), vibrato modulates pitch, not amplitude, using an LFO.  When generating a waveform with synthesis this is a trivial matter, as we have direct access to the frequency component.  But with prerecorded audio, the vibrato effect is achieved through the use of a modulated variable delay.  To better understand this, let's start off by looking at a basic delay effect implemented in C++ code.

A simple delay works by creating a buffer whose length equals the delay time (making sure to initialize it to all zeroes).  As we process the audio buffer, we transfer each sample into the delay buffer while extracting earlier values from the delay buffer and mixing them with the original audio.  Since the delay buffer starts out all zeroes, the first pass through it does nothing to the original audio, but after that first pass the delay buffer contains the earlier audio samples, which are then mixed in, creating the delay.  Using a delay time of 0.5 seconds (which requires the delay buffer to hold 22050 samples at a sample rate of 44.1 kHz) and a 'depth' of around 45%, the following code would generate a single half-second slap-back delay, or echo, at 45% of the original amplitude:
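(A sketch of the idea, assuming buffer holds numSamples of mono audio read from the file and that <vector> is included; the names are illustrative.)

const float sampleRate = 44100.0f;
const float delayTime  = 0.5f;                          // half a second
const float depth      = 0.45f;                         // 45% of the original amplitude

const int delaySize = (int)(delayTime * sampleRate);    // 22050 samples at 44.1 kHz
std::vector<float> delayBuf(delaySize, 0.0f);           // delay line, initialized to zeroes

int pos = 0;
for (int i = 0; i < numSamples; ++i) {
    float delayed = delayBuf[pos];      // zero during the first pass through the buffer
    delayBuf[pos] = buffer[i];          // store the current dry sample for later
    buffer[i]    += depth * delayed;    // mix the half-second-old sample back in
    if (++pos >= delaySize) pos = 0;    // wrap around the circular buffer
}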

Adapting this code to create a vibrato effect isn't too complex, but it does require a few steps that might seem hard to grasp at first.  We need to create a variable delay, and this requires two pointers into the delay buffer: a writing pointer that proceeds sample by sample as in the basic delay above, and a reading pointer that is calculated relative to the writing pointer and modulated by the LFO.  The reading position will almost always fall between buffer positions, so interpolation is required for accurate output.  With these points considered, the variable delay code becomes:
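(Again a sketch; buffer, numSamples, and sampleRate are as above, and the fixed offset used here stands in for the LFO-modulated value worked out below.)

const float delayTime = 0.004f;                            // seconds; the value we "delay around" (see below)
const int   delaySize = (int)(delayTime * sampleRate) + 2; // big enough for the fully modulated delay
std::vector<float> delayBuf(delaySize, 0.0f);
int writePos = 0;

for (int i = 0; i < numSamples; ++i) {
    // In the finished effect this offset is recomputed every sample from the LFO
    // (see the offset equations below); a constant value gives a plain fixed delay.
    float offset = 0.5f * delayTime * sampleRate;

    // The reading position trails the writing position by a fractional number of samples.
    float readPos = (float)writePos - offset;
    if (readPos < 0.0f) readPos += (float)delaySize;

    // Linear interpolation between the two nearest stored samples.
    int   idx  = (int)readPos;
    float frac = readPos - (float)idx;
    int   next = (idx + 1) % delaySize;
    float delayed = delayBuf[idx] + frac * (delayBuf[next] - delayBuf[idx]);

    delayBuf[writePos] = buffer[i];        // the write pointer advances one sample at a time
    buffer[i] = delayed;                   // the output is the (pitch-modulated) delayed signal
    if (++writePos >= delaySize) writePos = 0;
}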

It was here that I first encountered a big roadblock in writing my vibrato effect.  Upon testing it on a number of soundfiles, I was getting a moderate amount of distortion, or sample noise, in my output.  Having already worked through similar challenges in writing the tremolo effect, I was fairly certain this was a new issue I had to tackle.  The test that led me to the source of the problem was using a constant delay time in the code above (no modulation by the sine wave), which produced a clean output.  From there, I knew the problem had to lie in how I was calculating the offset from the sine wave modulator.  Originally I calculated it like this:

offset = (delay time * sine wave(phase)) * sample rate,

where the phase of the sine wave increments by the value of 2 * pi * freq / SR.  After doing some research (and hard thinking on the matter), it became clear that this was the wrong mathematical operation because multiplying the modulator with the delay time scales it; we want to move “around” it (i.e. vibrato fluctuates pitch by a small amount around a central pitch).  That eventually led me to come up with the following base equation:

offset = (delay time + sine wave(phase) * delay time) * sample rate.

This equation needs a couple more modifications since it isn’t modulating “around” the delay time yet, just adding to it.  A depth modifier needs to be included in here as well so that we can change the intensity of the vibrato effect (by modifying the magnitude of the sine wave).  The final equation then becomes:

offset = (delay time/2 + (sine wave(phase) * depth) * delay time/2) * sample rate,

which simplifies to:

offset = (delay time/2 * (1 + sine wave(phase) * depth)) * sample rate.
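(In code, that last form is a one-liner; sineValue stands for the current output of the LFO, in the range -1 to 1, and depth is the 0 to 1 depth modifier.)

float offset = (delayTime / 2.0f) * (1.0f + sineValue * depth) * sampleRate;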

This finally created the expected output I was after!  It’s such a great feeling to solve logical programming challenges!  Here is an example of the output with a vibrato rate of 8.6Hz at 32% depth:

Terra’s theme with vibrato rate of 8.6Hz at 32% depth

One other important element to discuss is the actual delay time used to generate the vibrato effect.  I experimented with many values before settling on a delay time of 0.004 seconds, which is the value that we "delay around" using the sine wave.  I found that as the values got smaller than 0.004 seconds the sound of the effect degraded, and it actually produced some sample noise because the delay buffer became so small (approaching as few as 30 samples).  As the delay time increases, the pitch of the audio begins to vary so much that we lose almost all sense of the original pitch.

This is not necessarily a bad thing.  It opens up vibrato to be used as a sound effect rather than purely as a musical expression tool.  By setting the delay time to 0.03 seconds, for example, the vibrato effect generates an output not unlike a record scratch, or something resembling flanging (which is also achieved through the use of variable delay).  See if you can recognize the source music in this sample:

Vibrato effect at 9.0Hz and 75% depth

Of course a more subtle effect is often desired for musical purposes and this is controlled by the depth modifier.  Here is a sample of a more subtle vibrato effect (back to the delay time of 0.004 seconds):

Zelda with vibrato rate of 6.4Hz at a 13% depth

One final thing to mention regarding applying the vibrato effect to prerecorded audio is that it can distort the sound somewhat when the audio is a fully realized composition.  The vibrato is, of course, applied to the entire file (i.e. every instrument, every sound).  A more practical application is to use vibrato on a single instrument source; a flute, for example (please excuse my horrible flute playing):

Flute with vibrato rate of 6.0Hz at a 40% depth

Last, but not least, it is important to consider the implementation and design of the code that applies the effect.  I have continued to code these effects as C++ classes using object-oriented design, as it makes implementing them easy and efficient.  For example, calling the effect in the main loop of the program is as trivial as:
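(A sketch of that loop using libsndfile for the file I/O; Vibrato and Oscillator stand in for my effect and oscillator classes, and their interfaces here are illustrative rather than the real ones.  inFile, outFile, and sfInfo come from the usual sf_open calls.)

const sf_count_t blockSize = 4096;
std::vector<float> buffer(blockSize * sfInfo.channels);

Oscillator lfo(vibRate, sampleRate);                         // the LFO modulator
Vibrato    vibrato(0.004f, depth, sampleRate, sfInfo.channels);

sf_count_t framesRead;
while ((framesRead = sf_readf_float(inFile, buffer.data(), blockSize)) > 0) {
    vibrato.Process(buffer.data(), framesRead, lfo);         // apply the modulated variable delay in place
    sf_writef_float(outFile, buffer.data(), framesRead);     // write the processed block out
}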

Here we can see that we first read sample data in from the soundfile and store it in 'buffer'.  The 'buffer' is then passed, along with the LFO modulator, into the process that applies the variable delay (vibrato in this case), and the result is written to the output soundfile.  The LFO modulator used for the vibrato is simply a new instance of the oscillator class I developed for the tremolo effect; I initialize one for the vibrato effect, and done!

This is an example of the benefits of object-oriented design and how adaptable it is.  We'll be seeing much more of this to come as well.  For example, it would take only a few trivial code changes to set up multi-tap delays, each with its own depth, and even to incorporate filters into the delays once I get into developing them.  And finally, allowing the use of envelopes to further shape these effects will be an important step to take in the future.  With so many tantalizing possibilities, there's no stopping now!

Coding some Tremolo

The adventure continues.  This time we occupy the world of tremolo as a digital signal processing effect, also known as amplitude modulation.  My studies in audio programming have progressed quite far, I must say, covering the likes of filters and delays (get your math hats ready), reverb, and even plug-in development.  In order to really solidify what I've been learning, though, I decided to go back and create a program from scratch that applies tremolo and vibrato to an existing audio file, and that's where this blog entry comes in.  For now I am just covering the tremolo effect, as there is plenty to discuss on that; vibrato will be the subject of the next blog entry.

Tremolo in and of itself is pretty straightforward to implement, both on generated signals and on existing soundfiles.  (Vibrato, on the other hand, is easy enough to apply to signals being generated, but a bit more complex when it comes to sound input.)  Nonetheless, several challenges were met along the way that required a fair amount of research, experimentation, and problem solving to overcome, but in doing so I've only expanded my knowledge of DSP and audio programming.  I think this is why I enjoy adventure games so much: chasing down solutions and the feeling you get when you solve a problem!

The tremolo effect is implemented simply by multiplying a signal by an LFO (low-frequency oscillator).  While LFOs normally span roughly 0 – 20 Hz, a cap of 10 Hz works well for tremolo.  The other specification we need is depth, the amount of modulation the LFO applies to the original signal, specified as a percentage.  A modulation depth of 100%, for example, alternates between full signal strength and complete suppression of the signal at the frequency of the LFO.  For a more subtle effect, a depth of around 30% results in a much smoother variance of the signal's amplitude.  With this information we can develop a mathematical formula for the modulating signal on which to base our code.  This is also where I encountered one of my first big challenges.  The formula I used at first (from the book Audio Programming) was:

ModSignal = 1 + DEPTH * sin(w * FREQ)

where w = 2 * pi / samplerate.  This signal, derived from the LFO defined by the sine operation, would be used to modulate the incoming sound signal:

Signal = Signal * ModSignal

This produced the desired tremolo effect quite nicely.  But when the original signal approached full amplitude, overmodulation would occur, resulting in a nasty digital distortion.  As can be seen in the equation above, the modulating signal will exceed 1 whenever the sine term is greater than 0.  Essentially this equation applies a DC offset, which takes a normally bipolar signal and shifts it up or down.  That is what we want for the tremolo effect, but after realizing what was causing the distortion in the output, I set about finding a new equation for the modulating signal.  After some searching, I found this:

ModSignal = (1 - DEPTH) + DEPTH * (sin(w * FREQ))^2

This equation is much better in that it never exceeds 1, so it won't overmodulate the original signal.  I did, however, make one personal modification: after experimenting in the main processing loop, I decided not to square the sine operation.  Ideally we want to perform as few calculations (especially costly ones) inside loops as possible; this is especially important in audio, where responsiveness and efficiency matter so much in real-time applications.  To compensate, I scale the DEPTH parameter from a percentage down to the range 0 – 0.5.  From here we can get into the code.  First, initialization occurs:
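(A sketch of that set-up; freq and userDepth are the user-supplied rate in Hz and depth in percent, sampleRate comes from the soundfile, and <cmath> supplies M_PI.)

const double TWO_PI   = 2.0 * M_PI;
double depth          = (userDepth / 100.0) * 0.5;    // percentage scaled down to the range 0 - 0.5
double phase          = 0.0;                          // current phase of the LFO
double phaseIncrement = TWO_PI * freq / sampleRate;   // per-sample phase increment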

Then the main processing loop:
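(And a sketch of the loop itself, processing numSamples of mono audio held in buffer.)

for (long i = 0; i < numSamples; ++i) {
    // With depth in the range 0 - 0.5, this stays within 0 - 1, so no overmodulation.
    double modSignal = (1.0 - depth) + depth * sin(phase);
    buffer[i] = (float)(buffer[i] * modSignal);       // amplitude modulation = tremolo
    phase += phaseIncrement;
    if (phase >= TWO_PI) phase -= TWO_PI;             // keep the phase bounded
}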

With expandability and flexibility in mind, I began creating my own “oscillator” class which can be seen here:
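(What follows is a stripped-down sketch of the idea rather than the full class; the real one has more setters and options.)

#include <cmath>

class Oscillator
{
public:
    Oscillator(double freq, double sampleRate)
        : mFreq(freq), mSampleRate(sampleRate), mPhase(0.0)
    {
        mIncrement = 2.0 * M_PI * mFreq / mSampleRate;
    }

    // The modulator can itself be modulated: an envelope or another oscillator
    // may update the frequency (or, similarly, a depth member) on the fly.
    void SetFrequency(double freq)
    {
        mFreq = freq;
        mIncrement = 2.0 * M_PI * mFreq / mSampleRate;
    }

    // Return the next LFO sample and advance the phase.
    double Tick()
    {
        double out = sin(mPhase);
        mPhase += mIncrement;
        if (mPhase >= 2.0 * M_PI) mPhase -= 2.0 * M_PI;
        return out;
    }

private:
    double mFreq, mSampleRate, mPhase, mIncrement;
};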

This is where the power of C++ and object-oriented programming starts to show itself.  It affords the programmer much-needed flexibility and efficiency in creating objects that are portable between different programs and functions for future use, which is definitely important for me, as I can use these in upcoming plug-ins or standalone audio apps.  Furthermore, designing it with flexibility in mind allows for modulation of the modulator, so to speak.  In other words, we can time-vary the modulation frequency or depth through the use of envelopes or other oscillators.  Values extracted from an envelope or oscillator can be passed into the oscillator class, which processes and updates its internal data with the proper function calls.  This allows for anything from ramp-ups of the tremolo effect to entirely new and more complex effects derived from amplitude modulation itself!

But now let’s get on to the listening part!  For this demonstration I extracted a short segment of the Great Fairy Fountain theme from the Zelda 25th Anniversary CD release, probably my favorite theme from all of Zelda.

Zelda theme

Here it is after being modulated with a frequency of 4.5 Hz at a depth of 40%:

Tremolo at 4.5 Hz and 40% depth

And for a little more extreme tremolo, we can modulate it at 7.0 Hz at a depth of 85%:

Tremolo at 7.0 Hz and 85% depth

This brings up another challenge that had to be overcome during the development of this program.  Prior to this, most of the work I had been studying in the book "Audio Programming" dealt with mono soundfiles.  For this project I really wanted to get into handling stereo files, and this presented a few problems, as I had to learn exactly how to process the interleaved buffer that holds the sound data for stereo files.  I am using libsndfile (http://www.mega-nerd.com/libsndfile/) to handle I/O on the soundfile being processed, and this required me to search around and adapt my code to work properly with the library.  At one point I was getting very subtle distortion in all of my outputs, as well as tremolo rates that were double (or even quadruple) the rates I had specified.  It took a lot of investigation and trial and error before I discovered that the root of the problem lay in how I was handling the stereo files.
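(The gist of the fix, sketched with an interleaved buffer as libsndfile delivers it and the Tick() method from the oscillator sketch above: the LFO must advance once per frame, not once per interleaved sample, with the same modulator value applied to every channel of that frame.)

// framesRead frames of interleaved stereo data: L R L R ...
for (sf_count_t frame = 0; frame < framesRead; ++frame) {
    double modSignal = (1.0 - depth) + depth * lfo.Tick();    // one LFO step per frame
    for (int ch = 0; ch < channels; ++ch)
        buffer[frame * channels + ch] *= (float)modSignal;    // same value for left and right
}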

In closing off this blog entry, here is some further processing I did on the Zelda sample.  After applying tremolo to it using the program I wrote, I ran it through the pitch-shifter VST plug-in I implemented earlier to come up with a very eerie result.  'Till next time!

Eerie Zelda