Now that I am looking into and learning more about digital reverberation, including its theory and implementation, I decided to build a simple comb filter plug-in using Audio Units. All of my previous plug-in work has used VST, but I was eager to learn another side of plug-in development, hence Apple’s Audio Units. It is, truth be told, very similar to VST development in that you derive your plug-in as a subclass of Audio Unit’s AUEffectBase class, inheriting and overriding functions according to the needs of your effect. There are some notable differences, however, that are worth pointing out. In addition, the plug-in is available for download on the Downloads page.
The structure of an Audio Unit differs from VST in that within the main interface of the plug-in, a kernel object derived from AUKernelBase handles the actual DSP processing. The outer interface, subclassed from AUEffectBase, handles the view, parameters, and communication with the host. What’s interesting about this arrangement is that the Audio Unit automatically handles multichannel audio streams by initializing new kernels. This means the code you write within the Process() function of the kernel object is written as though it processes a single mono channel. When the plug-in detects stereo data, it simply initializes another kernel to process the additional channel. For n-to-n channel effects, this works well. Naturally, options are available for effects or instruments that require n-to-m channel output.
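The kernel-per-channel pattern can be modeled in a standalone sketch. The class and function names below are illustrative only, not the actual AU SDK types (AUEffectBase/AUKernelBase); the point is that the per-channel DSP code never has to know how many channels exist:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Simplified stand-in for AUKernelBase: one instance per channel.
class Kernel {
public:
    virtual ~Kernel() = default;
    // Each kernel processes exactly one mono channel.
    virtual void Process(const float* in, float* out, size_t frames) = 0;
};

class GainKernel : public Kernel {
public:
    void Process(const float* in, float* out, size_t frames) override {
        for (size_t i = 0; i < frames; ++i)
            out[i] = in[i] * 0.5f;  // placeholder DSP
    }
};

// Simplified stand-in for the AUEffectBase-derived outer object: it spawns
// one kernel per channel, so stereo (or more) comes for free.
class Effect {
public:
    void SetChannelCount(size_t channels) {
        mKernels.clear();
        for (size_t c = 0; c < channels; ++c)
            mKernels.push_back(std::make_unique<GainKernel>());
    }
    void Render(const std::vector<const float*>& ins,
                const std::vector<float*>& outs, size_t frames) {
        for (size_t c = 0; c < mKernels.size(); ++c)
            mKernels[c]->Process(ins[c], outs[c], frames);
    }
private:
    std::vector<std::unique_ptr<Kernel>> mKernels;
};
```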
Another benefit of this structure is the generally fast load times of Audio Unit plug-ins. The plug-in’s constructor, invoked at instantiation, should not contain any heavy lifting; that belongs in the kernel’s constructor, which runs at initialization, so any expensive work happens only when the user is ready for it. In the comb filter, acquiring the delay buffer happens in the kernel’s constructor, as indicated below, while the plug-in’s constructor only sets up the initial parameter values and presets.
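A sketch of that kernel-side allocation: member names like mDelayBuf follow the post, but the class and the kMaxDelayTimeMs constant are assumptions for illustration, not the post’s actual code.

```cpp
#include <cstddef>
#include <vector>

// Assumed maximum of the delay-time parameter, in milliseconds.
constexpr float kMaxDelayTimeMs = 100.0f;

class CombFilterKernel {
public:
    explicit CombFilterKernel(float sampleRate)
        // Allocate once, at the largest size the delay parameter allows,
        // so changing the delay time later never reallocates.
        : mDelayBuf(static_cast<size_t>(kMaxDelayTimeMs * 0.001f * sampleRate), 0.0f),
          mPos(0) {}

    size_t Capacity() const { return mDelayBuf.size(); }

private:
    std::vector<float> mDelayBuf;  // delay line, zeroed at construction
    size_t mPos;                   // write/read cursor
};
```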
Parameters in Audio Units also differ from VST: they are not forced to be floating-point values that the programmer must map for display in the UI. Audio Units comes with built-in unit categories for parameters, and lets you declare minimum and maximum values as well as a default value used when the plug-in is instantiated.
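The idea can be sketched in a standalone form. In a real Audio Unit this metadata is filled into an AudioUnitParameterInfo struct inside a GetParameterInfo() override, but the struct, parameter IDs, and ranges below are illustrative assumptions, not the SDK type or this plug-in’s actual values:

```cpp
// Standalone model of typed parameter metadata with declared range,
// default, and display unit (the AU SDK has built-in unit enums for this).
struct ParamInfo {
    const char* name;
    const char* unit;
    float minValue, maxValue, defaultValue;
};

// Hypothetical parameter IDs for a comb filter.
enum { kParam_DelayTime = 0, kParam_Feedback = 1 };

inline ParamInfo GetParamInfo(int id) {
    switch (id) {
        case kParam_DelayTime: return {"Delay Time", "ms", 0.1f, 100.0f, 25.0f};
        case kParam_Feedback:  return {"Feedback",   "",   0.0f, 0.99f, 0.5f};
        default:               return {"", "", 0.0f, 0.0f, 0.0f};
    }
}
```

The host can then build a sensible control (slider range, unit label, initial position) without the plug-in doing any value mapping itself.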
Like VST, Audio Units provides a function called Reset() that is called whenever the user starts or stops playback. This is where you clear buffers and reset any variables needed to return the plug-in to its initialized state, avoiding clicks, pops, or other artifacts when playback resumes.
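For a delay-based effect like this, Reset() amounts to zeroing the delay line and rewinding the cursor. A minimal sketch, with member names following the post but the class itself assumed for illustration:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

class CombKernelState {
public:
    explicit CombKernelState(size_t maxFrames)
        : mDelayBuf(maxFrames, 0.0f), mPos(0) {}

    // Stand-in for the writes Process() would make during playback.
    void WriteSample(float s) {
        mDelayBuf[mPos] = s;
        mPos = (mPos + 1) % mDelayBuf.size();
    }

    // Zero the delay line and rewind the cursor so stale samples from the
    // previous pass can't bleed into the next one as clicks or pops.
    void Reset() {
        std::fill(mDelayBuf.begin(), mDelayBuf.end(), 0.0f);
        mPos = 0;
    }

    float SampleAt(size_t i) const { return mDelayBuf[i]; }
    size_t Pos() const { return mPos; }

private:
    std::vector<float> mDelayBuf;
    size_t mPos;
};
```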
Because a comb filter is essentially a form of delay, a circular buffer (mDelayBuf) is used to hold the delayed audio samples. In real-time processing where the delay time can change, however, this has repercussions for the size of the buffer, which would normally be allocated to the exact number of samples needed to hold the data. But rather than deallocating and reallocating the delay buffer every time the delay time changes (incurring repeated heap allocations), I allocate the buffer once at its maximum possible size, as given by the maximum value allowed for the delay time. As the delay time changes, I track the effective size with the curBufSize variable, and it is this value I use to wrap the buffer’s cursor position (mPos). This happens within the Process() function.
Every time Process() is called (that is, every time the host sends a new block of samples to the plug-in), it updates the current size of the buffer and checks that mPos does not exceed it. The unfortunate consequence of varying the delay time of an effect like this is pops and artifacts when it is changed in real time: samples are lost or skipped over, and the resulting discontinuities are audible. This could be remedied by implementing the comb filter as a variable delay, using interpolation to fill in the gaps when the delay time changes. As it stands, however, the delay time is not practically suited for automation.
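The per-block logic described above can be sketched as a feedback comb, y[n] = x[n] + g·y[n−D]: the buffer is allocated at maximum size, curBufSize tracks the current delay length, and mPos wraps modulo curBufSize. Names mirror the post, but the class itself (and the feedback gain) are illustrative assumptions:

```cpp
#include <cstddef>
#include <vector>

class CombProcess {
public:
    CombProcess(size_t maxFrames, float gain)
        : mDelayBuf(maxFrames, 0.0f), mPos(0),
          curBufSize(maxFrames), mGain(gain) {}

    // Called when the delay-time parameter changes. Samples already in the
    // buffer become non-contiguous at the new length — hence the clicks the
    // post describes when automating this without interpolation.
    void SetDelayFrames(size_t frames) {
        curBufSize = frames < mDelayBuf.size() ? frames : mDelayBuf.size();
        if (mPos >= curBufSize) mPos = 0;  // keep the cursor in range
    }

    void Process(const float* in, float* out, size_t frames) {
        for (size_t i = 0; i < frames; ++i) {
            float delayed = mDelayBuf[mPos];      // y[n - D]
            out[i] = in[i] + mGain * delayed;     // comb output
            mDelayBuf[mPos] = out[i];             // feedback path
            if (++mPos >= curBufSize) mPos = 0;   // wrap at current size
        }
    }

private:
    std::vector<float> mDelayBuf;
    size_t mPos;
    size_t curBufSize;  // effective delay length within the max-size buffer
    float mGain;        // feedback gain, |g| < 1 for stability
};
```

Feeding an impulse through a two-sample delay with g = 0.5 produces echoes at 1, 0.5, 0.25, … every two samples, the classic comb impulse response.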
Yet another distinction of Audio Units is the requirement that a plug-in pass validation before it is usable in a host. Audio Units are managed by OS X’s Component Manager, and this is where hosts check for Audio Unit plug-ins. To validate an Audio Unit, a command-line tool called “auval” is used. This approach has both pros and cons. The testing procedure helps ensure a plug-in behaves well in a host: it shouldn’t cause crashes or leak memory. While I doubt this method is foolproof, it is definitely useful for making sure your plug-in is well behaved.
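Typical auval usage looks like the following; the four-character type/subtype/manufacturer codes shown are placeholders, not this plug-in’s actual codes:

```shell
# List the Audio Units the system knows about:
auval -a

# Validate a single effect unit by its four-char type, subtype, and
# manufacturer codes ('aufx', 'comb', 'Demo' are placeholder examples):
auval -v aufx comb Demo
```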
Correction: Audio Units no longer use the Component Manager in OS X 10.7+. Here is a technical note from Apple on adapting to the new AUPlugIn entry point.
The downside is that some hosts, especially Logic, can be really picky about which plug-ins they accept. I had problems loading the comb filter plug-in for the simple reason that version numbers didn’t match (since I was going back and forth between debug and release builds), so it failed Logic’s validation process. To remedy this, I had to remove the plug-in from its location in /Library/Audio/Plug-Ins/Components and then, after reinstalling it, open the AU Manager in Logic to force it to check the new version. This got a little frustrating after repeatedly adding and removing versions of the plug-in for testing, especially since it passed auval successfully. Fortunately it is all up and running now, though!
Finally, I’ll end this post with some examples of me “monkey-ing” around with the plug-in in Logic 8, using some of the factory presets I built into it.