by Danny Maland
If you recall the first part of my post “The Plow and The Field,” I spent some time talking about how easy it is to fall a little too much in love with gear and what it can do. I thought it might be good to write an epilogue to “The Plow and The Field” that looks at why our processing tools are limited. This isn’t about, say, how one EQ implementation is better or worse than another. Rather, this is about the basic, inherent limitations of audio processors in general. I’m going to use equalizers as my object example, but pretty much any audio processing tool fits into what I’m getting at.
If Grandpa Maland were alive today, he might have explained the limitations of tools with the following:
“Son, that plow don’t know the first thing about the ground it’s workin’ on. It just cuts the way it does, payin’ no mind to the soil. If it doesn’t cut the right way for the crops goin’ in the groun’, well, that ain’t its problem.”
To render that rural wisdom into a generalization about audio, I would say this:
Electronic devices are usually incapable of evaluating their own contribution to the sonic character of a space. The few devices that are capable of evaluating that contribution are usually subject to marked limitations in their data acquisition and interpretation - that is, as compared to a human operator.
Well, there’s a mouthful. Let’s step through that a bit at a time. As a quick note, when I’m talking about “total acoustic response,” what I’m referring to is the output of a device as it is being heard or measured after being converted into airborne sound pressure waves.
A basic EQ has no method for measuring or interpreting the signal being input, nor the results of its output. It simply acts upon a signal as it receives it. If that device’s output - and the resulting total acoustic response in a space - is “correct” for an observer, then that’s wonderful. However, if something (anything) changes in such a way as to render the total acoustic response “incorrect” to an observer, the device is helpless without the intervention of an operator. Further, the device has no way of “knowing” whether what it is doing happens to be “correct” or “incorrect” to an arbitrary observer.
Now - what about devices that can make some interpretation of what they’re doing to a signal? For instance, consider an equalization device that can measure the total acoustic response involving its own output in the room, and adjust itself until that response matches a set of target parameters. (This is usually called something like Auto EQ or an EQ Wizard.) First, most of these devices can only measure one set of data points at a time. They will usually accept only one measurement microphone input, and may or may not be able to compensate for imperfections in the measurement microphone itself. Second, the device has no way of knowing why a given measurement looks the way it does. A large notch in a measured frequency response might be caused by an acoustical phase cancellation, but the device has no way of knowing whether that is the case. It merely “hears” a large difference in sound pressure level from one frequency to the next. Third, useful measurements rely on predictable input signals that can be meaningfully compared to an output. It’s all well and good to have a transfer function created by comparing the signal at the outputs of a mixing console to the total acoustic response in the room - but what happens if that total acoustic response includes a sound that is not being produced by the PA system? The measurement (and any automated response to it) is invalid, whether wildly or by a small degree.
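To make that third limitation concrete, here is a minimal sketch of a dual-channel transfer-function estimate. Everything numeric in it - the tone frequency, the 0.5× “system gain,” and the noise level - is a made-up assumption for illustration, not a description of any real measurement rig:

```python
# Sketch: a dual-channel transfer-function estimate, and how a sound
# that the reference signal did NOT produce corrupts that estimate.
import cmath, math, random

def dft_bin(signal, k):
    """Naive DFT of a single bin k for a list of samples."""
    n = len(signal)
    return sum(x * cmath.exp(-2j * math.pi * k * i / n)
               for i, x in enumerate(signal))

N = 1024
k = 64  # analysis bin: the tone we "play through the PA"
reference = [math.sin(2 * math.pi * k * i / N) for i in range(N)]

# Pretend the PA-plus-room attenuates the tone by half (about -6 dB).
measured_clean = [0.5 * x for x in reference]

# Transfer function estimate at that bin: measured divided by reference.
H_clean = dft_bin(measured_clean, k) / dft_bin(reference, k)
print(abs(H_clean))  # 0.5 - the true system gain

# Now add a loud sound the PA is NOT producing (crowd noise, stage wash).
random.seed(1)
measured_noisy = [x + random.uniform(-1, 1) for x in measured_clean]
H_noisy = dft_bin(measured_noisy, k) / dft_bin(reference, k)
# H_noisy no longer reflects the system alone - the estimate is polluted,
# and the analyzer has no way of knowing that it is.
```

The device can do the division either way; only a human can know whether the “measured” signal was actually caused by the reference.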
A striking real-world example of the above, and also of “idolizing the tools,” is what happened to a local rock band I worked for in years past. They set up their FOH (Front Of House) speaker system with the aid of an automatic EQ, hoping to get the frequency response of the loudspeakers reasonably “flat.” I don’t know how they set up the measurement mic, or under what conditions the auto EQ was run, but the results were anything but pretty. The system’s sound was dominated by upper midrange and high frequencies, it was prone to feedback problems, and it regularly suffered damaged high-frequency drivers. These otherwise exceptionally smart and capable gentlemen implicitly trusted that their very sophisticated audio processor was doing the right thing, even when the results were not at all pleasing. The automatic EQ simply had no way of knowing that what it was doing was wrong as situations changed - all it knew was that the operators wanted a “flat” curve, and it had done everything it could to give them that. Everything it did was based on what was happening at the measurement mic on a single occasion of measurement.
You can see here what I mean by limited data acquisition and interpretation. The device had only one measurement to work from, and it could only judge whether the resulting frequency response conformed to a target curve. The humans involved were the only “devices” in the process that could tell the results were inappropriate in most situations - but dang it, that was the “flat” curve. It must be right!
When I was asked to fix the problem, I first asked for the automatic EQ to be bypassed. With one press of a button, the FOH loudspeaker system sounded a million times more pleasing. I believe I left the rehearsal space with exactly one parametric filter activated in the EQ: something like a one-octave-wide filter, centered at 400 Hz, with a gain of -3 dB. In my judgment, that was a starting point that gave the band a good foundation to work from when I couldn’t be there to help.
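For the curious, a parametric filter like that one can be sketched in code. Below is a peaking-EQ biquad using the coefficient formulas from Robert Bristow-Johnson’s well-known “Audio EQ Cookbook,” set to the same parameters (one octave wide, 400 Hz, -3 dB). The 48 kHz sample rate is my assumption for illustration, not something from the band’s actual rig:

```python
# Sketch: a one-octave peaking EQ at 400 Hz, -3 dB, per the RBJ
# "Audio EQ Cookbook" biquad formulas.
import cmath, math

def peaking_eq(fs, f0, bw_octaves, gain_db):
    """Biquad coefficients (b, a) for a peaking EQ."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) * math.sinh(
        math.log(2) / 2 * bw_octaves * w0 / math.sin(w0))
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return b, a

def gain_db_at(b, a, fs, f):
    """Magnitude response of the biquad, in dB, at frequency f."""
    z = cmath.exp(2j * math.pi * f / fs)
    h = (b[0] + b[1] / z + b[2] / z ** 2) / (a[0] + a[1] / z + a[2] / z ** 2)
    return 20 * math.log10(abs(h))

b, a = peaking_eq(48000, 400.0, 1.0, -3.0)
print(round(gain_db_at(b, a, 48000, 400.0), 2))  # -3.0 at the center frequency
```

Note that the filter leaves the extremes of the band alone - its gain returns to 0 dB well away from 400 Hz - which is part of why one gentle cut was such a safe “foundation” setting.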
If I remember correctly, their feedback problems all but vanished, and they cooked no more high frequency drivers.
Those guys just needed someone to show them that humans are smarter than audio processors.