Science and Math for Audio Humans – Sound Characteristics

11/22/2011 4:22:31 PM
By PSN Staff

by Danny Maland

The standard disclaimer for this series: Everything that I set before you should be read with the idea that “this is how I've come to understand it.” If somebody catches something that's just flat-out wrong, or if you just think that an idea is debatable, please take the time to start a discussion via the comments.

Last time around, we were talking about what sound is. Having done that, it's time to start talking about how sound pressure waves can be characterized. These characterizations lie at the core of pretty much everything we can make use of in practical reality. I'll talk about that as we go. First, however, I'm going to hit you with the Danny Maland version of the diagram from every physics textbook in history – and yes, I'm going to represent sound as a transverse wave. (You know, clarity, and all that.)

[Figure 1: sound represented as a transverse wave (maland sound character 1)]

For audio technicians, amplitude pretty much comes down to one thing: How much force is behind this pressure wave? In air, more force results in more pressure, and thus, larger dB SPL (decibels Sound Pressure Level) measurements. In electrical signals, higher amplitude means more voltage (electromotive force), and by extension, more current and more power into a given device. Voltage measurements for audio are in dBu and dBV. I'll go into what all these dB thingamajigs are later.

Frequency is how often we get one full cycle of positive and negative amplitude. A sine wave like the picture above “swings” symmetrically above and below an arbitrary “0” reference point. The reference point indicates no pressure change, or no voltage swing. For electrical signals, “0” ideally also indicates 0 volts. However, it is entirely possible for the electrical reference point to be offset, commonly as a result of DC (direct current) getting into a signal line. Frequency is measured in cycles-per-second, but that's quite a fuss to pronounce. For convenience we use the unit Hz (hertz) as a shorthand. Higher frequencies correspond to higher “notes” or pitches. The typical bandwidth of human hearing is 20 Hz (a very low “E”) to 20,000 Hz (a very high “D#”).

Wavelength is the physical distance traveled by a wave as it goes through one full cycle of positive and negative amplitude. Wavelength is inversely related to frequency: as frequency goes up, wavelength goes down. Wavelength is directly related to the velocity of the propagating wave: at a given frequency, as waves travel more quickly, their wavelength increases. These three factors of wavelength, frequency, and velocity are interrelated in a way that gives us the handy mathematical formula in figure 2.

[Figure 2: wavelength = velocity / frequency (maland audio character 2)]


The funny lookin' thing on the left is “lambda” (wavelength), whereas velocity and frequency sit over on the right.
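To see what that relationship produces in practice, here's a minimal sketch, assuming sound traveling at the 1126 feet-per-second figure discussed later in this article (the function name is mine):

```python
def wavelength(velocity_fps, frequency_hz):
    """Return wavelength in feet: lambda = velocity / frequency."""
    return velocity_fps / frequency_hz

SPEED_OF_SOUND = 1126.0  # feet per second, dry air at 68 degrees F

# Wavelengths at the edges of typical human hearing:
print(wavelength(SPEED_OF_SOUND, 20))      # 56.3 feet at 20 Hz
print(wavelength(SPEED_OF_SOUND, 20000))   # 0.0563 feet (well under an inch) at 20 kHz
```

That three-orders-of-magnitude spread in physical size is why low and high frequencies behave so differently in rooms and loudspeakers.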

“Wait a minute!” you're exclaiming. “Why does this relationship exist the way it does?”

Here's an example. Let's say that you're in your shop, with a lot of downtime and a bucket of tennis balls on your hands. (Hey, it could happen...) Let's say you toss one of the balls so that it travels 5 feet in 1 second. Along the way, it only bounces once. Here's how that looks when put into the equation:

5 feet-per-cycle = 5 feet-per-second / 1 cycle-per-second

What if you toss another ball, changing only the speed of its travel away from you? If it's still bouncing at the same frequency, but the travel speed has doubled, then the equation changes to this:

10 feet-per-cycle = 10 feet-per-second / 1 cycle-per-second

The wave stretches out horizontally, because it's going farther between bounces. If you want to have the ball travel at the faster velocity, but get your old wavelength back, you have to double the bounce frequency, like this:

5 feet-per-cycle = 10 feet-per-second / 2 cycles-per-second

By making the ball bounce more often, you've counteracted the “wave stretching” effect of the ball moving outward at higher speed.
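The three tennis-ball equations above can be run as a quick sanity check (a sketch; the helper name is mine):

```python
def feet_per_cycle(feet_per_second, cycles_per_second):
    """Wavelength of the 'bouncing ball' wave: distance covered per bounce."""
    return feet_per_second / cycles_per_second

print(feet_per_cycle(5, 1))    # 5.0  -> the original toss
print(feet_per_cycle(10, 1))   # 10.0 -> doubled speed stretches the wave
print(feet_per_cycle(10, 2))   # 5.0  -> doubled bounce rate restores the old wavelength
```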

The natural question at this point is probably this: “Okay, so how fast does a sound pressure wave travel?” The answer to that question is one of my most often given replies. I think it's also my most annoying reply. “It depends.” Sound travels at different velocities in different atmospheric conditions. Find a place with 0 percent humidity, at sea level, that's at 68 degrees Fahrenheit, and sound will travel at 1126 feet-per-second. If the temperature rises to 95 degrees, those sound pressure waves will rocket by at 1155 feet-per-second.

For most purposes, just using 1126 feet-per-second is likely to be plenty close.
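Both figures above can be reproduced with the standard ideal-gas approximation for the speed of sound in dry air. This is a sketch, not something from the article: the constant 49.03 converts the square root of absolute temperature (in degrees Rankine) into feet-per-second.

```python
import math

def speed_of_sound_fps(temp_f):
    """Approximate speed of sound in dry air, in feet per second.

    Uses c = 49.03 * sqrt(T), with T in degrees Rankine
    (Rankine = Fahrenheit + 459.67).
    """
    rankine = temp_f + 459.67
    return 49.03 * math.sqrt(rankine)

print(round(speed_of_sound_fps(68)))   # 1126 ft/s at 68 degrees F
print(round(speed_of_sound_fps(95)))   # 1155 ft/s at 95 degrees F
```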

So, is any of this actually useful in real life? Consider the following:

1. Issues deriving from amplitude (voltage and pressure, for instance) have to do with everything in professional audio. Decent gain-staging is one consideration, and the acoustical output of a loudspeaker system with given power amplifiers is another.

2. The effects of frequency are also continuously encountered in audio applications. Feedback problems are one major conundrum that gets dealt with every day, as are all the discussions centering around the sound quality of loudspeakers and mics. Equalization is all about frequency, all the time – and if it's dynamic EQ, then amplitude really steps into the picture.

3. Wavelength is tremendously important. The design of horn-loaded loudspeakers is dependent on wavelength, because a horn that's too short for a given pressure wave is pretty much no help at all. Acoustic problems, like nasty standing waves and general treatment needs, center on the wavelength/frequency relationship.

4. The above examples only begin to illustrate how the speed of sound is a hinge point in all kinds of quantitative audio problems. Even though we've yet to talk about phase and time alignment issues, there's one particular example that was related to me that brings home the importance of sound velocity and changing temperature: An outdoor venue (I don't know which one) had a loudspeaker time-alignment system installed that utilized temperature probes. Based on the sensed air temperature, the alignment delays could adjust themselves to an appropriate setting. This was important because, as the sun set, half the venue would fall into shadow... and sound from the “cold” half of the PA would start to arrive “late” compared to the “warm” half.
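To put a rough number on that last story, here's a sketch assuming a hypothetical 100-foot throw from each half of the PA to a listener (the distance is my assumption, not from the anecdote), using the two velocities quoted earlier:

```python
COLD_FPS = 1126.0   # shaded half of the venue, roughly 68 degrees F
WARM_FPS = 1155.0   # sunny half, roughly 95 degrees F
DISTANCE_FT = 100.0  # hypothetical throw from loudspeaker to listener

# Travel time in milliseconds for each half of the PA:
cold_ms = DISTANCE_FT / COLD_FPS * 1000
warm_ms = DISTANCE_FT / WARM_FPS * 1000

print(round(cold_ms - warm_ms, 2))  # the "cold" arrival lags by a couple of milliseconds
```

A couple of milliseconds may sound trivial, but it's enough of a shift to matter for a system that was carefully time-aligned at one temperature, which is exactly why that venue's delays tracked the temperature probes.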
