Chapter Five: Digital Audio
6. Quantizing, approximation errors and sample size
Digital audio editing programs often use sample sizes above their final product's bit depth. Sample sizes of 20 bits, 24 bits, and even 32-bit floating-point or long-integer formats are now common; the extra precision preserves the fractional values produced when mixing or performing other arithmetic on samples, minimizing the noise those operations would otherwise introduce. Another reason higher bit-depth recording is becoming more attractive, as prices come down and storage becomes less of an issue, is that quantization errors are much more critical at low amplitudes. Because the quantization process divides the amplitude range into linear steps, a quiet signal spans far fewer levels, so the error is proportionally larger and acts more like distortion than noise.
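To make the low-amplitude problem concrete, here is a minimal Python sketch (the `quantize` helper and the sample values are illustrative assumptions, not from the text). It rounds a loud and a quiet sample onto the same linear 16-bit grid and compares the error relative to each signal:

```python
def quantize(x, bits):
    """Round a sample in [-1.0, 1.0) to the nearest level of a linear signed grid."""
    levels = 2 ** (bits - 1)          # 32768 steps per polarity for 16-bit
    return round(x * levels) / levels

# A loud and a quiet sample quantized to 16 bits: the absolute rounding
# error is similar in size, but relative to the quiet signal it is huge.
loud, quiet = 0.50003, 0.00003
for x in (loud, quiet):
    q = quantize(x, 16)
    rel_err = abs(x - q) / x
    print(f"x={x:.6f}  quantized={q:.8f}  relative error={rel_err:.4%}")
```

Running this shows the quiet sample's relative error is several orders of magnitude larger than the loud sample's, which is exactly why low-level passages expose quantization artifacts first.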
Higher bit-depth files can be reduced back to 16 bits for things like CD burning, usually with the help of a process called dither, which minimizes the additional digital noise the reduction would otherwise introduce. Rather than doing the obvious (simply rounding off the extra precision to the nearest 16-bit value), dither combines the least significant bits below the top 16 with random values, then rounds up or down to the nearest 16-bit value. If you don't record above 16-bit resolution, try to adjust recording levels to avoid prolonged periods of very low amplitude while not exceeding the maximum amplitude of the system at peaks. (Digital systems do not provide the fuzzy "headroom" of analog systems; they simply run out of values and clip.) The diagram below shows how two different bit depths each represent the crescendo indicated in the analog waveform being modeled. Notice that the one-bit sample depth shows no dynamic change at all (yet they call that a 6 dB range; go figure).
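As a sketch of the idea, the Python below applies triangular (TPDF) dither, one common choice, before rounding to 16 bits; the helper names and the 0.4-LSB test signal are illustrative assumptions, not from the text. A constant signal of 0.4 LSB is erased entirely by plain rounding, but with dither its average level survives, traded for a small amount of broadband noise:

```python
import random

def to_16bit(sample, dither=True):
    """Reduce a float sample in [-1.0, 1.0) to a 16-bit integer value.

    With dither enabled, triangular-PDF noise spanning about +/-1 LSB
    (the sum of two uniform values) is added before rounding, which
    decorrelates the rounding error from the signal.
    """
    scaled = sample * 32768.0
    if dither:
        scaled += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return max(-32768, min(32767, round(scaled)))

random.seed(1)                        # fixed seed so the sketch is repeatable
x = 0.4 / 32768                       # a constant signal 0.4 LSB above zero
plain = [to_16bit(x, dither=False) for _ in range(10000)]
dithered = [to_16bit(x, dither=True) for _ in range(10000)]
print(sum(plain) / len(plain))        # plain rounding loses the signal: mean is 0.0
print(sum(dithered) / len(dithered))  # dithered mean stays near the true 0.4-LSB level
```

The design point is the one the paragraph makes: rounding alone turns low-level detail into signal-correlated distortion, while dither converts that error into uncorrelated noise so information below the last bit is preserved on average.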