Here's a very straightforward experiment anyone can do in Audacity.
1. Set your project sample rate to 48 kHz.
2. Generate a sine tone at 21,345 Hz with a duration of one second. If you zoom in to sample resolution on the lollipop display of the waveform, it is all over the place, looking nothing like a smooth sine wave.
3. Copy and paste the generated sine wave to another track, removing the very first sample. This second track represents samples at t=1, 2, 3, 4, etc. The original track represents samples at t=0, 1, 2, 3, etc. Both of these tracks are completely band-limited.
4. Mix both tracks at 50:50 amplitudes to a third track. This creates a lerped version of the first waveform, with each resulting sample exactly half-way between the samples at t=0 and t=1, t=1 and t=2, t=2 and t=3, etc. This resultant track is still band-limited, because both source tracks were band-limited and they were simply added together in a linear way. (A scripted version of these steps appears after the list.)
5. Notice that because the sine wave is very close to Nyquist, the comb-filtering effect has come into play and the amplitude of the mixed track is very much lower than that of either original track.
[Attachment: Lerping1.jpg]
6. Copy the mixed track to a new track and boost the amplitude so that its peak value is something like -0.5 dB.
7. Resample the entire project up to 192 kHz (i.e. 4x oversampling). Audacity uses a fancy algorithm for upsampling which somehow restores the chaotic-looking original generated waveforms to smooth sine waves. Note that no extra information has been added: all these upsampled point values have been calculated exclusively from the garbled 21,345 Hz sine wave sampled at 48 kHz.
8. Note also that the two original tracks appear to be almost 180 degrees out of phase with each other, which accounts for the destructive interference and the low amplitude of the mixed signal.
9. Note also that the mixed signal (and the amplified mixed signal) appears to be phase-shifted by about 90 degrees relative to both source signals. Or, looked at another way, the peaks in the mixed signal fall between the peaks of the two original waveforms.
[Attachment: Lerping2.jpg]
10. If we perform a spectrum analysis of the mixed, amplified signal, we get a single clean peak at 21,345 Hz. The tails are due to the rectangular window method and do not imply broadband noise.
[Attachment: Lerping3.jpg]
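For anyone who would rather script the experiment than click through it, here is a minimal numpy/scipy sketch of steps 2 to 10. It is not Audacity's own processing: scipy's FFT-based resample stands in for Audacity's resampler, and the tone frequency and sample rates are simply the ones used above.

```python
# Minimal numpy/scipy sketch of the experiment above (not Audacity itself).
# scipy's FFT-based resample is assumed as a stand-in for Audacity's resampler.
import numpy as np
from scipy.signal import resample

fs = 48000          # project sample rate (Hz)
f = 21345           # test tone, close to Nyquist (24 kHz)
n = np.arange(fs)   # one second of samples

tone = np.sin(2 * np.pi * f * n / fs)           # step 2: "garbled"-looking sine
shifted = np.sin(2 * np.pi * f * (n + 1) / fs)  # step 3: same tone, first sample removed
mixed = 0.5 * (tone + shifted)                  # steps 4-5: 50:50 mix = lerp midpoints

# Step 5/6: amplitude drop from averaging adjacent samples.
# A 2-tap average has gain cos(pi*f/fs); at 21,345 Hz that is roughly 0.17 (about -15 dB).
print("peak of mixed track:", np.max(np.abs(mixed)))
print("predicted gain:     ", np.cos(np.pi * f / fs))

# Step 7: upsample x4 to 192 kHz; the smooth sine is recovered from the same data.
# Plotting 'up' sample-by-sample is the scripted equivalent of the lollipop display.
up = resample(mixed, 4 * len(mixed))
print("peak after 4x resample:", np.max(np.abs(up)))

# Step 10: spectrum of the mixed signal - a single component at 21,345 Hz.
spec = np.abs(np.fft.rfft(mixed)) / (len(mixed) / 2)
peak_bin = np.argmax(spec)
print("spectral peak at", peak_bin * fs / len(mixed), "Hz")
```

If it behaves as expected, the mixed track's peak should come out around 0.17 of the originals, matching the cos(pi*f/fs) gain of averaging two adjacent samples, and the spectrum should show only the 21,345 Hz component.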
I THINK this is what Chris has been talking about for the past few pages. We're advancing through the buffer(s) at a rate of one playback sample per one recorded sample (no upsampling or downsampling, no frequency shift). The mix ratio is always 50:50, so the effective fractional delay is always 0.5. Therefore, the resultant waveform is always the arithmetic midpoint between sample[t] and sample[t-1] (or between sample[t] and sample[t+1], depending on how you want to look at it). When we examine the mixed waveform (made up of all midpoints and none of the original end points), we are still left with a band-limited sine wave, albeit one that has undergone a phase shift and a reduction in amplitude. Lower-frequency sine waves are hardly touched by either of these effects. If you want to see it for yourself, this is left as an exercise for the reader!
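For what it's worth, here is a minimal sketch of that exercise. Rather than regenerating tones in Audacity, it simply evaluates the gain and phase of the 50:50 adjacent-sample average directly, using the standard two-tap-average results (gain cos(pi*f/fs), phase that of a half-sample delay), at a few frequencies including the 21,345 Hz tone used above.

```python
# Quick check of the "exercise for the reader": how the 50:50 midpoint mix
# (a two-tap average, i.e. a 0.5-sample fractional delay) treats other frequencies.
# Gain of averaging adjacent samples: cos(pi*f/fs).
# Phase shift of a half-sample delay: 180*(f/fs) degrees.
import numpy as np

fs = 48000
for f in (100, 1000, 5000, 10000, 21345):
    gain = np.cos(np.pi * f / fs)
    phase_deg = 180.0 * f / fs
    print(f"{f:>6} Hz: gain {gain:.4f} ({20*np.log10(gain):+.2f} dB), "
          f"phase shift {phase_deg:.1f} deg")
```

At 1 kHz the loss works out to roughly 0.02 dB with a phase shift under 4 degrees, which is why lower-frequency sine waves look essentially untouched, while the 21,345 Hz tone loses about 15 dB.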