Timing accuracy in VM.
-
- Posts: 146
- Joined: Sun Jan 22, 2023 5:18 am
- Location: Melbourne
- Contact:
Timing accuracy in VM.
I've been trying to track down a timing bug in a module I'm working on. Basically, it generates triggers and gates, but the timing is not as precise as I thought it should be.
For example, if I try to generate triggers every 100ms, what I observe coming out is a gap between triggers of around 104.5ms, give or take a small amount. I can't see anything in my code that would cause this, so I tried an experiment.
I know that System.nanoTime(), which I used for timing, isn't going to be an ultra-accurate clock, but I thought it would be better than this.
So I tried an experiment where I pass the standard CA LFO module (as a 10Hz square wave) into the standard CA Gate to Trigger module and look at the time between received triggers coming out, and the problem is almost the same. The difference is that the pulse gap jumps back and forth between ~104ms and ~93ms in such a way that the overall average gap is 100ms. It's as if it was designed knowing there would be a timing problem and so compensates as it goes.
Has anyone seen this type of behaviour?
I'm going to try using calls to ProcessSample() as the basic clock signal to see if that gives me better consistency.
Peter
Re: Timing accuracy in VM.
OK, that's interesting. Using the number of calls to ProcessSample() as the clock, I get the same behaviour I was getting with the CA Gate to Trigger module: the time between triggers jumps back and forth between ~104ms and ~93ms, but it maintains a stable average time of 100ms.
That seems to be the best solution for the moment but the timing variation is much larger than I was expecting.
FYI, I'm running this on a 2014 iMac (4 GHz Quad-Core Intel Core i7, 32 GB 1600 MHz DDR3).
Any thoughts from anyone?
Is this suggesting that there is substantial jitter in the VM sampling clock?
Peter
Re: Timing accuracy in VM.
I could be misunderstanding you but the following might help...
VM calculates data as fast as it can, in output-buffer-sized chunks, rather than in real time.
So although ProcessSample() gets called once per sample, it isn't being called at a steady rate. Instead it's being called in bursts.
Therefore, time calculations should be done with reference to where you are in the buffer stream rather than what time it is "in the outside world".
So if you want to output a trigger every 100 ms, you need to count the calls to ProcessSample() and output a trigger every 4,800 calls (at a 48 kHz sample rate) rather than try to measure 100 ms with a realtime timer.
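A minimal sketch of that call-counting approach, assuming a 48 kHz sample rate (the TriggerClock class and its names are illustrative, not part of the VM API):

```java
// Sketch of sample-counting trigger timing as Colin describes.
// TriggerClock is an illustrative helper, not a real VM class.
public class TriggerClock {
    private final int samplesPerTrigger;
    private int counter = 0;

    public TriggerClock(double sampleRate, double periodMs) {
        // e.g. 48000 Hz * 0.1 s = 4800 calls per 100 ms trigger
        this.samplesPerTrigger = (int) (sampleRate * periodMs * 1e-3);
    }

    // Call once per ProcessSample(); returns true when a trigger should fire.
    public boolean tick() {
        counter++;
        if (counter >= samplesPerTrigger) {
            counter = 0;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        TriggerClock clock = new TriggerClock(48000.0, 100.0);
        int triggers = 0;
        for (int i = 0; i < 48000; i++) {   // one simulated second
            if (clock.tick()) triggers++;
        }
        System.out.println(triggers);       // prints 10
    }
}
```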
Re: Timing accuracy in VM.
The mechanism I was using to measure the time between triggers was System.nanoTime(). That is perhaps the problem: perhaps it is seriously inaccurate, far worse than I thought it would be.
I get accurate 1000ms measurements when I use the sample rate clock to measure them but that isn't surprising because, even if there were substantial jitter in that clock, the jitter would be common between the generating and measuring modules.
It might be interesting to send a series of trigger pulses out of VM into some other measuring device to see if the timing is stable, but I suspect it's just the inaccuracy of nanoTime() that is the heart of the problem.
Peter
Re: Timing accuracy in VM.
Colin, no, you understood me precisely and, as I have just learned with a bit of experimentation, that is exactly what my problem was. I'm too used to thinking in real-time data acquisition and had forgotten about the buffering issue.
ColinP wrote: ↑Wed Apr 19, 2023 12:29 pm
So although ProcessSample() gets called once per sample it isn't been called at a steady rate. Instead it's being called in bursts.
Therefore time calculations should be done in reference to where you are in the buffer stream rather than what time it is "in the outside world".
Thanks, once again.
Peter
- honki-bobo
- Posts: 310
- Joined: Sat Nov 09, 2019 1:18 pm
Re: Timing accuracy in VM.
The VoltageModule has a GetTotalSamplesMixed() method that serves as a clock in Voltage Modular. It returns the number of samples mixed since VM was started.
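A sketch of how one might turn such a running sample count into a periodic clock. Only the idea of feeding it GetTotalSamplesMixed() comes from the VM API as described above; the SampleClock class and its names are my own illustration:

```java
// Sketch: using a running total-samples count (e.g. from
// GetTotalSamplesMixed(), per honki-bobo) as a timing clock.
public class SampleClock {
    private final long periodInSamples;
    private long nextDeadline;

    public SampleClock(double sampleRate, double periodMs, long startSamples) {
        this.periodInSamples = (long) (sampleRate * periodMs * 1e-3);
        this.nextDeadline = startSamples + periodInSamples;
    }

    // Pass in the latest total-sample count; returns true once per period.
    public boolean elapsed(long totalSamples) {
        if (totalSamples >= nextDeadline) {
            nextDeadline += periodInSamples;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        SampleClock clock = new SampleClock(48000.0, 100.0, 0);
        int fired = 0;
        for (long t = 0; t <= 48000; t += 64) {  // poll every 64 samples
            if (clock.elapsed(t)) fired++;
        }
        System.out.println(fired);               // prints 10
    }
}
```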
Re: Timing accuracy in VM.
honki-bobo wrote: ↑Thu Apr 20, 2023 5:57 am The VoltageModule has a GetTotalSamplesMixed() method that serves as a clock in Voltage Modular. It returns the number of samples mixed since VM was started.
Thanks, that's very handy to know.
Peter
Re: Timing accuracy in VM.
As a general rule of thumb, you should never rely on actual time, as DSP does not deal in time but in samples. Everything related to timing (and related to the audio processing) must be calculated in samples.
So let's say you want a trigger output to emit a trigger pulse every 100ms. Then you have two options:
a) calculate the number of samples that matches this time period as closely as possible, or
b) calculate a fractional increment that represents the time period.
For option a) things are quite easy:
Code: Select all
int samples = (int)(Values.SampleRate*timeInMs*1e-3);
Then in your ProcessSample function you count up to that number of samples and reset the counter. This will not be perfectly accurate, as your minimum time interval is given by the sample rate: if your time period is not a whole number of samples, your timing will be off, and the error accumulates with every iteration. Depending on the context this is usually fine though.
For option b) things are more interesting. You get the fractional increment with
Code: Select all
double phi = 1.0/(Values.SampleRate*timeInMs*1e-3);
Now in your ProcessSample function you accumulate this increment until you cross the 1.0 boundary. The difference from sample counting is that you do not reset to 0; instead you carry the overshoot into the next iteration, so something like this:
Code: Select all
accumulator += phi;
if (accumulator >= 1.0)
{
    accumulator -= (int)accumulator; // keep the fractional overshoot
}
With this you will still be off by up to the minimum time period given by the sample rate, but the error does not accumulate. However, your signal now moves back and forth by a sample or two to account for the rounding.
Again, both methods are usually fine; which one to take depends on the context, but usually the second option is the better fit.
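Putting option b) together, a minimal self-contained sketch of the fractional-accumulator clock (class and names are illustrative; 48000.0 stands in for Values.SampleRate):

```java
// Sketch of the fractional-accumulator timing (option b above).
// PhaseClock is an illustrative helper, not a VM API class.
public class PhaseClock {
    private final double phi;         // fraction of a period per sample
    private double accumulator = 0.0;

    public PhaseClock(double sampleRate, double periodMs) {
        // one full cycle (0.0 -> 1.0) per trigger period
        this.phi = 1.0 / (sampleRate * periodMs * 1e-3);
    }

    // Call once per sample; true when the accumulator wraps past 1.0.
    public boolean tick() {
        accumulator += phi;
        if (accumulator >= 1.0) {
            accumulator -= (int) accumulator;  // carry the overshoot
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        PhaseClock clock = new PhaseClock(48000.0, 100.0);
        int triggers = 0;
        for (int i = 0; i < 480000; i++) {  // ten simulated seconds
            if (clock.tick()) triggers++;
        }
        // ~100 triggers over 10 s: no long-term drift
        System.out.println(triggers);
    }
}
```

Individual triggers land within a sample of the ideal instant, but the count over a long run stays locked to the requested period, which is the whole point of carrying the overshoot.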
-
- Posts: 625
- Joined: Mon Nov 15, 2021 9:23 pm
Re: Timing accuracy in VM.
Not unlike how most LFOs do their time bases. Nifty, Chris.
Reid
Cyberwerks Heavy Industries -- viewforum.php?f=76
Re: Timing accuracy in VM.
I did a little profiling on this when I initially ported my code from JUCE to Voltage Modular. My computer runs natively at 48k with a 256-sample buffer on a MOTU interface. I profiled every call to ProcessSample() over one second; in other words, I measured the amount of time between each call to ProcessSample() and ran simple statistics.
Here is what I found based on 1 second, i.e. 48000 samples:
* avg=20.78us
* stdev=295.57410us
To figure out what was going on, I then removed all profile samples that were larger than 10 times the average and kept track of how many profiling samples I removed. Here is what I found:
* "cleaned" stdev=18.74794us [Comment: still surprisingly high in my opinion - so lots of jitter]
* number of removed profiling samples = 187.0/188.0 [Comment: this essentially corresponds to 48000/256 = 187.5]
It is pretty clear that VM does something substantial on the buffer boundary of my interface. As I believe VM uses JUCE, this is essentially what I would expect.
Just for kicks, I measured the largest profile value, and I found it to typically be around 5000 us, or 5 ms. This again corresponds to roughly one buffer of 256 samples (256/48000 ≈ 5.3 ms).
My conclusion: I think of the 48k rate simply as an upper bound on how much time I have to complete my processing. Ideally, my maximum processing time needs to be substantially shorter than the ~20 us per sample, and that is how I currently profile my own code.
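The cleanup step described above (strip gaps larger than 10x the mean, count what was removed) can be sketched as plain Java. The data here is synthetic, chosen to mimic the reported pattern of ~1 us gaps inside a burst plus a stall on each 256-sample buffer boundary; the class and method names are mine:

```java
// Sketch of the profiling analysis described above: given a list of
// measured gaps between ProcessSample() calls, strip outliers above
// 10x the raw mean to separate buffer-boundary stalls from jitter.
import java.util.ArrayList;
import java.util.List;

public class CallGapStats {
    public static double mean(List<Double> xs) {
        double sum = 0.0;
        for (double x : xs) sum += x;
        return sum / xs.size();
    }

    // Keep only gaps at or below 10x the raw mean, as in the post.
    public static List<Double> stripOutliers(List<Double> gaps) {
        double limit = 10.0 * mean(gaps);
        List<Double> kept = new ArrayList<>();
        for (double g : gaps) if (g <= limit) kept.add(g);
        return kept;
    }

    public static void main(String[] args) {
        // Synthetic gaps in microseconds: ~1 us within a burst, a
        // ~5000 us stall every 256 samples (one buffer at 48 kHz).
        List<Double> gaps = new ArrayList<>();
        for (int i = 1; i <= 48000; i++) {
            gaps.add(i % 256 == 0 ? 5000.0 : 1.0);
        }
        System.out.printf("raw mean = %.2f us%n", mean(gaps));
        List<Double> cleaned = stripOutliers(gaps);
        System.out.println("removed = " + (gaps.size() - cleaned.size()));
        // removed = 187, matching the 48000/256 = 187.5 buffer boundaries
    }
}
```

The raw mean lands near 1s/48000 ≈ 20.8 us even though almost every individual gap is far from it, which is exactly the burst pattern the thread describes.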