Detecting Presence of Channels on PolyJack

This has never come up for me before, but now I have a situation where I'd like to bypass a very CPU-expensive operation if the inputs aren't actually there on a populated PolyJack. The best I can think up is to monitor each channel looking for runs of zero longer than some threshold. That wouldn't be an ideal solution, obviously, because there might be an actual signal present with long drop-outs, or, again, just the gap between notes. Toggling in and out of my internal bypass might introduce artifacting as well. Preferable would be some way to directly tell if there's actually a signal on each poly channel. Anybody got anything? Thanks.
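For concreteness, this is roughly the shape of the workaround I'm picturing. It's only a sketch; polyInput, MAX_POLY_CHANNELS and SILENCE_THRESHOLD_SAMPLES are placeholder names of my own, not anything from the SDK.
Code:
// Sketch of the zero-run idea; polyInput, MAX_POLY_CHANNELS and
// SILENCE_THRESHOLD_SAMPLES are placeholders, not SDK identifiers.
private const int MAX_POLY_CHANNELS = 16;
private const int SILENCE_THRESHOLD_SAMPLES = 4800;    // roughly 100 ms at 48 kHz
private readonly int[] zeroRunLength = new int[MAX_POLY_CHANNELS];

// Call once per sample for each channel; returns true when the channel has
// read exactly 0.0 V for longer than the threshold and could be bypassed.
private bool ChannelLooksIdle(int channel)
{
    double v = polyInput.GetPolyValue(channel);
    if (v == 0.0)
    {
        if (zeroRunLength[channel] < SILENCE_THRESHOLD_SAMPLES)
        {
            zeroRunLength[channel]++;
        }
    }
    else
    {
        zeroRunLength[channel] = 0;
    }
    return zeroRunLength[channel] >= SILENCE_THRESHOLD_SAMPLES;
}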
Reid
Re: Detecting Presence of Channels on PolyJack
UrbanCyborg wrote: ↑Sat Mar 30, 2024 5:35 pm
    The best I can think up is to monitor each channel looking for runs of zero longer than some threshold. That wouldn't be an ideal solution, obviously, because there might be an actual signal present with long drop-outs, or, again, just the gap between notes.

So, to make an analogy, you want to implement a 'noise gate' on the expensive processing?
Unfortunately, I can't think of any solution at the moment other than the non-ideal one you outline in the OP. The trouble is that a signal that reads 0.0 V for many samples is just as much a valid 'time-variant signal' as anything else (a unipolar square-wave LFO, or a sequencer CV output, for example).
Any assumptions beyond that will probably bite you in the butt at the most inopportune times, due to Sod's Law.
Sorry I can't be much more helpful or hopeful.
______________________
Dome Music Technologies
Re: Detecting Presence of Channels on PolyJack
Without some extra API support there's no way you could know whether a particular channel is in use.
The only thing I can think of is to use caching - in other words, if the channel's input value is the same as it was in the previous sample, then use the result you calculated last time instead of calculating it afresh. But obviously this only works if there's a simple one-to-one relationship between input values and output values.
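In code terms it's nothing more than this sort of thing (just a sketch; CalculateExpensiveResult stands in for whatever the costly operation actually is, and 16 channels is an arbitrary choice):
Code:
// Per-channel last-value cache (sketch only).
private readonly double[] lastInput = new double[16];
private readonly double[] lastOutput = new double[16];
private readonly bool[] cachePrimed = new bool[16];

private double ProcessChannel(int channel, double input)
{
    // If the input hasn't changed since the previous sample, reuse the result
    if (cachePrimed[channel] && input == lastInput[channel])
    {
        return lastOutput[channel];
    }
    lastInput[channel] = input;
    lastOutput[channel] = CalculateExpensiveResult(input);   // the expensive bit
    cachePrimed[channel] = true;
    return lastOutput[channel];
}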
Re: Detecting Presence of Channels on PolyJack
Yeah, that's about what I was afraid of. Thanks for the responses, guys. In implementing the second-best solution, I've run into most of the problems you've mentioned, so I guess I'll just have to bite the bullet and do the expensive processing. Thanks.
Reid
Re: Detecting Presence of Channels on PolyJack
ColinP wrote: ↑Sun Mar 31, 2024 3:18 pm
    The only thing I can think of is to use caching - in other words, if the channel's input value is the same as it was in the previous sample, then use the result you calculated last time instead of calculating it afresh. But obviously this only works if there's a simple one-to-one relationship between input values and output values.

I always use caching when converting from V/Oct pitch to raw frequency.
In my first modules (Big Rat and Smol Rat), I used the (expensive) full 64-bit floating-point operations of raising 2 to the power of x and e to the power of x. I used caching of the input values (from knob and CV) to determine whether I needed to recalculate during this iteration of ProcessSample:
Code:
//---------------Snipped from BigRat ProcessSample--------------------
// Check for Rise Time value changes
processChange = false;
currentRiseTimeKnob = smoothRiseTime.GetSmoothValue();
if (currentRiseTimeKnob != previousRiseTimeKnob)
{
    processChange = true;
    previousRiseTimeKnob = currentRiseTimeKnob;
}
if (riseTimeCVConnected)
{
    currentRiseTimeCV = riseTimeCV.GetValue();
    if (currentRiseTimeCV != previousRiseTimeCV)
    {
        processChange = true;
        previousRiseTimeCV = currentRiseTimeCV;
    }
}
if (processChange)
{
    CalculateAttackTime();
}
//---------------End of Snip from ProcessSample-------------------------

private void CalculateAttackTime()
{
    double timePeriod; // Base time period (knob * time base multiplier)
    double temp;
    double voltsPerOctaveMultiplier;

    // Get base period / freq as set by front panel knob
    temp = timeBaseMultiplier / currentRiseTimeKnob;          // Basic RC time period
    // Modify base period by V/Oct CV input
    voltsPerOctaveMultiplier = pow(2.0, currentRiseTimeCV);   // Shorten period with increasing CV
    temp *= voltsPerOctaveMultiplier;
    attackTimeFactor = exp(temp);
}
In later modules I used the quicker 32-bit float operations Values.FastTwoToTheX and Values.FastEToTheX, without any audible impact.
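For anyone wondering what a 'fast' 2-to-the-x looks like in general, here's the usual shape of the trick. To be clear, this is only a sketch of the common approach, not the actual implementation of Values.FastTwoToTheX; the table range and polynomial coefficients are my own assumptions, and the cubic fit is good to roughly 0.01-0.02 %, i.e. well under a cent of pitch.
Code:
// Sketch of a cheap 2^x: exact table lookup for the integer part plus a cubic
// fit of 2^f on [0, 1) for the fraction. Assumes x stays within -32 .. +32.
// NOT the implementation of Values.FastTwoToTheX -- just the general idea.
private static readonly float[] pow2IntTable = BuildPow2Table();

private static float[] BuildPow2Table()
{
    float[] table = new float[65];                  // integer exponents -32 .. +32
    for (int i = 0; i < table.Length; i++)
    {
        table[i] = (float)Math.Pow(2.0, i - 32);
    }
    return table;
}

private static float FastTwoToTheXSketch(float x)
{
    int whole = (int)Math.Floor(x);                 // integer octaves
    float frac = x - whole;                         // fractional octave in [0, 1)
    // Cubic fit of 2^frac, constrained to hit 1.0 at frac = 0 and 2.0 at frac = 1
    float poly = 1.0f + frac * (0.6958f + frac * (0.2252f + frac * 0.0790f));
    return pow2IntTable[whole + 32] * poly;         // exact 2^whole from the table
}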
To further enhance efficiency, in Micro Stringer and Pink Flight I was able to 'gate' the processing by ignoring a channel if its envelope generator was in the "OFF" phase:
Code:
if (channelEnvelope[i].GetStage() != ADSREnvelope.ENV_STAGE.ADSR_Stage_Off)
{
    // Only process this channel if audible
    if (polyPitchConnected)
    {
        tempDbl = polyInputJackPitch.GetPolyValue(i);
    }
    else
    {
        tempDbl = GetIOPanelPolyValue(IOPanelPolyValues.IOPanel_PolyPitch, i);
    }
    tempFloat = (float)tempDbl;
    if (tempFloat != channelPitch[i])
    {
        channelPitch[i] = tempFloat;
        // Change in pitch voltage
        tempFloat = LOWEST_C_FREQ * Values.FastTwoToTheX(tempFloat);
        channelFrequency[i] = tempFloat;
    }
    tempFloat = channelFrequency[i] * pitchMultiplier;
    channelOscillator[i].SetFrequency(tempFloat);
    channelOscillator[i].AdvanceSample();
    oscillatorOutput = channelOscillator[i].GetSawtoothValue();
    totalMix += oscillatorOutput * channelEnvelope[i].GetValue() * FIVE_VOLTS_ATTEN;
}
} // closes the per-channel loop (loop header not shown in this snip)
______________________
Dome Music Technologies
Re: Detecting Presence of Channels on PolyJack
Oh yes, caching is a very useful tool. I sprinkle it everywhere.
I'm sure I don't need to give any tips to Grant or Reid, but for any novice programmers reading, there are some things to be wary of.
Testing for a cache hit or miss has a cost. Modern CPUs use pipelining and branch prediction, and taking an unpredicted branch is expensive because the speculative work in the pipeline is thrown away. So sometimes the cost of caching is actually higher than the savings; it's only worth doing to avoid genuinely expensive computation. Adding a branch just to skip one or two floating-point multiplications is counter-productive.
It's an ever-changing environment, different users have different CPUs, and profiling only tells you about your own particular CPU, but as a rough rule of thumb I go by 1 unit of time for simple instructions like add, 3 units for multiplication, 5 for division, and about 50 for things like cos. Predicted branches cost about 3, "wrong" branches about 10, and function calls about 20.
I'd value other people's rules of thumb because I'm not at all confident about these numbers; they are very ballparky.
Having said that, I think the future trend in CPU design is to attack the cost of "wrong" branches by devoting more speculative execution in that direction, so anything that works today could be useless in 5 or 10 years' time.
The next issue is choosing an invalid cache identifier for initialization. The standard approach, if the data is known to always be >= 0, is to use -1, but one fairly common type of bug is when you make a modification that introduces negative numbers while forgetting (or never even knowing) that -1 is treated as an out-of-range value by some bit of caching code buried in the depths.
If the data can take any value, how do we pick an invalid identifier? Well, it's very, very naughty, but if you have a 64-bit number you can pick something at random and use that. Don't tell anyone I said that, though. As 2 to the power 64 is roughly 10 to the 19th, the odds of a random signal hitting it are about 10,000,000,000,000,000,000 : 1, provided you pick something that's really, really random. Even with a CPU counting up at 100 GHz it would take about 300 years for it to visit every number. So don't use this for mission-critical coding, but it's a dirty workaround that works in the real world.
The next issue is flushing the cache. Sometimes you need to recalculate regardless, so I use the method name defeatOptimization in my classes so I always know what to call. If you're paranoid you can even call defeatOptimization once a second "just in case". It's a bit like how NASA bans while loops in favour of for loops that exit after some ridiculously large number of iterations; this is why satellites and rovers sometimes mysteriously come back to life. They wake up when they eventually exit a for loop that some paranoid old programmer insisted on to cover the risk of code going astray.
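To tie those last two points together, here's a sketch of the sort of thing I mean. A separate 'valid' flag sidesteps the magic-sentinel problem entirely and gives you a one-line flush. (In real per-sample code you'd inline the calculation rather than pay for a delegate call, of course; the class and method names here are just illustrative.)
Code:
// Sketch: a separate flag instead of a -1 sentinel, plus an explicit flush.
public class CachedValue
{
    private double cachedInput;
    private double cachedOutput;
    private bool valid;                      // no magic "invalid" value needed

    // Flush the cache so the next Get() is forced to recalculate
    public void DefeatOptimization()
    {
        valid = false;
    }

    public double Get(double input, Func<double, double> calculate)
    {
        if (!valid || input != cachedInput)
        {
            cachedInput = input;
            cachedOutput = calculate(input); // the expensive calculation
            valid = true;
        }
        return cachedOutput;
    }
}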
There were probably one or two other things I meant to say about caching, but that's enough hot air from me when I should be doing actual work...
Whoops, I think I got carried away with the zeros. There aren't many CPUs that can count at 100 GHz; I meant to say it would take about 300 years if counting at a rate of 1 GHz.