intelliger 64 leaky softy naive
The attached .fsm implements a 64-tap Widrow-Hoff LMS adaptive filter. I am calling this an "intelliger".
Such a device is self-learning, capable of discriminating, categorizing, identifying, cloning, and anticipating.
Here it is configured to learn (clone, identify) the transfer function of a given device, any kind of device, a "plant" if you like, whose input and output are accessible. The device in this example is a loudspeaker exhibiting highpass, bell-resonance, and lowpass behavior. Thus, here, the "plant" is a loudspeaker (modeled digitally for convenience).
The intelliger's "mu" scale factor, which governs the learning speed / precision tradeoff, can be adjusted from -200 dB (slow learning) to -100 dB (fast learning).
The intelliger is leaky.
The 64 FIR filter weights(i) exhibit a tendency to slowly return to zero.
The leak can be adjusted from -120 dB (slow return to zero) to -80 dB (fast return to zero).
The intelliger is softy.
A non-softy intelliger computes its weights(i) as the time-integral of "the error multiplied by the input(i)".
A softy intelliger computes its weights(i) as the time-integral of the signed square root of "the error multiplied by the input(i)". The idea behind this refinement is that non-softy intelligers generate harsh signals, emanating from the multiplication of one signal (the error signal) by another signal (the input(i)). Conceptually, a softy intelliger replaces the error signal by its signed square root, and replaces the input(i) signal by its signed square root. This way the multiplication result still has the "dimension" of a signal, and its spectrum is softer. Arithmetic allows a calculation simplification: signed square root(a) * signed square root(b) = signed square root(a * b).
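In C-like terms, one sample of the leaky softy 64-tap update looks roughly like this. The names (ssqrt, mu, leak) and the mapping of the dB settings to linear factors (e.g. -100 dB as roughly 1e-5) are illustrative assumptions, not the exact .fsm internals:

#include <math.h>

#define NTAPS 64

/* signed square root: ssqrt(v) = sign(v) * sqrt(|v|) */
static float ssqrt(float v)
{
    return (v < 0.0f) ? -sqrtf(-v) : sqrtf(v);
}

/* One sample of the leaky softy LMS update.
   x[]  : the last NTAPS input samples (x[0] = newest)
   w[]  : the adaptive FIR weights
   d    : the plant output (desired signal)
   mu   : learning-rate factor (assumed linear, e.g. 1e-5 for "-100 dB")
   leak : leakage factor (assumed linear, e.g. 1e-4 for "-80 dB"); 0 = no leak
   Returns the error signal d - y. */
static float intelliger_step(const float *x, float *w, float d,
                             float mu, float leak)
{
    float y = 0.0f;
    for (int i = 0; i < NTAPS; i++)
        y += w[i] * x[i];                            /* FIR output */

    float e = d - y;                                 /* error signal */

    for (int i = 0; i < NTAPS; i++)
        w[i] += mu * ssqrt(e * x[i]) - leak * w[i];  /* softy + leaky update */

    return e;
}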
There is no automatic gain and no filter whatsoever in the "error signal" path. A delayed or phase-corrupted "error signal" ruins the intelliger's learning capability.
The learning capability is real and outstanding.
The learning is fast when "mu" = -100 dB.
The learning is still present (albeit slow) when "mu" = -200 dB.
The precision is on the order of 1% when progressively dialing "mu" from -100 dB (fast, coarse learning) down to -200 dB (slow, fine learning).
Unfortunately, the intelliger appears to fool itself above Fs/8.
Upon reaching a 1% precision below Fs/8, the intelliger becomes incapable of steering its gain correctly past Fs/4.
The 1% precision I am quoting corresponds to a measured 40 dB difference between the "plant" signal spectrum and the "error" signal spectrum (20*log10(0.01) = -40 dB).
Unfortunately, in the absence of a "leak" function, instead of converging to better than 1% precision, the intelliger becomes unstable. Above Fs/4, the intelliger gain starts gradually increasing, as if plagued by resonance, despite the continuous learning process. The only way to stop the gradual gain increase above Fs/4 is to set "mu" to zero, which actually stops the learning. Thus there is no way back. Such is the fatal flaw.
The "leak" function allows a way back, as rescue.
Here is how to use the "leak" function.
Enable the "leak".
Dis-engage the learning during a few seconds (set "mu" = 0).
Wait until the intelliger gain above Fs/4, goes down.
Augment the "leak" in case the intelliger gain above Fs/4, doesn't go down fast enough.
As soon you see the intelliger gain above Fs/4, reaching a correct level, re-engage the learning.
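To sketch why the rescue works: with "mu" = 0 and the leak enabled, the weight update reduces to an exponential decay toward zero (roughly, and assuming the dB leak setting maps to a linear factor):

/* With "mu" = 0 the update reduces to w[i] -= leak * w[i], i.e. an
   exponential decay toward zero with a time constant of roughly 1/leak
   samples (leak about 1e-4 if "-80 dB" is a linear amplitude factor). */
static void rescue_step(float *w, int ntaps, float leak)
{
    for (int i = 0; i < ntaps; i++)
        w[i] *= (1.0f - leak);
}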
Unfortunately, once the "leak" function is enabled, it generates a continuous internal perturbation.
Now the best precision you can hope for below Fs/8 is 5% instead of 1%.
Thus the "unstable" system that is theoretically capable of a 1% precision is only capable of a 5% precision.
The "leak" function, as an instability fix, is far from optimal.
Let us try to improve the system.
This intelliger misbehavior above Fs/8 may be caused by the imperfect digital implementation of the required time-integral function. It may be the same instability that plagues the Agarwal-Burrus digital IIR filter when asked to produce cutoff frequencies higher than Fs/8.
The Agarwal-Burrus digital IIR filter can be described as a "naive Virtual Analog" filter, or "naive VA filter".
The Agarwal-Burrus digital IIR filter was popularized in Hal Chamberlin’s book Musical Applications of Microprocessors. See it here: https://www.earlevel.com/main/2003/03/02/the-digital-state-variable-filter/
The Agarwal-Burrus digital IIR filter was significantly improved over time by replacing its two naive integrators with more sophisticated devices that do a better job of emulating the time-integral.
See Vadim Zavalishin, The Art of VA Filter Design, chapter 3.6 (Trapezoidal integration):
https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_2.0.0a.pdf
I am thus formulating the conjecture that the 64-tap intelliger's stability and precision would be significantly improved by replacing its 64 naive integrators with 64 Vadim Zavalishin trapezoidal integrators.
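In C-like terms, the conjecture amounts to replacing the naive accumulator each weight currently uses with a trapezoidal (bilinear) integrator in the spirit of Zavalishin's chapter 3.6. A rough sketch (the structure names and the unit gain are illustrative, not the .fsm implementation):

/* Naive integrator (plain accumulator), as each weight effectively uses now:
   y[n] = y[n-1] + x[n] */
typedef struct { float y; } naive_int;

static float naive_step(naive_int *s, float x)
{
    s->y += x;
    return s->y;
}

/* Trapezoidal (bilinear) integrator, transposed direct form II, unit gain:
   y[n] = y[n-1] + 0.5*(x[n] + x[n-1]) */
typedef struct { float s; } trap_int;

static float trap_step(trap_int *st, float x)
{
    float y = st->s + 0.5f * x;
    st->s = y + 0.5f * x;
    return y;
}

Under the conjecture, each of the 64 weight accumulators would become a trap_step instance fed with the same "mu times softy correction minus leak" term.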
Please let me know
Have a nice day
- steph_tsf
- Posts: 249
- Joined: Sun Aug 15, 2010 10:26 pm
Re: intelliger 64 leaky softy naive
IIR, wouldn't that have a latency?
I often think it's a shame that the better sounding virtual analog stuff has latency issues. Maybe sometime I'll share my method for reducing phase cancellation without any latency whatsoever; it's exciting, but again, not analog at all.
Well, most of the better stuff with oversampling, I meant. I'm surprised you've taken the optimized tack. I'll look into your schematic further.
Last edited by wlangfor@uoguelph.ca on Sat Apr 04, 2020 2:07 pm, edited 1 time in total.
- wlangfor@uoguelph.ca
- Posts: 912
- Joined: Tue Apr 03, 2018 5:50 pm
- Location: North Bay, Ontario, Canada
Re: intelliger 64 leaky softy naive
Perhaps it would be useful to explain the applications difference between this and what, say, Acustica's VVK does.
We have to train ourselves so that we can improvise on anything... a bird, a sock, a fuming beaker! This, too, can be music. Anything can be music. -Biff Debris
-
Duckett - Posts: 132
- Joined: Mon Dec 14, 2015 12:39 am
Re: intelliger 64 leaky softy naive
wlangfor@uoguelph.ca wrote: Wouldn't that have a latency?
Please open the .fsm. You'll see there is no parasitic latency whatsoever. The 64-tap FIR filter does its best to learn (emulate, clone) in real time the IIR filter stack considered here as the "plant". The adjustable delay I've added before the IIR filter stack serves to show that the 64-tap FIR filter is able to quickly reconfigure itself to clone that delay. The fact that the FIR filter is 64 taps long doesn't imply that it adds any parasitic audio delay. Remember that the impulse response of a long FIR filter can be the same as that of any analog filter: say, a peak concentrated in the first three or four audio samples, followed by a more or less damped trailing section. Do analog filters induce audio delays? Yes, of course: an analog lowpass filter that's cutting -3 dB at Fc always introduces a delay, its "group propagation delay", and the lower the Fc, the longer the group propagation delay. If your hardware requires a 512-sample ASIO buffer for executing the 64-tap "learning" FIR filter, there will be a parasitic delay, of course not caused by the 64-tap FIR filter, but solely by the 512-sample ASIO buffer swapping.
wlangfor@uoguelph.ca wrote: I often think it's a shame that the better sounding virtual analog stuff has latency issues.
Looks weird. Can you please elaborate on that opinion, perhaps with a case study?
Your comments welcome
Last edited by steph_tsf on Sun Mar 22, 2020 1:35 am, edited 1 time in total.
- steph_tsf
- Posts: 249
- Joined: Sun Aug 15, 2010 10:26 pm
Re: intelliger 64 leaky softy naive
Duckett wrote: Perhaps it would be useful to explain the applications difference between this and what, say, Acustica's VVK does.
What's VVK?
Virtual ... Veridic ... Keyboard?
I went reading http://acustica-audio.com/pages/specials/deep-learning-basics "Thanks to our proprietary VVK Technology, we can faithfully sample studio hardware equipment and render it inside our plug-ins."
Not far from what "my" 64-tap intelliger is doing in real time, as far as time-invariant filters are concerned.
I believe they run many sessions in order to identify and clone the time-variant features of the "plant" they are dealing with. Picture a multiband noise gate / expander / compressor / limiter: you need to stimulate it at different levels, with different audio content, to uncover and clone its time-variant behavior.
I believe they also add a deconvolution branch to their process, in order to identify and clone echo and reverb.
I am positively impressed. The data they generate can serve heavy applications in forensics. Sometimes you need to know more, and deeper, than the audio message itself. Some military applications need to ascertain the location, I mean the acoustic environment, where the audio or sonar message got captured in the first instance.
- steph_tsf
- Posts: 249
- Joined: Sun Aug 15, 2010 10:26 pm
Re: intelliger 64 leaky softy naive
I am currently aiming at streamlining my DSP programming style, which is based on bits and pieces borrowed from Martin Vicanek's work. I mean DSP code. Not x86 assembly. Not Ruby.
http://www.dsprobotics.com/Files/V3/User%20Guide.pdf chapter 9 page 240 doesn't list:
- the "vertical bar" operator,
- the "abs" (absolute value) function,
- the "square root" (sqrt) function.
Question: Is there a supported operators list somewhere?
Question: Is there a supported functions list somewhere?
Upon reading page 242, I realized that one can declare and access indexed arrays.
Question: is this really supported and reliable? On Flowstone 3.0.4 it appears not to be supported.
Question: can somebody please publish a 64-sample circular buffer management example?
Upon reading page 243, I realized that memin() creates a Mem type connector on the component so that one can pass in data from outside.
Question: is this really supported and reliable? On Flowstone 3.0.4 it appears not to be supported.
Question: can somebody please publish a memin() example?
Question: is there a memout()?
Upon reading page 243, I realized one can program loops.
Question: is it allowed to rely on a loop counter that is a variable instead of a constant?
Considering the lack of "normal" conditional statements in Flowstone DSP code, what Google search keyword do I need in order to locate a utility that converts an "if (condition) then (true branch) else (false branch) end" code sequence into a Flowstone DSP code sequence?
Any help, much appreciated
Have a nice day
- steph_tsf
- Posts: 249
- Joined: Sun Aug 15, 2010 10:26 pm
Re: intelliger 64 leaky softy naive
Vertical bar (|): bitwise "or"
ampersand (&): bitwise "and"
abs(x): yes, there is a built-in abs function, however it is not very efficient. I don't have a list, but you can try and guess: if a particular function is not supported it will change the color inside the codebox.
Arrays are likewise inefficient in code. There is a stock delay module as an example of a circular buffer.
I won't comment on memin because I abandoned it long ago (too many crashes).
Loops have constant length in code. ASM is more flexible.
If/then/else: you have to work with true/false masks, which is sometimes a bit awkward. Example:
if (a > b) then
c = 1
else
c = 3
end if;
in FS code:
c = 3 - 2&(a > b);
- martinvicanek
- Posts: 1328
- Joined: Sat Jun 22, 2013 8:28 pm
Re: intelliger 64 leaky softy naive
Here's a very quick example of memin...
I'm fairly sure that FS3.0.4 should support it - my hunch is that you didn't declare the memin with a maximum size (the memin is unable to read this directly from the input connector).
A mem is simply an area of memory reserved as a buffer, of which the memin receives the base address. The memin's variable is then treated exactly as an array would be, allowing the value at any index to be both read and written. Hence, there is no memout, as it would serve no purpose (the whole point is that it's an array declared externally to the DSP/ASM primitive, so to speak).
If you only require a memory buffer for internal storage, a simple array would be equivalent; but a mem allows the data to be populated by a wave file or green float array, or to be shared between multiple DSP/ASM primitives.
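A loose C analogy of the idea (not FlowStone syntax): the processing code only receives a base address plus a length and indexes it like an array, and writes land directly in the caller's buffer, which is why no memout is needed.

/* The caller owns the buffer; the processing code just indexes it. */
static void process_mem(float *mem, int len)
{
    for (int i = 0; i < len; i++)
        mem[i] *= 0.5f;   /* read and write through the same pointer */
}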
All schematics/modules I post are free for all to use - but a credit is always polite!
Don't stagnate, mutate to create!
- trogluddite
- Posts: 1730
- Joined: Fri Oct 22, 2010 12:46 am
- Location: Yorkshire, UK