Weird built-in GraphFF Magnitude scaling

DSP related issues, mathematics, processing and techniques

Re: Weird built-in GraphFF Magnitude scaling

Postby steph_tsf » Tue Jun 18, 2013 7:05 pm

MyCo wrote:Regarding the first and last bin: The Guide is a little bit confusing at this point. The lines in the plot are drawn between the individual FFT results. That's why the first and last bands are smaller.
Narrower, if that is what you mean. However, can you please compare against the attached .jpg from Steven W. Smith? Surely the first and last frequency bands are narrower, but why, using GraphFF, or FFT followed by MagPhas, are their amplitudes +6 dB compared to the truth? I have the impression that somebody within SynthMaker/FlowStone applied the magnitude correction required for the first and last frequency bands, the correction Steven W. Smith is talking about, not once, but twice! Can you please check?
MyCo wrote:The first point is at x coordinate 0, the second at 1, the third at 2, and so on, so the first boundary is always at 0.5 while the second boundary is always at 1.5. Unless you want to draw a band amplitude graph (like in a graphic EQ) you can ignore that.
For sure I'm not going to ignore that; it's part of the FFT concept. Any FFT video rendering, whatever the FFT length, is only exact when graphically represented with bars like a graphic EQ, with the possibility to zoom/scroll the horizontal axis. Therefore, a nice graphing option while zooming, mandatory for really knowing what gets measured, is to display on the X axis the actual centre frequency of each FFT band. It is also mandatory to pay special attention to the first and last frequency bands, displaying them 50% narrower than the other ones. Any help with building the corresponding graphing engine is appreciated. As a first objective I intend to try this with an FFT-32 or FFT-64, which doesn't need zooming; I'll introduce zooming later on. Any help much appreciated.
MyCo wrote:In your examples you measure the DC offset... with a more excessive code than the one I posted... but you don't remove/reject it. Why? The DC offset can be significant and the FFT is then just displaying wrong data.
Ruby allows nice, elegant and efficient code. However, when metrology is involved (representing a physical value with scientific accuracy), I prefer relying on code that any scientist fluent in Fortran or Basic can audit at a glance, instead of dealing with language particularities (however remarkable, elegant, and much appreciated) like the ones Ruby introduces. I must say that Ruby not forcing you to declare variable types leaves some scientists cold, for instance regarding the fundamental difference between initializing v = 0 and v = 0.0; I know scientists who deeply dislike this. By the way, I enjoy the discussion, and it's a real pleasure dealing with you, MyCo. My learning curve gets enjoyable.

You say the FFT delivers wrong results if the signal has a DC offset. I wouldn't say the FFT is displaying wrong results. All I can say is that in the case of a short FFT, aggravated by an improper window type, and aggravated by the fact that SynthMaker/FlowStone wrongly displays the DC offset (twice the actual value), the first frequency band (the one containing DC) will bleed into the next frequency bands, in a way that's compliant with sampled-signal theory. I therefore prefer measuring the DC offset in parallel, using that little (ugly and non-optimized, but easily readable) Ruby code, gently saying "hello, are you aware that there is a DC offset in the signal equal to -0.0126 relative to full scale".
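For reference, here is a minimal sketch of that kind of DC-offset check in plain Ruby. It is not the code from the attached schematics, and all names are illustrative:

# Minimal DC-offset check, assuming the time-domain frame arrives as a plain
# Ruby array of Floats between -1.0 and +1.0. Illustrative only; this is not
# the code used in the attached .fsm files.
def dc_offset(frame)
  return 0.0 if frame.empty?
  sum = 0.0
  frame.each { |s| sum += s }   # plain accumulation, easy to audit
  sum / frame.length            # the mean value is the DC component of the frame
end

# Example: a -20 dB sine with a -0.0126 DC offset added.
frame = Array.new(32) { |i| 0.1 * Math.sin(2 * Math::PI * 7 * i / 32.0) - 0.0126 }
puts format("DC offset relative to full scale: %.4f", dc_offset(frame))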
Attachments
FFT_32 delivers 17 frequency samples.jpg

Re: Weird built-in GraphFF Magnitude scaling

Postby MyCo » Tue Jun 18, 2013 9:36 pm

Actually, I've never heard of this half-band problem before. I've been using the FFT for a very long time, and I've also done some code conversions, and I never noticed that there is a special adjustment case.
I think it's because the DSP Guide is talking about the DFT and not the FFT. The difference is, besides the DFT being way slower, that the DFT outputs N/2+1 samples while the FFT outputs N/2 samples. So this additional sample in the DFT is the one that gets split. I don't know exactly, but as I said, I've never heard about that band problem before and can't find anything about it in combination with the FFT on the net.

Re: Weird built-in GraphFF Magnitude scaling

Postby steph_tsf » Wed Jun 19, 2013 3:17 am

@MyCo: you are neither the first nor the last one to get trapped by this. You need to remember that the FFT is a particular DFT implementation, speed-optimized, running on time-domain sample counts that are powers of two. Thus, what Steven W. Smith writes about the DFT data format remains valid for the FFT data format.

The National Instruments documentation at http://www.ni.com/pdf/manuals/322194c.pdf confirms what I'm saying. This is the LabVIEW Sound and Vibration Toolkit User Manual, April 2004 Edition, Part Number 322194C-01. On pages 10-5 and 10-6 it reads: "For example, if 1,024 samples are input to the FFT algorithm, the computed spectrum has 512 non-DC spectral lines. The computed spectrum has a total of 513 lines including the DC component."

As a footnote, I'll say that another common misconception, possibly induced by National Instruments' inappropriate wording, is to assume that the first spectral line is the DC component. It is more correct to say that the first spectral band delivered by the FFT is the one extending towards DC. Steven W. Smith makes it very clear (see the .jpg in my previous message) that the first spectral band has a special, well-defined bandwidth (half that of the ordinary spectral bands), thus it includes more than just DC. In audio, if you set the first DFT or FFT spectral band to zero, you also cut the deep bass end.

It should be clear now that the SynthMaker/FlowStone GraphFF (method 1) and FFT followed by MagPhas (method 2), both running on N time-domain samples, output:
a) (N/2)+1 spectral bands including the one towards DC (and this is absolutely correct),
b) a weird global scaling, which is neither "Volts Full Scale" related nor "RMS" related; as a consequence a global multiplication correction factor is required, equal to 4.000 in "Volts Full Scale" and equal to 2.818 in "RMS" (see the sketch below),
c) a x2 overestimation of the first spectral band magnitude (the one towards DC),
d) a x2 overestimation of the last spectral band magnitude (the one towards Fs/2).
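A minimal Ruby sketch of the correction implied by points (b) through (d), assuming the magnitude Float Array is available as a plain Ruby array. The scale factor and the halving of the edge bins are the claims made above, not documented FlowStone behaviour, and all names are illustrative:

# Hedged sketch of the correction claimed above. 'mags' is the magnitude array
# of length (N/2)+1 as delivered by GraphFF, or by FFT followed by MagPhas.
# The 4.0 "Volts Full Scale" factor and the halving of the first and last bins
# reflect the claims in this post, not documented FlowStone behaviour.
def corrected_magnitudes(mags, scale = 4.0)
  out = mags.map { |m| m * scale }  # global scaling correction (2.818 for an RMS reading)
  out[0]  *= 0.5                    # first band (towards DC): undo the claimed x2 overestimation
  out[-1] *= 0.5                    # last band (towards Fs/2): same correction
  out
end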

Please take some time checking this matter using the two .fsm files I'm attaching.
Attachments
FFT_16 Spectrum Meter (normalized Rectangle window)(FFTmagbugfix).fsm
FFT_16 Spectrum Meter (normalized Rectangle window).fsm

Re: Weird built-in GraphFF Magnitude scaling

Postby trogluddite » Wed Jun 19, 2013 8:47 am

Very interesting stuff, steph - keep it coming :)
All schematics/modules I post are free for all to use - but a credit is always polite!
Don't stagnate, mutate to create!

Re: Weird built-in GraphFF Magnitude scaling

Postby steph_tsf » Thu Jun 20, 2013 2:26 am

I went and read the FlowStone User Manual, chapter 8, Ruby Component/Drawing. I realized how FlowStone + Ruby manage to greatly extend the graphing capabilities, far beyond what was attainable with SynthMaker.

For graphing a Truly Accurate Spectrum Meter In Linear Frequency Axis, I see two simple methods based on Ruby Component/Drawing.

Method A is based on a pen. The Float Array containing the magnitudes gets converted by Ruby into a larger Float Array containing the begin and end coordinates of each line to be drawn with the pen. For each spectral band, we draw a vertical line to reach the required dB level, then a horizontal line to span the required spectral bandwidth. We repeat this for all spectral bands, paying attention to the width of the first spectral band (half the others) and to the width of the last spectral band (also half the others). Before drawing a line, we ask ourselves whether it is visible or not: if the required dB level is below the minimum dB value of the Y axis, or above the maximum dB value of the Y axis, the horizontal line is not visible.

Method B is based on a brush instead of a pen. The Float Array containing the magnitudes gets converted by Ruby into a larger Float Array containing the begin and end coordinates of each vertical bar to be drawn with the brush. For each spectral band, we draw a vertical bar with the brush to reach the required dB level. We repeat this for all spectral bands, paying attention to the width of the first spectral band (we adjust the brush width to 50%) and to the width of the last spectral band (again, we adjust the brush width to 50%). Before drawing a vertical bar, we ask ourselves whether it is visible or not: if the required dB level is below the minimum dB value of the Y axis, the vertical bar is not visible. A rough coordinate-building sketch follows below.
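As a starting point, here is a minimal plain-Ruby sketch of the coordinate conversion described for Method B. It only builds x/y/width/height quadruples for each bar and deliberately leaves out the actual Ruby Component drawing calls; Method A would differ only in emitting line segments instead of rectangles. All names are illustrative and none of this comes from the attached schematics:

# Build bar rectangles [x, y, width, height] for a linear-frequency spectrum,
# from an array of magnitudes in dB. Pure Ruby; the FlowStone drawing calls
# themselves are left out on purpose. Illustrative names only.
def bar_rects(mags_db, plot_w, plot_h, db_min, db_max)
  bands  = mags_db.length               # (N/2)+1 bands
  band_w = plot_w.to_f / (bands - 1)    # full band width in pixels
  rects  = []
  mags_db.each_with_index do |db, i|
    next if db <= db_min                # bar not visible below the Y-axis floor
    db = db_max if db > db_max          # clip to the top of the Y axis
    h = (db - db_min).to_f / (db_max - db_min) * plot_h
    w = (i == 0 || i == bands - 1) ? band_w / 2 : band_w  # edge bands are half width
    x = (i == 0) ? 0.0 : (i - 0.5) * band_w
    rects << [x, plot_h - h, w, h]      # y grows downwards on screen
  end
  rects
end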

For getting a proper pixel correspondence, without bars getting blurred, merged, or vanishing, we shall define the FFT_512 layout as the archetype:
bar #000 : 1 pixel wide
bar #001 to 255 : 2 pixels wide
bar #256 : 1 pixel wide
Thus, a total width of 512 pixels, not including the borders.
Such a layout fits the 1024 x 600 screens found in low-cost netbook PCs.
I know it's not efficient, almost 50% waste because of doubling the normal line widths, but I want to try it this way first, as the canonical FFT graphing implementation. Let's drop bells and whistles like gradient fills.

Any shorter FFT will consist of normal bars that are 4 pixels wide, 8 pixels wide, 16 pixels wide, and so on; a small arithmetic sketch follows below.
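A small arithmetic sketch of that layout, assuming the 512-pixel archetype above and a power-of-two FFT size; this helper is hypothetical and not taken from the attached .fsm files:

# Pixel widths for the bar layout described above: the first and last bars are
# half as wide as the inner bars, and the whole plot stays 512 pixels wide.
# Hypothetical helper, not taken from the attached schematics.
def bar_pixel_widths(fft_size, plot_w = 512)
  bands = fft_size / 2 + 1           # e.g. FFT_512 -> 257 bands
  inner = plot_w / (fft_size / 2)    # e.g. 512 / 256 = 2 pixels for the inner bars
  [inner / 2] + Array.new(bands - 2, inner) + [inner / 2]
end

p bar_pixel_widths(512).first(3)      # => [1, 2, 2]
p bar_pixel_widths(256).first(3)      # => [2, 4, 4]
p bar_pixel_widths(512).inject(:+)    # => 512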

We'll consider such a Truly Accurate Spectrum Meter In Linear Frequency Axis as a pedagogic tool.
Once it works well, it may become a valuable addition to the STEM Examples Project initiated by Admin.

Can somebody show me a rough example? I feel that if I undertake this from scratch, I'll get nervous and lose a lot of time on details.

By the way, I'm curious to measure the CPU load of the three different versions:
- the GraphLin component version (unfortunately not truly respecting the FFT definition)
- the Ruby Component/Drawing version basing on a pen (Method A, above)
- the Ruby Component/Drawing version basing on a brush (Method B, above)

Your support, much appreciated.
Steph

Re: Weird built-in GraphFF Magnitude scaling

Postby steph_tsf » Fri Jun 21, 2013 5:07 am

Things are progressing. See attached .fsm.
Attachments
FFT-based Spectrum Meter (normalized Rectangle window)(FFTmagbugfix)(Lines01).fsm

Re: Weird built-in GraphFF Magnitude scaling

Postby steph_tsf » Fri Jun 21, 2013 7:57 pm

Can somebody check whether this Spectrum Meter can be considered accurate?

See attached .jpg and .fsm

Considering an FFT computed on N time-domain samples, the FFT delivers (N/2)+1 spectral bands.
Any decent computer should be able to run any FFT size up to 256, with a ticker period equal to 0.2 seconds.

The frequency scale only gets readable for FFT sizes equal to 32 or less (16, 8, or 4).
The frequency scale represents the start-stop frequencies of all spectral bands.

The first spectral band is the one starting at DC; that particular one has a bandwidth equal to Fs/(2*N).
The next spectral bands exhibit a frequency bandwidth equal to Fs/N.
The last spectral band is the one ending at Fs/2; that particular one has a bandwidth equal to Fs/(2*N).
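A small plain-Ruby sketch of the band boundaries and centre frequencies implied by the three lines above (illustrative names, not code from the attached .fsm):

# Start, stop and centre frequency of each spectral band of an N-point FFT at
# sample rate fs, with the first and last bands half as wide as the others.
# Illustrative helper only.
def band_edges(n, fs)
  bands = n / 2 + 1                        # (N/2)+1 bands, the DC band included
  step  = fs.to_f / n                      # ordinary band width = Fs/N
  (0...bands).map do |i|
    start = (i == 0)         ? 0.0      : (i - 0.5) * step
    stop  = (i == bands - 1) ? fs / 2.0 : (i + 0.5) * step
    { start: start, stop: stop, centre: i * step }
  end
end

p band_edges(32, 44_100.0).first(2)
# => first band 0 .. ~689 Hz (width Fs/64), second band ~689 .. ~2067 Hz (width Fs/32)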

Let's exercise it, configuring an FFT size equal to 32, and a signal amplitude set to 0.1, which corresponds to -20 dB in a system where the reference signal has an amplitude equal to 1, thus evolving between -1.0 and +1.0 in the time domain.

When adding DC to the signal, the level displayed by the first spectral band is stable, and corresponds to the level you would expect. Say you add 0.1 of DC to the input signal (which is supposed to range from -1 to +1). On this Spectrum Meter, the first spectral band will display a -20 dB level. Surely this is correct ...

When the signal is a sinus at a frequency that's very close to Fs/2, the level displayed by the last spectral band is unstable. There appears to be a beat phenomenon. Say the signal is 22,049.5 Hz at a -20 dB level. On this Spectrum Meter, the last spectral band will be pulsating at a 0.5 Hz rhythm, displaying a slowly varying level sitting anywhere between -20 dB and nihil. Is this correct?

When the signal is an 8.268 kHz sinus (set the knob a tad below the 0.2 value) at a -20 dB amplitude, the Spectrum Meter displays a nice clean spectrum. Only the 7th spectral band appears to be active. It displays a -20 dB level, as one would expect.

When the signal is an 8.985 kHz sinus (set the knob a tad above the 0.2 value) at a -20 dB amplitude, the Spectrum Meter displays a wide spectrum. All spectral bands appear to be active. The 7th and 8th spectral bands appear to be the most active, and they fluctuate anywhere between -23 dB and -25 dB. Reading such a spectrum, nothing tells you that the input signal is an ultra-stable -20 dB sinus signal at 8.985 kHz. The Spectrum Meter seems to be malfunctioning. The bad guy is the short FFT length; this is a mathematical thing. One must remember that an FFT_32 behaves as if it were analyzing a signal of infinite duration, made of chunks of 32 samples repeating indefinitely. Because the chunks are appended to each other, there is a discontinuity at the boundaries whenever the 32-sample chunk has a duration that's not a multiple of the sinus test signal period. Each boundary discontinuity introduces a Dirac pulse into the infinite-length signal. The longer the FFT, the bigger the dilution: with an FFT_32 you introduce a Dirac every 32 samples, while with an FFT_256 you only introduce a Dirac every 256 samples. Changing the FFT length to 256 restores selectivity and stability.
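To make the leakage effect tangible, here is a tiny plain-Ruby DFT sketch, purely illustrative and unrelated to the FlowStone FFT component: a sine that fits the 32-sample window exactly lands in a single bin, while one that does not fit smears energy over all bins.

# Naive DFT magnitude spectrum, illustrative only (not the FlowStone FFT).
# Demonstrates leakage when the sine does not fit the 32-sample window.
def dft_mags(x)
  n = x.length
  (0..n / 2).map do |k|
    re = im = 0.0
    x.each_with_index do |v, t|
      re += v * Math.cos(2 * Math::PI * k * t / n)
      im -= v * Math.sin(2 * Math::PI * k * t / n)
    end
    # Simple 2/N amplitude normalisation; note that it overestimates the DC and
    # Fs/2 bins by x2, which is precisely the scaling issue discussed in this thread.
    Math.sqrt(re * re + im * im) * 2.0 / n
  end
end

n = 32
fitting     = Array.new(n) { |t| 0.1 * Math.sin(2 * Math::PI * 6.0  * t / n) } # ~8.268 kHz at Fs = 44.1 kHz
not_fitting = Array.new(n) { |t| 0.1 * Math.sin(2 * Math::PI * 6.52 * t / n) } # ~8.985 kHz
p dft_mags(fitting).map     { |m| m.round(4) }  # a single bin near 0.1, everything else ~0
p dft_mags(not_fitting).map { |m| m.round(4) }  # energy smeared over all bins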

Let's revert to an FFT_32 and an 8.985 kHz sinus (set the knob a tad above the 0.2 value) at a -20 dB amplitude. We are not yet finished with it. As seen previously, the Spectrum Meter displays a wide spectrum, and all spectral bands appear to be active. The 7th and 8th spectral bands appear to be the most active, and they fluctuate. From there, both sides of the spectrum appear to decay smoothly; there is a clear pattern. Without changing anything in the setup, let's observe the behavior of the first spectral band (the one towards DC) and the last spectral band (the one towards Fs/2). Neither fits the smooth decay pattern. They would fit the smooth decay pattern better if they were 6 dB up. Is this normal?

At this point, there appears to be a contradiction.
1) The first spectral band appears to deliver the expected level in dB, compliant with the amount of DC we introduce into the measured signal. Indeed, adding 0.1 of DC to a sinus signal fluctuating between -0.1 and +0.1 delivers a spectrum having the first spectral band at -20 dB, and the spectral band corresponding to the frequency of the sinus also at -20 dB. We thus assume that the first spectral band got correctly calibrated.
2) We attempt the same calibration for the last spectral band, but unfortunately no firm calibration is possible, because when supplying the -20 dB "close to Fs/2" signal that's needed, we face a beat phenomenon. All we can see is the last spectral band fluctuating between the -20 dB level (the one we are expecting) and nihil. We thus assume that the last spectral band also got correctly calibrated.
3) Doing so, we have the impression that we have done everything right for calibrating the Spectrum Meter. However, when supplying an 8.985 kHz sinus signal (not fitting the sampling window), we observe that the first spectral band and the last spectral band are 6 dB below the ideal level that would ensure a smooth decay of the spectrum. Such is the apparent contradiction.

I think there is no contradiction at all. When harvesting noise, which is what the boundary discontinuity amounts to, the first spectral band and the last spectral band appear to be 6 dB below the expected levels, because they harvest half the frequency bandwidth of all the other spectral bands.

I think that my Spectrum Meter is correct, and correctly calibrated.
To achieve this I needed to apply a bug fix, the one I'm calling FFTmagbugfix.
Within FFTmagbugfix, I process the Float Array delivered by the SynthMaker/FlowStone built-in FFT and MagPhase components: I divide the magnitude of the first spectral band by 2, and I divide the magnitude of the last spectral band by 2. I think that such a bug fix is needed to achieve a correct Spectrum Meter calibration.

If the bug is confirmed, its origin is easy to understand.
Apparently, somebody on the SynthMaker/FlowStone staff realized there was a 6 dB drop at the edges when dealing with certain signals, like the 8.985 kHz sinus. To restore a smooth decay at the edges, they added 6 dB to the first and last spectral bands. It becomes clear why most people relying on such a buggy FFT say that it delivers wrong results in the presence of DC. Surely it delivers a wrong result in the presence of DC, as it overestimates the DC by 6 dB.

Am I right?
Attachments
FFT-based Spectrum Meter (normalized Rectangle window)(FFTmagbugfix)(Lines01).jpg
FFT-based Spectrum Meter (normalized Rectangle window)(FFTmagbugfix)(Lines01).fsm

Re: Weird built-in GraphFF Magnitude scaling

Postby steph_tsf » Sat Jun 22, 2013 8:47 am

Just added flexible windowing.
Can somebody check whether this Spectrum Meter can be considered accurate?

See attached .jpg and .fsm
Attachments
FFT-based Spectrum Meter (flexible windowing)(FFTmagbugfix)(Lines).fsm
FFT-based Spectrum Meter (flexible windowing)(FFTmagbugfix)(Lines).jpg

Re: Weird built-in GraphFF Magnitude scaling

Postby steph_tsf » Sat Jun 22, 2013 9:08 am

The dB grid is now automatically managed, using the dB_range parameter.
See the attached .fsm.
Attachments
FFT-based Spectrum Meter (flexible windowing)(FFTmagbugfix)(Lines).fsm

Re: Weird built-in GraphFF Magnitude scaling

Postby MyCo » Sat Jun 22, 2013 9:29 am

Looks OK to me. The fix seems to work, although I don't know why it is needed. Believe me, the devs have done nothing to the FFT code. I think they use a straightforward implementation, like we do for the stream FFT; there is no workaround in it. You can compare the FS FFT output with one of the free calculators on the internet, and they'll output exactly the same.
