
Has any one seen this window family?

Started by Cedron 4 years ago · 6 replies · latest reply 4 years ago · 213 views
This is a followup to my earlier forum posting:  Has any one seen this window function?

Nobody claimed to have seen that member or thought it was special.  I think otherwise.

Based on that window, I have discovered a previously unrecognized class of window functions.  What is particularly amazing about these is that the eigenvectors of the DFT seem to be regularly placed members of this family, or a variation thereof.  This provides not only a numerical method for calculating the exact minimal-width eigenvectors, but more importantly a concise mathematical definition of each family member.

I have written an article here: The Zeroing Sine Family of Window Functions

This is the definition of the Zeroing Sine Windows:

$$ ZSW_L[n] = \prod_{l=1}^{L} \sin \left( \frac{n+l}{N}\pi \right) $$

This is the definition of Raised Sine Windows:

$$ RSW_L[n] = \sin^L \left( \frac{n}{N}\pi \right) $$

The former is the "factorial" version, the latter the "power" version.  It should be clear that for a fixed value of L:

$$ \lim_{N \to \infty} ZSW_L = RSW_L $$

Another way to view that is that the Zeroing Sine Window is the exact discrete version and the Raised Sine Window is an approximation.
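
To see that convergence numerically, here is a minimal sketch (my own, not from the article; the function names zsw and rsw are mine) that builds both windows straight from the definitions above and compares their shapes, each scaled to unit peak, as N grows:

import numpy as np

def zsw(N, L):
    # Zeroing Sine Window: the product ("factorial") form
    n = np.arange(N)
    w = np.ones(N)
    for l in range(1, L + 1):
        w *= np.sin((n + l) * np.pi / N)
    return w

def rsw(N, L):
    # Raised Sine Window: the power form
    n = np.arange(N)
    return np.sin(n * np.pi / N) ** L

L = 3
for N in (16, 64, 256, 1024):
    z, r = zsw(N, L), rsw(N, L)
    # Compare shapes only, so scale each window to a unit peak
    print(N, np.max(np.abs(z / z.max() - r / r.max())))  # difference shrinks as N grows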

Having a concise definition of the window function allows this discrete formula to be calculated:
$$
\begin{aligned}
W_L[0]   &=  \text{Some Non-zero Value} \\
W_L[n]   &=  \frac{\sin \left( \frac{n+L}{N}\pi \right)}{\sin \left( \frac{n}{N}\pi \right)} W_L[n-1] \\
\end{aligned}
$$
The starting value isn't that important when building a vector, since any scalar multiple will do.  The actual value matching the family build would be:
$$
W_L[0] = \prod_{l=1}^{L} \sin \left( \frac{l}{N}\pi \right)
$$
This lets you build any of the family members more quickly than the method in my article.  For a particular \(L\) value for each \(N\), the family member is the minimum non-zero width eigenvector of the DFT of that size, making a unique definition possible.
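
As a quick sanity check of that recursion (a sketch of my own, with N and L picked arbitrarily), the recursive build can be compared against the direct product definition:

import numpy as np

N, L = 16, 4

# Direct product definition of ZSW_L[n]
n = np.arange(N)
direct = np.ones(N)
for l in range(1, L + 1):
    direct *= np.sin((n + l) * np.pi / N)

# Recursive build using the matching starter value W_L[0]
w = np.zeros(N)
w[0] = np.prod(np.sin(np.arange(1, L + 1) * np.pi / N))
for k in range(1, N):
    w[k] = w[k - 1] * np.sin((k + L) * np.pi / N) / np.sin(k * np.pi / N)

print(np.max(np.abs(w - direct)))  # agrees to floating point precision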

Here is a source code listing you can test yourself (Update: I posted the wrong code at first, oops):

import numpy as np

#========================================================================
def main():

#---- Set parameters

    N = 16

#---- Calculate needed zero size

    if ( N & 1 ) == 1:
        L = ( N - 1 ) >> 1
    else:
        L = ( N >> 1 )

    print( N, L )

#---- Build the Zeroing Sine Family member

    theChi = np.pi / N

    w = np.zeros( N, dtype=complex )

    w[0] = 2**(-(L>>1))

    for n in range( 1, N ):
        w[n] = w[n-1] * np.sin( (n+L) * theChi ) / np.sin( n * theChi )

#---- Even N fix

    if ( N & 1 ) == 0:
        w += np.roll( w, 1 )

#---- Twist it up to match peaks in signal and spectrum

    theSpinner = 1.0
    theStepper = np.exp( 1j * L * np.pi / N )

    for n in range( 0, N ):
        w[n]       *= theSpinner
        theSpinner *= theStepper

#---- Take the DFT and compare

    p = np.zeros( N, dtype=complex )

    s = np.fft.fft( w ) / np.sqrt( N )

    for n in range( N ):
        p[n] = w[n] / s[n]
        print( " %3d %11.6f  %11.6f     %11.6f %11.6f    %11.6f %11.6f " % \
              ( n, np.abs( w[n] ), np.angle( w[n] ), \
                   np.abs( s[n] ), np.angle( s[n] ), \
                   np.abs( p[n] ), np.angle( p[n] ) ) )

    print()

    for n in range( 1, L ):
        print( np.angle( p[n] / p[n-1] ) / theChi )

#========================================================================
main()


In order to make it work for all \(N\), the window function has to be shifted to zero to match the spectrum centered at zero.  This only works for every fourth \(N\) with whole-sample shifts.  Partial shifts can be simulated by "spinning up" the window function so the spectrum shifts to match the window.  Of course, this is the same as applying the window function to a pure complex tone, meaning that, magnitude-wise, the windowed spectrum of a pure tone will be the same window appearing in the spectrum at that tone's frequency.
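
To illustrate that last point, here is a small sketch (my own; the tone bin k = 5 and the N = 16, L = 8 choices are just examples) showing that the magnitude spectrum of a windowed pure complex tone is the window's own spectrum recentered at the tone's bin:

import numpy as np

N, L = 16, 8
n = np.arange(N)

# Build a Zeroing Sine Window from the product definition
w = np.ones(N, dtype=complex)
for l in range(1, L + 1):
    w *= np.sin((n + l) * np.pi / N)

k = 5                                  # example tone bin
tone = np.exp(2j * np.pi * k * n / N)  # pure complex tone at bin k

W = np.abs(np.fft.fft(w))              # magnitude spectrum of the window alone
S = np.abs(np.fft.fft(w * tone))       # magnitude spectrum of the windowed tone

# Modulating by the tone circularly shifts the spectrum by k bins
print(np.max(np.abs(S - np.roll(W, k))))  # effectively zero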

This is not a contrived window family definition.  It arose out of my analysis.

It is my contention that it has remained undiscovered until now because the conventional approach in DSP is to define the discrete version of something as the sampled version of the continuous case.

Consider a sequence that converges to a limit: it is a lot easier to find the limit knowing the sequence than it is to find the sequence given the limit.  Yet the latter is the position you are in when you ground yourself in the continuous case.


In my answer here:

Amplitude after Fourier transform

I make the argument that the natural normalization factor for the DFT is \(1/N\).  If you accept that, then the corresponding eigenvalue is \(\sqrt{N}\).  Otherwise, you can see the code uses a \(\frac{1}{\sqrt{N}}\) normalization factor so the values are easier to compare.

Bottom line:  It is time for DSP education to be put on a proper discrete footing.  The continuous case is derived from the discrete, not the other way around.


Comments?  I'm ready for a debate.

Reply by bmoers, August 19, 2020

Cedron,

I believe your mathematical prowess will not be debated here.  Next time I need a window function, I will explore the efficacy of a zeroing sine window.  Kudos to you for your discovery!

Reply by Cedron, August 19, 2020
Thank you.  

However, my prowess is not what I want to debate (or defend).

What this example points out is the huge blind spot created by doing discrete math by working back from its limit form.  As I study this further, it is becoming more apparent that these are the "native" window functions of the DFT, just as the Hermite-Gaussian functions are the "natives" of the continuous case.

As in this example, given the ZSW (sequence) definition, it is easy to find the RSW (limit).  The RSW (the limit) has been known for years, is very intuitive in the continuous case, and is readily (obviously) extendable back to the discrete case, but that is not where it comes from.

This is where we are now: the Hermite-Gaussian functions are going to be the limit of sequences of these eigenvectors.  I've only cracked the first layer so far.  Like I said, it is tougher working backwards.  Ironically, discrete math is neither my strong suit nor my favorite; I am much more comfortable with Calculus and differential equations.

It bothers me to no end that the current methodology (paradigm) of teaching this material makes learning Calculus (and, done properly, Real Analysis) a prerequisite for learning and understanding the DFT, when that is not only unnecessary but makes it exceedingly more difficult.  That's the contention I want to debate.

As time goes on, I see the need and use of the DFT to be strictly in the digital realm.  Pretending that it rests on the continuous case, when in fact, the continuous case depends on the discrete case is just back assward.


The fact that this window family has remained undiscovered for so long is the strongest testimony I can give to back my assertion.  Well, there are also my exact frequency formulas, with which I constantly ran into a wall of "That's not possible, so I'm not even going to look at it," a prejudice (and false notion) that I believe is rooted in this very issue.

I have never been a fan of window functions.  The only one I ever use is the VonHann for spectrogram display.  These have actually changed my mind.  The resolution magnification that I mention in the article is truly amazing.


Reply by Cedron, August 19, 2020
Let me propose another analogy:

Suppose you visited a country where they did addition by taking the anti-log of each number, multiplying the results, then taking the log to get their answer.

"It works!", they say,  "That's how we learned it in school!"

So, here you are, the FT comes from the DFT (as integration is the limit of summation), but that has been lost in the tracks of time.....
Reply by bholzmayer, August 19, 2020

Hi Cedron,

I appreciate your mathematical approach, and I'm sure it will reveal a whole bunch of possibilities which we either did not find or we did not understand up to now.

In our company we have been working with signals from inductive coil sensors for many years, and I can see a steady decrease in mathematical/physical knowledge, fading away with colleagues' retirements.

The newer generations usually are happy with the present knowledge and just apply it without digging into the matter. 

Where understanding is lacking, numerical approximations or KI technology must fill the gap. Thus, the learning curve becomes steeper, but in the wrong direction :(

So I like your example about how they do addition. I actually see them doing it this way.

We're living in too quick a world. That's why I like this forum: it slows things down sometimes to take a close look at issues which seemed so true at first glance.
I took the time and enjoyed reading your article (although I probably did not understand all of it, I at least tried to follow the exact derivations).

I guess I'll dig into it again when it comes to working with filters/windows in the future. So I look forward to your further insights. Great work! Thanks.

Bernhard

Reply by Cedron, August 19, 2020
Hi Bernhard,

Thanks for the reply and kind words.

It seems to me that every generation, since the dawn of civilization, has decried that the following generation are "lazy", "have it too easy", "don't know...." so I read your words with a nodding head, yep.

On the other hand, they tend to have new knowledge and stuff instead, and why should anybody have to know why a slide rule works (opposite of my analogy) when they know how to use it?  Some stuff does go obsolete.

So far, I've only demonstrated my claims numerically.  My first stab at a proof was difficult, but I may not have set it up right.  I'm going to give it another try, as the proof should give a clue of how to construct the higher order Gaussians (with Hermite polynomial effects included).  My gut feeling is that simple differentiation will do in the discrete case and the Hermite polynomials emerge in the limit as $ N \to \infty $.

I've never been a fan of window functions.  In my frequency formula calculations, they just make the math harder without adding any information.  The only exception is when I make a spectrogram; then I want smooth transitions bin to bin as a frequency moves up the scale.  In the past, a VonHann has been good enough for me; I will now be using this instead.  Also, when I do multiple tone decomposition (iteratively identifying and removing tones from the mix) it is handy to have a starter list of tones.  This technique will provide the initial estimate of the tone set and may be just as good a platform for the iterative approach as the raw DFT.  Open question on my to-do list.

The people I am really trying to reach are the EE profs (Dept chairs and Deans) to get them to change their ways because I think they are doing it all wrong.  I want at least one of them to try to justify needing Calculus to learn the DFT.  Somebody, please, I am so ready to unload.

Don't get me wrong, I think every High School student should get to Calculus, maybe even at the Middle School level for the brighter students.

My understanding of filters has changed lately as well.  Had somebody simply said (or had I found it written somewhere) that they are the discrete version of linear differential equations (IIR the homogeneous case, FIR the particular case), it would have been a lot easier for me than trying to interpret the language applied by those who use them without knowing that connection.  Since then, I have found a few references that make that connection.  There is no way I can promise to find something innovative there, but if I do, I will surely write it up in another article, so thanks for your words of encouragement on that, too.
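
For what it's worth, here is a tiny sketch of the connection I mean (my own illustration, with an arbitrary coefficient and step size): discretizing a first-order linear differential equation with a backward difference turns it directly into a one-pole IIR recursion.

import numpy as np

# y'(t) + a*y(t) = x(t), discretized with a backward difference of step T:
# (y[n] - y[n-1])/T + a*y[n] = x[n]  =>  y[n] = (y[n-1] + T*x[n]) / (1 + a*T)
a, T = 2.0, 0.01
x = np.ones(500)          # step input
y = np.zeros_like(x)
for n in range(1, len(x)):
    y[n] = (y[n - 1] + T * x[n]) / (1 + a * T)

print(y[-1], 1.0 / a)     # settles near the ODE's steady state, x/a = 0.5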

Ced

P.S.  Did you mean AI, not KI?
Reply by bholzmayer, August 19, 2020

Yes, AI. Sorry for my German abbreviation, but you got it :-)