For a school project, I am programming a standard phase vocoder for time
compression/expansion of digital audio. The audio data is broken up into
frames, each frame is multiplied by a window function (in my case a
Hann window), and the result is transformed into the frequency domain by an FFT.
Because the frames overlap each other, I was wondering whether this
computation can be optimized. For example, I am using an FFT/window size of
4096 and a hop size of 64, so strictly speaking there are only 64 "new"
samples per frame.
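To make the overlap concrete, here is a minimal NumPy sketch of the analysis step with those parameters (the variable names are my own, not from any particular library):

```python
import numpy as np

N = 4096   # FFT / window size
H = 64     # hop size

x = np.random.randn(3 * N)   # placeholder signal for illustration
w = np.hanning(N)            # Hann window

# Consecutive frames start H samples apart, so they share
# N - H = 4032 samples; only H = 64 samples are new per hop.
frame0 = x[0:N]
frame1 = x[H:H + N]
shared = N - H

# Windowed spectrum of the first frame (real-input FFT).
X0 = np.fft.rfft(frame0 * w)
```

With a 98.4% overlap like this, the naive approach recomputes almost the entire windowed FFT for every 64 new samples, which is what motivates the question.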
I also noticed that a standard implementation produces incorrect beginnings
and endings of the processed data, with a fade-in and fade-out respectively,
caused by the window function.
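The edge fades can be seen directly by overlap-adding the window envelope itself: in the interior, the summed windows are (nearly) constant, but the first and last N - H samples are covered by fewer windows and therefore ramp up and down. A small sketch of that (assuming a single Hann analysis window, as described above):

```python
import numpy as np

N, H = 4096, 64
w = np.hanning(N)

L = 5 * N                      # total signal length for the demo
wsum = np.zeros(L)
for start in range(0, L - N + 1, H):
    wsum[start:start + N] += w   # overlap-add the window envelope

# Interior samples are covered by N/H overlapping windows and sum
# to a (nearly) constant value; the edges ramp from ~0 upward.
interior = wsum[N:-N]
```

A common remedy is to divide the output by this summed envelope (where it is nonzero), or simply to discard or zero-pad the first and last N - H samples.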
Thanks in advance