DSPRelated.com
Forums

Resampling

Started by Inaki Val October 31, 2005
cs_post...@hotmail.com wrote:
> robert bristow-johnson wrote:
>
>>> I want to resample a real signal from for instance 1024 -> 1027 samples.
>>> I prefer not to do it in the temporal domain, in order to avoid large
>>> upsampling and downsampling steps.
>>
>> where do these guys pick up these misconceptions?  is there some textbook
>> somewhere that is saying that you first upsample by an integer factor of
>> 1027, then pick out 1 sample in every 1024 thus throwing away the other
>> 1023?  does it not occur to people that, if all you're gonna do with the
>> other 1023 samples is throw them away, maybe you don't have to compute
>> them in the first place?
Agree.
> You don't have to compute all of them in the first place, but you do
> have to compute smoothed intermediate samples at a wide variety of
> possible points in between the input ones.  Do you have a way of
> computing only those that are needed?
Yes.  It's all interpolation.  Just interpolate the samples needed,
doesn't matter where they are or what their spacing is.

Using a simple windowed Sinc, you can recalculate your filter
coefficients for each output sample anywhere, given sufficient
compute power.  For one channel on a fast PC, you might not
even have to bother building (precalculating) and interpolating
some multi-phase table as in the Rabbit code.  For a small enough
sample buffer, you could even contemplate using a pure infinite
Sinc reconstruction with no window at all to resample.  No need
to calculate a single unneeded sample or even an unused filter
coefficient.

IMHO. YMMV.
--
rhn A.T nicholson d.O.t C-o-M
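A minimal sketch of the approach rhn describes: for each output sample, recompute a windowed-sinc tap set centered on its (generally fractional) position in the input, and sum. The Hann window, the 32-sample support, and the function name are my own illustrative choices, not anything specified in the thread.

```python
import math

def resample_windowed_sinc(x, ratio, num_out, half_width=16):
    # Compute each output sample directly at its fractional position in
    # the input, summing a Hann-windowed sinc over 2*half_width input
    # samples.  No zero-stuffing, no discarded samples, coefficients
    # recomputed on the fly for each output point.
    y = []
    for m in range(num_out):
        t = m / ratio                       # output sample m, in input-sample units
        n0 = math.floor(t)
        acc = 0.0
        for n in range(n0 - half_width + 1, n0 + half_width + 1):
            if 0 <= n < len(x):
                d = t - n
                sinc = math.sin(math.pi * d) / (math.pi * d) if d else 1.0
                hann = 0.5 * (1.0 + math.cos(math.pi * d / half_width))
                acc += x[n] * hann * sinc
        y.append(acc)
    return y
```

Resampling 1024 -> 1027 is then just `resample_windowed_sinc(x, 1027/1024, 1027)`; each output sample costs 2*half_width multiplies regardless of how awkward the ratio is.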
rhnlogic@yahoo.com wrote:
> Using a simple windowed Sinc, you can recalculate your filter
> coefficients for each output sample anywhere, given sufficient
> compute power.  For one channel on a fast PC, you might not
> even have to bother building (precalculating) and interpolating
> some multi-phase table as in the Rabbit code.
I say this because a contemporary PC can do many double-precision FP
transcendental function evaluations much faster than the CCRMA DEC10,
on which polyphase resampling was popularized, could do a single
multiply.  Good programming practice is to use obfuscated optimizations
only when needed, after profiling your algorithm.

IMHO. YMMV.
--
rhn A.T nicholson d.O.t C-o-M
in article 1130882481.528510.213690@z14g2000cwz.googlegroups.com,
rhnlogic@yahoo.com at rhnlogic@yahoo.com wrote on 11/01/2005 17:01:

> cs_post...@hotmail.com wrote:
>> robert bristow-johnson wrote:
>>>> I want to resample a real signal from for instance 1024 -> 1027 samples.
>>>> I prefer not to do it in the temporal domain, in order to avoid large
>>>> upsampling and downsampling steps.
>>>
>>> where do these guys pick up these misconceptions?  is there some textbook
>>> somewhere that is saying that you first upsample by an integer factor of
>>> 1027, then pick out 1 sample in every 1024 thus throwing away the other
>>> 1023?  does it not occur to people that, if all you're gonna do with the
>>> other 1023 samples is throw them away, maybe you don't have to compute
>>> them in the first place?
>
> Agree.
>
>> You don't have to compute all of them in the first place, but you do
>> have to compute smoothed intermediate samples at a wide variety of
>> possible points in between the input ones.  Do you have a way of
>> computing only those that are needed?
>
> Yes.  It's all interpolation.  Just interpolate the samples needed,
> doesn't matter where they are or what their spacing is.
well, i dunno about that. some sample locations or spacings require more coefficient memory or sample computation than others.
> Using a simple windowed Sinc, you can recalculate your filter
> coefficients for each output sample anywhere, given sufficient
> compute power.
that's true.  but it does make a difference on compute power what the
SRC ratio is.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
On Mon, 31 Oct 2005 11:38:09 -0600, "Inaki Val" <nahemoth@gmail.com>
wrote:

>Dear,
>
> I want to resample a real signal from for instance 1024 -> 1027 samples.
>I prefer not to do it in the temporal domain, in order to avoid large
>upsampling and downsampling steps. I have read that one possibility could
>be zero padding in frequency, but for this example I must use a IDFT and
>not IFFT to transform the signal to time domain. I have heard something
>about chirp-z transform, but I don't understand how to use it in this
>application. Someone has solved this problem or has any other idea?
>Thanks.
>
>Inaki
Meanwhile, now that the time-domain solution has been adequately
thrashed, I'm wondering why you're reluctant to pursue the
frequency-domain approach.  The quantities you're looking at are small,
i.e., a 1024-pt FFT and 1027-pt IDFT aren't hard to compute and would
do the job nicely.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
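The frequency-domain route Eric describes can be sketched in a few lines: forward transform, insert zeros around the old Nyquist bin, inverse transform at the new length. A hedged sketch, assuming even N and upsampling (M > N), using deliberately slow O(N^2) direct DFTs so it is self-contained; splitting the Nyquist bin in half keeps the padded spectrum Hermitian-symmetric, so the output stays real.

```python
import cmath

def dft(x, sign=-1):
    # Direct O(N^2) DFT; sign=-1 is the forward transform, sign=+1 the
    # un-normalized inverse.  Stands in for a proper FFT/IDFT here.
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

def resample_freq(x, M):
    # Resample N real samples (N even) to M > N samples by zero-padding
    # the spectrum around the old Nyquist bin X[N/2], which is split in
    # half so the spectrum stays conjugate-symmetric.
    N = len(x)
    X = dft(x)
    h = N // 2
    Y = X[:h] + [X[h] / 2] + [0j] * (M - N - 1) + [X[h] / 2] + X[h + 1:]
    # un-normalized length-M inverse DFT, scaled by 1/N so a sinusoid
    # comes out with its original amplitude
    return [v.real / N for v in dft(Y, sign=+1)]
```

With a real FFT library, `resample_freq(x, 1027)` on a 1024-sample block is exactly the 1024-pt FFT / 1027-pt IDFT pairing Eric mentions.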
"robert bristow-johnson" <rbj@audioimagination.com> wrote in message 
news:BF8BC747.BB58%rbj@audioimagination.com...
> in article P7GdnWqkjuwcy_veRVn-hA@giganews.com, Inaki Val at
> nahemoth@gmail.com wrote on 10/31/2005 12:38:
>
>> I want to resample a real signal from for instance 1024 -> 1027 samples.
>> I prefer not to do it in the temporal domain, in order to avoid large
>> upsampling and downsampling steps.
>
> where do these guys pick up these misconceptions?  is there some textbook
> somewhere that is saying that you first upsample by an integer factor of
> 1027, then pick out 1 sample in every 1024 thus throwing away the other
> 1023?
Actually, yes, a lot of stuff (textbooks and papers) I've read on the subject does start from that theoretical basis. But the good ones go on to point out the obvious polyphase optimization.
> does it not occur to people that, if all you're gonna do with the
> other 1023 samples is throw them away, maybe you don't have to compute
> them in the first place?
I guess some people only get the basic idea and never figure out the rest. Personally, I learned about SRC starting from the perspective of Lagrange interpolation, so it was obvious that one didn't have to do that crazy zero-stuffing bit. To me, windowed-sinc interpolation just seemed like a better form of Lagrange interpolation (more taps, better results) with a pre-computed table of coefficients instead of on-the-fly calculation.
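Jon's Lagrange-interpolation view of SRC, as a hedged sketch: for each output point, evaluate the Lagrange polynomial through the handful of input samples bracketing it, with the basis weights computed on the fly (the function name and the default cubic order are my own choices for illustration).

```python
def lagrange_interp(x, t, order=3):
    # Evaluate the Lagrange polynomial of the given order through the
    # order+1 input samples bracketing fractional position t (measured
    # in input-sample units).  Weights are computed on the fly; a table
    # of precomputed per-phase coefficients would do the same job.
    n0 = int(t) - (order - 1) // 2        # leftmost sample of the support
    n0 = max(0, min(n0, len(x) - order - 1))
    y = 0.0
    for i in range(n0, n0 + order + 1):
        li = x[i]
        for j in range(n0, n0 + order + 1):
            if j != i:
                li *= (t - j) / (i - j)   # basis polynomial l_i(t)
        y += li
    return y
```

The 1024 -> 1027 job is then `[lagrange_interp(x, m * 1024 / 1027) for m in range(1027)]` -- no zero-stuffing anywhere in sight.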
Jon Harris wrote:

>"robert bristow-johnson" <rbj@audioimagination.com> wrote in message
>news:BF8BC747.BB58%rbj@audioimagination.com...
>
>>in article P7GdnWqkjuwcy_veRVn-hA@giganews.com, Inaki Val at
>>nahemoth@gmail.com wrote on 10/31/2005 12:38:
>>
>>>I want to resample a real signal from for instance 1024 -> 1027 samples.
>>>I prefer not to do it in the temporal domain, in order to avoid large
>>>upsampling and downsampling steps.
>>
>>where do these guys pick up these misconceptions? is there some textbook
>>somewhere that is saying that you first upsample by an integer factor of
>>1027, then pick out 1 sample in every 1024 thus throwing away the other
>>1023?
>
>Actually, yes, a lot of stuff (textbooks and papers) I've read on the subject
>does start from that theoretical basis. But the good ones go on to point out
>the obvious polyphase optimization.
A generally good way to look at the real computational needs of DSP
algorithms is to always start at the output and work back.  If you don't
need to perform a calculation to generate the input to the next stage as
you work back, you won't need to do it in a real implementation that
works forwards. :-)

Yes, yes, I know this is a simplification, and real-world algorithms
have many loops.  Following the general pattern of this thinking will,
however, generally serve you pretty well.  It's surprising how many
implementations burn lots of MIPS generating completely unused answers.

Regards,
Steve
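Steve's work-back-from-the-output rule, in rough arithmetic for the 1024 -> 1027 case. The 32-taps-per-phase figure is an arbitrary assumption for illustration: a literal zero-stuff/filter/decimate chain evaluates the full FIR at the upsampled rate (including multiplies by stuffed zeros, for samples that will be thrown away), while an output-driven implementation touches only the taps that land on original inputs, for only the samples that are kept.

```python
def multiplies_naive(n_in, up, taps_per_phase):
    # Literal textbook chain: zero-stuff by `up`, then run a FIR of
    # length up * taps_per_phase over every high-rate sample.
    high_rate_samples = n_in * up
    fir_len = up * taps_per_phase
    return high_rate_samples * fir_len

def multiplies_output_driven(n_out, taps_per_phase):
    # Work back from the output: each kept sample needs only the taps
    # aligned with nonzero (original) input samples.
    return n_out * taps_per_phase

naive = multiplies_naive(1024, 1027, 32)
direct = multiplies_output_driven(1027, 32)
```

For this case the ratio works out to exactly 1024 * 1027, i.e. the literal chain spends roughly a million multiplies for every one that actually contributes to an output sample.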
On Wed, 02 Nov 2005 15:16:28 +0800, Steve Underwood <steveu@dis.org>
wrote:

>Yes, yes, I know this is a simplification, and real world algorithms
>have many loops. Following the general pattern of this thinking will,
>however, generally serve you pretty well. Its surprising how many
>implementations burn lots of MIPs generating completely unused answers.
>
>Regards,
>Steve
No kidding.  Early in my career I worked a lot with Taylor weighting for
generating FFT window functions.  There was a software routine
circulating in the department that was used by most folks to do the
task, and it contained some nested loops and took forever to calculate
the window coefficients.  I knew personally of at least three good DSP
engineers who had tweaked it over a couple of years' time to optimize it
a little better, because everybody wanted it to run faster.

Just by chance I had a bit of a revelation one day and collapsed the
whole thing to a single loop that took almost no time to execute and got
the same answers as the long algorithm.  I think it was just one of
those cases where everybody had been looking at it for too long and
could no longer see alternative methods.  That happens a lot, and I've
been surprised often enough at how other really smart folks miss
"obvious" things sometimes, as well as how I miss the obvious until it's
pointed out by someone else.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org