
Upsampling problem

Started by jungledmnc September 21, 2008
Hi,
I'm performing upsampling for audio to avoid aliasing. This is what I do
(sketched in code below):
1) Take the source buffer and, using cubic interpolation, convert it into a
buffer, let's say, 16x larger.
2) Perform a lowpass on the temporary buffer with cutoff at X / 16, where X
is just some factor compensating for the steepness of the filter.
3) Here comes the black box - some effect. In this testing case it simply
does nothing; I'm writing it here just to show where some processing should
occur.
4) Downsample simply by taking every 16th sample.
5) Perform the lowpass again with cutoff X (now without the "/16", since the
buffer has the original sampling rate).
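For concreteness, a minimal sketch of those five steps (Python with
NumPy/SciPy assumed; the Butterworth biquad and the spline routine are
stand-ins, not the poster's actual code):

# A sketch of steps 1-5; the filter choice and cutoffs are illustrative.
import numpy as np
from scipy import signal
from scipy.interpolate import CubicSpline

def pipeline(x, factor=16, X=0.4):
    # 1) cubic interpolation up to 16x the original rate
    up = CubicSpline(np.arange(len(x)), x)(np.arange(len(x) * factor) / factor)
    # 2) lowpass at X/16 (scipy normalizes cutoff to Nyquist, hence the 2*)
    b, a = signal.butter(2, 2.0 * X / factor)
    up = signal.lfilter(b, a, up)
    # 3) black box -- identity in this test
    # 4) downsample by keeping every 16th sample
    down = up[::factor]
    # 5) final lowpass at X of the original rate
    b, a = signal.butter(2, 2.0 * X)
    return signal.lfilter(b, a, down)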

The thing is, this generates a huge amount of higher frequencies. I first
removed step 5 (the final lowpass). Nothing changed. Now when I remove step 2
(the initial lowpass), the higher frequencies disappear! Why??? It should be
a lowpass, it should remove them! But it adds them instead!

It probably isn't a rounding problem, since steps 1 and 2 are done in 64-bit
floating-point arithmetic, so I don't think that would cause something like
this.

Could it be caused by the downsampling? I originally thought I should
downsample by averaging all 16 samples, not just taking the first of them,
but I found this approach in this forum... somewhere. Is it right?

Or any other ideas?

Btw. about the X coefficient. I really did not know what number to use, so I
was thinking like this: SR/2 is the absolute maximum -> X = 0.5, but since
the filter is not very good (some kind of biquad), I simply used X = 0.4.
Therefore e.g. if SR = 44100 -> cutoff ≈ 17.6 kHz.
Is there a more precise way to choose the coefficient?

Thanks a lot!
jungledmnc
On Sep 21, 8:13 pm, "jungledmnc" <jungled...@gmail.com> wrote:
> Hi,
> I'm performing upsampling for audio to avoid aliasing. This is what I do:
Filtering to avoid aliasing has to be done before sampling, or before a
reduction in sampling rate.
> 1) Take the source buffer and, using cubic interpolation, convert it into a
> buffer, let's say, 16x larger.
That's the hard way.
> 2) Perform a lowpass on the temporary buffer with cutoff at X / 16, where X
> is just some factor compensating for the steepness of the filter.
How do you measure steepness? Why do you care?
> 3) Here comes the black box - some effect. In this testing case it simply
> does nothing; I'm writing it here just to show where some processing should
> occur.
> 4) Downsample simply by taking every 16th sample.
First you upsample by 16, then you downsample by 16, getting back to where you started. What are you trying to accomplish? ...
> Or any other ideas?
Read a good book like Lyons' or Smith's. You have something in mind, but it
makes no sense. I repeat my question: what are you trying to accomplish?

Jerry
> I repeat my question: what are you trying to
> accomplish?
>
> Jerry
I'm working with digital audio, so I assume the filtering HAS been done
before sampling, and if not, there is nothing I can do. But when I want to
perform some kind of nonlinear processing, let's say soft limiting, they say
it is good to perform upsampling first and then get back to the original
sampling rate.

First, it seems to me the downsampling should logically use averaging, so
taking the first sample from each block does not make sense to me either. On
the other hand, with averaging it makes perfect sense - imagine a rectangular
signal. On every edge, the limiter generates an edge too. But when we use
cubic interpolation, it creates a much smoother curve. When we then apply
such a nonlinear 1-sample-based function and average all 16 samples, the
result will also be smoother, which might be good.

The upsampling/downsampling problem is described on Wikipedia. But no
practical info.

Btw. you said that cubic interpolation is "the hard way". What is the easy
one?

dmnc
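For reference, the kind of memoryless, 1-sample nonlinearity being discussed
(a sketch; the tanh curve is an assumed stand-in, the thread doesn't specify
the actual limiter law):

# A stand-in soft limiter: linear for small inputs, saturating toward
# +/-ceiling. Being nonlinear, it creates harmonics of whatever goes in.
import numpy as np

def soft_limit(x, ceiling=0.8):
    return ceiling * np.tanh(np.asarray(x) / ceiling)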

jungledmnc wrote:
> Hi,
> I'm performing upsampling for audio to avoid aliasing. This is what I do:
> 1) Take the source buffer and, using cubic interpolation, convert it into a
> buffer, let's say, 16x larger.
> 2) Perform a lowpass on the temporary buffer with cutoff at X / 16, where X
> is just some factor compensating for the steepness of the filter.
> 3) Here comes the black box - some effect. In this testing case it simply
> does nothing; I'm writing it here just to show where some processing should
> occur.
> 4) Downsample simply by taking every 16th sample.
> 5) Perform the lowpass again with cutoff X (now without the "/16", since the
> buffer has the original sampling rate).
>
> The thing is, this generates a huge amount of higher frequencies. I first
> removed step 5 (the final lowpass). Nothing changed. Now when I remove step 2
> (the initial lowpass), the higher frequencies disappear! Why??? It should be
> a lowpass, it should remove them! But it adds them instead!
That would suggest that whatever you are doing at step 2) is wrong. If you
upsample using cubic interpolation, doesn't every 16th sample at the new 16X
sample rate remain the same as an original sample? Then when you decimate by
discarding 15 out of 16 samples, don't you get the original signal back? In
other words, it seems likely that your step 4) discards the same samples that
step 1) invented. If step 2) were working correctly, it would have very
little impact on the outcome.

-jim
>
> It probably isn't a rounding problem, since steps 1 and 2 are done in 64-bit
> floating-point arithmetic, so I don't think that would cause something like
> this.
>
> Could it be caused by the downsampling? I originally thought I should
> downsample by averaging all 16 samples, not just taking the first of them,
> but I found this approach in this forum... somewhere. Is it right?
>
> Or any other ideas?
>
> Btw. about the X coefficient. I really did not know what number to use, so I
> was thinking like this: SR/2 is the absolute maximum -> X = 0.5, but since
> the filter is not very good (some kind of biquad), I simply used X = 0.4.
> Therefore e.g. if SR = 44100 -> cutoff ≈ 17.6 kHz.
> Is there a more precise way to choose the coefficient?
>
> Thanks a lot!
> jungledmnc
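jim's point is easy to check numerically (a sketch, assuming SciPy's
CubicSpline as the interpolator): the spline passes through the original
samples, so step 4 exactly undoes step 1 when the filter in step 2 is
bypassed.

import numpy as np
from scipy.interpolate import CubicSpline

x = np.random.randn(256)
up = CubicSpline(np.arange(256), x)(np.arange(256 * 16) / 16.0)
print(np.allclose(up[::16], x))   # True: decimation recovers the input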
jungledmnc wrote:
> Btw. you said that cubic interpolation is "the hard way". What is the easy
> one?
http://www.dspguru.com/info/faqs/mrfaq.htm

BTW, the reason they suggest that you upsample before doing your non-linear
processing is because it will likely generate frequencies higher than fs/2.
Upsampling makes room for them. Unless the signal has been upsampled, those
components will fold back into the 0 to fs/2 range, where they are unwanted.
Assuming that's what's going on, you really will need to filter before
downsampling.

--
Jim Thomas            Principal Applications Engineer      Bittware, Inc
jthomas@bittware.com  http://www.bittware.com  (603) 226-0404 x536
When you're great, people often mistake candor for bragging. - Calvin
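A small numerical illustration of that fold-back (NumPy assumed; the 15 kHz
tone and the squaring nonlinearity are arbitrary choices for the demo):

# Squaring a 15 kHz tone at fs = 44100 creates a 30 kHz harmonic, which is
# above fs/2 and folds back to 44100 - 30000 = 14100 Hz.
import numpy as np

fs, N = 44100.0, 4096
t = np.arange(N) / fs
y = np.sin(2 * np.pi * 15000 * t) ** 2       # the nonlinearity
y -= y.mean()                                # drop the DC term it creates
spectrum = np.abs(np.fft.rfft(y * np.hanning(N)))
print(np.argmax(spectrum) * fs / N)          # ~14100 Hz, not 30000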

jungledmnc wrote:
> > I repeat my question: what are you trying to
> > accomplish?
>
> Btw. you said that cubic interpolation is "the hard way". What is the easy
> one?
Cubic interpolation is pretty much the same thing as the easy way. What
people tend to call "easy" and "hard" tends to be what they are familiar
with and what they are not. Some methods execute faster, others require less
programming effort, some achieve neither and some both.

With a typical implementation of cubic interpolation you compute the
coefficients as you go. But since you are upsampling by an integer factor,
that would mean computing the same coefficients over and over. If you
realized that and computed the coefficients beforehand and stored them in
memory, you would have a method that looks the same as the so-called "easy"
way.

BTW, when you get around to implementing step 3) you will need to put step
2) (the anti-aliasing filter) after step 3), since its purpose is presumably
the removal of the higher frequencies created by step 3).

-jim
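In code, that precomputation might look like this (a sketch assuming a
Catmull-Rom cubic; the post doesn't say which cubic is in use). Each of the
16 fractional positions always uses the same 4 weights, so tabulate them
once:

import numpy as np

def catmull_rom_weights(f):
    # weights applied to x[n-1], x[n], x[n+1], x[n+2] for fraction f in [0,1)
    return np.array([-0.5*f**3 +     f**2 - 0.5*f,
                      1.5*f**3 - 2.5*f**2 + 1.0,
                     -1.5*f**3 + 2.0*f**2 + 0.5*f,
                      0.5*f**3 - 0.5*f**2])

phases = np.array([catmull_rom_weights(k / 16.0) for k in range(16)])
# interpolation is then 16 four-tap dot products per input sample:
#   y[16*n + k] = phases[k] @ x[n-1 : n+3]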
On Sep 22, 5:16 am, "jungledmnc" <jungled...@gmail.com> wrote:
> > I repeat my question: what are you trying to
> > accomplish?
> >
> > Jerry
>
> I'm working with digital audio, so I assume the filtering HAS been done
> before sampling, and if not, there is nothing I can do. But when I want to
> perform some kind of nonlinear processing, let's say soft limiting, they say
> it is good to perform upsampling first and then get back to the original
> sampling rate.
>
> First, it seems to me the downsampling should logically use averaging, so
> taking the first sample from each block does not make sense to me either. On
> the other hand, with averaging it makes perfect sense - imagine a rectangular
> signal. On every edge, the limiter generates an edge too. But when we use
> cubic interpolation, it creates a much smoother curve. When we then apply
> such a nonlinear 1-sample-based function and average all 16 samples, the
> result will also be smoother, which might be good.
>
> The upsampling/downsampling problem is described on Wikipedia. But no
> practical info.
>
> Btw. you said that cubic interpolation is "the hard way". What is the easy
> one?
>
> dmnc
Now I recall what you're trying to accomplish. There are several issues.

You are correct that higher sampling rates can simplify nonlinear
processing, but you need to recognize that nonlinearities introduce
harmonics and intermodulation products. Those artifacts are inherent in the
process and happen with analog signals too.

For upsampling, spline interpolation isn't quite as good as sinc
interpolation. You can do sinc interpolation by 16 by inserting 15 zero
samples after every real one, thereby making the signal 16 times longer.
Then -- this is the interpolation part -- filter the padded signal with a
low-pass filter to remove the images -- not aliases! -- above the original
frequency band. The necessary filtering will probably be simpler and faster
if you upsample by four twice.

If the upsampled signal contains no frequencies that can't be supported by
the lower sample rate, simply selecting every sixteenth sample will get you
back down. If nonlinearities have introduced components above the lower fs/2
-- and they likely will -- filter before downsampling.

Again: nonlinear processing will introduce distortion products. If that's
objectionable, tough! The art is in making the distortion "pleasing". What
pleases you probably wouldn't please me. :-)
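A minimal sketch of that zero-stuff-and-filter interpolation (SciPy assumed;
191 taps is an arbitrary choice, size the filter for the stopband rejection
you actually need):

import numpy as np
from scipy import signal

def upsample16(x, L=16, taps=191):
    stuffed = np.zeros(len(x) * L)
    stuffed[::L] = x                      # 15 zeros after every real sample
    # cutoff at the original Nyquist (1/L of the new one) removes the images;
    # the factor L restores the passband gain lost to the zero stuffing
    h = signal.firwin(taps, 1.0 / L) * L
    return signal.lfilter(h, 1.0, stuffed)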
On Sep 22, 10:56 am, jim <"sjedgingN0sp"@m...@mwt.net> wrote:
> jungledmnc wrote:
...
> > Btw. you said that cubic interpolation is "the hard way". What is the easy
> > one?
>
> Cubic interpolation is pretty much the same thing as the easy way. What
> people tend to call "easy" and "hard" tends to be what they are familiar
> with and what they are not.
There is no guarantee that cubic interpolation will produce no components
above the original fs/2, so the result needs to be filtered. If one is going
to filter anyway, it is easier to generate zeros than the spline points.

...

Jerry

Jerry Avins wrote:
> On Sep 22, 10:56 am, jim <"sjedgingN0sp"@m...@mwt.net> wrote:
> > jungledmnc wrote:
> > ...
> > > Btw. you said that cubic interpolation is "the hard way". What is the
> > > easy one?
> >
> > Cubic interpolation is pretty much the same thing as the easy way. What
> > people tend to call "easy" and "hard" tends to be what they are familiar
> > with and what they are not.
>
> There is no guarantee that cubic interpolation will produce no components
> above the original fs/2, so the result needs to be filtered. If one is going
> to filter anyway, it is easier to generate zeros than the spline points.
Cubic interpolation is exactly the same thing as stuffing with 15 zeroes and
then filtering with a low-pass filter of length 64. There is no guarantee
that any low-pass filter of length 64 will be able to remove all the content
above the original fs/2.

However, let's compare for the specific example he gave, where he upsamples
and then downsamples (step 3 processing not yet implemented). If he does the
cubic interpolation correctly, he should end up after step 5 with the
original signal he started with, unchanged. Of course, if he uses the
correct length-64 windowed-sinc filter after stuffing with zeroes, he should
also get that same result. But I don't see any particular reason why one
would be considered hard and the other not.

-jim
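Writing out that equivalence for the Catmull-Rom flavor of cubic (an
assumption; the post doesn't name the cubic): its kernel spans 4 input
samples, which at 16x is exactly a 64-tap low-pass FIR.

import numpy as np

def catmull_rom_kernel(L=16):
    t = np.abs(np.arange(-2 * L, 2 * L)) / float(L)   # offsets in [-2, 2)
    return np.where(t < 1, 1.5*t**3 - 2.5*t**2 + 1,
           np.where(t < 2, -0.5*t**3 + 2.5*t**2 - 4*t + 2, 0.0))

h = catmull_rom_kernel()   # 64 taps; filtering the zero-stuffed signal with
                           # h reproduces cubic interpolation exactly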
On Sep 22, 12:33 pm, jim <".sjedgingN0sp"@m...@mwt.net> wrote:
> Jerry Avins wrote:
> > On Sep 22, 10:56 am, jim <"sjedgingN0sp"@m...@mwt.net> wrote:
> > > jungledmnc wrote:
> > > ...
> > > > Btw. you said that cubic interpolation is "the hard way". What is the
> > > > easy one?
> > >
> > > Cubic interpolation is pretty much the same thing as the easy way. What
> > > people tend to call "easy" and "hard" tends to be what they are familiar
> > > with and what they are not.
> >
> > There is no guarantee that cubic interpolation will produce no components
> > above the original fs/2, so the result needs to be filtered. If one is
> > going to filter anyway, it is easier to generate zeros than the spline
> > points.
>
> Cubic interpolation is exactly the same thing as stuffing with 15 zeroes and
> then filtering with a low-pass filter of length 64. There is no guarantee
> that any low-pass filter of length 64 will be able to remove all the content
> above the original fs/2.
True, but where is it written that the filter length must be limited to 64?
The filter ought to be made as good as it needs to be.
> However, let's compare for the specific example he gave, where he upsamples
> and then downsamples (step 3 processing not yet implemented). If he does the
> cubic interpolation correctly, he should end up after step 5 with the
> original signal he started with, unchanged. Of course, if he uses the
> correct length-64 windowed-sinc filter after stuffing with zeroes, he should
> also get that same result. But I don't see any particular reason why one
> would be considered hard and the other not.
If he just stuffs zeros, then takes every 16th sample (starting with the
right one), he'll also get back the original. That he doesn't makes it clear
that there's an error somewhere, but that's a bit beside the point.
Ultimately, he wants to process at the higher rate, then filter again and
downsample. He needs that filter at the end, so why not use it to
interpolate?

Jerry
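Put together, the structure Jerry suggests might look like this (a sketch
using SciPy's upfirdn; the tap count and the tanh "effect" are assumptions,
and filter delay / edge effects are ignored). One good FIR serves as both
the interpolation filter on the way up and the anti-alias filter on the way
down:

import numpy as np
from scipy import signal

def oversampled_process(x, effect, L=16, taps=191):
    h = signal.firwin(taps, 1.0 / L)        # cutoff at the original Nyquist
    up = signal.upfirdn(h * L, x, up=L)     # zero-stuff, then interpolate
    y = effect(up)                          # nonlinear processing at 16x
    return signal.upfirdn(h, y, down=L)     # same filter, then keep 1 in 16

# e.g. a soft clipper: oversampled_process(x, np.tanh)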