# Upsampling problem

Started September 21, 2008
```
Jerry Avins wrote:
>
> On Sep 22, 12:33 pm, jim <".sjedgingN0sp"@m...@mwt.net> wrote:
> > Jerry Avins wrote:
> >
> > > On Sep 22, 10:56 am, jim <"sjedgingN0sp"@m...@mwt.net> wrote:
> > > > jungledmnc wrote:
> >
> > >   ...
> >
> > > > > Btw. you said that cubic interpolation is "the hard way". What is the easy
> > > > > one?
> >
> > > > Cubic interpolation is pretty much the same thing as the easy way. What people
> > > > tend to call "easy" and "hard" tend to be what they are familiar with and what
> > > > they are not.
> >
> > > There is no guarantee that cubic interpolation will produce no
> > > components above the original fs/2, so the result needs to be
> > > filtered. If one is going to filter anyway, it is easier to generate
> > > zeros than the splines points.
> >
> > Cubic interpolation is exactly the same thing as stuffing with 15 zeroes and
> > then filtering with a low pass filter of length 64. There is no guarantee that
> > any low pass filter of length 64 will be able to remove all the content above
> > the original fs/2.
>
> True, but where is it written that the filter length must be limited
> to 64? The filter ought to be made as good as it needs to be.
>
> >         However, let's compare for the specific example he gave where he
> > upsamples and then downsamples (step 3 processing not yet implemented). If he
> > does the cubic interpolation correctly he should end up after step 5 with the
> > original signal he started with unchanged. Of course if he uses the correct
> > length 64 windowed sinc filter after stuffing with zeroes he should also get
> > that same result. But I don't see any particular reason why one would be
> > considered hard and one not.
>
> If he just stuffs zeros, then takes every 16th sample (starting with
> the right one) he'll also get back the original. That he doesn't makes
> it clear that there's an error somewhere, but that's a bit beside the
> point.

Well yes step 2 could be eliminated whether or not a spline type of
interpolation is used if no processing is done in step 3. But it does indicate
where the error is if that filter in step 2 is supposed to be an interpolator.
So it's not beside the point. There are only certain filters that will produce
that result. Many people would claim that is a necessary condition that such a
filter be used for the operation to even be called interpolation. The point is
also that the implied filter of cubic interpolation is not very different than
the sinc filter (among other things both are interpolators) of the same length.
Given what he is doing he probably won't be able to tell the difference.

> Ultimately, he wants to process at the higher rate, then filter
> again and downsample. He needs that filter in the end, so why not use
> it to interpolate?

Hard to say - considering what he wants to do is _supposed_ to be non-linear.
So what is your suggestion? Should he filter before he does the non-linear
operation? If he filters before and decimates after without any filtering then
high frequencies created by the nonlinear op will alias. But who knows, maybe
that will be part of the "effect".
Or maybe you are suggesting filter after he does the non-linear operation? Or
maybe he should filter both before and after?

-jim

>
> Jerry

```
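Two of the claims in the exchange above can be checked numerically: zero-stuffing preserves the original samples exactly, and a windowed-sinc interpolator leaves them untouched while filling in the gaps. A minimal numpy sketch (the factor of 16 and the windowed-sinc design come from the posts; the odd length 65 is my own choice so the filter has the exact interpolation property, which an even length like 64 only approximates):

```python
import numpy as np

L = 16                          # upsampling factor from the thread
rng = np.random.default_rng(0)
x = rng.standard_normal(100)

# 1. Zero-stuffing preserves the original samples, so decimating by the
#    same factor (at the right phase) returns the input exactly.
up = np.zeros(len(x) * L)
up[::L] = x                     # insert 15 zeros between samples
assert np.array_equal(up[::L], x)

# 2. A windowed sinc has zeros at multiples of L from its center tap,
#    so the original samples pass through unchanged while the gaps are
#    filled in by interpolation.
N = 65                          # odd, so the center tap lands on a sample
n = np.arange(N) - (N - 1) // 2
h = np.sinc(n / L) * np.hamming(N)
y = np.convolve(up, h)          # interpolated signal at 16x the rate
assert np.allclose(y[(N - 1) // 2::L][:len(x)], x)
```

The second assertion is exactly jim's point: at the original sample instants the interpolator reproduces the input, whatever it does in between.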
```On Sep 22, 2:18 pm, jim <"sjedgingN0sp"@m...@mwt.net> wrote:
> Jerry Avins wrote:

...

> Well yes step 2 could be eliminated whether or not a spline type of
> interpolation is used if no processing is done in step 3. But it does indicate
> where the error is if that filter in step 2 is supposed to be an interpolator.

Yes; his present difficulty shows a problem with the calculation.

>         So it's not beside the point. There are only certain filters that will produce
> that result.

What's to the point or not is a matter of emphasis. I agree that I was
more dismissive than was warranted.

> Many people would claim that is a necessary condition that such a
> filter be used for the operation to even be called interpolation. The point is
> also that the implied filter of cubic interpolation is not very different than
> the sinc filter (among other things both are interpolators) of the same length.
> Given what he is doing he probably won't be able to tell the difference.

Unless he finds a way to clean up his manipulations, he won't be able
to tell anything. :-)

> > Ultimately, he wants to process at the higher rate, then filter
> > again and downsample. He needs that filter in the end, so why not use
> > it to interpolate?
>
> Hard to say - considering what he wants to do is _supposed_ to be non-linear.
> So what is your suggestion? Should he filter before he does the non-linear
> operation? If he filters before and decimates after without any filtering then
> high frequencies created by the nonlinear op will alias. But who knows, maybe
> that will be part of the "effect".

He should filter before the nonlinear processing unless he can be sure
that his interpolation is good enough to avoid the need for it.

>         Or maybe you are suggesting filter after he does the non-linear operation? Or
> maybe he should filter both before and after?

The point of the upsampling is keeping the harmonics generated by his
non-linear processing from aliasing. They must then be filtered out
before downconverting.  Ideally, only the in-band intermodulation
products will remain. The trade-offs here are an art. I can't make the
subjective judgments needed because I can't imagine any music being
improved by a crooked transfer function. I'm too old fashioned. My
game is fidelity.

Jerry
```
```it looks to me like the op wants to implement a nice-sounding
waveshaper maybe? or something like that.

in that case, the normal procedure would be

bandlimit
distort
bandlimit

so the procedure described in the first post seems scrambled to me.
you want to get rid of the aliasing from the distortion before you
downsample...

as for bandlimiting techniques, i am not the expert here.

-ezb

On Sep 22, 1:05 pm, Jerry Avins <j...@ieee.org> wrote:

  ...

```
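The bandlimit / distort / bandlimit procedure described above can be sketched end to end. This is an illustrative chain only: the 16x factor, the FIR length, and the tanh waveshaper are my own choices, not the OP's exact design.

```python
import numpy as np

L = 16                                # oversampling factor (assumed)
N = 255                               # FIR length (assumed; odd for symmetry)
n = np.arange(N) - (N - 1) // 2
h = np.sinc(n / L) * np.blackman(N)   # windowed sinc, cutoff near fs/(2L)
h *= L / h.sum()                      # DC gain of L restores the amplitude
                                      # lost to zero-stuffing

def upsample(x):
    up = np.zeros(len(x) * L)
    up[::L] = x                       # zero-stuff by L
    return np.convolve(up, h, mode="same")

def downsample(y):
    # band-limit back to the original fs/2 (unity gain), then decimate
    return np.convolve(y, h / L, mode="same")[::L]

# bandlimit -> distort -> bandlimit
fs = 44100.0
t = np.arange(4096) / fs
x = 0.9 * np.sin(2 * np.pi * 1000.0 * t)
y = np.tanh(3.0 * upsample(x))        # waveshaper runs at the 16x rate
out = downsample(y)                   # harmonics above the original fs/2
                                      # are removed before returning to fs
```

The same filter `h` serves both ends of the chain, which is Jerry's point about reusing the final filter as the interpolator.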
```pre-emptive correction

> you want to get rid of the aliasing from the distortion before you
> downsample...

sorry, wrong terminology, but you know what i mean

anyway, the last person said it all with more technical accuracy, i
just wanted to clarify the procedure (as someone who does appreciate
the sound of crooked transfers...)

;)

-zeb
```
Hey thanks guys for quite meaty answers. I will clarify a little more - my
idea was that when I use cubic interpolation, it probably will not
generate so many harmonics itself. So when I pass it through a lowpass
then, the signal will be even more "healthy".
And there comes some nonlinear operation - a waveshaper is a typical example
- which induces some harmonics itself. So the next filter will remove them and
I can downsample easily.

But there are some things that bother me:
1) No filter is perfect. I don't know much about filter theory, but since
there are typically parameters like -12dB per octave, it does not look
very good to me. I cannot imagine how a signal totally destroyed by
inserting so many zeros can be successfully corrected by such a bad filter.
Sure, if I got a lowpass with no ripple and -1000dB per octave, then so be
it! :-)

2) Finally I need to filter the result. So it seems obvious to me that
the filter must be applied before downsampling. So what filter should I
use? There are many articles about it and all say "we have this, this and
this..." None say: use this filter, it fits typical audio needs, it
has the following parameters, and is better because x, y, z... So I'm asking
you :-). What would you recommend?

Btw. if the first filter (after interpolation) is not necessary, even
better! It seems to me that there is no big performance difference: 8
multiplications and additions for a FIR, or a similar number of ops for
interpolation...

dmnc
```
```On Sep 23, 2:56 pm, "jungledmnc" <jungled...@gmail.com> wrote:

  ...

you should try to design the non-linear operation or waveshaper so
that it inherently does not create any frequency components above
fs/2, at the fs at which it is operating...

Mark

```
```> you should try to design the non-linear operation or waveshaper so
> that it inherently does not create any frequency components above fs/
> 2. at the fs at which it is operating...

oh right, for sure

there is the always handy fact that an n-th order polynomial produces
up to nth order harmonics. so upsample by a factor of n...

i've lately been tweaking a lookup waveshaper. even when filled with
polynomial curves there are some additional interpolation artifacts
(hard to hear, though, amid all the intermodulation)
```
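The handy fact quoted above is easy to verify: an n-th order polynomial applied to a pure tone creates nothing above the n-th harmonic. A small numpy check for n = 3 (the tone frequency and polynomial coefficients are arbitrary illustrative choices):

```python
import numpy as np

# An integer number of cycles in the FFT window avoids spectral leakage.
fs, f0, size = 1024, 32, 1024
t = np.arange(size) / fs
x = np.sin(2 * np.pi * f0 * t)

y = 0.5 * x + 0.3 * x**2 + 0.2 * x**3  # arbitrary cubic waveshaper
spectrum = np.abs(np.fft.rfft(y)) / size

# Energy sits only at DC, f0, 2*f0, and 3*f0; everything above 3*f0 is
# numerically zero -- so oversampling by 3 is enough to avoid aliasing
# for a 3rd-order curve.
highest = np.max(spectrum[3 * f0 + 1:])
```

Running this, `highest` is at floating-point noise level while the bin at `3 * f0` carries real energy, matching the "upsample by a factor of n" rule.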
```>i've lately been tweaking a lookup waveshaper. even when filled with
>polynomial curves there are some additional interpolation artifacts
>(hard to hear, though, amid all the intermodulation)

Btw. what exactly does "lookup waveshaper" mean? I understand that it has
a table. But how is that possible? In 32-bit floating point arithmetic it
sounds impossible. You always have to limit the number of entries, but that
creates a "stairway", or you can use some kind of interpolation, which leads
to the question whether it is worth it, since modern processors rely on cache,
which does not like tables very much :-).

dmnc

```
```On Sep 24, 2:28 pm, "jungledmnc" <jungled...@gmail.com> wrote:

  ...

yes, of course one interpolates, in a floating point world.

certainly there is a tradeoff, not the fastest thing nor the least
memory intensive but you can load arbitrary data, morph through stacks
of tables (which allows for variable asymmetry and whatnot), and
generally have a very flexible synth component. that's the kind of
thing i have been working on for a while, your application is likely
very different.

and, of course, not all applications involve "modern" processors or
even floating-point arithmetic... i've done a lot of work with
synthesizers on 8051 arch chips...

-zeb
```
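The table-plus-interpolation idea described above can be sketched in a few lines. The table size and the shaping curve are my own illustrative choices; a real synth might load arbitrary data or morph through stacks of tables as zeb describes.

```python
import numpy as np

TABLE_SIZE = 256
grid = np.linspace(-1.0, 1.0, TABLE_SIZE)
table = np.tanh(2.0 * grid)     # arbitrary shaping curve for the demo

def shape(x):
    """Waveshape x in [-1, 1] by linear interpolation into the table,
    avoiding the 'stairway' a plain nearest-entry lookup would give."""
    pos = (np.clip(x, -1.0, 1.0) + 1.0) * 0.5 * (TABLE_SIZE - 1)
    i = np.minimum(pos.astype(int), TABLE_SIZE - 2)  # lower table index
    frac = pos - i                                   # fractional part
    return table[i] * (1.0 - frac) + table[i + 1] * frac

x = np.linspace(-1.0, 1.0, 1000)
y = shape(x)

# With 256 entries the linear-interpolation error against the true
# curve is small but nonzero -- these are the "interpolation artifacts"
# mentioned above.
err = np.max(np.abs(y - np.tanh(2.0 * x)))
```

The residual `err` is the artifact floor of the lookup; higher-order interpolation or a bigger table trades memory and cache pressure against it, which is exactly the tradeoff jungledmnc raised.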
```jungledmnc wrote:

> But there are some things that bother me:
> 1) No filter is perfect. I don't know much about filter theory, but since
> there are typically parameters like -12dB per octave, then it does not look
> very good to me.

With a halfway decent FIR you can do a lot better than -12dB per octave.

--
Jim Thomas            Principal Applications Engineer  Bittware, Inc
jthomas@bittware.com  http://www.bittware.com    (603) 226-0404 x536
When you have a new hammer, the whole world looks like a nail.
```
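Jim's point can be quantified. Even the length-64 filter mentioned earlier in the thread, designed as a Hamming-windowed sinc (the window is my assumption), reaches a stopband far below what "-12 dB per octave" suggests:

```python
import numpy as np

L, N = 16, 64                    # 16x interpolation, length from the thread
n = np.arange(N) - (N - 1) / 2
h = np.sinc(n / L) * np.hamming(N)
h /= h.sum()                     # normalize to 0 dB at DC

# Magnitude response on a dense grid via a zero-padded FFT.
H = np.abs(np.fft.rfft(h, 8192))
f = np.linspace(0.0, 0.5, len(H))      # cycles/sample at the 16x rate

# Past the transition band (here taken as f >= 1/L, beyond the fs/(2L)
# cutoff), a Hamming-windowed design sits roughly 50 dB down --
# far steeper than a 2nd-order -12 dB/octave slope, and longer
# filters do better still.
stop = H[f >= 1.0 / L]
atten_db = 20 * np.log10(np.max(stop))
```

So the OP's worry about "-12 dB per octave" applies to low-order IIR/analog-style filters; a modest FIR is in a different league, which is why zero-stuffing plus a good FIR is standard practice.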