Reply by mnentwig August 31, 2013
>> that can create a really nasty polynomial
yes... it's free to run wherever it likes between the points. And starts
gnawing on the furniture...

BTW, one practical comment on "house-training" polynomial models: Taylor
series expansions of asymptotically constant functions like tanh (see Matlab
script) diverge at some point. But that can be fixed, at least to some
extent: simply don't use a Taylor series expansion but a least-squares fit
(or a higher-norm fit). It trades off accuracy in the mid-range against a
wider useful input range.

_____________________________
Posted through www.DSPRelated.com
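To illustrate that trade-off, a minimal Matlab/Octave sketch (not from the
original post; the 9th order and the [-3, 3] fit range are arbitrary
choices): the Taylor series runs away well before |x| = 3, while a
least-squares fit of the same order stays usable over the whole range, at
the cost of a slightly worse fit near zero.

% compare a 9th-order Taylor expansion of tanh() with a 9th-order
% least-squares polynomial fit over [-3, 3]
x = linspace(-3, 3, 1001);
y = tanh(x);

% Taylor series of tanh about 0, up to the x^9 term
yTaylor = x - x.^3/3 + 2*x.^5/15 - 17*x.^7/315 + 62*x.^9/2835;

% least-squares fit of the same order over the whole range
p    = polyfit(x, y, 9);
yFit = polyval(p, x);

plot(x, y, 'k', x, yTaylor, 'r--', x, yFit, 'b-.');
ylim([-2 2]);
legend('tanh', '9th-order Taylor', '9th-order least-squares fit');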
Reply by robert bristow-johnson August 29, 2013
On 8/29/13 1:03 PM, mnentwig wrote:
> The Matlab script is nice (and yes, Octave understands it).
thems are all polynomials (of increasing order) and they flatten out quite well at the +1 and -1 points.
> I wouldn't consider the polynomial _fitting_ the problem.
well, i meant fitting exactly (not approximating). like fitting a 12th-order
polynomial to 13 data points. if you have a (-1)^n component to your data
points, that can create a really nasty polynomial. we're on the same page.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
Reply by mnentwig August 29, 2013
>> is about the only way to model a nonlinearity that puts a lid on how far
>> the images go

probably yes. Each multiplication in the polynomial convolves the signal
spectrum with itself, so fitting into n times the original bandwidth allows
only the original signal plus (n-1) multiplications. It seems at least
intuitive.

The Matlab script is nice (and yes, Octave understands it). I wouldn't
consider the polynomial _fitting_ the problem. It's least-squares optimal to
the data you feed in: garbage in, garbage out. The problems start when we
evaluate outside the original range (evaluate over 4x the range in your
script and the edges run away to ~10^10).

_____________________________
Posted through www.DSPRelated.com
Reply by robert bristow-johnson August 29, 2013
On 8/29/13 6:41 AM, mnentwig wrote:
> Hi RBJ,
>
> I don't disagree with what you're saying. My statement "polynomials fail
> miserably" is obviously a little tongue-in-cheek.
i guess i missed that. (my ability to pick up on the obviously tongue-in-cheek is poor.)
> It says a lot in only three words, and I wasn't planning to start with a
> five-page treatise on nonlinear modeling.
>
> If a polynomial works, fine, why not use it. I'm with you here, and the
> bandlimited nature can be very convenient.
> But the first post shows an example where it turns into a mess because of
> the high order.
yeah, just blindly fitting a polynomial to some set of data can result in a mess. like if your data points had some (-1)^n component in them, you can get a mess.
> BTW, I typically use 11th-order polynomial fits for RF amplifier
> characterization myself. Nobody disputes that it works in many cases (much
> has been written on Volterra theory), but it's by no means the only way to
> model a nonlinearity.
but i think a finite-order polynomial is about the only way to model a
nonlinearity that puts a lid on how far the images go. a hard breakpoint can
create images that go on forever. depends on how bad the discontinuity is.

did you run that MATLAB script?

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
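For illustration (not part of the original post), a small Matlab/Octave
sketch of that point: a full-scale sine through the order-21 polynomial from
the script further down the thread produces harmonics only up to 21 kHz,
while a hard-clipped sine keeps generating harmonics past Nyquist. The 1 kHz
tone, 48 kHz rate, and 1.2x overdrive for the clipper are arbitrary choices.

% harmonic content: 21st-order polynomial soft clip vs. a hard clip
fs = 48000;  f0 = 1000;                        % arbitrary sample rate and tone
t  = (0:fs-1)/fs;

N = 10;  n = 0:N;                              % same polynomial as in the script below
a = (-1).^n ./ (factorial(N-n) .* factorial(n) .* (2*n+1));
a = a/sum(a);

x      = sin(2*pi*f0*t);                       % stays within [-1, 1]
y_poly = x .* polyval(fliplr(a), x.^2);        % harmonics stop at (2N+1)*f0 = 21 kHz
y_clip = max(min(1.2*sin(2*pi*f0*t), 1), -1);  % hard clip: harmonics extend past Nyquist and alias

f = 0:fs-1;
plot(f, 20*log10(abs(fft(y_poly))+eps), 'b', ...
     f, 20*log10(abs(fft(y_clip))+eps), 'r');
xlim([0 fs/2]);
legend('polynomial', 'hard clip');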
Reply by mnentwig August 29, 2013
Hi RBJ,

I don't disagree with what you're saying. My statement "polynomials fail
miserably" is obviously a little tongue-in-cheek. It says a lot in only
three words, and I wasn't planning to start with a five-page treatise on
nonlinear modeling. 
 
If a polynomial works, fine, why not use it. I'm with you here, and the
bandlimited nature can be very convenient.
But the first post shows an example where it turns into a mess because of the
high order. 

BTW, I typically use 11th-order polynomial fits for RF amplifier
characterization myself. Nobody disputes that it works in many cases (much
has been written on Volterra theory), but it's by no means the only way to
model a nonlinearity.	 

_____________________________		
Posted through www.DSPRelated.com
Reply by robert bristow-johnson August 28, 2013
so, mnentwig, here's a little MATLAB (might also work in Octave) to show 
you what i mean:





% plot the odd-symmetry "saturating" polynomial p(x) of order 2N+1
% for N = 0..10; every curve passes through +/-1 at x = +/-1, and it
% flattens out there more and more as N grows.
line_color = ['g' 'c' 'b' 'm' 'r'];
figure;
hold on;
x = linspace(-1, 1, 2^16 + 1);
for N = 0:10
     n = linspace(0, N, N+1);
     % coefficients of p(x) = sum_n a(n+1)*x^(2n+1); the common N! factor
     % is omitted because the normalization below cancels it anyway
     a = (-1).^n ./ (factorial(N-n) .* factorial(n) .* (2*n+1));
     a = a/sum(a);                          % scale so that p(1) = 1
     y = x .* polyval(fliplr(a), x.^2);     % evaluate as x times a polynomial in x^2
     plot(x, y, line_color(mod(N,length(line_color))+1) );
end
hold off;




-- 

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."


Reply by robert bristow-johnson August 28, 2013
On 8/28/13 7:06 AM, mnentwig wrote:
> Hi,
>
> polynomials work for weakly nonlinear systems, but fail miserably when the
> nonlinearity saturates.
well, i guess that different people have different experiences. (i might
disagree with "fail miserably" vis-a-vis my own experience.) when saturation
happens, no matter what kind of nonlinear function you are using for levels
below saturation, you have to somehow butt-splice the nonlinear function to
the constant function that represents the limiting or saturation. so
consider this odd-symmetry polynomial (of order 2N+1):

               x
   p(x) = A * integral{ (1 - v^2)^N dv }
               0

               N
        = A * SUM { ((-1)^n * N!) / ((N-n)! * n! * (2n+1)) * x^(2n+1) }
              n=0

   where

           N
   1/A =  SUM { ((-1)^n * N!) / ((N-n)! * n! * (2n+1)) }
          n=0

so with A scaling p(x) as shown, it's true that

   p(0)  =  0
   p(1)  =  1
   p(-1) = -1

and the derivatives of p(x) at +1 or -1 are zero up to the Nth derivative.
so where x=+1 and where x=-1, p(x) can be spliced to a constant value (which
is also +1 and -1, because A scales p(x) to be so) with as many continuous
derivatives as you are willing to pay for. with increased N, that means more
and higher derivatives are continuous at the splice points of -1 and +1. so
the nonlinear function would be

          { p(x)     |x| <= 1
   f(x) = {
          { sgn(x)   |x| >= 1

and this nonlinear function is perfectly continuous through *all* of its
derivatives for all |x|<1 and all derivatives for |x|>1, and is continuous
up to the Nth derivative at |x|=1. this nonlinearity flattens out very
nicely as it approaches the saturation points.
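As a minimal Matlab/Octave sketch of that splice (added for illustration;
the choice N = 4 and the test range are arbitrary): since p(x) is odd and
p(+/-1) = +/-1, clamping the input to [-1, 1] before applying p(x) gives
exactly f(x).

% hard-saturating waveshaper f(x) built from the order-(2N+1) polynomial p(x)
N  = 4;                                    % any small N; polynomial order is 2*N+1
n  = 0:N;
a  = (-1).^n ./ (factorial(N-n) .* factorial(n) .* (2*n+1));
a  = a / sum(a);                           % scale so that p(1) = 1

x  = linspace(-2, 2, 1001);                % drive past the saturation points
xc = max(min(x, 1), -1);                   % clamp to [-1, 1] ...
y  = xc .* polyval(fliplr(a), xc.^2);      % ... so y = p(x) inside, sgn(x) outside

plot(x, y); grid on;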
> As is the case for example when an amplifier clips.
> The problem here is that higher order polynomial terms get only steeper
> for increasing input value, whereas what I want them to do is to flatten
> out against a constant.
not for all increasing input values. it is true that p(x) takes off for
|x|>1, but p(x) is never evaluated there; for |x|>1 the function f(x) is
just the constant sgn(x).
> Piecewise linear modeling is surprisingly powerful. The "trick" is to use
> V-shaped basis functions.
>
> For example (all x, y, b are column vectors)
> b1 = abs(x)
> b2 = abs(x-0.2)
> b3 = abs(x-0.5)
> b4 = abs(x-0.9)
> etc. Equidistant grid is possible but not mandatory.
yes, any breakpoint can be expressed as a linear combination of x and |x|.
but each breakpoint of a piecewise linear function has a discontinuous 1st
derivative (and higher). you will need a *lot* of breakpoints to make this
an audio-friendly nonlinearity. it is true that this only means a larger
table size (and memory is cheap), but using a polynomial nonlinearity has
the advantage of strictly limiting how many harmonics are generated. that
allows one to know how high to upsample the data to prevent aliasing. you
need to upsample by a factor N if your polynomial order is 2N+1 as it is
above. some of the images (from the nonlinearity) will fold over, but none
will fold back into your original baseband, and then all of those aliased
images can be removed before downsampling.

that cannot be done with piecewise linear *unless* your piecewise linear
function has many, many breakpoints *and* the function it defines fits a
polynomial of finite order. so i wonder why not just use the polynomial in
the first place? using Horner's method, a reasonably low-order polynomial is
quite efficiently implemented, not much more expensive than table lookup
with linear interpolation. and there are *no* breakpoints that generate
spurious high-frequency components that might alias badly.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
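To make the Horner point concrete, a tiny Matlab/Octave sketch (added for
illustration; the example coefficients are the N = 1 polynomial above, and
polyval() already evaluates this way internally):

a = [0 1.5 0 -0.5];            % p(x) = 1.5x - 0.5x^3, lowest-order coefficient first
x = linspace(-1, 1, 1001);

% Horner's method: one multiply and one add per coefficient, per sample
y = a(end) * ones(size(x));
for k = length(a)-1 : -1 : 1
    y = y .* x + a(k);
end
% y now equals polyval(fliplr(a), x)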
Reply by mnentwig August 28, 2013
Hi,

polynomials work for weakly nonlinear systems, but fail miserably when the
nonlinearity saturates. As is the case for example when an amplifier clips.
The problem here is that higher order polynomial terms get only steeper for
increasing input value, whereas what I want them to do is to flatten out
against a constant. 

Piecewise linear modeling is surprisingly powerful. The "trick" is to use
V-shaped basis functions. 

For example (all x, y, b are column vectors)
b1 = abs(x)
b2 = abs(x-0.2)
b3 = abs(x-0.5)
b4 = abs(x-0.9)
etc. Equidistant grid is possible but not mandatory. 

Then combine them into a matrix M:
M=[b1 b2 b3...]

and find a least-squares solution to 
y = M*c 

c = pinv(M) * y

This gives a least-squares optimal piecewise linear function that maps x to
y. 	 
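Filling that recipe in as a runnable Matlab/Octave sketch (added for
illustration; the tanh target, the breakpoint locations, and the extra
constant and linear columns are my assumptions, the latter so the fit can
take on an arbitrary offset and end slopes):

% least-squares piecewise linear fit using V-shaped basis functions
x  = linspace(-1, 1, 401).';                    % column vector of input samples
y  = tanh(3*x);                                 % target nonlinearity to model

xk = [-0.9 -0.5 -0.2 0 0.2 0.5 0.9];            % breakpoint locations (arbitrary)
B  = abs(bsxfun(@minus, x, xk));                % one V-shaped column per breakpoint
M  = [ones(size(x)), x, B];                     % add constant and linear columns

c    = pinv(M) * y;                             % least-squares solution of y = M*c
yFit = M * c;

plot(x, y, 'k', x, yFit, 'r--');
legend('target', 'piecewise linear fit');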

_____________________________		
Posted through www.DSPRelated.com
Reply by August 28, 2013
You need to find a different approach. Chebyshev is not the way to go if you need this many harmonics.

Bob
Reply by MatthewA August 27, 2013
Hey all, you were so helpful last time that I thought you might help again in my efforts to create a Chebyshev distortion.

It looks like I've got all my math right. But unfortunately it looks like I need something like 40 to 60 harmonics to really do this right. This is not CPU friendly AT ALL! BAD NEWS!

I've even written code to collect the terms into a simplified version like so.

-0.19x^0 +
-8.52x^1 +
-191.71x^2 +
2,318.07x^3 +
26,488.43x^4 +
-184,552.17x^5 +
-1,456,812.92x^6 +
6,893,573.67x^7 +
42,380,743.33x^8 +
-147,354,937.96x^9 +
-752,688,643.57x^10 +
2,012,077,911.39x^11 +
8,889,320,982.44x^12 +
-18,797,243,052.44x^13 +
-73,808,192,398.21x^14 +
125,762,059,988.99x^15 +
447,541,435,437.78x^16 +
-621,756,277,461.88x^17 +
-2,034,896,913,067.02x^18 +
2,320,810,555,328.17x^19 +
7,065,358,834,042.67x^20 +
-6,633,381,770,362.36x^21 +
-18,955,068,394,145.06x^22

These are extremely large coefficients and I'm assuming that the equation can be factored in some way.  Does anyone know any solutions to this sort of thing?

Thanks again,
-Matt