Reply by Peter Mairhofer, November 18, 2016
Hi,

I have the following setup:

x --->[ G(z) ]----->[ (.)^3 ]----->[ H(z) ]-----> y

where x is a Gaussian input with E(x)=0 and autocorrelation Rxx.

For simplicity, I set Rxx = sigma^2*delta(tau) and G(z) = 1, i.e. a pure
Hammerstein system driven by white Gaussian noise (I would like to
generalize afterwards if possible):

x --->[ (.)^3 ]----->[ H(z) ]-----> y

Now I would like to quantify the error caused by this system in terms
of the normalized mean squared error: error = E((y-x)^2)/E(x^2)

When H(z)=1, y = x^3 and the Gaussian moments E(x^4) = 3*sigma^4 and
E(x^6) = 15*sigma^6 make this trivial:
E((x^3 - x)^2)/E(x^2) = (15*sigma^6 - 6*sigma^4 + sigma^2)/sigma^2
= 15*sigma^4 - 6*sigma^2 + 1.
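
A quick sanity check of that number in MATLAB (with sigma = 1 the value
should come out near 10; nothing assumed beyond randn):

sigma = 1;
x = sigma*randn(2^20, 1);
mean((x.^3 - x).^2)/mean(x.^2)  % should be near 15*sigma^4 - 6*sigma^2 + 1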

With H(z) != 1, I think Bussgang's theorem might help, but I do not see
how. As written on Wikipedia, it applies to Wiener systems but not to
Hammerstein systems.
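
To convince myself what the theorem actually states, here is a quick
numerical sketch for a colored Gaussian input (the coloring filter g and
the lag range are arbitrary choices of mine; xcorr is from the Signal
Processing Toolbox):

N = 2^18;
g = fir1(10, 0.2);             % arbitrary filter to color the input
u = filter(g, 1, randn(N,1));  % colored, zero-mean Gaussian
y = u.^3;
C = mean(u.*y)/mean(u.^2);     % Bussgang gain, should be ~3*var(u)
Ruu = xcorr(u, u, 20, 'unbiased');
Ryu = xcorr(y, u, 20, 'unbiased');
plot([C*Ruu, Ryu])             % the two curves should overlap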

But I found in [1]: "For a Gaussian input signal, the iterative approach
can be motivated by Bussgang's theorem. This result implies that for a
Hammerstein system, the LTI model that minimizes the mean-square error

E((y(t) − G(q)u(t))^2)

will be equal to a scaled version of the LTI part of the system." This
suggests to me that the best LTI fit is C*H(z), where C is the Bussgang
gain from Wikipedia [2]: C = E(x*x^3)/E(x^2) = E(x^4)/sigma^2 = 3*sigma^2.
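
The way I understand the claim, it should be testable with a
least-squares FIR fit from x to the Hammerstein output, which should
then recover C*H(z). A sketch of that test (the model order M and the
regressor construction are my own choices):

N = 2^16; sigma = 1;
b = fir1(10, 0.1);
x = sigma*randn(N,1);
y = filter(b, 1, x.^3);
M = numel(b) - 1;                     % fit the same FIR order as b
X = toeplitz(x, [x(1); zeros(M,1)]);  % regressor (convolution) matrix
h = X \ y;                            % least-squares impulse response
stem([h, 3*sigma^2*b(:)])             % compare with C*b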

However, a quick MATLAB check comparing the two outputs directly
suggests that something is wrong:

N = 2^16;                 % number of samples
sigma = 1;                % input standard deviation
C = 3*sigma^2;            % Bussgang gain for the cubic
b = fir1(10, 0.1);        % some lowpass FIR for H(z)
x = sigma*randn(N,1);     % white Gaussian input
y1 = filter(b, 1, x.^3);  % actual Hammerstein output
y2 = C*filter(b, 1, x);   % Bussgang LTI approximation
plot([y1, y2])

y1 and y2 look very different in the plot, and std(y1) and std(y2) do
not match either.
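
To make "different" concrete, a small extra check (corrcoef is base
MATLAB, nothing else assumed):

r = corrcoef(y1, y2);
fprintf('std(y1) = %.3f, std(y2) = %.3f, corr = %.3f\n', ...
    std(y1), std(y2), r(1,2))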

Is there a mistake somewhere? Or is there a better approach to this
problem in the first place?

Thanks,
Peter



[1] Martin Enqvist, "Identification of Hammerstein Systems Using
Separable Random Multisines"
[2] https://en.wikipedia.org/wiki/Bussgang_theorem