Spiking decon has been used in geophysics for over 50 years.
An FIR Wiener filter is computed by Levinson recursion. The inputs
are the autocorrelation of the trace and an impulse (1,0,0,0,...)
as the desired output. The method rests on certain assumptions, such
as the wavelet being minimum phase and stationary, which hold only
approximately in real data.
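For reference, here is a minimal sketch of the design step as I understand it. I use scipy's general Toeplitz solver in place of Levinson recursion for clarity (the solution is the same), and the prewhitening level and function name are my own choices, not the package's:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_filter(trace, nfilt, prewhite=0.01):
    """Design a spiking-decon Wiener filter (sketch).

    Solves the Toeplitz normal equations R f = d, where R is built from
    the trace autocorrelation and d = (1, 0, 0, ...) is the desired
    zero-lag spike, then normalizes by sqrt(f[0]).  A general Toeplitz
    solver stands in for Levinson recursion; prewhitening is assumed.
    """
    n = len(trace)
    # one-sided autocorrelation, lags 0 .. nfilt-1
    r = np.correlate(trace, trace, mode="full")[n - 1 : n - 1 + nfilt].copy()
    r[0] *= 1.0 + prewhite            # prewhitening stabilizes the solve
    d = np.zeros(nfilt)
    d[0] = 1.0                        # desired output: unit spike at lag 0
    f = solve_toeplitz(r, d)
    if f[0] <= 0.0:                   # the check that fires the warning
        raise FloatingPointError("non-positive first filter sample")
    return f / np.sqrt(f[0])
```

In exact arithmetic f[0] must be positive, since it equals the (0,0) entry of the inverse of a positive-definite matrix; a negative value can only come from numerical breakdown.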
Now I have been looking at an open-source package, and it performs a
number of checks inside the algorithm, such as scanning for overflow
or division by zero. I compiled it not to trap floating-point
exceptions, since trapping would simply abort the run at such events.
The computed Wiener filter is normalized by the square root of the
first filter sample, so the code checks whether that sample is negative.
Now on some datasets I get negative-square-root warnings, on up to
5% of the traces in the worst case. Sometimes they follow overflow
warnings, which would suggest a blowup in the Levinson recursion
(division by a very small denominator), but not always.