Liz wrote:
> As a signal-processing person, I am trying to wade through some
> heavy-duty math papers and having a problem.
>
> Suppose that you have a signal-processing network that is
> represented by a transfer function (either Laplace or Z, doesn't
> matter right now). Suppose that this transfer function is of the
> form;
>
> H(s) = a(s) + b(s) + c(s) ... to infinity
>
> Assume, for example, that a(s), b(s), etc, represent weighted
> ideal delay elements, so you have some infinite sum of weighted
> delays. Now, since the sum is infinite, it's possible to have
> poles that come from the infinite summation, rather than the more
> usual 1/(1-f(s)) type of recursive pole.
>
> Now, when you give this to a Math person, they will find the
> "region of convergence" of this function in terms of the complex
> variable s. If you ask them what happens outside of this region,
> they will simply say "it doesn't converge there, so why are you
> asking?" But, in the same breath, they will say that the function
> has zeros in this region (the region that supposedly does not
> converge).
>
> Can someone explain this to me?
>
> Bob Adams

Hi Liz/Bob,
in fact, I cannot quite imagine such an H(s) as you describe.
However, maybe I can shed some light on your question about the
region of convergence...
If you think of the fraction 1/n as a basic element, you'll
certainly agree that the sum of 1/n over all n = 1..inf is
infinite: the harmonic series sum(1/n) does not converge.
If you take 1/(n^2) instead, you'll find that its sum converges
to about 1.64 (pi^2/6, to be exact).
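You can watch both behaviours numerically. A small Python sketch
(the helper name partial_sum is just mine, for illustration):

```python
def partial_sum(term, N):
    """Sum term(n) for n = 1..N."""
    return sum(term(n) for n in range(1, N + 1))

# The harmonic partial sums keep growing (roughly ln(N) + 0.577),
# while the 1/n^2 partial sums settle near pi^2/6 ~ 1.6449.
harmonic_1e3 = partial_sum(lambda n: 1.0 / n, 1000)
harmonic_1e6 = partial_sum(lambda n: 1.0 / n, 1000000)
basel_1e6 = partial_sum(lambda n: 1.0 / n ** 2, 1000000)

print(harmonic_1e3)  # ~7.49, still climbing
print(harmonic_1e6)  # ~14.39, no limit in sight
print(basel_1e6)     # ~1.6449
```

Taking a thousand times more terms barely moves the second sum, but
the first one just keeps going.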
Now we have two series: one converges, the other doesn't.
If you add both sums together, you get a mixture, and whether it
converges depends on how much weight each part carries.
Say you take w1*(sum(1/n)) + w2*(sum(1/n^2)), where w1 and w2 are
the weights of both. Playing around with some values shows that
convergence now depends on w1 and w2, so we can think of the
region of convergence as roc = f(w1, w2).
In this example we can easily find one convergent pair of values:
w1 = 0, w2 = 1.
A little more reflection reveals that w1 = 0 with arbitrary w2
works as well.
And an example of divergence: w1 = 1, w2 = 0 (in fact any nonzero
value of w1 makes the sum diverge, whatever w2 is).
I hope it's not too complicated yet.
The point now is this:
if you imagine w1 and w2 to be coordinates of a cartesian system,
you can "draw" the region of convergence.
Leaving out the (interesting) point w1 = 0, w2 = 0 for a moment,
it's obvious (hehe) that one axis (the w2 axis, where w1 = 0)
marks a region of convergence, the other (the w1 axis, w2 = 0) a
region of divergence.
Certainly there is a sharp border between the two (which we may
not know in general, but could possibly calculate or estimate).
In our example there are two regions of convergence (the two
halves of the w2 axis), separated by the special spot w1 = w2 = 0.
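If you like, you can even let the computer "draw" that picture
with a crude numeric test. Note the convergence check below is
only a heuristic of my own (comparing partial sums at N and 2N
terms), not a proof:

```python
def looks_convergent(w1, w2, N=10 ** 4):
    # Heuristic, not a proof: if the partial sums at N and 2N terms
    # are still far apart, call the series divergent.
    def partial(M):
        return sum(w1 / n + w2 / n ** 2 for n in range(1, M + 1))
    return abs(partial(2 * N) - partial(N)) < 1e-3

# Scan a little grid of (w1, w2) weights: C = converges, d = diverges.
for w1 in (-1.0, 0.0, 1.0):
    row = " ".join("C" if looks_convergent(w1, w2) else "d"
                   for w2 in (-1.0, 0.0, 1.0))
    print("w1=%+.0f: %s" % (w1, row))
```

Only the middle row (w1 = 0) comes out convergent, which is exactly
the w2 axis from above.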
Imagine that the functions are not so trivial; then the regions
won't be trivial either. But in the end, this is what you
mentioned: math people can derive the ROC (region of convergence).
Now let's look at w1 = w2 = 0 in the above example.
Is the function convergent here or not?
Sometimes the answer to such a question is easy, sometimes it
isn't. Sometimes even reordering the elements of a sum decides
whether it converges, or what it converges to: the classic example
is the alternating harmonic series 1 - 1/2 + 1/3 - ..., which by
Riemann's rearrangement theorem can be reordered to sum to any
value at all. I mean: in some cases it's difficult to find an
exact answer at such critical points.
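To illustrate that reordering effect: the alternating harmonic
series 1 - 1/2 + 1/3 - ... normally sums to ln(2) ~ 0.693, but a
greedy reordering of the very same terms can steer the partial
sums toward any target you like. A Python sketch (the function
name is my own choice):

```python
import math

def rearranged_sum(target, steps):
    """Greedily reorder the terms +1, -1/2, +1/3, -1/4, ... so the
    partial sums approach `target`: add the next positive term while
    below the target, the next negative term while at or above it."""
    pos = 1  # next positive term is +1/(2*pos - 1)
    neg = 1  # next negative term is -1/(2*neg)
    s = 0.0
    for _ in range(steps):
        if s < target:
            s += 1.0 / (2 * pos - 1)
            pos += 1
        else:
            s -= 1.0 / (2 * neg)
            neg += 1
    return s

print(math.log(2))                   # ~0.6931: sum in the usual order
print(rearranged_sum(1.0, 10 ** 5))  # ~1.0: same terms, new order
```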
At other times it's easy: imagine that the weight of another
function is f(n)*(1/(w3-1)). At w3 = 1 the weight would be
infinite, making the sum diverge; but only at this single point,
not in its neighborhood. This would be something like a pole.
You may think that's trivial, but combine this case with the
point w1 = w2 = 0 from above, and things get so complicated that
you won't find the answer.
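A tiny sketch of that pole-like behaviour, taking the convergent
sum(1/n^2) as the inner function (the name weighted_sum is mine):

```python
def weighted_sum(w3, N=10 ** 4):
    # A convergent series, sum(1/n^2), scaled by the weight
    # 1/(w3 - 1): finite everywhere except at the single point
    # w3 = 1, where the weight itself blows up -- a pole-like spot.
    basel = sum(1.0 / n ** 2 for n in range(1, N + 1))
    return basel / (w3 - 1.0)

print(weighted_sum(2.0))    # ~1.64: perfectly fine away from w3 = 1
print(weighted_sum(1.001))  # ~1645: blowing up as w3 -> 1
```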
The nice thing is that you can usually define the ROC without
much trouble, though sometimes not as a closed formula but only
as a list of regions (and sometimes even a list with an infinite
number of regions).
The nasty thing is that you usually know almost nothing about the
rest, where convergence is not guaranteed.
Usually there are poles causing the divergence, but this need not
be the case, as shown above.
However, if you think of mapping regions as in the Z-transform /
Schwarz-Christoffel /..., it comes to the same thing anyhow.
Therefore, it might be best to join the math people in not
worrying about what happens inside these critical areas.
HTH
Bernhard