Reply by Andor October 28, 2003
robert.w.adams wrote:
...
> Since you mentioned the Riemann zeta function, the region of
> convergence appears to be re(s) > 1. Is this due to the "pole" at s =
> 1?
That's exactly the reason. The sum representation of the zeta function converges for all complex s with re(s) > 1. However, just as with the geometric series, the zeta function can be analytically continued to the whole complex plane minus the point 1, where it has a simple pole (a pole of first order).

We used to have a professor who said that if you sum the natural numbers you get -1/12. That is because zeta(-1) = -1/12, and if you plug s = -1 into the sum representation of the zeta function, you get exactly the sum of the natural numbers ...
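A quick way to see this numerically is with Python's mpmath library, whose zeta() implements the analytic continuation (the sketch below is illustrative only):

```python
# Illustrative sketch: mpmath's zeta() implements the analytic
# continuation, so it returns a finite value at s = -1 even though the
# defining sum 1 + 2 + 3 + ... diverges.
from mpmath import mp, zeta, nsum, inf

mp.dps = 15  # working precision, in decimal digits

# Inside the region of convergence the sum and the function agree:
print(zeta(2))                             # 1.64493406684823 (= pi^2/6)
print(nsum(lambda n: 1 / n**2, [1, inf]))  # same value, from the sum itself

# Outside the region of convergence only the continuation is defined:
print(zeta(-1))  # -0.0833333333333333 (= -1/12)
```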
> If so, then I wonder about the function 1/zeta(s). You might say that,
> since zeta(s) does not converge for s < 1, then 1/zeta(s) also does
> not converge. But on the other hand, 1/zeta(s) has a zero at s=1
> instead of a pole; so shouldn't the region of convergence be
> different?
Sure. 1/zeta(s) is holomorphic on the complex plane minus all the zeros of the zeta function. To simplify, you can just view zeta(s) as a meromorphic function (which means its reciprocal is guaranteed to exist and also be meromorphic).

I think the main thing here is to remember that an analytic function (e.g. the zeta function) can be represented in many ways, locally around a point z_0 by a power series, and these representations need not have the same domains of definition.

BTW: To show that two analytic functions are equal on their common domain of definition, all you have to do is find a sequence of points, with an accumulation point in that domain, on which the two functions coincide.

Regards,
Andor
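Incidentally, 1/zeta(s) even has a Dirichlet series of its own, sum of mu(n)/n^s with mu the Moebius function, convergent for re(s) > 1. A short, illustrative Python sketch (the moebius() helper is written here just for the demo) checks its partial sums against 1/zeta(2) = 6/pi^2:

```python
# Illustrative sketch: partial sums of sum_{n>=1} mu(n)/n^s converge to
# 1/zeta(s) for re(s) > 1.  The moebius() helper is ad hoc, just for
# this demo.
from mpmath import mp, zeta

mp.dps = 15

def moebius(n):
    """Moebius function mu(n) by trial factorization (fine for small n)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n has a squared prime factor
            result = -result      # one more distinct prime factor
        p += 1
    return -result if n > 1 else result

partial = sum(moebius(n) / n**2 for n in range(1, 100_000))
print(partial)      # ~0.607927..., the partial Dirichlet sum at s = 2
print(1 / zeta(2))  # 0.607927101854027 (= 6/pi^2)
```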
Reply by Glen Herrmannsfeldt October 27, 2003
"Randy Yates" <yates@ieee.org> wrote in message
news:IT7nb.5882$X22.1146@newsread2.news.atl.earthlink.net...
> Liz wrote:
>
> > Thanks for all the inputs.
> >
> > This business of analytic continuation is exactly what I am trying to
> > understand.
> That's not too hard. Get a book on complex variables (I suggest
> the old chestnut by Churchill and Brown). Essentially, analytic
> continuation is based on Taylor's theorem. If a function is analytic in a
> certain region, and you have a known "piece" of the function in a
> continuous interval or a neighborhood, then you can construct the
> series expansion for the function using Taylor's theorem. Then you
> can use that series expansion to generate the function at other points
> outside the area in which it is currently available.
That sounds like how I was remembering it, too. I still have my Saff and Snider, and looked it up to see how they explain it. You can evaluate the function and all its derivatives anywhere within the region of convergence of the original series, then write a new Taylor series expansion around that point, preferably not near the singularities of the original function.

The example they have is someone working with the Log z series, expanded around (1,0), with a singularity at (0,0), but not recognizing the Log series, which we know is analytic except at (0,0). In the next section, they do a series of analytic continuations until they get to (-1,0), once along a y>0 path and once along a y<0 path. The somewhat obvious result is that, for functions with branch cuts, analytic continuation may end up on a different branch of the function than the one you started on.

-- glen
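That chain of continuations is easy to mimic numerically. The illustrative Python sketch below (a toy version, not the book's construction) carries log z from 1 to -1 along an upper and a lower semicircle and lands on the two different branches:

```python
# Illustrative sketch: continue log z along a path by accumulating
# log(z_next/z_prev).  Each ratio is close to 1, so the principal
# branch is the correct local analytic piece at every step.
import cmath

def continue_log(path):
    """Analytic continuation of log z along a list of points, from z = 1."""
    value = 0j  # log(1) = 0
    for z_prev, z_next in zip(path, path[1:]):
        value += cmath.log(z_next / z_prev)  # small step: no branch jump
    return value

N = 1000
upper = [cmath.exp(1j * cmath.pi * k / N) for k in range(N + 1)]   # y > 0 path
lower = [cmath.exp(-1j * cmath.pi * k / N) for k in range(N + 1)]  # y < 0 path

print(continue_log(upper))  # ~ +3.14159j : log(-1) on one branch
print(continue_log(lower))  # ~ -3.14159j : log(-1) on the other branch
```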
Reply by Randy Yates October 27, 2003
Liz wrote:

> Thanks for all the inputs.
>
> This business of analytic continuation is exactly what I am trying to
> understand.
Hey Bob,

That's not too hard. Get a book on complex variables (I suggest the old chestnut by Churchill and Brown). Essentially, analytic continuation is based on Taylor's theorem. If a function is analytic in a certain region, and you have a known "piece" of the function in a continuous interval or a neighborhood, then you can construct the series expansion for the function using Taylor's theorem. Then you can use that series expansion to generate the function at other points outside the area in which it is currently available. This is used in extrapolation of signals that have been corrupted by dropouts.

At least I hope this is right - I'm going from memory; the books and papers are at work.

--
%  Randy Yates                  % "...the answer lies within your soul
%% Fuquay-Varina, NC            %  'cause no one knows which side
%%% 919-577-9882                %  the coin will fall."
%%%% <yates@ieee.org>           % 'Big Wheels', *Out of the Blue*, ELO
http://home.earthlink.net/~yatescr
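The mechanism is easy to demonstrate with the geometric series (an illustrative Python sketch, not the signal-extrapolation method itself): the Taylor coefficients at a new centre z0 are computed from the original series alone, and the re-centred series then converges at a point where the original one diverges:

```python
# Illustrative sketch: re-expand the geometric series sum z^n (radius
# of convergence 1) around z0 = -0.5, using only information contained
# in the original series, and evaluate where that series diverges.
from math import comb

M = 400    # truncation of the original series (safe, since |z0| < 1)
K = 60     # number of terms kept in the re-centred series
z0 = -0.5  # new expansion centre, inside the original disc

# k-th Taylor coefficient at z0, straight from the original series:
# f^(k)(z0)/k! = sum_{n>=k} C(n,k) * z0^(n-k)
coeffs = [sum(comb(n, k) * z0 ** (n - k) for n in range(k, M))
          for k in range(K)]

z = -1.2  # |z| > 1: the original series diverges here
approx = sum(c * (z - z0) ** k for k, c in enumerate(coeffs))
print(approx)       # ~0.4545..., from the re-centred series
print(1 / (1 - z))  # 0.4545..., the analytic continuation 1/(1-z)
```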
Reply by Liz October 26, 2003
Thanks for all the inputs.  

This business of analytic continuation is exactly what I am trying to
understand. In signal-processing terms, I wonder if it is the same
thing as saying that sometimes an infinite sum (an infinitely-long FIR
filter, for example) can be converted via a power-series relationship
to a recursive (IIR) structure that has a larger region of
convergence. I guess that the thing I don't quite understand is, if
there really is such a power-series equivalence, then why don't they
give the same answer for EVERY point in the complex plane? Is it
really just an issue of it not being possible to truly sum a series to
infinity?
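
(For concreteness, the equivalence in question can be sketched in a few lines of Python; the choice h[n] = a^n below is arbitrary and purely illustrative.)

```python
# Illustrative sketch (h[n] = a^n is an arbitrary choice): the
# infinitely-long FIR h[n] = a^n u[n] and the one-pole recursion
# y[n] = a*y[n-1] + x[n] have the same impulse response.
a, N = 0.9, 50

fir_taps = [a ** n for n in range(N)]  # explicit (truncated) FIR view

y, iir_response = 0.0, []              # recursive (IIR) view
for n in range(N):
    x = 1.0 if n == 0 else 0.0         # unit impulse input
    y = a * y + x
    iir_response.append(y)

print(max(abs(f - g) for f, g in zip(fir_taps, iir_response)))  # ~0

# Transform side: sum a^n z^-n = 1/(1 - a z^-1) only for |z| > |a|,
# yet the right-hand side is defined (except at z = a) even where the
# sum diverges -- that is the larger "region" the recursion buys you.
z = 0.5                 # |z| < |a|: the series diverges here...
print(1 / (1 - a / z))  # ...but the closed form still gives -1.25
```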

Since you mentioned the Riemann zeta function, the region of
convergence appears to be re(s) > 1. Is this due to the "pole" at s =
1?

If so, then I wonder about the function 1/zeta(s). You might say that,
since zeta(s) does not converge for s < 1, then 1/zeta(s) also does
not converge. But on the other hand, 1/zeta(s) has a zero at s=1
instead of a pole; so shouldn't the region of convergence be
different?


Regards


Bob Adams
Reply by Greg Berchin October 23, 2003
On 22 Oct 2003 17:40:27 -0700, robert.w.adams@verizon.net (Liz)
wrote:

>>Now, when you give this to a Math person, they will find the "region
>>of convergence" of this function in terms of the complex variable s.
>>If you ask them what happens outside of this region, they will simply
>>say "it doesn't converge there, so why are you asking?" But, in the
>>same breath, they will say that the function has zeros in this region
>>(the region that supposedly does not converge).
>>
>>Can someone explain this to me?
I think that zeros are only meaningful inside the ROC. Using one of the examples from Oppenheim and Schafer, if x(n) = (a^n)u(n) then

         inf                        1           z
  X(z) = SUM [az^(-1)]^n  =  -----------  =  -------
         n=0                 1 - az^(-1)      z - a

This has a zero at z=0, but a ROC of |z|>|a|. Well, if z=0 then there is no value of 'a' for which |z|>|a|. For convergence of X(z) it is necessary that

  inf
  SUM |az^(-1)|^n  <  infinity
  n=0

If z=0, the sum blows up.

So basically, when we say that X(z) has a zero at a location, we really mean that it would have a zero at that location IF it converged there.

Greg Berchin
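The same thing can be checked numerically; in the illustrative Python sketch below (a = 0.5 and the test points are arbitrary), the partial sums track z/(z-a) inside the ROC and blow up at a point where |z| < |a|:

```python
# Illustrative numerical check (a and the test points are arbitrary):
# inside the ROC |z| > |a| the partial sums track z/(z-a); at a point
# with |z| < |a| the terms (a/z)^n grow and the sum blows up.
a = 0.5

def partial_sum(z, terms=200):
    return sum((a / z) ** n for n in range(terms))

for z in (2.0, 1.0, 0.75):  # all inside the ROC |z| > 0.5
    print(z, partial_sum(z), z / (z - a))  # sum and closed form agree

# At z = 0.25 the terms are 2^n, so the sum diverges -- even though
# the closed form happily reports z/(z - a) = -1 there.
print(0.25 / (0.25 - a))  # -1.0: the continuation's value, not the sum's
```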
Reply by Bernhard Holzmayer October 23, 2003
Liz wrote:

> As a signal-processing person, I am trying to wade through some
> heavy-duty math papers and having a problem.
>
> Suppose that you have a signal-processing network that is
> represented by a transfer function (either Laplace or Z, doesn't
> matter right now). Suppose that this transfer function is of the
> form:
>
> H(s) = a(s) + b(s) + c(s) ... to infinity
>
> Assume, for example, that a(s), b(s), etc, represent weighted
> ideal delay elements, so you have some infinite sum of weighted
> delays. Now, since the sum is infinite, it's possible to have
> poles that come from the infinite summation, rather than the more
> usual 1/(1-f(s)) type of recursive pole.
>
> Now, when you give this to a Math person, they will find the
> "region of convergence" of this function in terms of the complex
> variable s. If you ask them what happens outside of this region,
> they will simply say "it doesn't converge there, so why are you
> asking?" But, in the same breath, they will say that the function
> has zeros in this region (the region that supposedly does not
> converge).
>
> Can someone explain this to me?
>
> Bob Adams
Hi Liz/Bob,

in fact, I cannot imagine such an H(s) as you describe. However, maybe I can shed a little light on your question about the convergence region...

If you take the fraction 1/n as a basic element, you'll certainly agree that the sum of 1/n over all n=1..inf is infinite. You can say sum(1/n) doesn't converge. If you take 1/(n^2) instead, you'll find that its sum converges, to about 1.645 (pi^2/6, in fact).

Now we have two "functions"; one converges, the other doesn't. If you sum both together, you'll get a mixture, and whether it converges depends on the weight given to each. Say you take w1*(sum(1/n)) + w2*(sum(1/n^2)), where w1 and w2 are the weights. Playing around with some values will show that convergence now depends on w1 and w2. We can define the region of convergence to be roc = f(w1,w2). In this example we can easily find one pair of values where the sum is convergent: w1=0, w2=1. Some more reflection reveals w1=0, w2=arbitrary. And an example of divergence: w1=1, w2=0 (this works with any other nonzero value of w1, too).

I hope it's not too complicated yet. The point now is this: if you imagine w1 and w2 as coordinates of a Cartesian system, you can "draw" the region of convergence. Leaving out the (interesting) point w1=0, w2=0 for a moment, it's obvious (hehe) that one axis marks a region of convergence, the other a region of divergence. Certainly there will be a sharp border between the two (which we don't know yet, but could possibly calculate or estimate). In our example there are two regions of convergence, separated by the special spot w1=w2=0. Imagine functions that are not so trivial, and the regions won't be trivial either. But in the end, this is what you mentioned: math people can derive the ROC (region of convergence).

Now let's look at w1=w2=0 in the above example. Is the function convergent here or not? Sometimes the answer to such a question is easy, sometimes it isn't. Sometimes even reordering the terms of a sum decides whether it converges or not (the alternating harmonic series 1 - 1/2 + 1/3 - ... is the classic example: rearranging its terms can change the sum, or destroy convergence altogether). I mean: in some cases it's difficult to find an exact answer at such critical points.

At other times it's easy: imagine that the weight of another function is f(n)*(1/(w3-1)). At w3=1 the weight would be infinite, resulting in divergence of the sum - but only at this point, not in its neighborhood. This would be something like a pole. You may think that's trivial, but combine this case with the point w1=w2=0 from above, and things get so complex that you won't find the answer.

The nice thing is that you can usually define the ROC easily, though sometimes not with a closed formula but only as a list of regions (and sometimes it's even a list with an infinite number of regions). The nasty thing is that you usually know almost nothing about the rest, where convergence is not guaranteed. Usually there are poles causing the divergence, but this need not be the case, as shown above. However, if you think of folding regions as in the Z-transform / Schwarz-Christoffel / ..., it's the same thing anyhow.

Therefore, it might be best to join the math people in not bothering about what happens inside these critical areas.

HTH
Bernhard
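The weighted-sum example above is easy to try out numerically; the Python sketch below (illustrative only) just watches the partial sums, since we can only ever take partial sums:

```python
# Illustrative sketch of the weighted sum w1*sum(1/n) + w2*sum(1/n^2):
# the 1/n^2 part settles near pi^2/6 ~ 1.6449, the 1/n part keeps
# growing, so the combination converges only when w1 = 0.
def weighted_partial(w1, w2, terms):
    return sum(w1 / n + w2 / n**2 for n in range(1, terms + 1))

for terms in (10**3, 10**4, 10**5, 10**6):
    print(terms,
          weighted_partial(0.0, 1.0, terms),  # approaching ~1.6449
          weighted_partial(1.0, 1.0, terms))  # growing without bound
```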
Reply by Andor October 23, 2003
robert.w.adams wrote:
...
> H(s) = a(s) + b(s) + c(s) ... to infinity
>
> Assume, for example, that a(s), b(s), etc, represent weighted ideal
> delay elements, so you have some infinite sum of weighted delays.
Hi Bob, if you let s be a complex variable, then H(s) is a sum of complex functions a_k(s), k from 0 to infinity (notice I renamed your functions).
>
> Now, when you give this to a Math person, they will find the "region
> of convergence" of this function in terms of the complex variable s.
Ok.
> If you ask them what happens outside of this region, they will simply
> say "it doesn't converge there, so why are you asking?" But, in the
> same breath, they will say that the function has zeros in this region
> (the region that supposedly does not converge).
I'm not quite sure about your specific H(s) and where those zeros come from. But look at the following: let a_k(s) := s^k. Then H(s) is the geometric series and converges for all complex s with |s| < 1. However, H has an analytic continuation, namely H_e(s) = 1/(1-s), which is defined for all complex s =/= 1, and H_e(s) = H(s) for |s| < 1. (This example does not have any zeros outside the region of convergence of the original sum. There are other examples, like the Riemann zeta function, which do.)

However, H(s) is quite clearly only convergent for |s| < 1, and does not converge outside the unit disc. It is wrong to say that H(s) has any value outside its region of convergence (even for a math person :). It is right to say that H has an analytic continuation H_e which can have a larger domain of definition (and thus can possibly have zeros outside the region of convergence of H).
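Numerically, the contrast between H and H_e looks like this (an illustrative Python sketch; the sample points are arbitrary):

```python
# Illustrative sketch: partial sums of H(s) = sum s^k versus the
# continuation H_e(s) = 1/(1-s), at arbitrary sample points.
def H_partial(s, terms):
    return sum(s ** k for k in range(terms))

def H_e(s):
    return 1 / (1 - s)

s_in = 0.5 + 0.5j                         # |s| < 1
for terms in (10, 20, 40):
    print(terms, H_partial(s_in, terms))  # -> (1+1j)
print(H_e(s_in))                          # (1+1j): the limit of the sums

s_out = 2.0                   # |s| > 1
print(H_partial(s_out, 40))   # 2^40 - 1: the sums explode...
print(H_e(s_out))             # ...while H_e(2) = -1 is well defined
```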
> Can someone explain this to me?
fwiw

Regards,
Andor
Reply by robert bristow-johnson October 23, 2003
> Liz wrote:
hey Bob! what's this "Liz" thingie?
>> As a signal-processing person, I am trying to wade through some
>> heavy-duty math papers and having a problem.
>>
>> Suppose that you have a signal-processing network that is represented
>> by a transfer function (either Laplace or Z, doesn't matter right
>> now). Suppose that this transfer function is of the form:
>>
>> H(s) = a(s) + b(s) + c(s) ... to infinity
>>
>> Assume, for example, that a(s), b(s), etc, represent weighted ideal
>> delay elements, so you have some infinite sum of weighted delays. Now,
>> since the sum is infinite, it's possible to have poles that come from
>> the infinite summation, rather than the more usual 1/(1-f(s)) type of
>> recursive pole.
>>
>> Now, when you give this to a Math person, they will find the "region
>> of convergence" of this function in terms of the complex variable s.
>> If you ask them what happens outside of this region, they will simply
>> say "it doesn't converge there, so why are you asking?" But, in the
>> same breath, they will say that the function has zeros in this region
>> (the region that supposedly does not converge).
>>
>> Can someone explain this to me?
no, but isn't the ROC of a normal recursive thingie, the stuff outside the unit circle (or right-half plane for the "s" guys) and don't you get non-minimum phase filters that have zeros in that non-ROC area? yet it is kinda hard to grasp how something that doesn't converge at all can be zero at the same time. i dunno.
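fwiw, it's easy to cook up a causal filter whose zero lies outside its ROC. An illustrative Python sketch (the pole/zero locations are arbitrary): H(z) = (1 - 0.5 z^-1)/(1 - 0.9 z^-1) has ROC |z| > 0.9 and a zero at z = 0.5, and the defining sum diverges exactly where the closed form says "zero":

```python
# Illustrative sketch with arbitrary pole/zero locations:
# H(z) = (1 - 0.5 z^-1)/(1 - 0.9 z^-1), causal, ROC |z| > 0.9,
# zero at z = 0.5 (outside the ROC).
pole, zero = 0.9, 0.5

def h(n):
    """Impulse response: h[0] = 1, h[n] = (pole - zero)*pole^(n-1), n >= 1."""
    return 1.0 if n == 0 else (pole - zero) * pole ** (n - 1)

def H_sum(z, terms=400):
    return sum(h(n) * z ** (-n) for n in range(terms))

def H_closed(z):
    return (1 - zero / z) / (1 - pole / z)

print(H_closed(2.0), H_sum(2.0))  # inside the ROC: both ~1.3636
print(H_closed(0.5))              # exactly 0.0 at the zero...
print(H_sum(0.5, 50))             # ...where the defining sum blows up
```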
>> Bob Adams
we gotta get this Bob Adams to post here on comp.dsp more often and tell
us all of his DSP secrets.

In article jPFlb.13378$W16.13129@newsread2.news.atl.earthlink.net, Randy
Yates at yates@ieee.org wrote on 10/22/2003 21:05:
> Could it be that the region-of-nonconvergence includes isolated
> "points-of-convergence," so that the region generally doesn't converge
> but at these isolated points? I can't think of a function that would
> do this off the top of my head.
why not a non-min phase filter? still dunno.

r b-j
Reply by Randy Yates October 22, 2003
Liz wrote:

> As a signal-processing person, I am trying to wade through some
> heavy-duty math papers and having a problem.
>
> Suppose that you have a signal-processing network that is represented
> by a transfer function (either Laplace or Z, doesn't matter right
> now). Suppose that this transfer function is of the form:
>
> H(s) = a(s) + b(s) + c(s) ... to infinity
>
> Assume, for example, that a(s), b(s), etc, represent weighted ideal
> delay elements, so you have some infinite sum of weighted delays. Now,
> since the sum is infinite, it's possible to have poles that come from
> the infinite summation, rather than the more usual 1/(1-f(s)) type of
> recursive pole.
>
> Now, when you give this to a Math person, they will find the "region
> of convergence" of this function in terms of the complex variable s.
> If you ask them what happens outside of this region, they will simply
> say "it doesn't converge there, so why are you asking?" But, in the
> same breath, they will say that the function has zeros in this region
> (the region that supposedly does not converge).
>
> Can someone explain this to me?
>
> Bob Adams
Could it be that the region-of-nonconvergence includes isolated
"points-of-convergence," so that the region generally doesn't converge
but at these isolated points? I can't think of a function that would
do this off the top of my head.

--
%  Randy Yates                  % "...the answer lies within your soul
%% Fuquay-Varina, NC            %  'cause no one knows which side
%%% 919-577-9882                %  the coin will fall."
%%%% <yates@ieee.org>           % 'Big Wheels', *Out of the Blue*, ELO
http://home.earthlink.net/~yatescr
Reply by Liz October 22, 2003
As a signal-processing person, I am trying to wade through some
heavy-duty math papers and having a problem.

Suppose that you have a signal-processing network that is represented
by a transfer function (either Laplace or Z, doesn't matter right
now). Suppose that this transfer function is of the form:

H(s) = a(s) + b(s) + c(s) ... to infinity

Assume, for example, that a(s), b(s), etc, represent weighted ideal
delay elements, so you have some infinite sum of weighted delays. Now,
since the sum is infinite, it's possible to have poles that come from
the infinite summation, rather than the more usual 1/(1-f(s)) type of
recursive pole.

Now, when you give this to a Math person, they will find the "region
of convergence" of this function in terms of the complex variable s.
If you ask them what happens outside of this region, they will simply
say "it doesn't converge there, so why are you asking?" But, in the
same breath, they will say that the function has zeros in this region
(the region that supposedly does not converge).

Can someone explain this to me?

Bob Adams
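
(A minimal instance of the H(s) described above, as an illustrative Python sketch: take the arbitrary weights a^n on ideal unit delays, so H(s) = sum over n of a^n e^(-s n). The sum converges for re(s) > ln(a) and equals 1/(1 - a e^(-s)), which has an infinite row of poles s = ln(a) + 2*pi*i*k - poles created purely by the infinite summation.)

```python
# Illustrative sketch (weights a^n are an arbitrary choice):
# H(s) = sum_n a^n e^(-s n), an infinite sum of weighted unit delays.
import cmath
from math import log

a = 0.5  # ln(a) ~ -0.693 marks the boundary of the ROC re(s) > ln(a)

def H_partial(s, terms=300):
    return sum((a * cmath.exp(-s)) ** n for n in range(terms))

def H_closed(s):
    return 1 / (1 - a * cmath.exp(-s))

s = 0.1 + 1.0j       # re(s) > ln(a): inside the ROC
print(H_partial(s))  # the partial sums ...
print(H_closed(s))   # ... agree with the closed form

# The closed form has poles wherever a*e^(-s) = 1, i.e. at
# s = ln(a) + 2*pi*i*k for every integer k -- poles made by the sum.
s_near_pole = log(a) + 2j * cmath.pi + 1e-6
print(abs(H_closed(s_near_pole)))  # ~1e6: right next to one such pole
```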