
Do the mathematical inverse and identity elements exist for convolution?

Started by Charles Krug April 23, 2005
Charles Krug <cdkrug@worldnet.att.net> writes:
> [...]
> OTOH, I was kinda expecting someone to ask just what the heck I meant by
> a group . . .
As someone who pursued a separate degree in undergraduate math as well as engineering, I am also amazed by the number of bit-heads who know what a group is. After taking my coding and modulation class at NCSU, I'm also somewhat ambivalent about their knowledge. They (we) were taught the necessary elements of abstract algebra in about a week, up through fields. But how can such a short period of instruction have any depth? It can't.

At the same time I should ask why it takes the mathematicians so freaking long to get to the relevant aspects???

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
Randy Yates wrote:

> Charles Krug <cdkrug@worldnet.att.net> writes:
> > [...]
> > OTOH, I was kinda expecting someone to ask just what the heck I
> > meant by a group . . .
>
> As someone who pursued a separate degree in undergraduate math as
> well as engineering, I am also amazed by the number of bit-heads
> who know what a group is. After taking my coding and modulation
> class at NCSU, I'm also somewhat ambivalent about their knowledge.
> They (we) were taught the necessary elements of abstract algebra in
> about a week, up through fields. But how can such a short period of
> instruction have any depth? It can't. At the same time I should ask
> why it takes the mathematicians so freaking long to get to the
> relevant aspects???
:-)

The dichotomy is because engineers only need to understand enough of the subject area to solve their problem. And engineering profs teaching the stuff have had more than enough exposure to abstract algebra to understand it for that purpose; they expect their students to be sharp studies.

The reason mathematician profs take so long to get to it is because, for them and their students, mathematics is the beginning, the end and the journey... so how long it takes to "get there" is a slightly foreign concept. That's one of the reasons mathematics is sometimes more like a humanities discipline than a science.

Ciao,

Peter K.
"Peter K." <p.kootsookos@iolfree.ie> writes:
> [...]
> That's one of the reasons mathematics is sometimes more like a
> humanities discipline than a science.
Indeed. My math degree is a B.A., while my EE is a B.S.

Good insights, Peter.

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
Randy Yates wrote:

> Good insights, Peter.
Thanks. My better half did a doctoral thesis on Lie rings in and around the restricted Burnside problem* so I've had time to digest the situation. ;-)

Ciao,

Peter K.

* Don't ask.
On 26 Apr 2005 07:32:03 -0400, Randy Yates
<randy.yates@sonyericsson.com> wrote:
> "Peter K." <p.kootsookos@iolfree.ie> writes: >> [...] >> That's one of the reasons mathematics is sometimes more like a >> humanities discipline than a science. > > Indeed. My math degree is a B.A., while my EE is a B.S. > > Good insights, Peter.
My math and CS degrees are both "BS" . . . sometimes more than others . . .

I remember hating ODEs taught from an engineering PoV -- it was all cookbook "how to solve" that waved its hands at the math. I still have the text, and that's a fair assessment of it, IMO.

A couple of years back I found a Dover reprint from the '50s that developed them rigorously, and I had a great time working them out again -- though my wife gave me more than a few strange looks.
"Charles Krug" <cdkrug@worldnet.att.net> wrote in message
news:ig6be.643620$w62.333967@bgtnsc05-news.ops.worldnet.att.net...

> I'm borrowing Matlab's term in this case.
> I'll have to think about James' "countably long" case. You can't
> easily convolve ( . . ., 0, 1, 0, . . . ) given finite memory, which I
> suppose points out the difference between math and implementation.
Permit me to point out that in your first post to this thread, you said:
> Mathworld is helpful in showing that it's commutative, associative, and
> distributive, so if it's a group, it's also abelian and a field with
> addition.
The closure under addition is inconsistent with the properties of Matlab vectors: try adding [1 0] to [1 0 0]. That is one reason that I viewed the elements of the vector space in question as I did.

--
write(*,*) transfer((/17.392111325966148d0,6.5794487871554595D-85, &
  6.0134700243160014d-154/),(/'x'/)); end
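The length mismatch James mentions is easy to reproduce; here is a minimal sketch in NumPy (an illustration only, not from the thread; the array names are arbitrary):

    # Element-wise addition of [1 0] and [1 0 0] is not defined, just as
    # it is not for Matlab row vectors of different lengths.
    import numpy as np

    a = np.array([1, 0])
    b = np.array([1, 0, 0])

    try:
        print(a + b)
    except ValueError as err:      # shapes (2,) and (3,) do not broadcast
        print("addition undefined:", err)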
"James Van Buskirk" <not_valid@comcast.net> writes:

> "Charles Krug" <cdkrug@worldnet.att.net> wrote in message > news:ig6be.643620$w62.333967@bgtnsc05-news.ops.worldnet.att.net... > > > I'm borrowing Matlab's term in this case. > > > I'll have to think about James' "countably long" case. You can't > > easily convolve ( . . ., 0, 1, 0, . . , ) given finite memory, which I > > suppose points out the difference between math and implementation. > > Permit me to point out that in your first post to this thread, > you said: > > > Mathworld is helpful in showing that it's commutative, associative, and > > distributive, so if it's a group, it's also abelian and a field with > > addition. > > The closure under addition is inconsistent with the properties > of Matlab vectors: try adding [1 0] to [1 0 0]. That is one > reason that I viewed the elements of the vector space in question > as I did.
This problem isn't one of closure but rather that the operation of "addition" is undefined for some possible operands. In other words, "+" is NOT a mapping from SxS (the Cartesian product of S and S) to S when S is the set of all finite-length vectors.

However, addition wasn't the operation under consideration here.

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124
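By way of contrast with the addition example above, a small NumPy sketch (again only an illustration, not part of the original post) showing that convolution is defined for any pair of finite-length vectors and yields another finite-length vector:

    # Convolution of two finite-length vectors is always defined and yields
    # a finite-length vector of length len(a) + len(b) - 1, so the length
    # mismatch that breaks "+" does not arise here.
    import numpy as np

    a = np.array([1, 0])           # length 2
    b = np.array([1, 0, 0])        # length 3

    c = np.convolve(a, b)
    print(c, len(c))               # [1 0 0 0] 4

    # [1] acts as the identity element mentioned earlier: V * [1] = V.
    print(np.convolve(c, [1]))     # [1 0 0 0]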
"James Van Buskirk" <not_valid@comcast.net> writes:
> [...]
James et al., if you're interested in this subject, I was going to suggest a great introductory book, "Modern Algebra: An Introduction" by John Durbin, but the freaking thing is $105 on Amazon. Sheesh, where do they come off asking this much money for a book!?!?!

Anyway, I had a previous edition (2nd) in my undergrad abstract algebra course and I thought it was excellent. It is the "Signals and Systems" (Oppenheim et al.) of abstract algebra - the basics covered with authority and crystal clarity.

--RY

--
Randy Yates
Sony Ericsson Mobile Communications
Research Triangle Park, NC, USA
randy.yates@sonyericsson.com, 919-472-1124

Randy Yates wrote:
> "James Van Buskirk" <not_valid@comcast.net> writes: > > >>"Charles Krug" <cdkrug@worldnet.att.net> wrote in message >>news:ig6be.643620$w62.333967@bgtnsc05-news.ops.worldnet.att.net... >> >> >>>I'm borrowing Matlab's term in this case. >> >>>I'll have to think about James' "countably long" case. You can't >>>easily convolve ( . . ., 0, 1, 0, . . , ) given finite memory, which I >>>suppose points out the difference between math and implementation. >> >>Permit me to point out that in your first post to this thread, >>you said: >> >> >>>Mathworld is helpful in showing that it's commutative, associative, and >>>distributive, so if it's a group, it's also abelian and a field with >>>addition. >> >>The closure under addition is inconsistent with the properties >>of Matlab vectors: try adding [1 0] to [1 0 0]. That is one >>reason that I viewed the elements of the vector space in question >>as I did. > > > This problem isn't one of closure but rather that the operation of > "addition" is undefined for some possible operands. In other words, > "+" is NOT a mapping from SxS (the cartesion product of S and S) to S > when S is the set of all finite-lengthed vectors. > > However, addition wasn't the operation under consideration here.
Take any element of the set you want with the property that its convolution with itself has greater support than the element itself. Now repeat the convolution arbitrarily many times and the support will be greater than any finite value chosen in advance. In plain English, it ain't a closed system as you have proposed it.

So either the "time" index must be periodic or unbounded. Convolution does not go with finite aperiodic time if you are trying to do the operations rigorously, which in particular means that one can have as many convolutions as one cares to. In implementations you will run into overflow and memory limits, which do not match the notion of rigor.
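The support-growth argument is easy to watch numerically; a short NumPy sketch (my own illustration, using an arbitrary length-2 vector) of repeated self-convolution:

    # Each self-convolution with a length-2 vector adds one sample of
    # support, so the length exceeds any bound fixed in advance.
    import numpy as np

    v = np.array([1.0, 1.0])
    x = v
    for k in range(1, 6):
        x = np.convolve(x, v)
        print(k, len(x))           # prints 3, 4, 5, 6, 7 -- growing without bound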
Charles Krug wrote:
> Guys:
>
> I'm brushing up the mathematical side of my DSP brain. It's a bit dusty
> having spent the last few years doing implementation and device control
> without regard for the underlying math.
>
> I'm trying to satisfy myself that vectors are a group under convolution.
>
> I've determined that there's an identity element:
>
> For any vector V, V*[1] = V (where * is convolution rather than
> multiplication)
>
> As well as closure and associativity (the underlying operations are
> associative and closed).
>
> Mathworld is helpful in showing that it's commutative, associative, and
> distributive, so if it's a group, it's also abelian and a field with
> addition.
>
> What about the inverse?
>
> For a vector V, is there a unique vector V^-1 such that:
>
> V*(V^-1) = [1]
>
> My mental model is that it cannot possibly exist for finite length
> vectors, as the length of the output is necessarily the summed length of
> V and V^-1.
>
> But given a function f, there may well be a function f^-1 such that
>
>              { 1 if x = 0
> f*(f^-1) =   |
>              { 0 elsewhere
>
> (or do I mean +Infinity if x = 0?)
I don't know if this helps or not, but convolution is equivalent to multiplication of polynomials.
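For anyone who wants to see that equivalence concretely, a minimal NumPy sketch (illustrative only; the coefficient vectors are arbitrary):

    # Convolving coefficient vectors gives exactly the coefficients of the
    # product polynomial: np.polymul simply convolves its arguments.
    import numpy as np

    a = np.array([1, 2, 3])
    b = np.array([4, 5])

    print(np.convolve(a, b))       # [ 4 13 22 15]
    print(np.polymul(a, b))        # [ 4 13 22 15]

Under that correspondence the identity [1] is the constant polynomial 1, and a finite-length inverse would have to be a polynomial reciprocal, which only nonzero constant polynomials have; that squares with Charles' suspicion that a finite-length inverse cannot exist in general.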