Hi,

In our second year, we covered an algorithm in Applied Maths known as
Newton's method. This is an iterative method for finding roots of
equations, especially non-linear ones. Is there a similar iterative
method for finding a polynomial, given its roots? (I recall that we
did some algorithm where we computed the function_x which had
a_0, a_1, ..., a_x as roots, and then in the next iteration we computed
the function_{x+1} such that a_{x+1} would also be a root as well as
a_0, ..., a_x, but I can't remember how it was done or the name of the
algorithm.)

Your time, effort and suggestions will be greatly appreciated

Jaco
Inverse of Newton's method?
Started by ●October 29, 2003
Reply by ●October 29, 2003
In article <e48813da.0310290737.2bf55a35@posting.google.com>, jaco_versfeld@hotmail.com (Jaco Versfeld) writes:
> Is there a similar iterative method
> for finding the polynomial, given the roots of the polynomial? [...]

If you know the roots of a polynomial, the polynomial is explicitly given by:

P(x) = k (x - root1) (x - root2) (x - root3) ... (x - rootn)

To determine the constant k, you need to know the value of the polynomial somewhere other than at a root. Otherwise the constant polynomial 0 trivially solves all such systems.

This generalizes to the notion of an "interpolating polynomial". Given a function f(x) and n points x1, x2, ..., xn, there is a unique polynomial of degree at most n-1 that matches the value of f(x) at those points. Finding the coefficients of the polynomial is a simple matter of solving a system of n linear equations in n unknowns. As long as the x values are all distinct, this system always has a unique solution.

If the x values are not all distinct, you can generalize further by requiring that the polynomial's first derivative match f'(x) at all doubled x's, that the second derivative match f''(x) at all tripled x's, and so on. At the extreme of n duplicated points you get the n-term Taylor expansion. Again, you obtain a solvable system of n linear equations in n unknowns.
My single published contribution to mathematical lore is the affirmative answer to the question of whether, given a continuous (and continuously differentiable to all degrees, as I recall) function that is strictly positive over [most of] an open interval, there is an interpolating polynomial of any chosen degree that is strictly positive over that same interval.

The proof technique I employed was iterative (or inductive, if you prefer). Given a suitable interpolating polynomial of degree n, you adjust that polynomial until at least one more intersection point is obtained, thus producing an interpolating polynomial of degree n+1.

Interpolating polynomials tend not to be good numerical approximations. They are usually very sensitive to small measurement errors and oscillate wildly above and below the function that they are supposed to approximate. If you're trying to fit a curve to a function, you are typically better served by selecting from a family of functions with a much smaller parameter set and choosing those parameters using some sort of least squares approach.

John Briggs
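The linear-system route described above (n equations in n unknowns, with a Vandermonde coefficient matrix) is easy to sketch in code. This is a minimal illustration, not code from the thread; the function name `interpolating_coeffs` is made up for the example, and it assumes NumPy is available:

```python
import numpy as np

def interpolating_coeffs(xs, ys):
    """Coefficients [b_0, ..., b_{n-1}] (ascending powers of x) of the
    unique polynomial of degree < n through the n points (xs[i], ys[i]),
    found by solving the n x n Vandermonde linear system V b = y."""
    V = np.vander(xs, increasing=True)   # row i is [1, x_i, x_i^2, ...]
    return np.linalg.solve(V, ys)

# Three points on f(x) = x^2; the interpolant recovers x^2 exactly,
# i.e. coefficients [0, 0, 1].
coeffs = interpolating_coeffs(np.array([0.0, 1.0, 2.0]),
                              np.array([0.0, 1.0, 4.0]))
print(np.round(coeffs, 10))
```

Note that as the quoted reply warns, solving the Vandermonde system directly is fine for small n but becomes numerically ill-conditioned as n grows.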
Reply by ●October 29, 2003
Jaco Versfeld wrote:
> Is there a similar iterative method
> for finding the polynomial, given the roots of the polynomial? [...]

Why iterate? If R is a root of a polynomial in x, then (x - R) is a factor of that polynomial. Just multiply together all the factors so obtained.

One factor -- the one that sets the scale -- isn't given by the set of roots. It needs to be determined some other way.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by ●October 29, 2003
jaco_versfeld@hotmail.com (Jaco Versfeld) wrote in message news:<e48813da.0310290737.2bf55a35@posting.google.com>...
> Is there a similar iterative method
> for finding the polynomial, given the roots of the polynomial? [...]

The function f_n(x) = (x - a_1)...(x - a_n) is a polynomial with a_1, ..., a_n as roots. If you want it in the form f_n(x) = b_0 + b_1*x + ... + b_n*x^n, it's almost trivial to write an algorithm to multiply any given polynomial by a term (x - a). In fact, here's the algorithm:

Define a polynomial as a variable-length list of coefficients [b_0, b_1, ..., b_n], which are the coefficients of x^0, x^1, ..., x^n. Suppose you want the polynomial that results from multiplying [b_0, b_1, ..., b_n] (in algebraic notation, the expression f_n(x) I wrote above) by a linear term [a_0, a_1] (in algebraic notation, a_0 + a_1*x). The result is a new polynomial with n+2 coefficients [c_0, c_1, ..., c_(n+1)] given by

c_0     = a_0*b_0
c_1     = a_1*b_0 + a_0*b_1
c_2     = a_1*b_1 + a_0*b_2
...
c_n     = a_1*b_(n-1) + a_0*b_n
c_(n+1) = a_1*b_n

Now the answer to your question: f_n(x) = (x - a_1)(x - a_2)...(x - a_n) is a polynomial with roots a_1, a_2, ..., a_n. f_n(x)*(x - a_(n+1)) has all those roots, and an additional root at a_(n+1).

- Randy
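The recurrence above, specialized to a linear factor (x - r) (so a_0 = -r and a_1 = 1), turns directly into code. A minimal sketch; the helper name `poly_from_roots` is made up for the example:

```python
def poly_from_roots(roots, k=1.0):
    """Coefficients [c_0, ..., c_n] (ascending powers of x) of
    k*(x - r_1)*...*(x - r_n), built one linear factor at a time."""
    coeffs = [k]                       # start from the constant polynomial k
    for r in roots:
        new = [0.0] * (len(coeffs) + 1)
        for j, b in enumerate(coeffs):
            new[j] += -r * b           # the a_0*b_j term, with a_0 = -r
            new[j + 1] += b            # the a_1*b_j term, with a_1 = 1
        coeffs = new
    return coeffs

# (x - 1)(x - 2) = 2 - 3x + x^2
print(poly_from_roots([1.0, 2.0]))     # -> [2.0, -3.0, 1.0]
```

Each pass through the loop is exactly one iteration of the scheme in the original post: the polynomial for roots a_1, ..., a_n becomes the polynomial for roots a_1, ..., a_(n+1).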
Reply by ●October 29, 2003
briggs@encompasserve.org writes:
> If you know the roots of a polynomial, the polynomial is explicitly
> given by:
>
> P(x) = k (x - root1) (x - root2) (x - root3) ... (x - rootn)

An interesting case is polynomials of infinite degree. For example, sin(pi*x) has roots at all the integers, so:

sin(pi*x) = pi * x * (1-x)(1+x)(1-x/2)(1+x/2)(1-x/3)(1+x/3) ...

(This statement is true, although I haven't proven it.) Anyway, you can pair the positive roots with their corresponding negative roots:

sin(pi*x) = pi * x * (1-x^2)(1-x^2/4)(1-x^2/9) ...

Now compare this with the Taylor series for sin(pi*x):

sin(pi*x) = pi*x - (pi*x)^3/3! + (pi*x)^5/5! - ...

The coefficients of the two series have to be equal. The "x" coefficient of each series is pi. Things get interesting with the higher coefficients. The coefficient of "x^3" in the first series is the sum

-( 1 + 1/4 + 1/9 + ... + 1/k^2 + ... )

The coefficient of "x^3" in the second series is -pi^3/6. So the equivalence of the two series implies that the sum of the reciprocals of the squares of the integers is pi^2/6. It gets messier, but you can find the sums of the reciprocals of all the even powers by evaluating successively higher terms in these series.

Scott
--
Scott Hemphill  hemphill@alumni.caltech.edu
"This isn't flying.  This is falling, with style." -- Buzz Lightyear
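Both identities are easy to sanity-check numerically. A quick sketch (the truncation limit N is arbitrary, and the truncated product/sum only approach the exact values):

```python
import math

x, N = 0.5, 100000

# Truncated product pi*x * prod_{k<=N} (1 - x^2/k^2) versus sin(pi*x)
prod = 1.0
for k in range(1, N + 1):
    prod *= 1.0 - x * x / (k * k)
print(math.pi * x * prod, math.sin(math.pi * x))   # nearly equal

# Truncated sum of 1/k^2 versus pi^2/6
s = sum(1.0 / (k * k) for k in range(1, N + 1))
print(s, math.pi ** 2 / 6)                         # nearly equal
```

The truncation error of both the product and the sum shrinks roughly like 1/N, so the agreement improves slowly as N grows.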
Reply by ●October 29, 2003
Hello Jaco,

This is actually much easier. Just multiply out

(x - root1)(x - root2)(x - root3)...(x - rootn)

Since the roots by themselves are an incomplete specification, you also get to multiply the resulting poly by any nonzero number.

Now, there is Newton's method for evaluating a poly given its roots. This is akin to Lagrange interpolation. And you may be recalling the Newton form.

IHTH,
Clay

jaco_versfeld@hotmail.com (Jaco Versfeld) wrote in message news:<e48813da.0310290737.2bf55a35@posting.google.com>...
> Is there a similar iterative method
> for finding the polynomial, given the roots of the polynomial? [...]
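The Newton form mentioned here (the divided-difference form of the interpolating polynomial) is built one node at a time, which matches the iterative scheme described in the original post: adding a node appends one coefficient and one factor without recomputing the earlier ones. A minimal sketch, with helper names made up for the example:

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients c of the Newton form
    p(x) = c[0] + c[1]*(x - xs[0]) + c[2]*(x - xs[0])*(x - xs[1]) + ...
    computed in place from the sample values ys."""
    c = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(c, xs, x):
    """Evaluate the Newton form by nested (Horner-like) multiplication."""
    acc = c[-1]
    for i in range(len(c) - 2, -1, -1):
        acc = acc * (x - xs[i]) + c[i]
    return acc

# Nodes on f(x) = x^2: the degree-2 interpolant reproduces f exactly.
c = newton_coeffs([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
print(newton_eval(c, [0.0, 1.0, 2.0], 3.0))   # -> 9.0
```

This may well be the half-remembered algorithm from the original question, with the caveat that it interpolates arbitrary sample values rather than prescribed roots (prescribing roots is the special case ys = 0, plus one extra point to set the scale).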
Reply by ●October 29, 2003
Scott Hemphill <hemphill@hemphills.net> writes:
> The coefficients of the two series have to be equal. The "x" coefficient
> of each series is pi. Things get interesting with the higher coefficients.
> The coefficient of "x^3" in the first series is the sum
>
> -( 1 + 1/4 + 1/9 + ... + 1/k^2 + ... )

I left out a factor of pi here.

Scott
--
Scott Hemphill  hemphill@alumni.caltech.edu
"This isn't flying.  This is falling, with style." -- Buzz Lightyear
Reply by ●October 29, 2003
In article <1n9rPBtvB+Ib@eisner.encompasserve.org>, briggs@encompasserve.org wrote:
> My single published contribution to mathematical lore is the affirmative
> answer to the question of whether, given a continuous (and
> continuously differentiable to all degrees as I recall) function
> that is strictly positive over [most of] an open interval, there is
> an interpolating polynomial of any chosen degree that is strictly
> positive over that same interval. [...]

MR0613872 (82e:41005)
Briggs, John; Rubel, Lee A.
Interpolation by nonnegative polynomials.
J. Approx. Theory 30 (1980), no. 3, 160--168. 41A05

C. K. Chui posed the following problem: Let $f$ be a continuous and real-valued function on $[0,1]$; if $f$ is nonnegative, is it possible to choose $n+1$ points on the graph of $f$ so that the interpolating polynomial $p_n$ is also nonnegative? The main result of the paper under review reads as follows: Let $f$ be a continuous and nonnegative real function on the closed interval $[0,1]$. For each $n=0,1,\cdots$, there exist $n+1$ points $0\leq x_0<x_1<\cdots<x_n\leq 1$ and a polynomial $p_n$ of degree $\leq n$ that is also nonnegative on $[0,1]$ such that $p_n(x_k)=f(x_k)$ for $k=0,1,\cdots,n$.

Reviewed by E. Neuman

--
Gerry Myerson (gerry@maths.mq.edi.ai) (i -> u for email)
Reply by ●October 29, 2003
jaco_versfeld@hotmail.com (Jaco Versfeld) writes:
> Is there a similar iterative method for finding the polynomial, given the
> roots of the polynomial?

I'm not sure why you'd need an iterative method for this. If x1, x2, x3, ..., xn are the roots of the polynomial f(x), then f(x) may be written as

f(x) = (x - x1)(x - x2)(x - x3)...(x - xn)

Doug