machine learning vs interpolation

Started by Cagdas Ozgenc April 3, 2010
Greetings.

I have been using neural networks and other machine learning tools for
some time. Yesterday, however, the following question popped up in my
mind:

Why do we use machine learning tools when we could achieve similar
results with plain interpolation? Let's assume a noise-free regression
scenario (not classification, and no measurement errors). In the case
of infinitely many samples and appropriate band-limitedness, Shannon
interpolation is the exact recovery of the function. If there are
finite samples, isn't Shannon interpolation still the best estimator?
If so, why do we use neural networks, for example?

Thanks in advance.
On 4/3/2010 10:20 AM, Cagdas Ozgenc wrote:
> Why do we use machine learning tools when we could achieve similar
> results with plain interpolation? [...] If there are finite samples,
> isn't Shannon interpolation still the best estimator? If so, why do we
> use neural networks, for example?

What problems are you addressing?

Jerry
--
"It does me no injury for my neighbor to say there are 20 gods, or no
God. It neither picks my pocket nor breaks my leg." Thomas Jefferson to
the Virginia House of Delegates in 1776.

Cagdas Ozgenc wrote:

> Why do we use machine learning tools when we could achieve similar
> results with plain interpolation? [...] If there are finite samples,
> isn't Shannon interpolation still the best estimator? If so, why do we
> use neural networks, for example?

Mumbo-jumbo methods are attempted when:

1) It is hard to develop a deterministic algorithm (too many lines of
   code, too little knowledge)
2) In the naive hope of solving all the world's problems by some magic,
   without getting into the boring and complicated details
3) Because it is new, cool, and somebody sponsors it

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Cagdas Ozgenc wrote:
> Why do we use machine learning tools when we could achieve similar
> results with plain interpolation? [...] If there are finite samples,
> isn't Shannon interpolation still the best estimator? If so, why do we
> use neural networks, for example?

"We" who? I have yet to have occasion to use neural nets to solve a
problem that comes my way, although I'm not entirely closed to it where
it seems indicated. Ditto fuzzy logic, and whatever the Next Big Thing
is that I haven't yet heard about.

If neural net practitioners are using them to solve problems that could
well be done by more mundane means, perhaps it's because if your only
tool is a hammer, then every problem looks like a nail.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
On 4/3/2010 12:29 PM, Tim Wescott wrote:
> Cagdas Ozgenc wrote:
>> Why do we use machine learning tools when we could achieve similar
>> results with plain interpolation? [...]
>
> "We" who? I have yet to have occasion to use neural nets to solve a
> problem that comes my way [...] perhaps it's because if your only tool
> is a hammer, then every problem looks like a nail.

There's a lot of fuzzy thinking -- if not fuzzy logic -- in the original
question.

Why would anyone be interested in a case with infinite (I assume that
means an infinite number of) samples? How would one process them, one at
a time?

Have machine learning tools ever been applied to interpolation and
recovery of sampled functions? If so, by whom?

I suspect we have a lot of buzzwords combined into an elaborate troll.

Jerry
On 3 Apr, 15:20, Cagdas Ozgenc <cagdas.ozg...@gmail.com> wrote:
> Why do we use machine learning tools when we could achieve similar
> results with plain interpolation? [...] If there are finite samples,
> isn't Shannon interpolation still the best estimator? If so, why do we
> use neural networks, for example?

The reason is that most interesting problems for machine learning are
not noise-free, and there is a finite (often rather small) amount of
data. If you fit an interpolation method to a finite sample of noisy
data, you will get a model that has very poor generalization
performance, as it will over-fit the training sample (cf. the
bias-variance dilemma).

As it happens, kernel learning methods (e.g. the support vector machine)
are related to spline-fitting methods, but use regularization to avoid
over-fitting (and hence no longer exactly interpolate the data).

Bishop's book on Neural Networks for Pattern Recognition explains
over-fitting very well; I'm sure if you read that, you would have a much
better idea of the benefits of neural nets.

HTH

Gavin
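The over-fitting point above can be sketched numerically. A minimal
illustration (sample size, noise level, and the choice of polynomials as
stand-ins for "exact interpolant" vs. "regularized learner" are all made
up for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # The underlying noise-free function the data come from
    return np.sin(2 * np.pi * x)

# A small, noisy training sample
x_train = np.linspace(0.0, 1.0, 10)
y_train = true_f(x_train) + rng.normal(0.0, 0.2, x_train.shape)

# Exact interpolation: a degree-9 polynomial through all 10 points
coef_interp = np.polyfit(x_train, y_train, deg=9)

# A smoother, lower-capacity fit, standing in for a regularized learner
coef_smooth = np.polyfit(x_train, y_train, deg=3)

# Compare generalization error on unseen points in the same interval
x_test = np.linspace(0.05, 0.95, 200)
mse_interp = np.mean((np.polyval(coef_interp, x_test) - true_f(x_test)) ** 2)
mse_smooth = np.mean((np.polyval(coef_smooth, x_test) - true_f(x_test)) ** 2)
```

The interpolant reproduces the noisy training points exactly, but
oscillates between them, so its test error is typically much worse than
the lower-capacity fit's -- the bias-variance trade-off in miniature.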
On 3 Apr, 17:54, Jerry Avins <j...@ieee.org> wrote:
> Have machine learning tools ever been applied to interpolation and
> recovery of sampled functions? If so, by whom?

I suspect that O'Hagan's work on modelling computer codes may be an
example (with Gaussian Processes, IIRC).
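For context, the Gaussian-process regression mentioned here does act as
an exact interpolator when the data are noise-free. A minimal sketch of
the posterior mean (the design points, outputs, and length-scale are
invented for illustration):

```python
import numpy as np

def rbf(a, b, length_scale=0.5):
    # Squared-exponential (RBF) covariance between two sets of 1-D inputs
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

# Noise-free "computer code" outputs at a few design points
x_train = np.array([0.0, 0.4, 0.9, 1.5])
y_train = np.sin(x_train)

# Posterior-mean weights; the tiny jitter keeps K numerically invertible
K = rbf(x_train, x_train) + 1e-10 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)

def gp_mean(x_star):
    # With no observation noise, this mean passes exactly through the data
    return rbf(np.atleast_1d(np.asarray(x_star, dtype=float)), x_train) @ alpha
```

Between the design points, the mean interpolates smoothly; adding an
observation-noise term to the diagonal of K would turn this into a
regularized (non-interpolating) fit, which connects back to the
over-fitting discussion above.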
On Apr 4, 4:29 am, Tim Wescott <t...@seemywebsite.now> wrote:
> If neural net practitioners are using them to solve problems that
> could well be done by more mundane means, perhaps it's because if your
> only tool is a hammer, then every problem looks like a nail.

They are used, for example, for face recognition and all sorts of
pattern recognition, and also for Blind Source Separation (BSS). There
is no "intelligence" to it, though. It's not a brain doing thinking,
only another algorithm minimising some form of criterion.

Hardy
On 3-Apr-2010, Cagdas Ozgenc <cagdas.ozgenc@gmail.com> wrote:

> I have been using neural networks and other machine learning tools for
> some time. [...]
> Why do we use machine learning tools when we could achieve similar
> results with plain interpolation?
I hate to be the one to tell you there is no Easter Bunny, but neural
networks _are_ just interpolation. When you train a neural network, all
you're doing is adjusting parameters that fit a function to a set of
n-dimensional data points. The resulting fitted function is just an
algebraic expression (possibly fairly long) with additions,
multiplications, and calls to exp or atan functions. The only difference
between fitting a polynomial to data and fitting a neural network is
that the resulting neural network function is (usually) more
complicated. But there is nothing magic about a neural network function:
it is just an algebraic expression with parameters that have been
adjusted to make the function fit the data.

Once a neural network function has been fitted to the data, the
prediction operation is just ordinary interpolation using the fitted
function.

While we are discussing interpolation, remember that a sufficiently
complicated neural network can be trained to fit a function over a
specified domain to arbitrary precision. However, if you attempt to use
the NN to predict a value outside of the domain it was trained on, then
you are doing extrapolation rather than interpolation, and all bets are
off. It is very likely that the network will go wildly wrong outside of
its training domain.

If you use nonlinear regression to fit an analytical function to data,
and there is a theoretical basis for the association of the function
with the data, then you can expect reasonable results when extrapolating
the function. For example, that's how they predict the future positions
of planets. But since a neural network has no theory to tie it to the
data, you are just doing arbitrary interpolation, as if you were using a
French curve to connect some points.

--
Phil Sherrod
http://www.dtreg.com -- Neural networks, SVM, Decision trees
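The "just an algebraic expression" claim is easy to make concrete. Below
is a one-hidden-layer network with three tanh units and a single input;
the weight values are entirely made up for illustration, and the point
is only that the network form and the hand-written expression are the
same function:

```python
import numpy as np

# Hypothetical, fixed weights for a tiny trained network (made-up values)
W1 = np.array([1.5, -2.0, 0.7])   # input-to-hidden weights
b1 = np.array([0.1, 0.4, -0.3])   # hidden biases
W2 = np.array([0.8, -1.2, 0.5])   # hidden-to-output weights
b2 = 0.05                         # output bias

def net(x):
    # The usual "neural network" form: y = W2 . tanh(W1*x + b1) + b2
    return float(W2 @ np.tanh(W1 * x + b1) + b2)

def closed_form(x):
    # Exactly the same thing, written out as one algebraic expression
    return (0.8 * np.tanh(1.5 * x + 0.1)
            - 1.2 * np.tanh(-2.0 * x + 0.4)
            + 0.5 * np.tanh(0.7 * x - 0.3)
            + 0.05)
```

Training only adjusts the numbers in that expression; prediction is just
evaluating it, which is why using it outside the fitted domain is
extrapolation with no guarantees.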
On Apr 4, 10:45 pm, "Phil Sherrod" <PhilSher...@NOSPAMcomcast.net>
wrote:
> I hate to be the one to tell you there is no Easter Bunny, but neural
> networks _are_ just interpolation. [...] But since a neural network
> has no theory to tie it to the data, you are just doing arbitrary
> interpolation, as if you were using a French curve to connect some
> points.
I possess fair knowledge of all the things you mentioned in your answer.
My question is a little deeper than that, however. Let me try to
elaborate further.

Shannon's interpolation uses a sinc kernel and leads to exact recovery
asymptotically, given that the signal is sufficiently sampled. Does this
mean that using a different kernel, for example, is suboptimal?

I understand that, as far as theoretical results go, NNs approximate
functions in L2 space fairly well (probably I am not quite accurate
here; basically the results of universal approximation theory). However,
statistically, isn't Shannon's interpolation the better approach? Or
maybe I should say: isn't it the best nonlinear estimator (i.e. maybe
faster convergence, or less bias, or better accuracy under N samples)?
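The Shannon reconstruction being discussed can be sketched directly; the
signal, sampling rate, and evaluation point below are invented for the
demo, and with a finite number of samples the sinc series is truncated,
so recovery is close but not exact -- the "infinite samples" caveat in
practice:

```python
import numpy as np

def shannon_interp(t, samples, T):
    # Whittaker-Shannon reconstruction from uniform samples x[n] = x(n*T).
    # np.sinc(u) is sin(pi*u)/(pi*u), which matches the Shannon kernel
    # when its argument is (t - n*T)/T.
    n = np.arange(len(samples))
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

# A band-limited signal: components at 1.3 Hz and 3.1 Hz, sampled at
# 10 Hz (Nyquist frequency 5 Hz), so Shannon's conditions hold.
fs = 10.0
T = 1.0 / fs

def x(t):
    return np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.cos(2 * np.pi * 3.1 * t)

n = np.arange(400)
samples = x(n * T)

# Reconstruct at an off-grid instant well inside the sampled interval
t0 = 19.987
approx = shannon_interp(t0, samples, T)
```

Near the ends of a finite record the truncation error grows (the sinc
kernel decays only like 1/t), which is one practical reason other
kernels, or learned fits, are used on finite data.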