On Apr 3, 8:54 pm, Jerry Avins <j...@ieee.org> wrote:
> On 4/3/2010 12:29 PM, Tim Wescott wrote:
>> Cagdas Ozgenc wrote:
>>> Greetings.
>>>
>>> I have been using neural networks and other machine learning tools for
>>> some time now. Yesterday, however, the following question popped up in
>>> my mind:
>>>
>>> Why do we use machine learning tools when we could achieve similar
>>> results with plain interpolation? Let's assume a noise-free regression
>>> scenario (not classification, and no measurement errors). In the case
>>> of infinite samples and appropriate band-limitedness, Shannon
>>> interpolation recovers the function exactly. If there are finite
>>> samples, isn't Shannon interpolation still the best estimator? If so,
>>> why do we use neural networks, for example?
>>
>> "We" who? I have yet to have occasion to use neural nets to solve a
>> problem that comes my way, although I'm not entirely closed to it where
>> it seems indicated. Ditto fuzzy logic, and whatever the Next Big Thing
>> is that I haven't yet heard about.
>>
>> If neural net practitioners are using them to solve problems that could
>> well be done by more mundane means, perhaps it's because if your only
>> tool is a hammer, then every problem looks like a nail.
>
> There's a lot of fuzzy thinking -- if not fuzzy logic -- in the original
> question.
>
> Why would anyone be interested in a case with infinite (I assume that
> means an infinite number of) samples? How would one process them one at
> a time?
>
> Have machine learning tools ever been applied to interpolation and
> recovery of sampled functions? If so, by whom?
>
> I suspect we have a lot of buzzwords combined into an elaborate troll.
>
> Jerry

"Have machine learning tools ever been applied to interpolation and
recovery of sampled functions? If so, by whom?"

You must be kidding. Ignorance is bliss.

There is no troll here. The question is genuine, though I admit I might
not have worded it very clearly.

# machine learning vs interpolation

Started by ●April 3, 2010

Reply by ●April 5, 2010

Reply by ●April 5, 2010

On 4/5/2010 5:23 AM, Cagdas Ozgenc wrote:
> On Apr 3, 8:54 pm, Jerry Avins <j...@ieee.org> wrote:
>> On 4/3/2010 12:29 PM, Tim Wescott wrote:
...
> "Have machine learning tools ever been applied to interpolation and
> recovery of sampled functions? If so, by whom?"
>
> You must be kidding. Ignorance is bliss.

Apparently so.

> There is no troll here. Question is genuine, though I might have
> verbalized it not so clearly I admit.

The "infinite samples" made me wonder.

Jerry
--
"It does me no injury for my neighbor to say there are 20 gods, or no
God. It neither picks my pocket nor breaks my leg."
          Thomas Jefferson to the Virginia House of Delegates in 1776.

Reply by ●April 6, 2010

On Apr 4, 2:45 pm, "Phil Sherrod" <PhilSher...@NOSPAMcomcast.net> wrote:
> On 3-Apr-2010, Cagdas Ozgenc <cagdas.ozg...@gmail.com> wrote:
>> I have been using neural networks and other machine learning tools for
>> some time now. Yesterday, however, the following question popped up in
>> my mind:
>>
>> Why do we use machine learning tools when we could achieve similar
>> results with plain interpolation?
>
> I hate to be the one to tell you there is no Easter Bunny, but neural
> networks _are_ just interpolation. When you train a neural network, all
> you're doing is adjusting parameters that fit a function to a set of
> n-dimensional data points. The resulting fitted function is just an
> algebraic expression (possibly fairly long) with additions,
> multiplications, and calls to exp or atan functions. The only difference
> between fitting a polynomial to data and fitting a neural network is
> that the resulting neural network function is (usually) more
> complicated. But there is nothing magic about a neural network function:
> it is just an algebraic expression with parameters that have been
> adjusted to make the function fit the data.
>
> Once a neural network function has been fitted to the data, the
> prediction operation is just ordinary interpolation using the fitted
> function.
>
> While we are discussing interpolation, remember that a sufficiently
> complicated neural network can be trained to fit a function over a
> specified domain to arbitrary precision. However, if you attempt to use
> the NN to predict a value outside of the domain it was trained on, then
> you are doing extrapolation rather than interpolation, and all bets are
> off. It is very likely that the network will go wildly wrong outside of
> its training domain.
>
> If you use nonlinear regression to fit an analytical function to data
> and there is a theoretical basis for the association of the function
> with the data, then you can expect reasonable results when extrapolating
> the function. For example, that's how they predict the future positions
> of planets. But since a neural network has no theory to tie it to the
> data, you are just doing arbitrary interpolation, as if you were using a
> French curve to connect some points.
>
> Phil Sherrod
> http://www.dtreg.com -- Neural networks, SVM, Decision trees

Excellent description!

Greg
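Phil's two points -- a trained network is just an algebraic expression, and a fit that interpolates well can extrapolate wildly -- can be sketched in a few lines of Python. The network weights below are invented for illustration (stand-ins for values a training run would produce), and the polynomial fit is a generic example, not anything from the posts above:

```python
import numpy as np

# A trained one-hidden-layer network is nothing but this algebraic
# expression: y = w2 . tanh(w1*x + b1) + b2.  The weights are made up
# for illustration, standing in for values found by training.
w1 = np.array([1.5, -0.7, 0.3])   # hidden-layer weights
b1 = np.array([0.1, 0.2, -0.4])   # hidden-layer biases
w2 = np.array([0.8, -1.2, 0.5])   # output weights
b2 = 0.05                         # output bias

def net(x):
    """Evaluate the 'network': additions, multiplications, and tanh."""
    return w2 @ np.tanh(w1 * x + b1) + b2

# Fitting a cubic polynomial is the same kind of operation: adjust
# parameters so an algebraic expression passes near the data points.
xs = np.linspace(0.0, 2.0, 20)
ys = np.sin(xs)
coeffs = np.polyfit(xs, ys, 3)

# Inside the sampled domain the fit interpolates well...
print(abs(np.polyval(coeffs, 1.0) - np.sin(1.0)))    # small error
# ...but far outside it (extrapolation) all bets are off.
print(abs(np.polyval(coeffs, 10.0) - np.sin(10.0)))  # large error
```

Swapping the polynomial for the `net` expression changes nothing essential: both are parameterized algebraic functions fitted to points, which is Phil's argument.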

Reply by ●April 6, 2010

Tim Wescott wrote:
> Ditto fuzzy logic, and whatever the Next Big Thing
> is that I haven't yet heard about.

It took about two years to find our first application where fuzzy logic
(multivalued logic, linguistic variables) was the appropriate solution
for a problem; once we found the first one, there was a flood of
appropriate problems. Multivalued logic works well in nonlinear systems
and in applications that have several competing solutions depending on
operational mode.

http://bytecraft.com/Fuzzy_Logic

Regards,

Walter Banks
--
Byte Craft Limited
http://www.bytecraft.com
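For readers who haven't met the "multivalued logic, linguistic variables" idea, here is a minimal sketch: membership is a degree in [0, 1] rather than true/false, and rules blend smoothly between competing actions. All rule names, breakpoints, and speeds below are invented for illustration, not taken from any Byte Craft application:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fan_speed(temp_c):
    """Two linguistic rules: IF warm THEN medium, IF hot THEN fast."""
    warm = tri(temp_c, 20.0, 30.0, 40.0)   # degree of 'warm'
    hot  = tri(temp_c, 30.0, 45.0, 60.0)   # degree of 'hot'
    if warm + hot == 0.0:
        return 0.0                          # neither rule fires: fan off
    # Weighted-average (centroid-style) defuzzification of the two
    # competing outputs, 50% and 100% speed.
    return (warm * 50.0 + hot * 100.0) / (warm + hot)

print(fan_speed(25.0))   # -> 50.0  (partially 'warm')
print(fan_speed(45.0))   # -> 100.0 (fully 'hot')
```

The blending between rules is what makes this attractive for the nonlinear, multi-mode problems Walter describes: the output changes continuously as operating conditions shift between regimes.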

Reply by ●September 8, 2017

Reply by ●September 8, 2017

On 08.09.2017 at 13:29, xiaodunhui@gmail.com wrote:
> Hi
> Which machine learning method is good for interpolation, do you think? Thanks.

SVM? Look for "SVM regression".

Christian
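The suggested search term leads to support-vector regression, which scikit-learn exposes directly. A quick generic illustration (the signal, kernel, and hyperparameters here are arbitrary choices, not a recommendation):

```python
import numpy as np
from sklearn.svm import SVR

# Noise-free samples of a smooth function on one interval.
x = np.linspace(0.0, 2.0 * np.pi, 40).reshape(-1, 1)
y = np.sin(x).ravel()

# RBF-kernel support-vector regression; C and epsilon picked by hand.
model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(x, y)

# Prediction inside the sampled interval tracks the function closely.
err = float(abs(model.predict([[1.0]])[0] - np.sin(1.0)))
print(err)   # small
```

As with the earlier posts in the thread, this is interpolation in spirit: predictions between the training points are trustworthy, while queries outside [0, 2*pi] would be extrapolation.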

Reply by ●September 9, 2017

On Saturday, September 9, 2017 at 6:12:22 AM UTC+12, Christian Gollwitzer wrote:
> On 08.09.2017 at 13:29, xiaodunhui@gmail.com wrote:
>> Hi
>> Which machine learning method is good for interpolation, do you think? Thanks.
>
> SVM? Look for "SVM regression"
>
> Christian

Support vector machines (SVMs) gave us the best results, in a tie with
deep learning neural networks.

Reply by ●September 12, 2017

https://devtalk.nvidia.com/default/topic/1023786/cuda-programming-and-performance/walsh-hadamard-transform-based-ai/
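For context, the transform behind the linked post is the fast Walsh-Hadamard transform, an FFT-like butterfly that needs only additions and subtractions. This implementation is a generic textbook sketch, not code from the linked thread:

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform.

    The length of `a` must be a power of two.  Each pass combines
    elements `h` apart with a sum/difference butterfly, like an FFT
    but with no multiplications.
    """
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))   # -> [4, 2, 0, -2, 0, 2, 0, 2]
```

Applying the transform twice returns the input scaled by the length n, which is why it is cheap to invert -- one reason it shows up in fast randomized projections for ML.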

Reply by ●September 12, 2017

On Saturday, April 3, 2010 at 7:20:28 AM UTC-7, Cagdas Ozgenc wrote:
(snip)
> Why do we use machine learning tools when we could achieve similar
> results with plain interpolation? Let's assume a noise free regression
> scenario (not classification and no measurement errors). In the case
> of infinite samples and appropriate band limitedness Shannon
> interpolation is the exact recovery of the function. If there are
> finite samples isn't Shannon interpolation still the best estimator?
> If so, why do we use neural networks for example?

I realize this is an old thread revived, but I might have learned some
things in the intervening years.

First, interpolation (or, more often, fitting) is a machine learning
tool. It is simpler than many, but often useful.

As to which tool to use on sampled data, it is useful to remember that
we very rarely have idealized sample data. I was thinking not so long
ago about digitized images: image sensors collect light over most of the
area of a pixel, and so are not ideal samplers. Many digital cameras use
an optical low-pass filter in front of the sensor, made from
birefringent material, to avoid aliasing. (Because of the Bayer array,
the first thing aliasing does is generate false colors, and so it is
very visible.) Cheaper cameras depend on the lens resolution being low
enough.

But anyway, since we don't have ideal delta-function sampling, we
shouldn't be surprised if methods that are ideal for perfect sampling
are not ideal for the data we actually have.

Also, consider the processing rate required for digital video. Methods
that take many seconds or minutes per frame won't be popular for
processing digital video signals.
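The Shannon (sinc) interpolation the thread keeps returning to is short enough to sketch directly. With only a finite record the reconstruction is approximate, which is part of why the original question is not trivial; the sample rate and test signal below are arbitrary choices for illustration:

```python
import numpy as np

# Whittaker-Shannon interpolation: reconstruct a band-limited signal
# from its samples as a sum of shifted, sample-weighted sinc pulses.
fs = 8.0                              # sample rate, Hz
t_n = np.arange(16) / fs              # 16 sample instants (finite record)
x_n = np.sin(2 * np.pi * 1.0 * t_n)   # 1 Hz tone, well below fs/2

def shannon(t, t_n, x_n, fs):
    """Evaluate the sinc-interpolation sum at time t."""
    # np.sinc is the normalized sinc: sin(pi*x) / (pi*x).
    return np.sum(x_n * np.sinc(fs * (t - t_n)))

# At a point between samples, away from the record edges, the
# truncated sum is close to the true signal value.
t = 0.6875
print(abs(shannon(t, t_n, x_n, fs) - np.sin(2 * np.pi * t)))  # small
```

With infinitely many samples of a band-limited signal the sum is exact; with 16 samples the truncated sinc tails leave a residual error, and near the ends of the record it grows, which is where "best estimator" claims start to need care.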

Reply by ●January 7, 2019

On Saturday, April 3, 2010 at 4:20:28 PM UTC+2, Cagdas Ozgenc wrote:
> Greetings.
>
> I have been using neural networks and other machine learning tools for
> sometime time. Yesterday the following question popped up in my mind
> however:
>
> Why do we use machine learning tools when we could achive similar
> results with plain interpolation? Let's assume a noise free regression
> scenario (not classification and no measurement errors). In the case
> of infinite samples and appropriate band limitedness Shannon
> interpolation is the exact recovery of the function. If there are
> finite samples isn't Shannon interpolation still the best estimator?
> If so, why do we use neural networks for example?
>
> Thanks in advance.

An old question, certainly fuzzy in its wording (most NNs are indeed
interpolations), but quite on the spot given recent ML results:
https://arxiv.org/abs/1812.11118v1