
Audio morphing?

Started by tahome, November 27, 2003
Hi,

I've been reading this list for quite some time, trying to learn something
about DSP. Most of it is too complicated for me. I'm a guitarist and
musician from Switzerland, not a DSP guy, so please forgive me if I'm
asking stupid or obvious questions.

I recently discovered a product by Prosoniq (http://www.prosoniq.com)
that does morphing with audio signals. I've listened to the demo files
which are simply amazing and I tried to find out more about how this
is done. They say they are using an "adaptive basis transform" to do
it and not a Fourier transform like the Kyma system does
(http://www.symbolicsound.com).

Now, I've looked everywhere, from my local library to Google, but I
can't seem to find any explanation of what this is or how morphing
can be done. In the general literature on DSP I only find tutorials
about filter design and the like.

Is there no site or book that covers the "interesting" stuff? Do any
of you know this product, or have you used it? How do you think this is
done? Is this the right group to ask?

The audio files are at http://www.prosoniq.com/html/morphaudioex.html,
the demo (Mac only, unfortunately) is at
http://www.prosoniq.com/html/demos.html.

Thanks!
--tahome


In article yviad6hsmql.fsf@jpff.cs.bath.ac.uk, tahome at tahome@postino.ch
wrote on 11/27/2003 13:13:

> I'm reading this list for quite some time trying to learn something
> about DSP. Most of it is too complicated for me. I'm a guitarist and
> musician from Switzerland and not a DSP guy so please apologize if I'm
> asking stupid or obvious questions.
>
> I recently discovered a product by Prosoniq (http://www.prosoniq.com)
> that does morphing with audio signals. I've listened to the demo files
> which are simply amazing and I tried to find out more about how this
> is done. They say they are using an "adaptive basis transform" to do
> it and not a Fourier transform like the Kyma system does
> (http://www.symbolicsound.com).
there's another old product from E-mu called the Morpheus. i think i have an idea how that one worked.
> Now. I've looked everywhere, from my local library to google but I
> can't seem to find any explanation as to what this is and how morphing
> can be done. In the general literature on DSP I only find tutorials
> about filter design and such.
i haven't listened to the prosoniq stuff (but i think i will soon). i have an idea how i might do it but i am hesitant to talk about it.
> Is there no site or book that covers the "interesting" stuff? Does any
> of you know this product or used it? What do you think how this is
> done? Is this the right group to ask?
i am thinking about such a book and was encouraged to work on one by a friend who oughta know about the Morpheus.
> The audio files are at http://www.prosoniq.com/html/morphaudioex.html,
> the demo (Mac only, unfortunately) is at
> http://www.prosoniq.com/html/demos.html.
>
> Thansk!
yer welcome. r b-j
I suspect they may be using wavelet bases. For any application, from
compression to SNR improvement to this kind of morphing, what matters
is the "representation", or "model". In any case, what they are
probably doing while morphing is transforming the two or more signals
to be morphed into the transform domain, then simply interpolating
between those representations.

Fourier bases are good for studying the spectral content of signals,
but they have a drawback here: they treat every signal as a sum of
sinusoids (the sinusoids are the bases of the Fourier transform), and
an infinite number of components may be needed to represent a signal
satisfactorily. So any practical transform of a real signal is lossy
and does not capture the exact signal in its representation. A
transform like the wavelet transform represents the signal better: its
bases are localized in both frequency and time. Wavelet bases are not
unique; there is a whole family of them, each suited to a certain
scenario, and these bases have flexible parameters in time and
frequency, giving many degrees of freedom. By "adaptive", they
probably mean that the parameters of those bases depend on the sound
file under consideration.
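The transform-then-interpolate idea can be sketched concretely. This is a deliberately minimal toy, using a plain short-time Fourier transform in place of whatever adaptive basis Prosoniq actually use (their method is not public): analyze both sounds, blend the frame magnitudes, and resynthesize with one sound's phase.

```python
# Toy transform-domain morph: interpolate short-time Fourier magnitudes.
# This is a guess at the general idea, NOT Prosoniq's actual algorithm.
import numpy as np

def stft(x, frame=256, hop=64):
    """Windowed FFT of overlapping frames (rows = frames, cols = bins)."""
    w = np.hanning(frame)
    n = 1 + (len(x) - frame) // hop
    return np.array([np.fft.rfft(w * x[i*hop:i*hop+frame]) for i in range(n)])

def istft(F, frame=256, hop=64):
    """Weighted overlap-add resynthesis, normalized by the window power."""
    w = np.hanning(frame)
    n = F.shape[0]
    x = np.zeros(frame + (n - 1) * hop)
    norm = np.zeros_like(x)
    for i, spec in enumerate(F):
        x[i*hop:i*hop+frame] += w * np.fft.irfft(spec, frame)
        norm[i*hop:i*hop+frame] += w * w
    norm[norm < 1e-8] = 1.0          # avoid dividing by ~0 at the edges
    return x / norm

def morph(a, b, alpha):
    """Blend two equal-length sounds: interpolate magnitudes, keep a's phase."""
    A, B = stft(a), stft(b)
    n = min(len(A), len(B))
    mag = (1 - alpha) * np.abs(A[:n]) + alpha * np.abs(B[:n])
    return istft(mag * np.exp(1j * np.angle(A[:n])))
```

Sweeping `alpha` from 0 to 1 over time gives a gradual transition; the audible weakness of this naive version (reusing one signal's phase, no feature alignment) is presumably exactly what a cleverer adaptive representation avoids.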

bala


tahome@postino.ch (tahome) wrote in message news:<yviad6hsmql.fsf@jpff.cs.bath.ac.uk>...
> [original message quoted in full; snipped]
tahome wrote:

> I recently discovered a product by Prosoniq
> (http://www.prosoniq.com) that does morphing with audio signals.
> I've listened to the demo files which are simply amazing and I
> tried to find out more about how this is done. They say they are
> using an "adaptive basis transform" to do it and not a Fourier
> transform like the Kyma system does
> (http://www.symbolicsound.com).
Hi!

"Adaptive basis transform" does not say much about what is really
happening -- after all, we can't expect a company to blurt out their
secrets just like that, right? ;)

For instance, here's something you can try at home: record yourself
saying, "DSP is great". Make two copies of the file. In each
recording, mute two words so that adding all three files together
yields exactly the original sentence. The three sample sequences,
including the zero samples, are linearly independent; that is, you
cannot make one by scaling and adding the other two. So they are a
basis for that particular recording (and a number of admittedly
somewhat dull variants), adapted by virtue of human intelligence.

Now, I don't know what Prosoniq are doing either, but one method that
comes to mind is Independent Component Analysis. Google for it, and be
warned that, judging from your self-description, it will look rather
heavy ;)

Once a basis for each source sound has been obtained, there is of
course still a multitude of possible ways to construct the actual
transition between them for resynthesis.

You might also browse the proceedings of the DAFx conference series.
They are available online.

Martin