Reply by robert bristow-johnson ● June 2, 2006
louis wrote:
>
> I wonder if there are any applications that deal with the same problem I
> currently have? That is, synthesizing a complex sound made up of many
> individual sinusoids, via additive synthesis, but synthesizing only a
> small fraction of a second at a time (because the
> frequencies/amplitudes/phases are not known in advance), and appending
> these to the previous 'sound frame' as the information becomes known.
> Maybe I can look these applications up and see how they solve this.
>
> Robert, you had said in your first response that if the frequencies are
> not related harmonically, then I have problems. I thought it was my lack
> of knowledge in this field that kept me from solving this, but why do you
> believe this is so difficult?
if you just randomly delay these sound segments and cross-fade them
together as the means of splicing the tape, you will likely have some
sinusoidal components close in frequency and 180 degrees out of phase,
and that might be a glitch, i dunno.
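the cancellation described above is easy to hear in a toy experiment. a hypothetical sketch (the 1 kHz tone, segment length, and linear fade are made-up parameters, not from this thread): crossfade two copies of the same tone that are 180 degrees apart, and the splice ducks toward silence instead of blending:

```python
import math

fs = 44100                                   # sample rate, Hz
N = 2048                                     # crossfade length in samples
out_seg = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(N)]
in_seg = [math.sin(2 * math.pi * 1000 * n / fs + math.pi) for n in range(N)]  # 180 deg off

mix = []
for n in range(N):
    g = n / (N - 1)                          # linear crossfade gain, 0 -> 1
    mix.append((1 - g) * out_seg[n] + g * in_seg[n])

# near the midpoint the two gains are equal and the tones cancel
mid = N // 2
dip = max(abs(x) for x in mix[mid - 16:mid + 16])
print(dip)                                   # close to zero: an audible level dip
```

with components merely *close* in frequency rather than identical, the dip comes and goes as the relative phase drifts, which is exactly the splice-dependent glitchiness being warned about.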
r b-j
Reply by robert bristow-johnson ● June 2, 2006
louis wrote:
> >> How can I append these sound frames, if they were created using
> >> additive synthesis, without hearing any artifacts (i.e. clicking or..?).
>
> >this is the same problem as "time-scaling". are your sinusoids
> >harmonically related? that is, are the frequencies of the sinusoids
> >you're adding up integer multiples of a common fundamental frequency?
> >if yes, time-scaling is easy. if no, you have problems and maybe
> >should just synthesize the whole 10 seconds.
>
> Okay, they are not harmonically related.. some sinusoids have very low
> frequencies, but most are around a few kHz, let's say. Does this mean I am
> doomed if they are not harmonically related!!!??
not doomed, but the processing that time-stretches (or, more generally,
time-scales) this without glitches is something like the phase vocoder
or sinusoidal modeling (both frequency-domain processes): essentially it
takes the sound apart into sinusoidal components and mathematically
time-scales each component. now i think the computational cost of
synthesizing that directly is about the same as applying the
time-scaling process, or less, so having such a nice algorithm that will
time-scale the sound isn't really a gift. i can't see how it helps your
problem.
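a rough sketch of the per-component idea, with made-up envelopes (the `freq_env` glide, `alpha`, and sample rate are all hypothetical, not from this thread): time-scaling one tracked partial just means reading its frequency/amplitude envelopes more slowly while the phase still advances at the normal per-sample rate:

```python
import math

fs = 8000                                        # made-up sample rate
alpha = 1.5                                      # stretch factor: 1.5x longer
freq_env = [440.0 + 2.0 * t for t in range(200)] # hypothetical slow glide, Hz
amp_env = [0.5] * 200                            # hypothetical flat amplitude

def time_scale_partial(freq_env, amp_env, alpha, fs):
    n_out = int(len(freq_env) * alpha)
    out, ph = [], 0.0
    for n in range(n_out):
        src = min(int(n / alpha), len(freq_env) - 1)  # read envelopes more slowly
        ph += 2 * math.pi * freq_env[src] / fs        # phase advances at the normal rate
        out.append(amp_env[src] * math.sin(ph))
    return out

y = time_scale_partial(freq_env, amp_env, alpha, fs)
print(len(y))   # 300 samples out for 200 in: same tone, 1.5x the duration
```

doing this for every partial is, per sample, roughly the same work as additive synthesis itself, which is the point being made above.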
> I really can't synthesize
> the whole 10 seconds at once; I am programming an application which has to
> do real-time, on-the-fly sound synthesis, which means that we never
> really know in advance what all the frequencies are going to be (which is
> why I can't just program, for example, 10 seconds of sound all at once. I was
> just using 10 seconds as an example).
>
> There must be another way! :s (Why do you say I have problems?)
>
> btw, the reason we used additive synthesis is that we originally knew the
> frequency and amplitude components of each sinusoid, but didn't know the
> phase information, and had to simulate it. Each sinusoid is slightly
> offset in phase, and is slightly random in nature (not completely, of
> course, otherwise it would just be white noise); anyway, we were able to
> do this with additive synthesis.
but if the sound is a single quasi-periodic tone, synthesizing it
(which allows you to time-scale it) with wavetable synthesis is cheaper
than additive.
>
> >> Do I have to use some kind of window? Or would I do cross-fading?
> >
> >windowing and cross-fading have a lot in common. sometimes they are
> >the same thing.
>
> Okay..! I guess cross-fading is done primarily in the time domain, whereas
> windowing can be done in both..?
no, i'm just comparing functionally what you're doing when you are
cross-fading. some piece of audio gets faded up (from a cross-fade)
and later faded down (from the next x-fade). that is the same as
applying a window to that chunk of audio.
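that equivalence can be checked numerically. a minimal sketch, assuming raised-cosine crossfades of length F between chunks of length L (all made-up numbers): the fade-up from one splice plus the fade-down into the next is, end to end, a window over the chunk, and adjacent windows sum to 1 in the overlap:

```python
import math

L = 512      # chunk length, samples (made-up)
F = 128      # crossfade length, samples (made-up)

def fade_in(n):                  # raised-cosine fade, 0 at n=0 rising toward 1
    return 0.5 - 0.5 * math.cos(math.pi * n / F)

window = []
for n in range(L):
    g = 1.0
    if n < F:
        g *= fade_in(n)                        # faded up by the previous splice
    if n >= L - F:
        g *= 1.0 - fade_in(n - (L - F))        # faded down by the next splice
    window.append(g)

# in the overlap, the outgoing fade-down plus the incoming fade-up is 1
overlap_sum = [window[L - F + k] + window[k] for k in range(F)]
print(min(overlap_sum), max(overlap_sum))      # both 1.0, up to rounding
```

so a chunk that lives between two crossfades is effectively multiplied by one trapezoid-like window, which is the functional identity being described.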
haven't looked at your later posts, yet.
r b-j
Reply by Jerry Avins ● June 2, 2006
louis wrote:
...
> Robert, you had said in your first response that if the frequencies are
> not related harmonically, then I have problems. I thought it was my lack
> of knowledge in this field that kept me from solving this, but why do you
> believe this is so difficult?
I'm not R.B-J. and I don't presume to answer for him.
If the frequencies are harmonically related, the period of the created
waveform is that of the lowest frequency. By including a whole number of
periods in each sound frame, you avoid phase discontinuities.
It is possible for the waveform to be periodic even if its components
are not harmonically related, but the period is then the reciprocal of
the greatest common divisor of the components, which can be rather long.
(The period of 100 Hz plus 201 Hz is one second.)
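For integer frequencies in Hz, that reciprocal-of-the-GCD rule is a one-liner. A small sketch (the frequency lists are just examples):

```python
from math import gcd
from functools import reduce

def common_period(freqs_hz):
    """Period (seconds) of a sum of sinusoids with integer frequencies in Hz."""
    return 1.0 / reduce(gcd, freqs_hz)

print(common_period([100, 201]))       # 1.0 s, as in the example above
print(common_period([100, 200, 300]))  # 0.01 s: harmonic, the fundamental's period
```

For non-integer or irrationally related frequencies there is no common divisor at all, and the sum is never exactly periodic.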
Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by louis ● June 2, 2006
thanks again Robert and Jerry for your replies,
I wonder if there are any applications that deal with the same problem I
currently have? That is, synthesizing a complex sound made up of many
individual sinusoids, via additive synthesis, but synthesizing only a
small fraction of a second at a time (because the
frequencies/amplitudes/phases are not known in advance), and appending
these to the previous 'sound frame' as the information becomes known.
Maybe I can look these applications up and see how they solve this.
Robert, you had said in your first response that if the frequencies are
not related harmonically, then I have problems. I thought it was my lack
of knowledge in this field that kept me from solving this, but why do you
believe this is so difficult?
Thank you kindly.
Reply by Jerry Avins ● June 2, 2006
louis wrote:
> Jerry, thank you for your response!
> i probably should rephrase the question a little! I was just using the "10
> seconds of sound" as an example, but the reason I need to be able to append
> sound frames is that I don't know in advance what the individual frequencies
> and amplitudes of each sinusoid will be. This is because I am programming a
> real-time simulation application which generates synthesized audio
> on-the-fly, depending on the constant updating of another variable..
>
> So my initial problem was how to simulate this particular sound in the
> first place. The main problem with this was that we didn't know the phase
> information of the type of sound that we were trying to simulate. We
> figured out how to simulate this, and were able to use additive synthesis.
> However, this was not done on-the-fly; it was done by assigning in advance
> the frequencies and amplitudes over the entire 10 seconds, let's say.
>
> Now the next step of the problem is to be able to do this on-the-fly..
> where the synthesized audio would have to update at every fraction of a
> second, in real-time..
Abruptly starting and stopping components of a sound will create clicks.
Cross-fading will likely produce the attack and decay profiles that you
need to avoid them, but even there, phase incongruities can create
strange effects. Depending on how you choose to see them, windowing,
cross-fading, and attack-decay can be the same thing.
Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by louis ● June 2, 2006
Jerry, thank you for your response!
i probably should rephrase the question a little! I was just using the "10
seconds of sound" as an example, but the reason I need to be able to append
sound frames is that I don't know in advance what the individual frequencies
and amplitudes of each sinusoid will be. This is because I am programming a
real-time simulation application which generates synthesized audio
on-the-fly, depending on the constant updating of another variable..
So my initial problem was how to simulate this particular sound in the
first place. The main problem with this was that we didn't know the phase
information of the type of sound that we were trying to simulate. We
figured out how to simulate this, and were able to use additive synthesis.
However, this was not done on-the-fly; it was done by assigning in advance
the frequencies and amplitudes over the entire 10 seconds, let's say.
Now the next step of the problem is to be able to do this on-the-fly..
where the synthesized audio would have to update at every fraction of a
second, in real-time..
I really appreciate your help!!
Thank you,
Reply by louis ● June 2, 2006
>> How can I append these sound frames, if they were created using
>> additive synthesis, without hearing any artifacts (i.e. clicking or..?).
>this is the same problem as "time-scaling". are your sinusoids
>harmonically related? that is, are the frequencies of the sinusoids
>you're adding up integer multiples of a common fundamental frequency?
>if yes, time-scaling is easy. if no, you have problems and maybe
>should just synthesize the whole 10 seconds.
Okay, they are not harmonically related.. some sinusoids have very low
frequencies, but most are around a few kHz, let's say. Does this mean I am
doomed if they are not harmonically related!!!?? I really can't synthesize
the whole 10 seconds at once; I am programming an application which has to
do real-time, on-the-fly sound synthesis, which means that we never
really know in advance what all the frequencies are going to be (which is
why I can't just program, for example, 10 seconds of sound all at once. I was
just using 10 seconds as an example).
There must be another way! :s (Why do you say I have problems?)
btw, the reason we used additive synthesis is that we originally knew the
frequency and amplitude components of each sinusoid, but didn't know the
phase information, and had to simulate it. Each sinusoid is slightly
offset in phase, and is slightly random in nature (not completely, of
course, otherwise it would just be white noise); anyway, we were able to
do this with additive synthesis.
>> Do I have to use some kind of window? Or would I do cross-fading?
>
>windowing and cross-fading have a lot in common. sometimes they are
>the same thing.
Okay..! I guess cross-fading is done primarily in the time domain, whereas
windowing can be done in both..?
Okay I really appreciate your assistance!!!
Thank you very much!!!
Reply by Jerry Avins ● June 2, 2006
louis wrote:
> Hi there,
>
> I am synthesizing a particular sound using additive synthesis. Hence I
> don't have to do any FFTs or IFFTs; I basically sum the individual
> sinusoids whose phase, frequency, and amplitude components I know, to
> produce a complex sound.
>
> My problem is that instead of synthesizing the full 10 seconds of sound
> all at once, I need to be able to create them in small 'sound frames',
> which would be only a small fraction of a second. I then need to append
> each of these, to then produce the full 10 second sound.
>
> How can I append these sound frames, if they were created using additive
> synthesis, without hearing any artifacts (i.e. clicking or..?).
>
> Do I have to use some kind of window? Or would I do cross-fading?
>
> I cant seem to wrap my head around this, and thank you very much for your
> help,
> I really appreciate it..
Set up your program to create the full ten seconds, but suspend it when
a sound frame has been constructed. Resume the program instead of
starting over to make the next sound frame. Continue until done. You can
then stitch the frames together without further treatment because they
are already part of a unified whole.
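One way to sketch that suspend-and-resume idea in code: keep each oscillator's running phase as state, so every frame picks up exactly where the previous one stopped. The class name, partial frequencies, and amplitudes below are invented for illustration:

```python
import math

class AdditiveSynth:
    """Frame-at-a-time additive synthesis with phase carried across frames."""
    def __init__(self, fs):
        self.fs = fs
        self.phase = {}                           # running phase per partial id

    def render_frame(self, partials, nframes):
        """partials: list of (id, freq_hz, amplitude). Returns nframes samples."""
        out = [0.0] * nframes
        for pid, f, a in partials:
            ph = self.phase.get(pid, 0.0)
            step = 2 * math.pi * f / self.fs
            for n in range(nframes):
                out[n] += a * math.sin(ph)
                ph += step
            self.phase[pid] = ph % (2 * math.pi)  # save state for the next frame
        return out

fs = 8000
synth = AdditiveSynth(fs)
frames = [synth.render_frame([(0, 440.0, 0.5), (1, 1003.0, 0.3)], 256)
          for _ in range(4)]
signal = [s for fr in frames for s in fr]

# because phase is continuous, the concatenation matches one long render
ref = [0.5 * math.sin(2 * math.pi * 440.0 * n / fs)
       + 0.3 * math.sin(2 * math.pi * 1003.0 * n / fs) for n in range(1024)]
err = max(abs(a - b) for a, b in zip(signal, ref))
print(err)   # tiny: no discontinuity at the frame boundaries
```

New partials can be handed in on any frame boundary; note, though, that this alone does not remove the click when a partial starts or stops abruptly at full amplitude, which still calls for a short fade as discussed elsewhere in the thread.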
Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by robert bristow-johnson ● June 2, 2006
louis wrote:
>
> I am synthesizing a particular sound using additive synthesis. Hence I
> don't have to do any FFTs or IFFTs; I basically sum the individual
> sinusoids whose phase, frequency, and amplitude components I know, to
> produce a complex sound.
>
> My problem is that instead of synthesizing the full 10 seconds of sound
> all at once, I need to be able to create them in small 'sound frames',
> which would be only a small fraction of a second. I then need to append
> each of these, to then produce the full 10 second sound.
>
> How can I append these sound frames, if they were created using additive
> synthesis, without hearing any artifacts (i.e. clicking or..?).
this is the same problem as "time-scaling". are your sinusoids
harmonically related? that is, are the frequencies of the sinusoids
you're adding up integer multiples of a common fundamental frequency?
if yes, time-scaling is easy. if no, you have problems and maybe
should just synthesize the whole 10 seconds.
if they are harmonically related, you should consider wavetable
synthesis (*not* the same thing as the PCM sample playback that these
"wavetable" sound cards do) as a cheaper-to-implement equivalent to
additive synthesis.
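a minimal wavetable sketch, assuming harmonically related partials (the harmonic amplitudes, table size, and fundamental are made-up): precompute one cycle of the summed harmonics, then step an index through the table at the fundamental's rate, so each output sample costs one interpolated lookup instead of one sine evaluation per partial:

```python
import math

TABLE_SIZE = 4096
harmonics = [(1, 1.0), (2, 0.5), (3, 0.25)]      # (harmonic number, amplitude)

# one cycle of the summed harmonics -- this replaces per-sample additive math
table = [sum(a * math.sin(2 * math.pi * h * i / TABLE_SIZE)
             for h, a in harmonics) for i in range(TABLE_SIZE)]

def render(f0, fs, nsamples):
    out, idx = [], 0.0
    step = f0 * TABLE_SIZE / fs                  # table increment per sample
    for _ in range(nsamples):
        i = int(idx)
        frac = idx - i
        j = (i + 1) % TABLE_SIZE
        out.append(table[i] + frac * (table[j] - table[i]))  # linear interpolation
        idx = (idx + step) % TABLE_SIZE
    return out

y = render(220.0, 44100, 512)
```

changing `f0` re-pitches the same table, which is also what makes time-scaling a quasi-periodic tone cheap in this representation.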
> Do I have to use some kind of window? Or would I do cross-fading?
windowing and cross-fading have a lot in common. sometimes they are
the same thing.
r b-j
Reply by louis ● June 2, 2006
Hi there,
I am synthesizing a particular sound using additive synthesis. Hence I
don't have to do any FFTs or IFFTs; I basically sum the individual
sinusoids whose phase, frequency, and amplitude components I know, to
produce a complex sound.
My problem is that instead of synthesizing the full 10 seconds of sound
all at once, I need to be able to create them in small 'sound frames',
which would be only a small fraction of a second. I then need to append
each of these, to then produce the full 10 second sound.
How can I append these sound frames, if they were created using additive
synthesis, without hearing any artifacts (i.e. clicking or..?).
Do I have to use some kind of window? Or would I do cross-fading?
I can't seem to wrap my head around this, and thank you very much for your
help,
I really appreciate it..
Louis