Reply by Jerry Avins, July 26, 2008
jadhav_rahul wrote:

   ...

> Thanks jerry,
You're welcome.
> Does any one have any link or document for voice conference using
> fixed-point DSP?
> Rahul.
Sadly, not me.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by jadhav_rahul, July 25, 2008
> jadhav_rahul wrote:
>
>    ...
>
>> what should be the level of buffering (of decompressed voice data)
>> before adding them? i mean in case of y[n]=a.x1[n]+b.x2[n]+c.x3[n]
>> we have to buffer all the inputs or we can add first two and buffer
>> the result to add with third one?
>> which one will be better?
>> is there any standard algorithm available for entire processing?
>
> Getting the result with minimum delay is the only important point.
> Assuming that the data are already in linear twos-complement form (what
> I think you mean by "PCM"), I would add first and scale the result back
> to the allowed bit count. The computer can add only two numbers in one
> operation. That may be obscured by a high-level language, but the
> processor isn't fooled by fancy notation. :-)
>
> Jerry
Thanks, Jerry.
Does anyone have a link or document for voice conferencing using a
fixed-point DSP?

Rahul.
Reply by Greg Berchin, July 16, 2008
On Jul 16, 10:05 am, Jerry Avins <j...@ieee.org> wrote:

> I don't know how DSPRelated works, but you have reached me through my
> home email. Please put your questions so that they appear in the
> newsgroup comp.dsp where there can be a general discussion rather than
> a private conversation.
I received the same message through DSPRelated, Jerry, so I think that
he just shotgunned everyone who responded on comp.dsp.

Is school back in session? We seem to be getting a lot of homework
questions lately.

Greg
Reply by Jerry Avins, July 16, 2008
rahul.jadhav@spectross.com wrote:
> Rahul Jadhav visited DSPRelated.com and clicked on your name from this
> page:
> 	http://www.dsprelated.com/showmessage/99660.php
> to contact you. His message follows:
>
> Hi Jerry,
> Thanks for your reply on my topic of audio mixing.
> I have one doubt: suppose we have a conference between 3 users (PCM
> companded voice channels) and we add the streams together with scaling.
> The data a user receives will then include his own voice as well, so I
> think we should subtract his information from the combined data. Also,
> since the total sum of the scaling factors should be 1, how do we
> decide these scaling factors, given that they determine the audio gain
> of each channel?
> Can you please suggest the steps to follow to implement a voice
> conference using a DSP (I am using the fixed-point TMS320C55x), which
> DSP components to use, and the level of buffering for incoming data.
> Thanks in advance.
> Rahul Jadhav.

I don't know how DSPRelated works, but you have reached me through my 
home email. Please put your questions so that they appear in the 
newsgroup comp.dsp where there can be a general discussion rather than a 
private conversation. There, others can correct any errors I might make 
and contribute ideas I might overlook.

Ordinary telephones (and aircraft and military tank intercoms) feed the 
speaker's voice back to his ears, usually at reduced volume. That is 
called "sidetone". The system sounds "dead" without it. Do a web search 
for details. If you are in a position to keep the voices separate until 
they are mixed for each user, you can generate reduced-loudness sidetone 
easily. If not, you have two choices: reduce the sidetone locally with a 
cancellation arrangement that can be difficult to make stable, or let it 
ride through at full loudness. Techniques for locally reducing 
(suppressing, in that case) sidetone are used in speaker phones.

Remember that the individual companded channels must be converted to 
linear form before being added, and the sum re-encoded for distribution.
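As a hedged illustration of that convert-add-reconvert step, here is a
mu-law expand/compress sketch modeled on the classic Sun g711.c
reference code (BIAS = 0x84). It is not code from this thread; verify
against ITU-T G.711 itself before relying on it, and note that A-law
uses different segment logic.

```c
/* Sketch of G.711 mu-law expand/compress, modeled on the well-known
   Sun g711.c reference implementation. Illustrative only. */
#include <stdint.h>

#define ULAW_BIAS 0x84   /* 132, the standard mu-law bias */
#define ULAW_CLIP 32635  /* largest linear magnitude before biasing */

/* Expand one mu-law byte to a linear 16-bit sample. */
int16_t ulaw_to_linear(uint8_t u)
{
    u = (uint8_t)~u;                      /* mu-law bytes are stored inverted */
    int exponent  = (u >> 4) & 0x07;      /* 3-bit segment number */
    int mantissa  = u & 0x0F;             /* 4-bit step within the segment */
    int magnitude = (((mantissa << 3) + ULAW_BIAS) << exponent) - ULAW_BIAS;
    return (int16_t)((u & 0x80) ? -magnitude : magnitude);
}

/* Compress one linear sample back to a mu-law byte. */
uint8_t linear_to_ulaw(int16_t pcm)
{
    int sign      = (pcm < 0) ? 0x80 : 0x00;
    int magnitude = (pcm < 0) ? -(int)pcm : pcm;
    if (magnitude > ULAW_CLIP)
        magnitude = ULAW_CLIP;            /* saturate to the mu-law range */
    magnitude += ULAW_BIAS;
    int exponent = 7;                     /* locate the top set bit (segment) */
    for (int mask = 0x4000; exponent > 0 && !(magnitude & mask); mask >>= 1)
        exponent--;
    int mantissa = (magnitude >> (exponent + 3)) & 0x0F;
    return (uint8_t)~(sign | (exponent << 4) | mantissa);
}
```

A conference bridge would expand every incoming byte with
`ulaw_to_linear`, mix in the linear domain, then re-compress each
outgoing sample with `linear_to_ulaw`.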

Jerry
-- 
        "The rights of the best of men are secured only as the
        rights of the vilest and most abhorrent are protected."
            - Chief Justice Charles Evans Hughes, 1927
Reply by jadhav_rahul, July 14, 2008
> jadhav_rahul wrote:
>
>    ...
>
>> what should be the level of buffering (of decompressed voice data)
>> before adding them? i mean in case of y[n]=a.x1[n]+b.x2[n]+c.x3[n]
>> we have to buffer all the inputs or we can add first two and buffer
>> the result to add with third one?
>> which one will be better?
>> is there any standard algorithm available for entire processing?
>
> Getting the result with minimum delay is the only important point.
> Assuming that the data are already in linear twos-complement form (what
> I think you mean by "PCM"), I would add first and scale the result back
> to the allowed bit count. The computer can add only two numbers in one
> operation. That may be obscured by a high-level language, but the
> processor isn't fooled by fancy notation. :-)
>
> Jerry

Thanks, Jerry.
Reply by Jerry Avins, July 9, 2008
jadhav_rahul wrote:
>> jadhav_rahul wrote:
>>
>>    ...
>>
>>> Thanks jerry,
>>> but which other design decisions you are talking about?
>>
>> You can dynamically adjust the scaling to allow any sum. (Dynamic
>> compression.) You can allow enough headroom for two people to shout at
>> each other, but no more. Clipping overloads would be better than
>> wrapping around, but either works. There is not much reason to allow
>> for everyone talking at once because it wouldn't be intelligible
>> anyway.
>>
>> Jerry
>
> what should be the level of buffering (of decompressed voice data)
> before adding them? i mean in case of y[n]=a.x1[n]+b.x2[n]+c.x3[n]
> we have to buffer all the inputs or we can add first two and buffer
> the result to add with third one?
> which one will be better?
> is there any standard algorithm available for entire processing?
> Thanks in advance,
Getting the result with minimum delay is the only important point. 
Assuming that the data are already in linear twos-complement form (what 
I think you mean by "PCM"), I would add first and scale the result back 
to the allowed bit count. The computer can add only two numbers in one 
operation. That may be obscured by a high-level language, but the 
processor isn't fooled by fancy notation. :-)

Jerry
--
Engineering is the art of making what you want from things you can get.
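The add-first-then-scale approach can be sketched in C (an illustrative
sketch, not code from the thread; the function name and frame handling
are my own). The wide accumulator lets every pairwise add happen without
overflow; the single divide at the end applies equal weights that sum
to 1:

```c
/* Sum n 16-bit samples in a 32-bit accumulator, then scale back to
   16 bits. Illustrative sketch of "add first, scale the result back". */
#include <stdint.h>

int16_t mix_frame(const int16_t *samples, int n)
{
    int32_t acc = 0;            /* wide accumulator: no overflow for small n */
    for (int i = 0; i < n; i++)
        acc += samples[i];      /* each operation adds exactly two operands */
    acc /= n;                   /* equal scaling factors, summing to 1 */
    if (acc >  32767) acc =  32767;   /* defensive saturation */
    if (acc < -32768) acc = -32768;
    return (int16_t)acc;
}
```

Whether the compiler emits the adds two at a time or the source groups
them, the processor still performs one two-operand add per input, so
there is no accuracy reason to buffer intermediate pairwise sums.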
Reply by Greg Berchin, July 9, 2008
On Jul 8, 12:11 am, "jadhav_rahul" <rahul.jad...@spectross.com> wrote:

>       I have one doubt: suppose we have a conference between 3 users
> (PCM encoded voice channels) and we add the streams together with
> scaling. The data a user receives will include his own voice
> information also, so I think we should subtract his info from the
> combined data. Also, as the total sum of scaling factors should be 1,
> how do we decide these scaling factors, given that they determine the
> audio gain of each channel?
Search for "automixer" and "mix-minus".

Greg
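As a hedged illustration of the mix-minus idea Greg points to (the
function name and framing are mine, not from any standard library): each
participant hears the total mix minus his own contribution, and the
remaining channels share equal weights that sum to 1.

```c
/* One output sample of a mix-minus bridge: the mix of everyone except
   participant `self`. Illustrative sketch only. */
#include <stdint.h>

int16_t mix_minus_sample(const int16_t *x, int users, int self)
{
    int32_t total = 0;
    for (int u = 0; u < users; u++)
        total += x[u];               /* full mix for this sample instant */
    int32_t v = total - x[self];     /* remove the listener's own voice */
    if (users > 1)
        v /= users - 1;              /* equal weights summing to 1 */
    if (v >  32767) v =  32767;      /* saturate rather than wrap */
    if (v < -32768) v = -32768;
    return (int16_t)v;
}
```

Computing `total` once per sample instant and subtracting each
listener's own term is cheaper than building a separate sum for every
participant; a reduced-level sidetone could be added back by subtracting
only a fraction of `x[self]`.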
Reply by jadhav_rahul, July 9, 2008
> jadhav_rahul wrote:
>
>    ...
>
>> Thanks jerry,
>> but which other design decisions you are talking about?
>> Rahul.
>
> You can dynamically adjust the scaling to allow any sum. (Dynamic
> compression.) You can allow enough headroom for two people to shout at
> each other, but no more. Clipping overloads would be better than
> wrapping around, but either works. There is not much reason to allow
> for everyone talking at once because it wouldn't be intelligible
> anyway.
>
> Jerry

Thanks, Jerry.
What should be the level of buffering (of decompressed voice data)
before adding the streams? I mean, in the case of
y[n] = a.x1[n] + b.x2[n] + c.x3[n], do we have to buffer all the inputs,
or can we add the first two and buffer the result to add with the third
one? Which one will be better? Is there any standard algorithm available
for the entire processing?

Thanks in advance,
Rahul
Reply by Jerry Avins, July 8, 2008
jadhav_rahul wrote:

   ...

> Thanks jerry,
> but which other design decisions you are talking about?
> Rahul.
You can dynamically adjust the scaling to allow any sum. (Dynamic 
compression.) You can allow enough headroom for two people to shout at 
each other, but no more. Clipping overloads would be better than 
wrapping around, but either works. There is not much reason to allow for 
everyone talking at once because it wouldn't be intelligible anyway.

Jerry
--
Engineering is the art of making what you want from things you can get.
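The clipping-versus-wrapping point can be demonstrated with a small C
sketch (illustrative only; on a DSP such as the C55x a hardware
saturation mode would normally do this without explicit tests):

```c
/* Contrast wraparound with saturation on a 16-bit add. */
#include <stdint.h>

/* Plain two's-complement add: overflow wraps around. */
int16_t add_wrap(int16_t a, int16_t b)
{
    return (int16_t)((uint16_t)a + (uint16_t)b);
}

/* Saturating add: overflow clips to the nearest representable extreme. */
int16_t add_sat(int16_t a, int16_t b)
{
    int32_t s = (int32_t)a + b;      /* exact sum in a wider register */
    if (s >  32767) s =  32767;
    if (s < -32768) s = -32768;
    return (int16_t)s;
}
```

Two loud positive samples that wrap produce a large negative value, an
audible crack; the saturated version merely flattens the peak, which is
far less objectionable in mixed speech.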
Reply by jadhav_rahul, July 8, 2008
> jadhav_rahul wrote:
>>>> On Jul 7, 8:35 am, "jadhav_rahul" <rahul.jad...@spectross.com> wrote:
>>>>> Hello everyone,
>>>>> I want to implement audio mixing for call conference with PCM
>>>>> encoded voice channels. I want to know how audio mixing is done by
>>>>> a DSP, and any suitable algorithm or document for it. Also, please
>>>>> guide me how to proceed with its implementation.
>>>>>
>>>>> Thanks in advance,
>>>>> Rahul Jadhav.
>>>>
>>>> Depends on the PCM. If it is not companded or otherwise encoded,
>>>> just add the streams together with scaling (if necessary) to prevent
>>>> overflow. If companded, as is commonly the case with telephony
>>>> (mu-law or A-law), convert to linear, add, and then convert back. If
>>>> your streams are ADPCM, then you have more work ahead of you.
>>>>
>>>> Clay
>>>
>>> Thanks Clay,
>>> I have one doubt: suppose we have a conference between 3 users (PCM
>>> encoded voice channels) and we add the streams together with scaling.
>>> The data a user receives will include his own voice information also,
>>> so I think we should subtract his info from the combined data. Also,
>>> as the total sum of scaling factors should be 1, how do we decide
>>> these scaling factors, given that they determine the audio gain of
>>> each channel?
>>>
>>> Rahul.
>>
>> Hi,
>> Is it correct that the more channels there are, the less will be the
>> received strength of the combined voice?
>
> Yes, if the scaling allows for all users talking simultaneously. There
> are other design decisions that can be made.
>
> Jerry

Thanks, Jerry.
But which other design decisions are you talking about?
Rahul.