DSPRelated.com
Forums

Audio signal sync

Started by hakpax November 30, 2009
Hi,

I'm developing a real-time audio processing project which consists of 2 separate DSP systems.
One system is the encoder and the other the decoder.
Between the systems there is a regular audio line.
The encoder samples the input audio data (8/16/24 bits, 16 kHz sampling rate), separates it into frames and processes each frame individually. The processed data is played out by the D/A onto the audio line and is the input to the decoder system. The decoder system samples that data and decodes it.

My problem is that the data is processed in separate frames, and the decoder must know when each frame begins in order to process it correctly.
How can I sync those 2 systems using only the audio signal line which carries the actual data?
Is it possible to add some data bits to the data somehow?
Can I use special frames to sync?

Thanks
Nadav
Nadav-

Something is not making sense here. Normally two systems doing audio encode/decode communicate via a digital transport
-- that's the whole point of audio compression, right? It should look like this:

            _______   Dig I/O   _______
Audio <--> | codec | <-------> | codec | <--> Audio
           |_______|           |_______|

where digital I/O is network (e.g. VoIP), RF (e.g. GSM, WiFi, satellite, etc), serial port (e.g. RS-422), etc.

If you're connecting two systems via audio, then why do you need compression?

-Jeff
Nadav - I agree with Jeff - your system description is a bit confusing.
Nevertheless, in a traditional audio-streaming application, such as
A2DP, each frame consists of digital audio data preceded by header bytes
in a pre-defined format that contains information about the audio data
itself, such as number of channels, bit rate, encoding format, length of
data, sample rate etc.
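
The exact layout is protocol-specific; as a rough sketch of the idea in Python (a made-up layout for illustration, not the real A2DP header format):

import struct

# Hypothetical header layout (illustrative, not a real standard):
# 2-byte sync word, 1-byte channel count, 1-byte encoding ID,
# 2-byte sample rate in units of 100 Hz, 2-byte payload length in bytes.
HEADER_FMT = ">HBBHH"
SYNC_WORD = 0x7E5A   # arbitrary marker value

def pack_frame(payload, channels=1, encoding=0, sample_rate=16000):
    """Prepend a fixed-format header to one frame of encoded audio."""
    header = struct.pack(HEADER_FMT, SYNC_WORD, channels, encoding,
                         sample_rate // 100, len(payload))
    return header + payload

def unpack_frame(data):
    """Parse a header; return (channels, encoding, sample_rate, payload)."""
    sync, ch, enc, sr100, n = struct.unpack_from(HEADER_FMT, data)
    if sync != SYNC_WORD:
        raise ValueError("lost frame sync")
    off = struct.calcsize(HEADER_FMT)
    return ch, enc, sr100 * 100, data[off:off + n]

The receiver scans for the sync word, parses the header, and then knows exactly where the frame's payload begins and ends.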


Hi,

Thanks for the quick reply.
My project deals with scrambling and de-scrambling of an audio signal, so the system diagram looks like this:

             Audio line (telephone, cell, etc.)
                             \ /
Input Audio <-> Scrambler <-------> De-scrambler <-> Audio Output

For now, I'm just connecting a regular audio line between the 2 systems
(I'm working with DSP development boards).
The problem is that the scrambling process works on each frame individually.
That's why I need to find a way to sync those 2 separate systems using only the audio line.
Some more details: I'm sampling at 16 kHz and my input/output BW is regular telephone BW, about 4 kHz.

Thanks
Nadav
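
As a hypothetical illustration of why the frame boundaries matter (a toy in Python, not Nadav's actual scrambler): a keyed permutation of a frame's FFT bins inverts correctly only if the de-scrambler applies the inverse permutation to exactly the same block of samples.

import numpy as np

FRAME = 256   # samples per frame (assumed)
KEY = 1234    # shared secret seed (illustrative)

def keyed_perm(n, key=KEY):
    """Keyed permutation of the interior FFT bins (DC and Nyquist stay put)."""
    p = np.arange(n)
    p[1:-1] = np.random.default_rng(key).permutation(p[1:-1])
    return p

def scramble(frame):
    """Toy per-frame scrambler: shuffle the frame's FFT bins."""
    X = np.fft.rfft(frame)
    return np.fft.irfft(X[keyed_perm(len(X))], n=len(frame))

def descramble(frame):
    """Apply the inverse bin permutation to recover the frame."""
    X = np.fft.rfft(frame)
    return np.fft.irfft(X[np.argsort(keyed_perm(len(X)))], n=len(frame))

descramble(scramble(x)) returns x for an aligned frame; feed descramble a block offset by even a few samples and the output stays noise-like, which is exactly the sync problem described above.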
Hi all,

Although Jeff's point is a valid one, maybe the project is a kind of demo and needs some sort of link (say, a DSP's serial port) to demonstrate that the encoding and decoding are working on the DSP side, with any other link out of the question. The issue here is that the link is what is causing the trouble. In other words, the problem is to get two DSPs to talk to each other through some link; that link can be a serial port (of the kind where people usually attach ADCs, DACs, codecs), an SPI peripheral on the DSPs, TWI, UARTs, whatever. And if this is the case, usually the HRM (Hardware Reference Manual) is all the OP (original poster) needs.

Maybe I am 100% lost here, but this is how I interpreted the question.

Regards,


Jaime Andrés Aranguren Cardona
j...@ieee.org
j...@computer.org

_____________________________________
Hi,

That is correct. For now, the project is intended as some sort of
demonstration, so the telephone line noise is irrelevant (I'll only be
using a 1 meter standard audio line).
Still, I was hoping to find a way to sync those two systems without any
other link besides the audio line.
I guessed that syncing framed audio signals between systems is a common
problem and that there must be a simple solution (using a special sync frame which
can be played and detected, adding FSK in the unused BW, etc.)

Thanks
Nadav

_____________________________________
Nadav,

Since you have custom equipment at both ends of the audio line, why not
add a modem chip or modem software and send the audio digitally, like an old
dial-up modem? This will sort out many of your problems (frame sync
recovery etc.), although you will need to compress the audio. You could
start by looking at the compression algorithms used by VoIP, many of
which are freely available.

However, you may still have problems. If your audio line is a modern
telephone system, cell phone or land line, remember that they already
digitise and compress the audio. The compression algorithms used are
designed for voice and often do not work well with other types
of signal. I'm told that trying to use an external old-fashioned dial-up
modem over a cell phone gives very poor performance. Your scrambled
audio may not look like normal audio to the telephone system and could
arrive at the other end with significant distortion.

Good luck with your project,
John
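
To make "like an old dial-up modem" concrete, here is a toy continuous-phase binary FSK modulator in Python (the tone pair and bit rate are arbitrary choices for illustration, not any real modem standard):

import numpy as np

FS = 16000    # sample rate (Hz)
BAUD = 250    # bits per second, chosen so FS/BAUD is an integer
F_MARK, F_SPACE = 1200.0, 2200.0   # tone pair (illustrative choice)

def fsk_modulate(bits):
    """Map each bit to a tone, keeping the phase continuous across bit edges."""
    spb = FS // BAUD                        # samples per bit
    freqs = np.repeat([F_MARK if b else F_SPACE for b in bits], spb)
    phase = 2.0 * np.pi * np.cumsum(freqs) / FS
    return 0.5 * np.sin(phase)

A receiver would compare the energy near 1200 Hz and 2200 Hz over each bit period; framing then becomes a bit-level problem (start bits, sync words) rather than an analog one.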
Yes,
I think that's what I will use for the sync.
I will try to add the sync data in the unused BW.

Thank you all for the assistance.

Nadav

On Tue, Dec 1, 2009 at 7:52 PM, Jeff Brower wrote:

> ... if you sample at 16 kHz but only transmit telephony quality audio
> (i.e. speech) limited to 4 kHz bandwidth, then you might use the upper
> 4 kHz for "out of band" information. You could send a short 7 kHz tone
> burst every frame, detect that, and use its center, or peak power, as
> the frame marker. ...

Nadav-

> I guessed that syncing framed audio signals between systems is a common
> problem and that there must be a simple solution (using a special sync
> frame which can be played and detected, adding FSK in the unused BW, etc.)

I don't want to sound negative, as you're trying hard and seem to be persistent. But we have to apply rational
thinking, not wishful thinking. I would ask you this: if you think this is a common method, then what audio systems
can you name that do this?

The basic problem you're facing: if you modify your audio signal "in band", then you will significantly affect the
quality of what people hear. Whatever modification you would make needs substantial energy to be detected reliably
and accurately enough to provide precise frame synchronization -- listeners *will* hear it. But you haven't specified
what quality you need and what degradation you're willing to accept.

RF communication systems often require precise synchronization. RF methods typically use bandwidth outside of the
audio or data signal. For example, if you sample at 16 kHz but only transmit telephony quality audio (i.e. speech)
limited to 4 kHz bandwidth, then you might use the upper 4 kHz for "out of band" information. You could send a short
7 kHz tone burst every frame, detect that, and use its center, or peak power, as the frame marker. In your
receive-side DSP code you would apply a sharp LPF to remove this from the audio signal.

-Jeff
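
A rough sketch of this in Python (frame size, burst length and burst level are my assumptions, and scipy supplies the filters):

import numpy as np
from scipy.signal import butter, filtfilt

FS = 16000        # sample rate (Hz)
FRAME = 1024      # samples per frame (assumed)
F_SYNC = 7000.0   # marker tone in the unused 4-8 kHz band
BURST = 64        # burst length in samples (assumed)

def add_marker(frame):
    """Transmit side: mix a short 7 kHz burst onto the start of the frame."""
    out = np.asarray(frame, dtype=float).copy()
    t = np.arange(BURST) / FS
    out[:BURST] += 0.25 * np.sin(2 * np.pi * F_SYNC * t)
    return out

def find_marker(x):
    """Receive side: band-limit around 7 kHz, then peak-pick the power envelope."""
    b, a = butter(4, [6500 / (FS / 2), 7500 / (FS / 2)], btype="band")
    power = filtfilt(b, a, x) ** 2
    power = np.convolve(power, np.ones(BURST) / BURST, mode="same")
    return int(np.argmax(power))    # sample index near the burst center

def strip_marker(frame):
    """Receive side: a sharp LPF at 4 kHz removes the burst from the audio."""
    b, a = butter(8, 4000 / (FS / 2), btype="low")
    return filtfilt(b, a, frame)

The burst level trades detection reliability against how much of the marker survives the receive-side LPF, which is the quality question raised above.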

First, many thanks to Jeff Brower for his earlier clear explanation of
why we have to use windows for signal processing.

As for Nadav Haklai's question, I think it makes sense in some cases, for
example when we want to encrypt telephone voice so that other people
cannot hear it. In my experience the way to do that is much the same as
Nadav expressed: we make some changes, the result of course staying in
the voice band, and at the receiver side we undo the changes to recover
the original speech.

Based on the approach Jeff gave, I believe we can do that without
affecting the voice quality. We still use a tone in the voice band, for
telephone often 0-4 kHz. There are 2 ways:

1) We first send only the sync tone for a certain interval of time, and
after that only the voice. At the receiver side, based on the received
signal and using some kind of filters, it is easy to know whether the
incoming signal is sync tone or voice (based on the spectrum, for
example); if it is the sync tone, recover the tone (with a VCO, for
example) and get time-synchronized with the sender.

2) We send both the sync tone and the voice, but with a known SNR between
them. Assuming the transmission line is good enough, we can recover the
original speech and extract the tone by spectral subtraction (the
simplest way) and then do the same as in 1).

That is what I think; if anything is wrong please feel free to tell me.
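
A rough sketch of the tone-versus-voice decision in way 1, in Python, assuming a 1 kHz in-band sync tone (an arbitrary choice) and using the Goertzel recurrence for the single-bin power:

import numpy as np

FS = 16000        # sample rate (Hz)
F_SYNC = 1000.0   # in-band sync tone (arbitrary choice for illustration)

def goertzel_power(x, f, fs=FS):
    """Squared DFT magnitude of x at frequency f via the Goertzel recurrence."""
    c = 2.0 * np.cos(2.0 * np.pi * f / fs)
    s1 = s2 = 0.0
    for v in x:
        s0 = v + c * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - c * s1 * s2

def looks_like_sync_tone(block, thresh=0.5):
    """True if most of the block's energy sits at F_SYNC.

    For a pure tone at F_SYNC the normalised ratio is near 1;
    for speech it is much smaller.
    """
    total = float(np.dot(block, block)) + 1e-12
    ratio = goertzel_power(block, F_SYNC) / (len(block) * total / 2.0)
    return ratio > thresh

Once a block is classified as sync tone, its detected position (or a PLL/VCO locked to it, as suggested above) gives the receiver its frame timing.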