Reply by Duc Nguyen Anh●December 4, 2009
Hi Jeff,
Just trying to find a way to handle the problem; in my experience this has
applications in real life. After rethinking what I suggested, I realize that
the second case is not feasible to implement, because of the difficulty of
extracting the sync tone and staying synchronized with it. The first case is
only one-time synchronization, so we cannot obtain frame-to-frame
synchronization. However, we can use some tricks: for example, band-limit the
voice to the range 300 Hz to 3400 Hz (quality is still good), choose the sync
tone in the range 0-300 Hz or 3400-4000 Hz (keeping away from 50/60 Hz and
their harmonics, because of the characteristics of the electricity system), and
add it to the voice. One problem with this approach is that we cannot recover
the sync tone exactly at the receiver side, so we have to accept some error
(deviation), and should pay attention to this when processing.
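Duc's guard-band idea can be sketched numerically. The sketch below (Python with NumPy/SciPy; the 8 kHz sample rate, 3700 Hz tone frequency, tone level, and filter orders are my own assumptions, not from the thread) band-limits a stand-in "speech" signal to 300-3400 Hz, adds a low-level sync tone in the 3400-4000 Hz guard band, and checks that a single-bin power estimate at the tone frequency cleanly separates tone-present from tone-absent:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 8000          # assumed telephone-band sample rate
TONE_HZ = 3700     # sync tone placed in the 3400-4000 Hz guard band

def bandlimit_speech(x, fs=FS):
    """Keep speech in 300-3400 Hz so both guard bands stay free."""
    sos = butter(6, [300, 3400], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def add_sync_tone(x, fs=FS, level=0.1):
    """Mix a low-level sync tone in above the speech band."""
    t = np.arange(len(x)) / fs
    return x + level * np.sin(2 * np.pi * TONE_HZ * t)

def tone_power(x, fs=FS, f=TONE_HZ):
    """Single-bin (Goertzel-style) power estimate at the sync frequency."""
    t = np.arange(len(x)) / fs
    c = np.sum(x * np.exp(-2j * np.pi * f * t))
    return np.abs(c) ** 2 / len(x)

# toy demo: band-limited noise stands in for speech
rng = np.random.default_rng(0)
speech = bandlimit_speech(rng.standard_normal(FS))
tx = add_sync_tone(speech)
# the receiver sees a much stronger 3700 Hz bin when the marker is present
assert tone_power(tx) > 10 * tone_power(speech)
```

The deviation Duc mentions shows up here too: the band-limiting filter cannot remove the speech's residual energy at 3700 Hz exactly, so the detector works on a power ratio rather than an exact value.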
Looking forward to hearing your comments.
Have a nice weekend, all of you.
Reply by Jeff Brower●December 3, 2009
Duc-
> The application I wanted to describe is like this: you and I want to
> communicate securely through the public telephone service. We don't want
> anyone else to be able to eavesdrop on what we exchange, and of course we
> don't want to use any special service from the telephone providers; we just
> design two devices (phones), like the normal ones but with some added
> features that help us do that.
>
> About voice quality: in the first way I described, the sync tone (signaling)
> is sent only at the starting time, for synchronization purposes; after that
> only voice is sent and the tone is removed by the device, so voice quality
> is fine, but it comes with a bit of delay.
What purpose is served by a "one time" sync tone? Nadav wants to do
frame-by-frame synchronization. That would mean a sync tone every 10 to
50 msec or so.
> For the second method, if we mix speech and sync tone at a given SNR, we can
> recover the speech by subtracting the sync tone at the receive side, and the
> quality is not much less than the original, I believe.
Sorry, but voice quality will be significantly reduced. If your method were
possible then it would have already been used a long time ago.
> The audio watermarking technique is a new concept to me; it seems
> interesting and I have just taken a look. But it sounds suitable only for
> digital audio media like audio files. For transmitting a signal over
> something like a telephone line, I guess it will have some problems, like
> transmission quality and noise; and for the spread-spectrum method, how do
> we get the reference (original speech) needed to extract the watermark data
> from the processed signal?
Yes, I agree that it would be difficult to implement watermarking reliably for
real-time telephony communications. But the main idea -- "hiding" data inside
the main audio signal via spread-spectrum, low-bitrate communication -- might
be something that Nadav can use. He doesn't need to embed much, just a "1 or
0" marker, which he could toggle every frame, and then exclusive-OR successive
frames until he finds the precise location in each frame that consistently
toggles.
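Jeff's toggle-and-XOR idea is easiest to see on plain bit frames. The sketch below uses a hypothetical frame size and marker position (a real system would first have to recover these bits from the audio via the spread-spectrum layer): the embedder toggles one marker bit every frame, and the receiver XORs successive frames; the marker is among the bit positions that flip in every pair, and false candidates die out as more frames accumulate:

```python
import numpy as np

FRAME = 64        # hypothetical number of bits recovered per audio frame
N_FRAMES = 16
MARKER = 37       # bit position the embedder secretly toggles

rng = np.random.default_rng(1)

def make_stream():
    """Random payload bits, plus one marker bit that toggles every frame."""
    frames = rng.integers(0, 2, size=(N_FRAMES, FRAME))
    for i in range(N_FRAMES):
        frames[i, MARKER] = i % 2          # deterministic toggle
    return frames

def find_marker(frames):
    """XOR successive frames; only positions that flip in EVERY pair survive.
    A random payload bit survives k pairs with probability 2**-k, so false
    candidates vanish quickly as frames accumulate."""
    diffs = frames[:-1] ^ frames[1:]
    return np.flatnonzero(np.all(diffs == 1, axis=0))

frames = make_stream()
candidates = find_marker(frames)
assert MARKER in candidates   # the true marker always survives
```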
Reply by Duc Nguyen Anh●December 3, 2009
Jeff Brower,
The application I wanted to describe is like this: you and I want to
communicate securely through the public telephone service. We don't want
anyone else to be able to eavesdrop on what we exchange, and of course we
don't want to use any special service from the telephone providers; we just
design two devices (phones), like the normal ones but with some added features
that help us do that.
About voice quality: in the first way I described, the sync tone (signaling)
is sent only at the starting time, for synchronization purposes; after that
only voice is sent and the tone is removed by the device, so voice quality is
fine, but it comes with a bit of delay. For the second method, if we mix
speech and sync tone at a given SNR, we can recover the speech by subtracting
the sync tone at the receive side, and the quality is not much less than the
original, I believe.
The audio watermarking technique is a new concept to me; it seems interesting
and I have just taken a look. But it sounds suitable only for digital audio
media like audio files. For transmitting a signal over something like a
telephone line, I guess it will have some problems, like transmission quality
and noise; and for the spread-spectrum method, how do we get the reference
(original speech) needed to extract the watermark data from the processed
signal?
I hope I can get some of your comments; it is very nice to discuss with you.
--- On Thu, 12/3/09, Jeff Brower wrote:
From: Jeff Brower
Subject: Re: [audiodsp] Audio signal sync
To: "Duc Nguyen Anh"
Cc: a...
Date: Thursday, December 3, 2009, 5:52 AM
Duc-
> First, thanks Jeff Brower very much for your clear explanation in answer to
> my earlier question about why we have to use windows for signal processing.
>
> For Nadav Haklai's question, I think it makes sense in some cases, for
> example when we want to encrypt the telephone voice so that other people
> cannot listen in. In my experience the way to do that is much as Nadav
> described: we make some changes (the result of course still lies in the
> voice band), and at the receiver side we undo the changes to recover the
> original speech.
>
> Based on the approach Jeff gave, I believe we can do that without affecting
> the voice quality. We still use a tone in the voice band, which for
> telephone is typically 0-4 kHz. There are two ways:
> 1) We first send only the sync tone for a certain interval of time, and
> after that only the voice. At the receiver side, based on the received
> signal and some kind of filter, it is easy to know whether the incoming
> signal is sync tone or voice (based on the spectrum, for example). If it is
> the sync tone, recover the tone (with a VCO, for example) and get
> time-synchronized with the sender.
> 2) We send both the sync tone and the voice, with a known SNR between them.
> Assuming the transmission line is good enough, we can recover the original
> speech and extract the tone by spectral subtraction (the simplest approach),
> and then do the same as in 1).
>
> The above is what I think; if there is anything wrong, please feel free to
> tell me.
These methods are not a good idea to implement, for the same reason that
telephone companies never combine signaling and voice at the same time: voice
quality is unacceptable. Even going the other way -- adding voice to
signaling -- is a notoriously difficult problem (you can search on "talk off"
to study this).
To convey some amount of non-speech data in-band, you might look at audio
watermarking. Watermarking techniques "spectrally spread" small amounts of
data (such as copyright info) and mix it with the audio. The idea is that the
data is added as very low-level white noise and is (hopefully) imperceptible
to listeners. But I don't think the time precision of this method would be
good enough for the exact frame synchronization that Nadav needs. Also,
watermarking would significantly increase the DSP resources required (MIPS and
memory).
-Jeff
> --- On Tue, 12/1/09, Jeff Brower wrote:
> [earlier quoted messages trimmed; Jeff's December 1 reply appears in full later in the thread]
_____________________________________
Reply by Jeff Brower●December 3, 2009
Duc-
> First, thanks Jeff Brower very much for your clear explanation in answer to
> my earlier question about why we have to use windows for signal processing.
>
> For Nadav Haklai's question, I think it makes sense in some cases, for
> example when we want to encrypt the telephone voice so that other people
> cannot listen in. In my experience the way to do that is much as Nadav
> described: we make some changes (the result of course still lies in the
> voice band), and at the receiver side we undo the changes to recover the
> original speech.
>
> Based on the approach Jeff gave, I believe we can do that without affecting
> the voice quality. We still use a tone in the voice band, which for
> telephone is typically 0-4 kHz. There are two ways:
> 1) We first send only the sync tone for a certain interval of time, and
> after that only the voice. At the receiver side, based on the received
> signal and some kind of filter, it is easy to know whether the incoming
> signal is sync tone or voice (based on the spectrum, for example). If it is
> the sync tone, recover the tone (with a VCO, for example) and get
> time-synchronized with the sender.
> 2) We send both the sync tone and the voice, with a known SNR between them.
> Assuming the transmission line is good enough, we can recover the original
> speech and extract the tone by spectral subtraction (the simplest approach),
> and then do the same as in 1).
>
> The above is what I think; if there is anything wrong, please feel free to
> tell me.
These methods are not a good idea to implement, for the same reason that
telephone companies never combine signaling and voice at the same time: voice
quality is unacceptable. Even going the other way -- adding voice to
signaling -- is a notoriously difficult problem (you can search on "talk off"
to study this).
To convey some amount of non-speech data in-band, you might look at audio
watermarking. Watermarking techniques "spectrally spread" small amounts of
data (such as copyright info) and mix it with the audio. The idea is that the
data is added as very low-level white noise and is (hopefully) imperceptible
to listeners. But I don't think the time precision of this method would be
good enough for the exact frame synchronization that Nadav needs. Also,
watermarking would significantly increase the DSP resources required (MIPS and
memory).
-Jeff
> --- On Tue, 12/1/09, Jeff Brower wrote:
> [earlier quoted messages trimmed; Jeff's December 1 reply appears in full later in the thread]
Reply by Duc Nguyen Anh●December 3, 2009
First, thanks Jeff Brower very much for your clear explanation in answer to my
earlier question about why we have to use windows for signal processing.
For Nadav Haklai's question, I think it makes sense in some cases, for example
when we want to encrypt the telephone voice so that other people cannot listen
in. In my experience the way to do that is much as Nadav described: we make
some changes (the result of course still lies in the voice band), and at the
receiver side we undo the changes to recover the original speech.
Based on the approach Jeff gave, I believe we can do that without affecting
the voice quality. We still use a tone in the voice band, which for telephone
is typically 0-4 kHz. There are two ways:
1) We first send only the sync tone for a certain interval of time, and after
that only the voice. At the receiver side, based on the received signal and
some kind of filter, it is easy to know whether the incoming signal is sync
tone or voice (based on the spectrum, for example). If it is the sync tone,
recover the tone (with a VCO, for example) and get time-synchronized with the
sender.
2) We send both the sync tone and the voice, with a known SNR between them.
Assuming the transmission line is good enough, we can recover the original
speech and extract the tone by spectral subtraction (the simplest approach),
and then do the same as in 1).
The above is what I think; if there is anything wrong, please feel free to
tell me.
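Duc's second method, subtracting a sync tone of known frequency at the receiver, can be sketched as a least-squares fit (the 500 Hz tone frequency and the signal levels are assumptions for illustration only). Note the residual: the fit also removes whatever speech energy sits at the tone frequency, which is exactly the kind of deviation one has to accept:

```python
import numpy as np

FS = 8000
F_SYNC = 500.0    # assumed sync-tone frequency, known to both ends

def remove_known_tone(x, f=F_SYNC, fs=FS):
    """Least-squares fit of a sin/cos pair at the known frequency, then
    subtract the fitted tone (the receiver need not know its phase)."""
    t = np.arange(len(x)) / fs
    basis = np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t)])
    coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return x - basis @ coef

rng = np.random.default_rng(2)
speech = rng.standard_normal(FS)                       # stand-in for speech
phase = 1.2                                            # unknown to receiver
tone = 0.5 * np.sin(2*np.pi*F_SYNC*np.arange(FS)/FS + phase)
cleaned = remove_known_tone(speech + tone)
# the tone is gone, but the fit also strips the speech's own 500 Hz
# component -- a small unavoidable distortion
err = np.mean((cleaned - speech)**2) / np.mean(speech**2)
assert err < 0.01
```

Over a one-second window the residual error is tiny, but over short frames and a real (time-varying) channel the fit degrades, which is in line with Jeff's objection below about mixing signaling with voice.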
--- On Tue, 12/1/09, Jeff Brower wrote:
[quoted message trimmed; it repeats Jeff's December 1 reply, which follows in full below]
_____________________________________
Reply by Jeff Brower●December 1, 2009
Nadav-
> That is correct. For now, the project is intended for some sort of
> demonstration.
> So the telephone line noise is irrelevant (I'll only be using a 1-meter
> standard audio line).
> Although, I was hoping to find a way to sync those two systems without any
> other link besides the audio line.
> I guessed that syncing framed audio signals between systems is a common
> problem and there must be a simple solution (using a special sync frame
> which can be played and detected, adding FSK to the unused bandwidth, etc.)
I don't want to sound negative, as you're trying hard and seem to be
persistent. But we have to apply rational thinking, not wishful thinking. I
would ask you this: if you think this is a common method, then what audio
systems can you name that do this?
The basic problem you're facing: if you modify your audio signal "in band",
then you will significantly affect the quality of what people hear. Whatever
modification you would make needs substantial energy to be detected reliably
and accurately enough to provide precise frame synchronization -- listeners
*will* hear it. But you haven't specified what quality you need and what
degradation you're willing to accept.
RF communication systems often require precise synchronization. RF methods
typically use bandwidth outside of the audio or data signal. For example, if
you sample at 16 kHz but only transmit telephony-quality audio (i.e. speech)
limited to 4 kHz bandwidth, then you might use the upper 4 kHz for "out of
band" information. You could send a short 7 kHz tone burst every frame,
detect it, and use its center, or peak power, as the frame marker. In your
receive-side DSP code you would apply a sharp LPF to remove this from the
audio signal.
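Jeff's out-of-band scheme can be sketched end to end. In the sketch below, the 16 kHz sample rate, 7 kHz marker, and 4 kHz LPF come from his description; the 20 ms frame length, 2 ms burst length, tone level, and filter orders are my own assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000         # sample rate from Jeff's example
FRAME = 320        # assumed 20 ms frames
BURST = 32         # assumed 2 ms marker burst
F_MARK = 7000.0    # out-of-band marker tone

def add_markers(speech):
    """Overlay a short 7 kHz burst at the start of every frame."""
    out = speech.copy()
    burst = 0.3 * np.sin(2 * np.pi * F_MARK * np.arange(BURST) / FS)
    for start in range(0, len(out) - BURST, FRAME):
        out[start:start + BURST] += burst
    return out

def find_frame_starts(x):
    """Band-pass around 7 kHz, then pick the energy peak in each
    frame-long window as the marker location."""
    sos = butter(4, [6500, 7500], btype="bandpass", fs=FS, output="sos")
    env2 = sosfiltfilt(sos, x) ** 2
    energy = np.convolve(env2, np.ones(BURST), mode="same")
    starts = []
    for i in range(0, len(x) - FRAME + 1, FRAME):
        peak = i + int(np.argmax(energy[i:i + FRAME]))
        starts.append(peak - BURST // 2)   # window center -> burst start
    return starts

def strip_markers(x):
    """Sharp LPF at 4 kHz removes the marker before playback."""
    sos = butter(8, 4000, btype="lowpass", fs=FS, output="sos")
    return sosfiltfilt(sos, x)

# demo: band-limited noise stands in for 4 kHz speech
rng = np.random.default_rng(3)
lp = butter(8, 3400, btype="lowpass", fs=FS, output="sos")
speech = sosfiltfilt(lp, rng.standard_normal(FS))
tx = add_markers(speech)
starts = find_frame_starts(tx)
# every detected start lands close to a true frame boundary
assert all(min(s % FRAME, FRAME - s % FRAME) <= BURST for s in starts)
```

Because the speech is confined below 4 kHz, almost nothing but the burst survives the 6.5-7.5 kHz band-pass, which is what makes the peak picking reliable; over a real line, channel roll-off near 8 kHz would force a lower marker frequency or higher burst level.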
-Jeff
> On Tue, Dec 1, 2009 at 12:39 AM, Jaime Andres Aranguren Cardona
> <j...@yahoo.com> wrote:
>
>> Hi all,
>>
>> Although Jeff's point is a valid one, maybe the project is a kind of demo
>> and needs some sort of link (say, a DSP's serial port) to demonstrate that
>> the encoding and decoding are working on the DSP sides, and the link is out
>> of the question. The issue here is that the link is what is causing the
>> trouble. In other words, the problem is to get two DSPs to talk to each
>> other through some link; that link can be a serial port (of the kind where
>> people usually attach ADCs, DACs, codecs), or an SPI peripheral on the
>> DSPs, or TWI, UARTs, whatever. And if this is the case, usually the HRM
>> (Hardware Reference Manual) is all that the OP (original poster) needs.
>>
>> Maybe I am 100% lost here, but this is how I interpreted the question.
>>
>> Regards,
>> Jaime Andrés Aranguren Cardona
>> j...@ieee.org
>> j...@computer.org
>> ------------------------------
>> *From:* Jeff Brower
>> *To:* Nadav
>> *CC:* a...
>> *Sent:* Monday, 30 November 2009, 19:00:55
>> *Subject:* Re: [audiodsp] Audio signal sync
>>
>> Nadav-
>>
>> > I'm developing a real-time audio processing project which
>> > consists of 2 separate DSP systems.
>> > One system is the encoder and the other the decoder.
>> > Between the systems there is a regular audio line.
>> > The encoder samples the input audio data (8/16/24 bits,
>> > 16 kHz sampling rate), separates it into frames, and
>> > processes each frame individually. The processed data is
>> > played by the D/A to the audio line and is the input to
>> > the decoder system.
>> > The decoder system samples the data and decodes it.
>> >
>> > My problem is that the data is being processed in separate
>> > frames, and the decoder must know when each frame begins in
>> > order to process it correctly.
>> > How can I sync those 2 systems using only the audio signal
>> > line which carries the actual data?
>> > Is it possible to add some data bits to the data somehow?
>> > Can I use special frames to sync?
>>
>> Something is not making sense here. Normally two systems doing audio
>> encode/decode communicate via digital transport
>> -- that's the whole point of audio compression, right? It should look like
>> this:
>>
>> _______ Dig I/O _______
>> Audio <--> | codec | <-------> | codec | <--> Audio
>> |_______| |_______|
>>
>> where digital I/O is network (e.g. VoIP), RF (e.g. GSM, WiFi, satellite,
>> etc), serial port (e.g. RS-422), etc.
>>
>> If you're connecting two systems via audio, then why do you need
>> compression?
>>
>> -Jeff
Reply by Nadav Haklai●December 1, 2009
Yes,
I think that's what I will use for the sync.
I will try to add sync data in the unused BW.
Thank you all for the assistance.
Nadav
On Tue, Dec 1, 2009 at 7:52 PM, Jeff Brower wrote:
> Nadav-
>
> > That is correct. For now, the project is intended for some sort of
> > demonstration.
> > So the telephone line noise is irrelevant (i'll be only using 1 meter
std
> > audio line).
> > Although, i was hoping to find a way to sync those two systems without
> any
> > other link besides the audio line.
> > I guessed that syncing framed audio signals between systems is a common
> > problem and there must be a simple solution (using special sync frame
> which
> > can be played and detected, adding FSK to the unused bw etc)
>
> I don't want to sound negative, as you're trying hard and seem to
be
> persistent. But we have to apply rational
> thinking, not wishful thinking. I would ask you this: if you think this
> is a common method, then what audio systems
> can you name that do this?
>
> The basic problem you're facing: if you modify your audio signal "in
> band", then you will significantly affect the
> quality of what people hear. Whatever modification you would make needs
> substantial energy to be detected reliably
> and accurately enough to provide precise frame synchronization -- listeners
> *will* hear it. But you haven't specified
> what quality you need and what degradation you're willing to accept.
>
> RF communication systems often require precise synchronization. RF methods
> typically use bandwidth outside of the
> audio or data signal. For example, if you sample at 16 kHz but only
> transmit telephony quality audio (i.e. speech)
> limited to 4 kHz bandwidth, then you might use the upper 4 kHz for "out of
> band" information. You could send a 7 kHz
> tone short burst every frame, detect that and use its center, or peak
> power, as the frame marker. In your
> receive-side DSP code you would apply a sharp LPF to remove this from the
> audio signal.
>
> -Jeff
>
> > On Tue, Dec 1, 2009 at 12:39 AM, Jaime Andres Aranguren Cardona <
> > j...@yahoo.com> wrote:
> >
> >> Hi all,
> >>
> >> Although Jeff's point is a certain one, maybe the project is kind
of
> demo,
> >> and needs some sort of link (say a DSP's serial port) to
demonstrate
> that
> >> the encoding and decoding is working on the DSP sides, and the link is
> out
> >> of the question. The issue here is that the link is what is causing the
> >> trouble. In other words, the problem is to get to DSPs to talk to each
> other
> >> through some link, that link can be a serial port (of the kind where
> people
> >> usually attach ADCs, DACs, Codecs) or an SPI peripheral on the DSPs or
> TWI,
> >> UARTs, whatever. And if this is the case, usually the HRM (Hardware
> >> Reference Manual) is all what the OP (original poster) needs.
> >>
> >> Maybe I am 100% lost here, but this is how I interpreted the question.
> >>
> >> Regards,
> >>
> >>
> >> Jaime Andr Aranguren Cardona
> >> j...@ieee.org
> >> j...@computer.org
> >>
> >>
> >> ------------------------------
> >> *Von:* Jeff Brower
> >> *An:* Nadav
> >> *CC:* a...
> >> *Gesendet:* Montag, den 30. November 2009, 19:00:55 Uhr
> >> *Betreff:* Re: [audiodsp] Audio signal sync
> >>
> >>
> >>
> >> Nadav-
> >>
> >> > I'm developing a real-time audio processing project which
> >> > consists of 2 DSP separated systems.
> >> > One system is encoder and other decoder.
> >> > between the systems there is a regular audio line.
> >> > the encoder samples the input audio data (8/16/24 bits,
> >> > 16Khz sampling rate) separates it into frames and
> >> > process each frame individually. The processed data is
> >> > played by the D/A to the audio line and is the input to
> >> > the decoder system.
> >> > the decoder system samples the data and decode the data.
> >> >
> >> > My problem is that the data is being processed in separate
> >> > frames and the decoder must know when each frame begin in
> >> > order to process it correctly.
> >> > How can i sync those 2 system using only the audio signal
> >> > line which carries the actual data?
> >> > Is it possible some data bits to the data somehow?
> >> > Can i use special frames to sync?
> >>
> >> Something is not making sense here. Normally two systems doing audio
> >> encode/decode communicate via digital transport
> >> -- that's the whole point of audio compression, right? It should look like this:
> >>
> >>             _______   Dig I/O    _______
> >> Audio <--> | codec | <-------> | codec | <--> Audio
> >>            |_______|           |_______|
> >>
> >> where digital I/O is network (e.g. VoIP), RF (e.g. GSM, WiFi, satellite,
> >> etc), serial port (e.g. RS-422), etc.
> >>
> >> If you're connecting two systems via audio, then why do you need
> >> compression?
> >>
> >> -Jeff
Reply by John Pote●December 1, 2009
Nadav,
Since you have custom equipment at both ends of the audio line, why not
add a modem chip (or modem software) and send the audio digitally, like an
old dial-up modem? This will sort out many of your problems, frame sync
recovery etc., although you will need to compress the audio. You could
start by looking at the compression algorithms used by VoIP, many of
which are freely available.
However, you may still have problems. If your audio line is a modern
telephone system, cell phone or land line, remember that they already
digitise and compress the audio. The compression algorithms used are
designed for voice audio and often do not work well with other types
of signal. I'm told that trying to use an external old-fashioned dial-up
modem over a cell phone gives very poor performance. Your scrambled
audio may not look like normal audio to the telephone system and could
arrive at the other end with significant distortion.
Good luck with your project,
John

> --- In a..., "Jeff Brower" wrote:
> > [...]
>
> Hi,
>
> Thanks for the quick reply.
> My project deals with scrambling and de-scrambling of an audio
> signal. Therefore the system diagram looks like this:
>
>                  Audio line (telephone, cell, etc.)
>                           \       /
> Input Audio <-> Scrambler <-------> de-scrambler <-> Audio Output
>
> For now, I'm just connecting a regular audio line between the 2 systems.
> (I'm working with DSP development boards.)
> The problem is that the scrambling process works on each frame
> individually.
> That's why I need to find a way to sync those 2 separate systems
> using only the audio line.
> Some more details: I'm sampling at 16 kHz and my input/output BW is
> the regular telephone BW, about 4 kHz.
>
> Thanks
> Nadav
>
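John's suggestion above of sending the data digitally, modem-style, can be sketched as a toy two-tone FSK link. Everything here is an illustrative assumption, not a tested modem design: the 8 kHz rate, the 1 kHz / 2 kHz tone pair, and the 20 ms symbols are all chosen only so that each symbol holds a whole number of tone cycles. A real telephone-band modem would also need timing recovery, equalization, and error coding.

```python
import math

FS = 8000          # sample rate (Hz) - illustrative choice
SYM = 160          # samples per symbol (20 ms at 8 kHz)
F0, F1 = 1000.0, 2000.0   # tone pair, inside the telephone band

def modulate(bits):
    """Map each bit to one symbol of a pure tone (two-tone FSK)."""
    out = []
    for b in bits:
        f = F1 if b else F0
        for n in range(SYM):
            out.append(math.sin(2 * math.pi * f * n / FS))
    return out

def goertzel_power(block, f):
    """Squared magnitude of the DFT bin nearest frequency f (Goertzel recursion)."""
    k = round(f * len(block) / FS)
    w = 2 * math.pi * k / len(block)
    coeff = 2 * math.cos(w)
    s1 = s2 = 0.0
    for x in block:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def demodulate(samples):
    """Decide each symbol by comparing energy at the two tone frequencies."""
    bits = []
    for i in range(0, len(samples) - SYM + 1, SYM):
        block = samples[i:i + SYM]
        bits.append(1 if goertzel_power(block, F1) > goertzel_power(block, F0) else 0)
    return bits
```

On a clean line, `demodulate(modulate(bits))` recovers the original bits, because each tone lands exactly on a Goertzel bin. Over a real compressed telephone channel (John's warning above) the energy comparison would need margin, plus framing and error correction.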
Reply by Nadav Haklai●December 1, 2009
Hi,
That is correct. For now, the project is intended for some sort of
demonstration.
So the telephone line noise is irrelevant (I'll only be using a 1-meter
standard audio line).
However, I was hoping to find a way to sync those two systems without any
other link besides the audio line.
I guessed that syncing framed audio signals between systems is a common
problem and there must be a simple solution (using a special sync frame which
can be played and detected, adding FSK to the unused BW, etc.).
Thanks
Nadav
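Nadav's idea above of a special sync frame "which can be played and detected" can be sketched as a known preamble located by cross-correlation. The specific parameters below are illustrative assumptions, not a tested design: a 256-sample linear chirp from 500 Hz to 3500 Hz at his 16 kHz rate, chosen only because a chirp has a sharp autocorrelation peak and fits the telephone band.

```python
import math

FS = 16000       # Nadav's stated sampling rate
PRE_LEN = 256    # preamble length in samples (16 ms) - illustrative

def preamble():
    """A linear chirp, 500 Hz to 3500 Hz, built by accumulating phase."""
    out = []
    phase = 0.0
    for n in range(PRE_LEN):
        f = 500.0 + (3500.0 - 500.0) * n / PRE_LEN
        phase += 2 * math.pi * f / FS
        out.append(math.sin(phase))
    return out

def find_sync(signal, ref):
    """Return the offset where the cross-correlation with ref is largest."""
    best_off, best_val = 0, float('-inf')
    for off in range(len(signal) - len(ref) + 1):
        acc = sum(signal[off + i] * ref[i] for i in range(len(ref)))
        if acc > best_val:
            best_val, best_off = acc, off
    return best_off
```

The brute-force search is O(N·M); a real-time DSP implementation would use a matched filter or FFT-based correlation, and would compare the peak against a threshold rather than take a global maximum, so that frames without a preamble are not falsely detected.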
On Tue, Dec 1, 2009 at 12:39 AM, Jaime Andres Aranguren Cardona <
j...@yahoo.com> wrote:
> [...]
_____________________________________
Reply by Jaime Andres Aranguren Cardona●November 30, 2009
Hi all,
Although Jeff's point is a valid one, maybe the project is a kind of demo,
and needs some sort of link (say a DSP's serial port) to demonstrate that
the encoding and decoding is working on the DSP sides, and the link is out of
the question. The issue here is that the link is what is causing the trouble. In
other words, the problem is to get the DSPs to talk to each other through some
link; that link can be a serial port (of the kind where people usually attach
ADCs, DACs, codecs), an SPI peripheral on the DSPs, TWI, UARTs, whatever.
And if this is the case, usually the HRM (Hardware Reference Manual) is all that
the OP (original poster) needs.
Maybe I am 100% lost here, but this is how I interpreted the question.
________________________________
From: Jeff Brower
To: Nadav
CC: a...
Sent: Monday, 30 November 2009, 19:00:55
Subject: Re: [audiodsp] Audio signal sync
Nadav-
> I'm developing a real-time audio processing project which
> consists of 2 separate DSP systems.
> One system is the encoder and the other the decoder;
> between the systems there is a regular audio line.
> The encoder samples the input audio data (8/16/24 bits,
> 16 kHz sampling rate), separates it into frames, and
> processes each frame individually. The processed data is
> played by the D/A onto the audio line and is the input to
> the decoder system.
> The decoder system samples the data and decodes it.
>
> My problem is that the data is being processed in separate
> frames and the decoder must know when each frame begins in
> order to process it correctly.
> How can I sync those 2 systems using only the audio signal
> line which carries the actual data?
> Is it possible to add some data bits to the data somehow?
> Can I use special frames to sync?
Something is not making sense here. Normally two systems doing audio
encode/decode communicate via digital transport
-- that's the whole point of audio compression, right? It should look like this: