
About asynchronous sample rate conversion

Started by nagual.hsu March 15, 2006
nagual.hsu wrote:
> If you don't mind, can you give me the code? Or what referenced books
> you would suggest that I should study in order to know Interpolated
> polyphase?
The most accessible multirate filter tutorial with which I am familiar can
be found at dspguru.com. Or more specifically, at
http://www.dspguru.com/info/faqs/mrfaq.htm
It comes with source code and explanations.

(Note to Grant: Welcome back to comp.dsp!)

--
Jim Thomas
Principal Applications Engineer
Bittware, Inc
jthomas@bittware.com
http://www.bittware.com
(603) 226-0404 x536

Any sufficiently advanced technology is indistinguishable from magic.
- Arthur C. Clarke
Jim Thomas wrote:
> nagual.hsu wrote:
>> If you don't mind, can you give me the code? Or what referenced books
>> you would suggest that I should study in order to know Interpolated
>> polyphase?
>
> The most accessible multirate filter tutorial with which I am familiar
> can be found at dspguru.com. Or more specifically, at
> http://www.dspguru.com/info/faqs/mrfaq.htm
That is for one type of multistage implementation which works for integer
ratios. For the basic equations for reconstructing any sample, look at
rbj's previous postings in this group, "How to resample fast":

http://groups.google.com/group/comp.dsp/msg/e9b6488aef1e2580

No ratio required; it works for any arbitrary point given a known
bandlimit and enough surrounding sample points for the S/N desired. The
pre-computed tables mentioned are really only an optimization for
implementation efficiency; if you don't need efficiency and are using a
simple window function, you can just compute all the FIR coefficients
directly as needed.

IMHO. YMMV.

--
rhn A.T nicholson d.0.t C-o-M
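To make "compute all the FIR coefficients directly as needed" concrete,
here is a minimal sketch in C. It is my illustration, not code from the
linked posting; the function name, tap count, and Hann window are
arbitrary choices. It reconstructs one bandlimited output sample at an
arbitrary fractional position t (in units of input samples):

    /* Minimal on-the-fly windowed-sinc interpolation (illustrative
       sketch).  Reconstructs x(t) at fractional position t from the
       2*HALF_TAPS surrounding samples, computing each tap as needed.
       A Hann window is assumed; other windows trade stopband
       rejection for transition width. */
    #include <math.h>
    #include <stddef.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define HALF_TAPS 16   /* taps per side; more taps -> better S/N */

    double sinc_interp(const double *x, size_t len, double t)
    {
        long   center = (long)floor(t);
        double sum = 0.0;

        for (long n = center - HALF_TAPS + 1; n <= center + HALF_TAPS; n++) {
            if (n < 0 || (size_t)n >= len)
                continue;                     /* treat off-end samples as 0 */
            double d = t - (double)n;         /* distance from sample n     */
            double s = (d == 0.0) ? 1.0
                     : sin(M_PI * d) / (M_PI * d);              /* sinc     */
            double w = 0.5 * (1.0 + cos(M_PI * d / HALF_TAPS)); /* Hann     */
            sum += x[n] * s * w;
        }
        return sum;
    }

For ASRC you would call this once per output sample, stepping t by the
(slowly varying) ratio of input to output rate; the pre-computed phase
tables mentioned above just cache these tap values on a fine grid.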
in article Fo2dnai6sN5NwITZRVn-rg@giganews.com, nagual.hsu at
nagual.hsu@gmail.com wrote on 03/16/2006 07:45:

>> robert bristow-johnson wrote:
i didn't say this (someone else did):
>> Since the capture and playback use two different crystals, the OP
>> does need to do ASRC to match durations in real-time. It sounds
>> like the OP doesn't want to drop samples, but instead something
>> like interpolate N+-1 evenly spaced points from a 2N point buffer,
>> with the +-1 (or +0) adjusted in real-time as needed. This can be
>> done by windowed sinc reconstruction. Interpolated polyphase is
>> one optimization possible for performance and power savings;
>> but I've done it by calculating windowed sinc taps on the fly with
>> just a few lines of code and no phase table (a fast PC can calculate
>> several transcendental math functions faster than a VAX could do
>> a single memory load from an array).
>
> In fact, I want to record multiple captured audio streams and play one
> of the captured streams at the same time at the sampling rate of 8 kHz.
> Due to my software architecture, I have to do SRC to decimate the
> captured audio samples to 8 kHz. Then I share one of the 8 kHz audio
> streams for both recording and playback. Unlike the "near-realtime"
> playback, recording(s) do not need ASRC.
if it's not ASRC, then you cannot have two independent clocks. but i thought that having two independent clocks was your premise?
> If you don't mind, can you give me the code?
Erik de Castro Lopo <nospam@mega-nerd.com> (he claims the email is valid) has "Secret Rabbit Code" (abbreviated SRC) that does SRC.
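For the record, Secret Rabbit Code ships as the C library libsamplerate.
A minimal sketch of its one-shot API, assuming a fixed 16k-to-8k
conversion (the buffer sizes and converter choice are just illustrative):

    /* Sketch of libsamplerate's one-shot API (Secret Rabbit Code).
       Converts a 16 kS/s float buffer down to 8 kS/s.
       Build with: cc src_demo.c -lsamplerate -o src_demo */
    #include <samplerate.h>
    #include <stdio.h>

    #define IN_FRAMES  16000   /* one second at 16 kS/s (example) */
    #define OUT_FRAMES  8000   /* one second at  8 kS/s (example) */

    int main(void)
    {
        static float in[IN_FRAMES];    /* fill with captured audio */
        static float out[OUT_FRAMES];

        SRC_DATA d = {0};
        d.data_in       = in;
        d.data_out      = out;
        d.input_frames  = IN_FRAMES;
        d.output_frames = OUT_FRAMES;
        d.src_ratio     = 8000.0 / 16000.0;  /* output rate / input rate */

        int err = src_simple(&d, SRC_SINC_MEDIUM_QUALITY, 1 /* mono */);
        if (err)
            fprintf(stderr, "SRC failed: %s\n", src_strerror(err));
        else
            printf("generated %ld output frames\n", d.output_frames_gen);
        return err;
    }

For a genuinely asynchronous (drifting) ratio you would use the streaming
src_process()/src_set_ratio() interface instead of the one-shot call.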
> Or what referenced books you would suggest that I should study in order
> to know Interpolated polyphase?
is what you're trying to figure out the basic Sample Rate Conversion, or
intersample bandlimited interpolation?
> Thanks. :)
FWIW.

--
r b-j
rbj@audioimagination.com

"Imagination is more important than knowledge."
nagual.hsu wrote:

>> Better: always use the latest available sample from the 16k stream for
>> the output. This avoids any additional ASRC effort and gives an almost
>> perfect result.
>> Side effects: low frequency distortions depending on the clock
>> relations, and the usual 8 kHz tone on the output. Both might be
>> removed with a bandpass filter on the output.
>
> This sounds like a good method, though this is not my software
> architecture (recording decimated captured streams at 8 kHz while
> "near-realtime" playing one of the streams).
>
> If I understand you correctly, you probably mean this method would
> produce some audio frequencies higher than 4 kHz.
If you convert a signal from the digital to the analog realm using a DAC
with a constant sample rate, this will produce a tone at the frequency of
that sample rate: conversion at 8 kS/s will thus produce a tone at 8 kHz.
And, yes, this will be audible, and very annoying.

Your audio card may have such a filter built in. If not, you'll have to
apply it __after__ the DAC, in the analog realm. You cannot filter the
signal before the DAC, because it's the DAC itself which produces the
distortion.

If you are able to listen to the unfiltered signal, do it - it's very
impressive. With a sample rate of 8 kS/s, you should be able to hear the
tone; with the commonly used higher rates of more than 20 kS/s, you won't.
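The tone Bernhard describes can be viewed as the spectral images a
zero-order-hold DAC leaves around multiples of the sample rate; this
framing and the numbers below are my illustration, not from the thread.
A small sketch of where the images of a single tone land, and how the
ZOH sinc rolloff weights them:

    /* Sketch: images of a tone at f, played through a zero-order-hold
       DAC at rate fs, appear at n*fs - f and n*fs + f, weighted by the
       ZOH rolloff |sinc(x/fs)|.  The 1 kHz input tone is arbitrary. */
    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    int main(void)
    {
        const double fs = 8000.0;   /* DAC rate from the thread: 8 kS/s */
        const double f  = 1000.0;   /* example input tone (hypothetical) */

        for (int n = 1; n <= 3; n++) {
            double img[2] = { n * fs - f, n * fs + f };
            for (int k = 0; k < 2; k++) {
                double a = M_PI * img[k] / fs;
                printf("image at %5.0f Hz, ZOH gain %.3f\n",
                       img[k], fabs(sin(a) / a));
            }
        }
        return 0;
    }

None of these images fall below 4 kHz, which is consistent with
Bernhard's point that only an analog filter after the DAC, not a digital
one before it, can remove them.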
> Is this the "low frequency distortions" you mentioned above?
Imagine a rectangular signal at 4 kHz entering your ADC (@ 8 kS/s).
You'll get 0-1-0-1-0-1-0-1-0-1-0-1-...

Now, if you output this stream with a synchronous DAC, it will perfectly
reproduce the stream and provide 0-1-0-1-0-1-0-1-0-1-0-1-... (I'll adjust
this assumption later; take it for now.)

Now, let's assume that your DAC runs on a slightly faster clock. It will
reproduce the signal as 0-1-0-0-1-0-1-0-1-0-1-0-1-1-... because, whenever
a sample is required but none is yet available from the ADC stream, it
will just repeat the current value.

Since this happens regularly after a certain number of samples, you'll
experience some distortion. The fundamental frequency of this distortion
is related to the difference of the two clocks. If your ADC is clocked at
8.000 kS/s and your DAC at 8.001 kS/s, the difference is 1 Hz, and the
distortion will have a fundamental frequency of 1 Hz (with additional
components at the overtone frequencies). Since these overtones reach into
your pass band, you'll always have some distortion if your clocks differ.
You can improve the behaviour with digital filters; however, it is not
possible to completely remove these distortions.
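A tiny simulation of this repeat-a-sample effect (my illustration, not
Bernhard's; the drift is wildly exaggerated so the doubling shows up
within a few printed samples):

    /* Sketch: a DAC running faster than the ADC replays the 0-1-0-1
       stream.  It advances through the input by less than one sample
       per output tick, so every so often the truncation lands on the
       same input sample twice and the value repeats, as described
       above.  Drift is exaggerated: the DAC is 1/13th fast, so one
       sample doubles every 14 outputs. */
    #include <stdio.h>

    int main(void)
    {
        const double step = 13.0 / 14.0; /* input samples per output tick */
        double pos = 0.0;                /* read position in ADC stream   */

        for (int i = 0; i < 28; i++) {
            int idx = (int)pos;          /* truncate = hold last sample */
            printf("%d", idx % 2);       /* the 0-1-0-1 ADC stream      */
            pos += step;
        }
        printf("...\n");
        return 0;
    }

The doubling recurs at the clock-difference rate, which is what puts the
low-frequency distortion (and its overtones) into the pass band.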
> Does "amplitude" sometimes mean sound volume?
Yes.
> And what is "phase response"?
Look again at the above rectangular signal 0-1-0-1-0-1-0-1-0-1-0-1-0-1-...
The output will be reproduced as 0-1-0-0-1-0-1-0-1-0-1-0-1-1-...

Draw both signals on a sheet of paper (with respect to their individual
clocks) and try to map them. You'll find that they are identical, except
that a transition is delayed on the second. This "wrong" transition is a
phase error; a simplified translation for "phase" is the position of an
event in time.

Since the ear is very sensitive to phase errors, especially if both ears
hear them differently, this may or may not be an important issue. If it
is, compare the filter types, and you'll find that they behave differently
with respect to phase. Simply spoken: filters with both good amplitude
response and good phase response require more computation time, higher
precision, more memory, etc.
> Since the playback devices are the audio chips on the motherboards of
> computers, I can't have the tone suppressed by the hardware.
In this case, I don't quite understand why you talk of 8 kS/s. I guess
that the sound card supports a sample rate of at least 48 kS/s. If it is
a sigma-delta DAC, it will process the conversion at even higher rates
internally, and it will certainly have a filter after the DAC which is
good enough for typical audio purposes.

However, if you run it at a sample rate of 8 kS/s, you'll end up with a
bad solution: the analog playback channel is probably able to perfectly
transport frequencies up to 20 kHz, and would not help you to suppress
the distortions which you generate at 8 kHz. Instead, it would ideally
pass them to the speakers.

I'd propose that you start with a thorough specification of your
requirements:

available hardware:
- frequency precision of the ADCs and DACs
- maximum frequency deviation of possible pairs of ADC and DAC
- amplitude resolution (bits) of the ADCs and DACs

source signal:
- where does it come from (purity, noise content)
- which are the important properties of the signal
- which properties can be neglected (e.g. for speech recognition
  purposes, background music is unwanted noise)

playback signal:
- what's its purpose
- required purity (acceptable amplitude/phase distortions)
- how important is the phase response relative to the original signal
  (filters insert considerable delay)
- which properties can be neglected (e.g. for peak monitoring purposes
  the behaviour at low levels doesn't matter, nor does phase)

After such a thorough analysis, it will be much easier to decide which
approach will be the best - and it might help you to avoid a lot of
unnecessary work.

Sorry for this lengthy response; I hope that it helps you...

Bernhard