DSPRelated.com
Forums

G.729 with different sampling rates?

Started by Jack October 4, 2004
Jack wrote:

>> It's what's used in digital telephony. That's what the standard covers.
>>
>> Jerry

> I realize that if I change the sampling rate I won't strictly be
> adhering to the standard any more. But when I write both the encoder
> and the decoder, it doesn't seem like such an issue (at least in my
> case). If I increase the rate from 8 kHz (to, say, 8.1 or 8.2) will it
> sound at least as good as the standard? Or is the algorithm somehow
> "optimized" for that sampling rate so that it actually sounds worse at
> a slightly higher rate?
If you vary the sampling rate slightly, everything should be fine, but
what sound card samples at 8200 Hz? Perhaps if you explain WHY you want
to change the sample rate there is another way to do it. However, if you
doubled the sampling rate, the tone would change greatly for some people
due to the filters in the encoder.

--
Phil Frisbie, Jr.
Hawk Software
http://www.hawksoft.com
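Phil's point about the encoder filters can be made concrete. A codec's internal filters are specified as fixed coefficients, which pins their response to *normalized* frequency (cycles per sample); the physical frequencies they act on therefore scale linearly with whatever sample rate you actually feed in. A minimal Python sketch of that scaling (the 140 Hz cutoff is illustrative, roughly the preprocessing high-pass in G.729 at its 8 kHz design rate):

```python
# A fixed-coefficient filter has a fixed *normalized* cutoff, so its
# physical cutoff moves with the input sample rate.
# 140 Hz is an illustrative design-rate cutoff; the scaling is the point.

DESIGN_FS = 8000.0          # sample rate the coefficients were designed for
DESIGN_CUTOFF_HZ = 140.0    # example physical cutoff at the design rate

def physical_cutoff(fs_actual):
    """Physical cutoff (Hz) of the fixed filter when the input is
    sampled at fs_actual instead of the design rate."""
    normalized = DESIGN_CUTOFF_HZ / DESIGN_FS   # cycles per sample
    return normalized * fs_actual

for fs in (8000, 8200, 16000):
    print(fs, physical_cutoff(fs))   # 140.0, 143.5, 280.0
```

At 8200 Hz the shift is only 2.5%, which is why a small rate change is harmless; at 16 kHz every filter corner lands a full octave high, which matches Phil's warning about doubling.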
Jerry Avins wrote:

> Jon Harris wrote:
>
> ...
>
>> I don't think the original researchers were so "dumb" as to not
>> realize that speech had higher frequency components. Given the
>> technology limits of the time, they chose the sample rate that
>> allowed for "intelligible" speech (not perfect speech) at a
>> reasonable cost. In other words, the criterion for choosing the
>> frequency response was "what is the minimum frequency response that
>> is still intelligible in normal speech" vs. "what is the minimum
>> frequency response for full fidelity speech". That's what
>> engineering is all about: trade-offs!
>
> Originally, there was no sample rate involved. Hybrids had to be
> terminated with dummy lines that closely matched the real line
> impedance over the bandwidth of intended use. Earpieces and carbon
> microphones had to cover the band. In all respects, bandwidth cost
> money. This is also easy to see with analog frequency-division
> multiplexing. The actual guaranteed analog high frequency was 3600 Hz,
> if I remember correctly, but actual response was usually better
> starting around 1950. The 8 kHz sample rate was adequate to preserve
> the quality of the analog service.
>
> Jerry
There was no sample rate, but early on there were FDM stacks. That
demanded the same choices about permitted bandwidth, and that is where
the choices we live with today were set in (somewhat flaky) concrete. On
simple local-loop analogue lines, saying the bandwidth is 3600 Hz is
more a quality-of-service issue than a hard engineering one. In 99% of
cases the bandwidth there is pretty much arbitrary.

Regards,
Steve
Can't think of a reason why sampling at a different rate wouldn't work
with the algorithm. The only thing I can think of which might be
adversely affected is the VQ part. The codebook is trained with speech
at 10 ms per frame (at 8 kHz). So maybe putting in a 5 ms frame (sampled
at 16 kHz) would make a mess of the codebook.

All this is just a guess. I've never really tried anything like this.


Regards
Piyush
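Piyush's frame-size concern is simple arithmetic: G.729 operates on frames of a fixed number of samples (80 samples, i.e. 10 ms at the 8 kHz design rate), so feeding those same frames with audio captured at another rate changes how much real time each frame spans. A quick sketch:

```python
# G.729 frames are 80 samples (10 ms at the 8 kHz design rate).  The
# real-time span of a frame depends on the actual capture rate.

FRAME_SAMPLES = 80  # G.729 frame length in samples

def frame_duration_ms(fs_hz):
    """Real-time span (ms) of one 80-sample frame at sample rate fs_hz."""
    return 1000.0 * FRAME_SAMPLES / fs_hz

print(frame_duration_ms(8000))   # 10.0 ms: what the codebooks were trained on
print(frame_duration_ms(8200))   # ~9.76 ms: close enough that the VQ still
                                 # sees statistics like its training data
print(frame_duration_ms(16000))  # 5.0 ms: half-length frames, so the
                                 # codebook's training assumptions break
```

This is consistent with the answers above: a few percent off 8 kHz leaves the frames nearly unchanged, while doubling the rate halves the speech content of every frame the quantizer sees.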
