The spectrogram is a basic tool in audio spectral analysis and other fields. It has been applied extensively in speech analysis [18,64]. The spectrogram can be defined as an intensity plot (usually on a log scale, such as dB) of the Short-Time Fourier Transform (STFT) magnitude. The STFT is simply a sequence of FFTs of windowed data segments, where the windows are usually allowed to overlap in time, typically by 25-50% [3,70]. It is an important representation of audio data because human hearing is based on a kind of real-time spectrogram encoded by the cochlea of the inner ear. The spectrogram has been used extensively in the field of computer music as a guide during the development of sound synthesis algorithms. When working with an appropriate synthesis model, matching the spectrogram often corresponds to matching the sound extremely well. In fact, spectral modeling synthesis (SMS) is based on synthesizing the short-time spectrum directly by some means.
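The STFT description above can be sketched directly in code: window overlapping data segments, take a zero-padded FFT of each, and convert the magnitudes to dB. The following is a minimal Python sketch (the book's own code is Matlab); the function name, frame parameters, and sampling rate are illustrative assumptions, not part of the original.

```python
import numpy as np

def stft_mag_db(x, M=256, hop=128, N=1024, floor_db=-100.0):
    """Magnitude STFT in dB: Hamming-windowed frames, 50% overlap by default,
    zero-padded to length N for spectral interpolation."""
    w = np.hamming(M)
    frames = []
    for start in range(0, len(x) - M + 1, hop):
        seg = x[start:start + M] * w        # window one data segment
        X = np.fft.rfft(seg, N)             # zero-padded FFT (positive freqs)
        mag_db = 20 * np.log10(np.maximum(np.abs(X), 10**(floor_db / 20)))
        frames.append(mag_db)
    return np.array(frames).T               # rows = frequency bins, cols = time frames

# Illustration: a 400 Hz sinusoid at an assumed fs = 8000 Hz should peak
# near bin round(400/8000 * N) in every frame.
fs = 8000
t = np.arange(fs) / fs
S = stft_mag_db(np.sin(2 * np.pi * 400 * t))
```

Plotting `S` as an image on a dB color scale (e.g., with `imshow`) then gives the spectrogram display.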
Spectrogram of Speech
An example spectrogram for recorded speech data is shown in Fig.8.10. It was generated using the Matlab code displayed in Fig.8.11. The function spectrogram is listed in §I.5. The spectrogram is computed as a sequence of FFTs of windowed data segments, and is plotted by spectrogram using imagesc.
[y,fs,bits] = wavread('SpeechSample.wav');
soundsc(y,fs); % Let's hear it

% for classic look:
colormap('gray'); map = colormap; imap = flipud(map);

M = round(0.02*fs);  % 20 ms window is typical
N = 2^nextpow2(4*M); % zero padding for interpolation
w = 0.54 - 0.46 * cos(2*pi*[0:M-1]/(M-1)); % w = hamming(M);

colormap(imap); % Octave wants it here
spectrogram(y,N,fs,w,-M/8,1,60);
colormap(imap); % Matlab wants it here

title('Hi - This is <you-know-who> ');
ylim([0,(fs/2)/1000]); % don't plot neg. frequencies
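The analysis parameters in the Matlab listing can be checked numerically. Below is a small Python sketch of the same parameter choices (the sampling rate of 8 kHz is an assumption for illustration; `nextpow2` and the raw Hamming formula are reproduced from the listing).

```python
import numpy as np

fs = 8000                                   # assumed sampling rate for illustration
M = round(0.02 * fs)                        # 20 ms window -> 160 samples at 8 kHz
N = 2**int(np.ceil(np.log2(4 * M)))         # next power of 2 >= 4*M (cf. nextpow2)
n = np.arange(M)
w = 0.54 - 0.46 * np.cos(2 * np.pi * n / (M - 1))  # same Hamming formula as the listing
hop = M // 8                                # step size corresponding to -M/8 above
```

The factor-of-4 zero padding interpolates the spectrum of each frame, and the raw Hamming window tapers to 0.08 at both endpoints rather than to zero.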
In this example, the Hamming window length was chosen to be 20 ms, as is typical in speech analysis. This is short enough that any single 20 ms frame will typically contain data from only one phoneme, yet long enough that it will include at least two periods of the fundamental frequency during voiced speech, assuming the lowest voiced pitch to be around 100 Hz. In other words, the window length is chosen so that:
- The harmonics should be resolved.
- Pitch and formant variations should be closely followed.
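The "at least two periods" claim above is simple arithmetic to verify. The sketch below assumes an illustrative 8 kHz sampling rate and the stated 100 Hz lowest voiced pitch.

```python
fs = 8000                       # assumed sampling rate for illustration
f0 = 100.0                      # lowest assumed voiced pitch, Hz
M = round(0.02 * fs)            # 20 ms window, as in the Matlab listing
samples_per_period = fs / f0    # 80 samples per fundamental period
periods_in_window = M / samples_per_period  # 20 ms * 100 Hz = 2.0 periods
```

A higher pitch only increases the number of periods per window, so voiced frames always contain at least two fundamental periods under this assumption.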