
Speech Recognition using Butterworth filters

Started by Mandar Gokhale December 27, 2007
On Jan 15, 4:36 am, dbell <bellda2...@cox.net> wrote:
> On Jan 14, 3:55 am, jnarino <jnar...@gmail.com> wrote:
> > First, the one who should GFY is Vassily, for being such a rude,
> > ignorant idiot. If you do not know anything about speech recognition,
> > just shut up.
> >
> > This question on speech recognition has many possible answers.
> > Please first define the domain. I will assume you are only trying to
> > recognize a few words, so you will be doing limited-vocabulary
> > recognition. In this case, your best bet for speech recognition would
> > be neural networks. You train the neural network with a few samples
> > of the intended words.
> >
> > However, it is not that simple, and I will explain why. First, as
> > somebody already said, you should do DTW (dynamic time warping) to
> > normalize the length of the utterance. Afterwards, you should do
> > cepstral analysis to obtain a feature vector to feed your neural
> > network. A simple PIC may not suffice.
> >
> > The preprocessing stage, with the filters and such, is just for
> > increasing robustness and getting rid of the information we are not
> > interested in.
> >
> > I recommend you read the introductory part of the HTK Book to
> > understand Hidden Markov Model based speech recognition. The book is
> > available for free (after simple registration) at
> > http://htk.eng.cam.ac.uk/.
> >
> > Another solution would be looking for specialized speech-recognition
> > ICs, but I have not tried them, and they may not be cheap or readily
> > available.
> >
> > So basically your system should consist of the following, in this
> > order, connected in cascade:
> >
> > Signal acquisition (microphone)
> > Bandpass filter (can be a Butterworth filter) between 100 Hz and
> > 4000 Hz (the rest is redundant)
> > Sampling A/D converter, sampling at 8 kHz at least (recommended)
> >
> > Once the signal is in the microprocessor, the first thing you should
> > do is voice activity detection (VAD). There are some algorithms for
> > this; please google it.
> >
> > Once you have detected the beginning and the end of an utterance, you
> > should do dynamic time warping to normalize its length, so it can be
> > compared.
> >
> > Then do framing and obtain the cepstral coefficients.
> >
> > Feed your neural network and wait for the result.
> >
> > Of course, first you will need to train the neural network.
> >
> > If you have more doubts, do not hesitate to ask.
> >
> > Regards,
> > Juan Pablo

Dynamic time warping on the time domain signal? Have you tried that?
DTW was fairly common in the 1970s, in the early days of speech
recognition research. It is somewhat obsolete for Large Vocabulary
Continuous Speech Recognition (LVCSR), where it has been superseded by
HMMs, but it is still used for simple commands like the original
poster wants. For simple tasks, it is very effective.

Regards,
Juan
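Stepping back to the front-end cascade quoted above, here is a minimal
Python sketch of its first two digital stages: the 100 Hz to 4000 Hz
Butterworth bandpass and a simple energy-threshold VAD. In the cascade
as described, the bandpass is an analog anti-aliasing filter placed
before the A/D converter; a digital stand-in at an 8 kHz sample rate
has to keep its upper edge below the 4 kHz Nyquist limit, so this
sketch uses 3800 Hz. The function names, the filter order, and the VAD
threshold are illustrative choices, not values specified anywhere in
the thread.

    import numpy as np
    from scipy.signal import butter, sosfilt

    def speech_bandpass(fs=8000.0, lowcut=100.0, highcut=3800.0, order=4):
        # Butterworth bandpass; highcut sits just below Nyquist (fs/2 = 4 kHz)
        return butter(order, [lowcut, highcut], btype='bandpass',
                      fs=fs, output='sos')

    def detect_utterance(x, frame_len=256, hop=128, thresh=0.02):
        # Crude energy-based VAD: mark frames whose RMS exceeds a fixed
        # threshold, and return the sample range from the first to the
        # last active frame. Real VADs are more careful than this.
        active = []
        for i in range(0, len(x) - frame_len, hop):
            rms = np.sqrt(np.mean(x[i:i + frame_len] ** 2))
            if rms > thresh:
                active.append(i)
        if not active:
            return None
        return active[0], active[-1] + frame_len

    # Example: filter one second of (placeholder) microphone samples
    sos = speech_bandpass()
    x = np.random.randn(8000) * 0.01   # stand-in for mic input
    y = sosfilt(sos, x)
    span = detect_utterance(y)         # None here: the input is just noise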
On Jan 15, 5:16 am, jnarino <jnar...@gmail.com> wrote:
> [snip]
> DTW was fairly common in the 1970s, in the early days of speech
> recognition research. [...] For simple tasks, it is very effective.
Juan,

I am actually familiar with DTW. Do you think that it is appropriate
for an actual waveform, as opposed to a sequence of parameters derived
from such a waveform (like formants, ...)? Do you think that a 1 second
utterance of a word can be meaningfully brought into waveform alignment
with a 1.5 second utterance of the same word using DTW?

Dirk
Hi Dirk,

When I say use DTW, I do not actually mean comparing the two waveforms
directly. DTW should be applied to normalize the length, and then the
cepstral coefficients should be obtained. In my opinion, if you are
intending to use a very limited vocabulary, DTW with cepstral analysis
and neural networks can be an interesting option. Of course there will
be distortion from warping the samples, but for a limited vocabulary a
neural network can learn really well. Please correct me if I am wrong.

Regards
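As a concrete illustration of the "framing and cepstral coefficients"
step, here is a short Python sketch that computes a real-cepstrum
feature vector per frame. The frame length, hop, window, and the choice
of 13 coefficients are common textbook defaults, not values given
anywhere in this thread; production systems typically use mel-frequency
cepstral coefficients (MFCCs) rather than the plain real cepstrum shown
here.

    import numpy as np

    def real_cepstrum(frame, n_coeffs=13):
        # Real cepstrum: inverse FFT of the log magnitude spectrum.
        # The low-quefrency coefficients capture the vocal-tract envelope.
        spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
        log_mag = np.log(np.abs(spectrum) + 1e-10)  # floor avoids log(0)
        return np.fft.irfft(log_mag)[:n_coeffs]

    def frame_features(signal, frame_len=256, hop=128):
        # One cepstral vector per overlapping frame of the utterance
        frames = [signal[i:i + frame_len]
                  for i in range(0, len(signal) - frame_len + 1, hop)]
        return np.array([real_cepstrum(f) for f in frames])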
On Jan 16, 3:13 am, jnarino <jnar...@gmail.com> wrote:
> [snip]
> DTW should be applied to normalize the length, and then the cepstral
> coefficients should be obtained.
Juan,

I am not following what you are saying. What exactly are you going to
apply the DTW to, in order to normalize the lengths of the waveforms,
prior to cepstral analysis?

Dirk
On Jan 17, 8:07 am, dbell <bellda2...@cox.net> wrote:
> [snip]
> What exactly are you going to apply the DTW to, in order to normalize
> the lengths of the waveforms, prior to cepstral analysis?
Hi Dirk,

You are correct. First you apply DTW to normalize the lengths of the
waveforms, prior to cepstral analysis.

Juan
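For what it is worth, here is the textbook dynamic-programming form of
DTW in Python, which returns the cumulative alignment cost between two
sequences of unequal length. Whether the inputs should be raw waveform
samples (as Juan suggests) or per-frame feature vectors (the more usual
arrangement, and the point Dirk presses below) is exactly the question
under discussion; the code itself is agnostic and works on any 1-D
sequences.

    import numpy as np

    def dtw_distance(a, b):
        # Classic O(len(a) * len(b)) DTW: D[i, j] is the best cumulative
        # cost of aligning a[:i] with b[:j].
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],      # insertion
                                     D[i, j - 1],      # deletion
                                     D[i - 1, j - 1])  # match
        return D[n, m]

    # Example: two "utterances" of different length still compare sensibly
    a = np.sin(np.linspace(0, 3, 100))
    b = np.sin(np.linspace(0, 3, 150))   # same shape, 1.5x longer
    print(dtw_distance(a, b))

For isolated-word recognition, one would store a template per word and
pick the word whose template gives the smallest DTW cost.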
On Jan 17, 3:05 am, jnarino <jnar...@gmail.com> wrote:
> [snip]
> First you apply DTW to normalize the lengths of the waveforms, prior
> to cepstral analysis.
Juan,

By actually performing DTW on the waveform samples? The reason I am
questioning this is that, if you intend what I am asking, it seems you
would actually have to find a way to remove or insert pitch periods if
you have any hope of producing similar waveforms. So what exactly do
you intend to align in the waveform while performing the DTW to
normalize the waveform lengths?

Dirk
On Dec 27 2007, 8:38 am, Mandar Gokhale <stallo...@gmail.com> wrote:
> I am aiming to build a very basic speech recognition system around an
> 8-bit microcontroller (PIC/AVR), which is capable of 'recognizing'
> four to eight words (i.e., giving a specific string output when it
> receives the corresponding input data through a mic).
>
> Someone told me that designing Butterworth filters for processing the
> input data and then sampling it at different points is a pretty good
> strategy. However, all I can find on the Net regarding this is a lot
> of highly obfuscated jargon. So, could anyone please direct me to a
> good, clear source explaining this (or any other) speech recognition
> algorithm in minute detail?
>
> Hope you people can throw some light on this.
>
> Thanks...
Mandar, yo! You are mostly doing this for the IIT-B project, so here is
a suggestion. Since there are only four words to be recognized, and
they are all distinct, use the DTW approach. It is much simpler to
implement on a microcontroller (it can be implemented on a PSoC pretty
easily as well), and you won't need much RAM/ROM for it either. Use the
zero crossings of the speech signal in the time domain to make the
feature vector of the wave, then apply DTW to make it time independent,
and compare the resultant values. :D

Anyway, see you around on campus.

BITS - Pilani Goa Campus \m/ FTW \m/
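A minimal sketch of the zero-crossing feature idea from the post above:
count sign changes per frame, which needs no FFT and no multiplies, and
is therefore friendly to a small PIC/AVR or PSoC. The frame sizes here
are illustrative assumptions; the thread does not specify any.

    import numpy as np

    def zcr_features(signal, frame_len=256, hop=128):
        # Per-frame zero-crossing counts: a cheap time-domain feature.
        # np.signbit marks negative samples; diff flags sign transitions.
        feats = []
        for i in range(0, len(signal) - frame_len + 1, hop):
            frame = signal[i:i + frame_len]
            feats.append(np.count_nonzero(np.diff(np.signbit(frame))))
        return np.array(feats)

The resulting vector (one crossing count per frame) is what DTW would
then compare against a stored template for each of the four words.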