Hello. I have a problem in Simulink. What I want to do is take one signal from the workspace (which was originally a .wav file) and apply a sliding window to it. The signal is frame based, single channel. Next I want to compare each window that slides through the signal with a second signal, whose size is exactly the sliding window size. In other words, say the word "abracadabra" is represented in the first signal, and the vowel "a" is represented in the second signal. What I want to do is compare the first signal with the second one using a sliding window, and know at the end how many times the vowel "a" appears in the first signal. So far I know how to create the sliding window using the Buffer block, but I don't know how to compare the second signal with each sliding window. Please help!! Thank you for reading.
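To make the intended comparison concrete, here is a minimal offline sketch in NumPy (not Simulink blocks — the function name, the RMS-distance measure, and the threshold are illustrative assumptions, not anything the Buffer block provides): slide a window over the long signal and count the windows that land within a distance threshold of the template.

```python
import numpy as np

def count_matches(signal, template, threshold=0.1):
    """Count sliding windows of `signal` within `threshold` RMS distance of `template`.

    Hypothetical helper illustrating the window-by-window comparison; a real
    Simulink model would do the same per buffered frame.
    """
    n = len(template)
    count = 0
    # Slide the window one sample at a time (hop size 1)
    for start in range(len(signal) - n + 1):
        window = signal[start:start + n]
        # Root-mean-square difference between window and template
        rms = np.sqrt(np.mean((window - template) ** 2))
        if rms < threshold:
            count += 1
    return count

signal = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
template = np.array([1.0, 0.0])
print(count_matches(signal, template, threshold=0.01))  # → 2
```

Note that a raw sample-wise distance like this is very sensitive to amplitude and alignment; the reply below points toward correlation as the usual quantitative measure.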
sliding window and compare signals in simulink
Started by ●April 5, 2006
Reply by ●April 5, 2006
Bogdan skrev:
> Hello. I have a problem in Simulink. What I want to do is take one
> signal from the workspace (which was originally a .wav file) and apply a
> sliding window to it. The signal is frame based, single channel. Next I
> want to compare each window that slides through the signal with a second
> signal, whose size is exactly the sliding window size. In other words,
> say the word "abracadabra" is represented in the first signal, and the
> vowel "a" is represented in the second signal. What I want to do is
> compare the first signal with the second one using a sliding window, and
> know at the end how many times the vowel "a" appears in the first
> signal. So far I know how to create the sliding window using the Buffer
> block, but I don't know how to compare the second signal with each
> sliding window. Please help!! Thank you for reading.

From a *quantitative* point of view, correlation is the naive method to use for searching for one signature inside a longer signal. However, you are asking a *qualitative* question: you want to find the vowel "a". If your reference signature was uttered by a man, there is little reason to expect that you will get a *quantitative* match if the test signal was uttered by a woman or a child. I am not sure this can be solved by quantitative means. I suspect you will have to check out some aspects of speech processing and maybe some neural networks.

Rune
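The correlation approach Rune mentions can be sketched as normalized cross-correlation: score every alignment of the template against the long signal on a -1..1 scale, so peaks near 1 mark likely occurrences regardless of amplitude scaling. This is an illustrative NumPy sketch (the function name is an assumption, not a Simulink block), and as the reply notes, it still won't bridge speaker-to-speaker differences:

```python
import numpy as np

def normalized_xcorr(signal, template):
    """Normalized cross-correlation of `template` against each window of `signal`.

    Returns one score per alignment; 1.0 means the window is a (possibly
    scaled and offset) copy of the template.
    """
    n = len(template)
    t = template - template.mean()          # zero-mean template
    t_norm = np.linalg.norm(t)
    scores = np.empty(len(signal) - n + 1)
    for i in range(len(scores)):
        w = signal[i:i + n] - signal[i:i + n].mean()  # zero-mean window
        denom = np.linalg.norm(w) * t_norm
        scores[i] = (w @ t) / denom if denom > 0 else 0.0
    return scores

signal = np.array([0.0, 1.0, 2.0, 3.0, 0.0])
template = np.array([1.0, 2.0, 3.0])
print(normalized_xcorr(signal, template))  # peak of 1.0 at lag 1
```

Counting occurrences then reduces to counting score peaks above a chosen threshold (e.g. 0.9), which is itself a tuning assumption.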