There are **24** messages in this thread.


Hi, I am using Matlab to do FFTs on some electrophysiology recordings collected in the lab. When I plot the power spectra, I get huge peaks from 0-1 Hz. Are these peaks representative of the data, or artifacts carried over/created by the FFT? Any help would be appreciated. Thanks.

Maria wrote:
> I am using Matlab to do ffts for some data collected in the lab of
> electrophysiology recordings. When I plot the power spectrums, I get
> huge peaks from 0-1 Hz. Are these peaks representative of the data or
> artifacts carried over/created from the fft?

Probably the latter.

A common technique is to remove the DC component from the buffer before
the FFT, e.g.

fx = fft( x - mean(x) );
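Mark's suggestion is easy to sanity-check. A minimal sketch (in NumPy rather than Matlab, with a made-up 10 Hz test signal riding on a DC bias) of how removing the mean empties the 0 Hz bin:

```python
import numpy as np

fs = 2000                             # Maria's sampling rate
t = np.arange(fs) / fs                # 1 second of data
x = 3.0 + np.sin(2 * np.pi * 10 * t)  # made-up 10 Hz signal on a DC bias of 3

X_raw = np.abs(np.fft.rfft(x))
X_dc0 = np.abs(np.fft.rfft(x - np.mean(x)))  # Mark's mean removal

# before mean removal the 0 Hz bin dwarfs everything; after, the
# spectrum is dominated by the real 10 Hz peak
print(np.argmax(X_raw), np.argmax(X_dc0))   # bin 0 vs bin 10
```

In Matlab this is exactly Mark's one-liner: fx = fft( x - mean(x) );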

_____________________________

"Mark Borgerding" <m...@borgerding.net> wrote in message
news:7KJac.100982$8...@fe3.columbus.rr.com...
> A common technique is to remove the DC component from the buffer before
> the FFT, e.g.
>
> fx = fft( x - mean(x) );

If you don't remove the mean, then the result around 0 Hz isn't an
artifact; it's representative of the data.

Somewhat related: if you don't taper the ends of the temporal data
sequence, there will be a strong periodic component at the lowest nonzero
frequency, but not at 0 Hz.

Example: the temporal sample is highly positive at the beginning and
highly negative at the end (after removing the mean). The Discrete
Fourier Transform / FFT treats the single sequence as if it's periodic.
Say the temporal epoch is 1 second and the sample rate is 512 Hz. So
there are 512 samples in that 1 second, and the assumption is that the
record repeats. The repetition is at 1 Hz, which is the fundamental
frequency of the Fourier series represented in the DFT. So a large 1 Hz
component will emerge from the sharp discontinuity that repeats at 1 Hz.

Tapering the ends of the temporal sequence (windowing) removes the
sharpness of the discontinuity and, thus, the artificial 1 Hz component
it introduces.

Fred
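Fred's 512-sample example can be reproduced directly. This sketch (NumPy for illustration; a zero-mean ramp stands in for a record that is "highly positive at the beginning and highly negative at the end") shows the 1 Hz component the periodic-extension discontinuity creates, and how a Hann taper shrinks it:

```python
import numpy as np

fs = 512                           # Fred's example: 1 second at 512 Hz
x = np.linspace(1.0, -1.0, fs)     # zero-mean ramp: high at the start, low at the end

X_rect = np.abs(np.fft.rfft(x))                    # no taper
X_hann = np.abs(np.fft.rfft(x * np.hanning(fs)))   # tapered ends

# the assumed periodic repetition jumps from -1 back to +1 once per second,
# which puts a large component in bin 1 (1 Hz); tapering shrinks it
print(X_rect[1], X_hann[1])
```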

_____________________________

Thanks for the info. Also curious: there are several windowing options
in Matlab; would you recommend one? Also, my sampling rate is 2000 Hz.
Would it be better to do FFTs with shorter time intervals (~1-5 s), and
thus fewer points in the FFT, or larger time intervals (~60 s), and thus
more points in the FFT?

Any help would be appreciated. Thanks a lot.

Maria

"Fred Marshall" <f...@remove_the_x.acm.org> wrote in message
news:<r...@centurytel.net>...
> Tapering the ends of the temporal sequence (windowing) will remove the
> sharpness of the discontinuity and, thus, remove the artificial 1Hz
> component that's introduced by it.

_____________________________

"Maria" <m...@hotmail.com> wrote in message
news:3...@posting.google.com...
> Thanks for the info. Also curious, there are several windowing
> options in Matlab, would you recommend one? Also, my sampling rate is
> 2000Hz. Would it be better to do ffts with shorter time intervals
> (~1-5s) vs larger time intervals (~60s)?

Maria,

Your questions are both answered by considering what windows do in
general. The time spans represent rectangular windows of different
widths. Using one length or another amounts to convolving the frequency
information with a sinc = sin(kx)/(kx) function whose width is inversely
proportional to the time length. So: longer time, narrower sinc to
"smear" the frequency data.

Other windows have the nice property of reducing the sidelobes or
"tails" of that sinc, and the choice is a matter of what's easiest to
use and what your requirements might be. Kaiser's window is probably a
pretty good choice because you can trade off the critical
characteristics in frequency.

One critical trade in windows is the width of the main lobe of the
window's frequency character vs. the height of its sidelobes. In
general, the wider the main lobe, the smaller the sidelobes can be. As
above, the longer the time data, the narrower the main lobe of the
sinc. So, if you have lots of time data, the main lobe might be quite
narrow anyway, and your concern may shift to the sidelobes ("spectral
leakage").

You don't have to window the entire temporal sequence, either. You
might keep the sequence intact for much of its span and taper only the
edges. This brings the result closer to having a rectangular window but
gets rid of the sharp discontinuity at the edges. So, a very simple
window might look like:

0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.0 .......... 1.0 0.9 0.8 ... 0.1 0

A more sophisticated version of this would shape the edges according to
a cosine so that the ends of each transition have zero derivative. And
you can make the transition any width you like, up to having it
transition up for 1/2 of the temporal record and down for the remaining
1/2 - then it would be more like the windows you can compute with
Matlab. Just look at the magnitude of the FFT of the window to see what
it does in frequency.

No matter what, to get rid of the 1 Hz component we talked about, you
probably want the window to go to zero at the edges.

Fred
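Fred's "very simple window" (flat over most of the record, tapered only at the edges) is the shape Matlab calls a Tukey window. A hand-rolled sketch (NumPy; the 10% taper fraction is an arbitrary pick) of the cosine-edged variant he describes:

```python
import numpy as np

def simple_window(n, taper_frac=0.1):
    """Fred's 'very simple window': flat in the middle, cosine-tapered edges.
    (Matlab calls this shape a Tukey window; the 10% taper is an arbitrary pick.)"""
    w = np.ones(n)
    m = int(taper_frac * n)                # samples in each tapered edge
    # cosine taper from 0 up to 1 with zero slope at both ends of the transition
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(m) / m))
    w[:m] = ramp
    w[-m:] = ramp[::-1]
    return w

w = simple_window(512)
# goes to zero at the edges, stays at 1.0 over the middle ~80% of the record
print(w[0], w[256], w[-1])
```

Taking the magnitude of the FFT of `w` (zero-padded) shows what it does in frequency, as Fred suggests.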

Maria wrote:
> Thanks for the info. Also curious, there are several windowing
> options in Matlab, would you recommend one?

Depends on whether you want sideband rejection or a narrow band. Both
are good things to have, but you can have everything :)

A rectangular window (i.e. no windowing at all) has a very sharp
transition, but only 13 dB of sideband rejection (if memory serves).
For some processing this is enough, especially if several consecutive
FFT outputs are added coherently ... but I digress.

The Kaiser window allows fine tuning to trade off between transition
width and sideband suppression. You can use freqz(w) to see the various
windows. If you don't want to spend too many brain cycles deciding, the
Hamming window is a decent starting point.

> Also, my sampling rate is
> 2000Hz. Would it be better to do ffts with shorter time intervals
> (~1-5s) vs larger time intervals (~60s)?

Again, it depends. Shorter windows give you more sensitivity to brief
features/events; longer windows give you better frequency resolution.
Decide how long an "event of interest" lasts, and decide how precisely
you need to measure frequencies (subject to the transition bandwidth
dictated by the window type). The answers to these two questions will
tell you what range of window lengths would be acceptable.

Hope this helps.

-- Mark Borgerding
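Mark's 13 dB figure for the rectangular window checks out, and the same measurement shows why the Hamming window is a decent default. A sketch (NumPy; the zero-padded FFT plays the role of freqz, and the padding factor is just to sample the response finely):

```python
import numpy as np

def peak_sidelobe_db(w, pad=64):
    """Highest sidelobe of a window's frequency response, in dB below the peak."""
    H = np.abs(np.fft.rfft(w, pad * len(w)))  # finely sampled response (freqz-like)
    H /= H[0]                                 # normalize the main-lobe peak to 1
    i = 1
    while H[i + 1] < H[i]:                    # walk down the main lobe ...
        i += 1
    return 20 * np.log10(H[i:].max())         # ... then take the biggest sidelobe

print(peak_sidelobe_db(np.ones(512)))     # rectangular: about -13 dB
print(peak_sidelobe_db(np.hamming(512)))  # Hamming: about -43 dB
```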

_____________________________

Mark Borgerding wrote:
> Depends on whether you want sideband rejection or narrow band. Both
> are good things to have, but you can have everything :)

errr .... you *can't* have everything. But you already knew that.
(sorry, brain fade)

_____________________________

On 31 Mar 2004 15:26:06 -0800, m...@hotmail.com (Maria) wrote:
> I am using Matlab to do ffts for some data collected in the lab of
> electrophysiology recordings. When I plot the power spectrums, I get
> huge peaks from 0-1 Hz. Are these peaks representative of the data or
> artifacts carried over/created from the fft?

Hi Maria,

No, those peaks at low frequencies are *not* an artifact of the FFT
algorithm. They indicate that your time-domain sequence is riding on a
DC bias (that is, your FFT-ed sequence's average is not zero). You
should subtract the sequence's average (a single value) from each
sample in the sequence, giving you a new sequence. Now take the FFT of
that new sequence.

As for which window to use, that's a tougher question. Windowing is
generally used to reduce the spectral leakage that causes a strong
(high-amplitude) signal's spectral magnitude to cover over (swamp out)
the spectral components of nearby weak signals. If a weak signal in
which you're interested is 3-4 FFT frequency bins away from a strong
signal's center frequency, use the Hamming window. If the weak signal
is more than, say, 6 FFT frequency bins away from a strong signal's
center frequency, use the Hanning window.

Ah, there's much to learn about window functions, particularly the
Kaiser and Chebyshev windows, which give you some control over their
spectral leakage characteristics.

Keep asking questions here, Maria.

Regards,
[-Rick-]
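Rick's strong-signal/weak-signal scenario can be staged numerically. In this sketch (NumPy; the 100.5 Hz strong tone and the 40 dB weaker 110 Hz tone are invented for illustration), the rectangular window's leakage buries the weak line, and a Hanning window recovers it:

```python
import numpy as np

fs = 1024                                # 1 s record, so bins are 1 Hz apart
t = np.arange(fs) / fs
# made-up scenario: a strong tone at 100.5 Hz (worst-case half-bin offset)
# next to a 40 dB weaker tone at 110 Hz
x = np.sin(2 * np.pi * 100.5 * t) + 0.01 * np.sin(2 * np.pi * 110 * t)

X_rect = np.abs(np.fft.rfft(x))                   # no window
X_hann = np.abs(np.fft.rfft(x * np.hanning(fs)))  # Hanning window

# unwindowed, the strong tone's leakage skirt is still several times the
# weak line's own contribution at 110 Hz; windowed, the weak line is the
# clear local peak
print(np.argmax(X_rect[106:115]) + 106, np.argmax(X_hann[106:115]) + 106)
```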

_____________________________

"Fred Marshall" <f...@remove_the_x.acm.org> wrote in message
news:<r...@centurytel.net>...
> Tapering the ends of the temporal sequence (windowing) will remove the
> sharpness of the discontinuity and, thus, remove the artificial 1Hz
> component that's introduced by it.

Hi,

When I window the data (Hamming window), I lose amplitude on the power
spectrum. Can I do anything about this? Also, is it better to take
short FFTs (i.e. 1 s intervals for 100 s and then average them) as
opposed to taking one 100 s segment?

Any feedback/help is much appreciated. Thanks.

"Maria" <m...@hotmail.com> wrote in message
news:3...@posting.google.com...
> When I window the data (Hamming window), I lose amplitude on the power
> spectrum. Can I do anything about this? Also, is it better to take
> short FFTs (i.e. 1 s intervals for 100 s and then average them) as
> opposed to taking one 100 s segment?

Maria,

Yes, when you apply the window you are reducing the amplitudes in the
time domain - so there will be a corresponding reduction in the
frequency domain. A good question might be: why do you want to "do
something" about this?

Assuming you want to do something, you could scale the time samples at
the same time you window, so that the integral of the window is "N",
like this:

A rectangular window is the one you don't need to apply at all; it's
what you get when you take a finite-length sample of data. All of the
values in the window function are 1.0. The integral of these, their
sum, is N, where N is the number of samples taken over the interval.
So, if you use a Hamming window or any other, you can add up the window
weights (let's call this sum "W") and divide by N to see the scaling.
Usually W is less than N, so the scaling is less than 1.0. If your
objective is to hold the integral of the window to N, then you can
multiply all the weights or coefficients of the window by N/W. This
amounts to amplifying the time record by N/W and then windowing it - or
vice versa.

Your question about shorter or longer FFTs has been answered in a few
ways already. Most of the answers said: "it depends". You may need to
ponder those responses more carefully. In the meantime, let's take a
few cases:

Let's assume that you already know that taking 1 second chunks (just as
an example) will limit the resolution to 1 Hz - and that this is
acceptable to you. Saying this eliminates the issue of resolution
entirely - so we can set it aside to keep the discussion simpler.
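As an aside, Fred's N/W correction above can be sketched numerically (NumPy for illustration; the unit-amplitude 50 Hz test tone and 1 s record are made up). The Hamming window's W/N is about 0.54, so the windowed peak drops to roughly half, and the N/W factor restores it:

```python
import numpy as np

fs = 2000                                # Maria's sampling rate
t = np.arange(fs) / fs                   # 1 second of data
x = np.sin(2 * np.pi * 50 * t)           # made-up unit-amplitude 50 Hz tone

w = np.hamming(len(x))
N, W = len(w), w.sum()                   # W/N is about 0.54 for Hamming

X_plain  = np.abs(np.fft.rfft(x)) / (N / 2)              # peak of 1.0 at 50 Hz
X_win    = np.abs(np.fft.rfft(x * w)) / (N / 2)          # peak drops to about W/N
X_scaled = np.abs(np.fft.rfft(x * w * N / W)) / (N / 2)  # N/W restores it

print(X_plain[50], X_win[50], X_scaled[50])
```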
Let's also assume that you know that taking a 100 second chunk will
provide 0.01 Hz resolution - but you don't care whether it does or
doesn't, because 1 Hz is already deemed OK.

Now, there are a couple of other considerations:

- Is the signal of interest stationary? That is, does its spectrum vary
or not? For example, an engine running at constant rpm is pretty
stationary - it doesn't vary all that much.
- Is there much noise in the signal?

If the signal is stationary, then long samples make sense. If it isn't,
then long samples may not make sense - nor would averaging spectra of
short samples over longer times make as much sense either. This comes
before the question of noise. SO: you need to decide how long you would
either sample or average based on how stable the signal is. When you
decide this, you have an important parameter to work with.

If there is noise added to the signal, then:

- The long record will give you really fine spectral detail of signal
plus noise. If the noise is white, there will be a grassy floor in the
spectrum. If the signal is reasonably strong relative to the noise, it
will rise above that floor. Most importantly, the long record will
split white noise over more samples, and the signal to noise ratio of
the spectrum will improve (well, unless the signal itself is white
noise, and that's another matter altogether!)

- The short record / averaged will give you less spectral detail of
signal plus noise, and the noise floor will be higher relative to the
long record - even with averaging, because it starts out higher. I'm
assuming that you have to average the magnitudes here - or take the
sqrt of the sum of the squares ... something like that. What averaging
will do is throw out the occasional spectral spike that's caused by
noise - and that may be important to you.

So here are the steps for you to take:

1) Decide if the signal spectrum changes and, if so, decide about how
often it changes.

2) Pick a temporal epoch that won't be too perturbed by the changes.
That is, an epoch that is maybe 1/2 the time it takes the signal to
change "very much". This can be your analysis window length. By this
approach, if the signal is very stable, the epoch can be long.

3) Examine your desire for spectral resolution. If it's coarser than
the reciprocal of the epoch length, then you're OK. If not, then you
may have to increase the length of the epoch - which counteracts the
decision you made in step (2). See, it's a trade between objectives and
signal characteristics.

4) Examine the signal to noise situation. If the noise level is so high
that better resolution is necessary to pull the signal spectrum out of
the noise spectrum, then you have to increase the length of the epoch.
Same issue as step (2) but from a different motivation. If the
resolution is already good enough, then you might average spectra in
order to reduce the effect of occasional noise peaks.

Summary - read from the signal's stability (top row) or the resolution
you need (bottom row) down/up to the epoch length to use (middle row):

    Very stable         <>  Moderately stable  <>  Quickly changing
    Long epochs         <>  Moderately long    <>  Short epochs
    Very narrow (high)  <>  Moderately narrow  <>  Broad (low)

Accordingly, your decisions about numbers will be easy if both the
spectral character and the resolution "match" to yield either long or
short epochs. It remains easy if the signal is very stable and broad
resolution is OK. It's toughest when the signal isn't stable and
there's a need for high resolution. That's because they just don't
match ... it's physics.

******************

Regarding averaging of spectra from a series of temporal epochs, IF the
spectral resolution can be met, AND the signal to noise is:

                LowSNR (high noise)  MedSNR         HighSNR (low noise)
Long epochs     Can't average        Can't average  Can't / don't need? (e.g. one only)
Medium epochs   Can / should?        Can / should?  Could (e.g. a few)
Short epochs    Can / should?        Can / should?  Could (e.g. many)

Recall that the spectral signal to noise ratio obtained with a long
epoch is better than with a shorter one. However, there might still be
the occasional outlying spectral peak from the noise. If this is a
concern, given adequate resolution, then averaging a number of spectra
might be good to do.

The trade could be: "Can I see the signal at all?" vs. "Can I stand an
occasional erroneous peak?" This gets right to the heart of the age-old
trade between false alert vs. false rest. If the sensitivity is set
high, I will see everything and get some bad positive readings. If the
sensitivity is set low, I will miss something but won't have as many
bad positive readings. The averaging is a compromise between high
resolution with some errors and low resolution with fewer errors. It
tends to reduce the errors IF its resolution is acceptable otherwise.

That is as far as arm-waving will get us. Thereafter, you may need some
math, if that's what's important to you. There are whole books - and
certainly whole chapters - written on spectral estimation. So, after my
simple arm-waving discussion, that's where you might look next.

Fred
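The long-record vs. averaged-short-records trade Fred describes can be seen numerically. A sketch (NumPy; the 50 Hz tone, noise level, and 100 s record length are invented) comparing one 100 s periodogram against the average of one hundred 1 s periodograms:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2000
t = np.arange(100 * fs) / fs                               # 100 s of data (made up)
x = np.sin(2 * np.pi * 50 * t) + rng.normal(0, 2, t.size)  # 50 Hz tone in noise

# one long FFT: 0.01 Hz resolution, but a noisy (high-variance) floor
long_psd = np.abs(np.fft.rfft(x)) ** 2 / x.size

# one hundred 1 s FFTs with magnitudes-squared averaged: only 1 Hz
# resolution, but a much smoother floor (Bartlett-style averaging)
seg = fs
avg_psd = np.mean(np.abs(np.fft.rfft(x.reshape(100, seg), axis=1)) ** 2 / seg,
                  axis=0)

print(np.argmax(long_psd), np.argmax(avg_psd))   # tone at bin 5000 and bin 50
```

The averaged floor has far lower variance, while the long record resolves 0.01 Hz instead of 1 Hz - the trade described above.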

_____________________________