# FFT Questions

Started by Maria, March 31, 2004
```Hi,

I am using Matlab to do ffts for some data collected in the lab of
electrophysiology recordings.  When I plot the power spectrums, I get
huge peaks from 0-1 Hz.  Are these peaks representative of the data or
artifacts carried over/created from the fft?

Any help would be appreciated.

Thanks.
```
```Maria wrote:
> [...]

Probably the latter.

A common technique is to remove the DC component from the buffer before
the FFT.

e.g.

fx = fft( x - mean(x) );

```
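Mark's one-liner is easy to sanity-check numerically. The thread's examples are Matlab; the following is just an illustrative NumPy translation (sample rate and signal are invented for the demo), showing how a DC bias produces a huge 0 Hz bin that mean removal eliminates:

```python
import numpy as np

fs = 512                               # sample rate in Hz (arbitrary for the demo)
t = np.arange(fs) / fs                 # one second of data
x = 2.0 + np.sin(2 * np.pi * 10 * t)   # a 10 Hz signal riding on a DC bias

X_raw = np.abs(np.fft.rfft(x))
X_zeroed = np.abs(np.fft.rfft(x - x.mean()))   # Mark's fft(x - mean(x))

# The 0 Hz bin of the raw data is N * mean(x); after mean removal it
# vanishes, leaving the 10 Hz component as the dominant peak.
print(X_raw[0], X_zeroed[0], np.argmax(X_zeroed))
```
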
```"Mark Borgerding" <mark@borgerding.net> wrote in message
news:7KJac.100982\$8G2.69531@fe3.columbus.rr.com...
> Maria wrote:
> > [...]
>
> Probably the latter.
>
> A common technique is to remove the DC component from the buffer before
> the FFT.
>
> e.g.
>
> fx = fft( x - mean(x) );
>

If you don't remove the mean, then the result around 0Hz isn't an artifact;
it's representative of the data.

Somewhat related, if you don't taper the ends of the temporal data sequence
then there will be a strong periodic component at the lowest frequency but
not at 0Hz.
Example: the temporal sample is highly positive at the beginning and highly
negative at the end (after removing the mean).  The Discrete Fourier
Transform / FFT treats the single sequence as if it's periodic.  Let's say
the temporal epoch is 1 second and the sample rate is 512Hz.  So, there are
512 samples in that 1 second and the assumption is that it repeats.  The
repetition is at 1Hz which is the fundamental frequency of the Fourier
Series represented in the DFT.
So, a large 1Hz component will emerge from the sharp discontinuity that
repeats at 1Hz.

Tapering the ends of the temporal sequence (windowing) will remove the
sharpness of the discontinuity and, thus, remove the artificial 1Hz
component that's introduced by it.

Fred

```
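Fred's 512 Hz example can be reproduced directly. A NumPy sketch (using a linear ramp as the "positive at the beginning, negative at the end" record, and a Hann taper, both my choices rather than anything specified in the post):

```python
import numpy as np

fs = 512                       # Fred's example: a 1 s epoch sampled at 512 Hz
# Mean-free record that is highly positive at the start and highly negative
# at the end: the periodic extension the DFT assumes has a sharp jump.
x = np.linspace(1.0, -1.0, fs)

X = np.abs(np.fft.rfft(x))
# The jump repeats once per 1 s record, so the strongest component is the
# 1 Hz fundamental (bin 1), exactly as Fred describes.
dominant_hz = np.argmax(X[1:]) + 1

# Tapering the ends (a full Hann window here) removes the discontinuity,
# and with it the slowly decaying tail of components the jump creates.
Xw = np.abs(np.fft.rfft(x * np.hanning(fs)))
print(dominant_hz, X[50] / X[1], Xw[50] / Xw[1])
```

The high-bin-to-fundamental ratio drops by orders of magnitude once the edges are tapered, which is the "artificial 1Hz component" (and its harmonics) disappearing.
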
```Thanks for the info.  Also curious, there are several windowing
options in Matlab, would you recommend one?  Also, my sampling rate is
2000Hz.  Would it be better to do ffts with shorter time intervals
(~1-5s) and thus (smaller amount of points in the fft) vs larger time
intervals (~60s) and thus (larger amount of points in the fft)?

Any help would be appreciated.

Thanks a lot.

Maria

"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:<rc2dnTzW1p965_bdRVn-ug@centurytel.net>...
> "Mark Borgerding" <mark@borgerding.net> wrote in message
> news:7KJac.100982\$8G2.69531@fe3.columbus.rr.com...
> > Maria wrote:
> > > Hi,
> > >
> > > I am using Matlab to do ffts for some data collected in the lab of
> > > electrophysiology recordings.  When I plot the power spectrums, I
get
> > > huge peaks from 0-1 Hz.  Are these peaks representative of the data
or
> > > artifacts carried over/created from the fft?
> > >
> > > Any help would be appreciated.
> > >
> > > Thanks.
> >
> > Probably the latter.
> >
> > A common technique is to remove the DC component from the buffer before
> > the FFT.
> >
> > e.g.
> >
> > fx = fft( x - mean(x) );
> >
>
> If you don't remove the mean then the result around 0Hz isn't an artifact
> it's representative of the data.
>
> Somewhat related, if you don't taper the ends of the temporal data sequence
> then there will be a strong periodic component at the lowest frequency but
> not at 0Hz.
> Example: the temporal sample is highly positive at the beginning and highly
> negative at the end (after removing the mean).  The Discrete Fourier
> Transform / FFT treats the single sequence as if it's periodic.  Let's say
> the temporal epoch is 1 second and the sample rate is 512Hz.  So, there are
> 512 samples in that 1 second and the assumption is that it repeats.  The
> repetition is at 1Hz which is the fundamental frequency of the Fourier
> Series represented in the DFT.
> So, a large 1Hz component will emerge from the sharp discontinuity that
> repeats at 1Hz.
>
> Tapering the ends of the temporal sequence (windowing) will remove the
> sharpness of the discontinuity and, thus, remove the artificial 1Hz
> component that's introduced by it.
>
> Fred
```
```"Maria" <marija616@hotmail.com> wrote in message
> Thanks for the info.  Also curious, there are several windowing
> options in Matlab, would you recommend one?  Also, my sampling rate is
> 2000Hz.  Would it be better to do ffts with shorter time intervals
> (~1-5s) and thus (smaller amount of points in the fft) vs larger time
> intervals (~60s) and thus (larger amount of points in the fft)?
>

Maria,

Your questions are both answered by considering what windows do in general.
The time spans represent rectangular windows of different widths.
Using one or another length amounts to convolving the frequency information
with a sinc=sin(kx)/kx function whose width is inversely proportional to the
time length.

So, a longer time means a narrower sinc, and hence less "smearing" of the
frequency data.

Other windows have the nice property of reducing the sidelobes or "tails" of
that sinc, and the choice is a matter of what's easiest to use and what your
requirements might be.
Kaiser's window is probably a pretty good choice because you can trade off
the critical characteristics in frequency.

One critical trade in windows is the width of the main lobe of its frequency
character vs. the height of the sidelobes.  In general, the wider the main
lobe, the smaller the sidelobes can be.  As above, the longer the time data,
the narrower the main lobe of the sinc.  So, if you have lots of time data
then the main lobe might be quite narrow anyway and your concern may shift
to the sidelobes ("spectral leakage").

You don't have to window the entire temporal sequence either.  You might
keep the sequence intact for much of its span and only taper the edges.
This brings the result closer to having a rectangular window but gets rid of
a sharp discontinuity at the edges.  So, a very simple window might look
like:
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.0 .......... 1.0 0.9 0.8 ...0.1 0

A more sophisticated version of this would shape the edges according to a
cosine so that the ends of the transition have zero derivative.  And, you
can make the transition any width you like up to having it transition up for
1/2 of the temporal record and down for the remaining 1/2 - then it would be
more like the windows you can compute with Matlab.

Just look at the magnitude of the FFT of the window to see what it does in
frequency.  No matter what, to get rid of the 1Hz component we talked about,
you probably want the window to go to zero at the edges.

Fred

```
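Fred's flat-topped taper with cosine-shaped edges is what's usually called a Tukey (tapered-cosine) window, and it can be written in a few lines. A NumPy sketch (the function name, taper fraction, and length are my choices for illustration):

```python
import numpy as np

def tapered_window(n, taper_frac=0.1):
    # Flat window with raised-cosine tapers at each end (a Tukey window).
    # taper_frac is the fraction of the record tapered at EACH end;
    # taper_frac=0.5 makes the whole window a Hann-like shape, matching
    # Fred's "transition up for 1/2 ... down for the remaining 1/2".
    m = int(round(taper_frac * n))
    w = np.ones(n)
    if m == 0:
        return w                  # no taper requested
    # cosine edge: starts at 0 with zero slope, rises smoothly toward 1
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(m) / m))
    w[:m] = ramp
    w[-m:] = ramp[::-1]
    return w

w = tapered_window(1000, taper_frac=0.1)   # 10% taper at each end
```

As Fred says, the key property is that the window goes to zero at the edges; taking the FFT of `w` itself shows what it does in frequency.
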
```Maria wrote:
> Thanks for the info.  Also curious, there are several windowing
> options in Matlab, would you recommend one?

Depends on whether you want sideband rejection or narrow band.  Both are
good things to have, but you can have everything :)

A rectangular window (i.e. no windowing at all) has very sharp
transition, but only 13dB sideband rejection (if memory serves).   For
some processing, this is enough.  Especially if several consecutives fft
outputs are added coherently ... but I digress.

The kaiser window allows fine tuning to trade off between transition
width and sideband suppression.

You can use freqz(w) to see the various windows.

If you don't want to spend too many brain cycles deciding, the hamming
window is a decent starting point.

> Also, my sampling rate is
> 2000Hz.  Would it be better to do ffts with shorter time intervals
> (~1-5s) and thus (smaller amount of points in the fft) vs larger time
> intervals (~60s) and thus (larger amount of points in the fft)?

Again it depends:

Shorter windows give you more sensitivity to brief features/events.
Longer windows give you better frequency resolution.

Decide how long an "event of interest" lasts.
Decide how precisely you need to measure frequencies (subject to
transition bandwidth dictated by windowing type).

The answer to these two questions will tell you what range of window
lengths would be acceptable.

Hope this helps.

-- Mark Borgerding

```
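Mark's record-length trade can be made concrete at Maria's 2000 Hz rate. A toy NumPy sketch (the two tone frequencies are invented for illustration): two components 0.5 Hz apart only separate once the record is long enough that the bin spacing 1/T is finer than their separation.

```python
import numpy as np

fs = 2000                      # Maria's sample rate, Hz
f1, f2 = 10.0, 10.5            # two tones only 0.5 Hz apart (made up)

def windowed_spectrum(seconds):
    n = int(fs * seconds)
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    return np.abs(np.fft.rfft(x * np.hanning(n)))

# With a 1 s record the bin spacing is 1 Hz -- coarser than the 0.5 Hz
# separation -- so the tones merge into one lump.  With a 10 s record the
# spacing is 0.1 Hz, and two distinct peaks appear with a dip between them.
X = windowed_spectrum(10)
b1, b2 = round(f1 * 10), round(f2 * 10)    # tone bins: 100 and 105
dip = X[(b1 + b2) // 2]                    # bin 102, between the peaks
print(X[b1] / dip, X[b2] / dip)            # both ratios come out large
```
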
```Mark Borgerding wrote:
> Maria wrote:
>
>> Thanks for the info.  Also curious, there are several windowing
>> options in Matlab, would you recommend one?
>
>
> Depends on whether you want sideband rejection or narrow band.  Both are
> good things to have, but you can have everything :)
>
errr .... you *can't* have everything.  But you already knew that.

```
```On 31 Mar 2004 15:26:06 -0800, marija616@hotmail.com (Maria) wrote:

>[...]

Hi Maria,

No, those peaks at low frequencies are *not* an artifact
of the FFT algorithm.  They indicate that your
time-domain sequence is riding on a DC bias (that is,
it means your FFT-ed sequence's average is not zero).

You should subtract the sequence's average (a single
value) from each sample in the sequence giving you a
new sequence.  Now take the FFT of that new sequence.

As for which window to use, that's a tougher
question.  Windowing is generally used to reduce
the spectral leakage that causes a strong (high
amplitude) signal's spectral magnitude to cover over
(swamp out) the spectral components of nearby
weak signals.  If a weak signal (in which you're
interested) is 3-4 FFT frequency bins away from a strong
signal's center frequency, use the Hamming window.

If the weak signal (in which you're
interested) is more than, say, 6 FFT frequency bins
away from a strong signal's center frequency, use the
Hanning window.

Ah, there's much to learn about window functions,
particularly the Kaiser and Chebyshev windows,
which give you some control over their
spectral leakage characteristics.

Regards,
[-Rick-]

```
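Rick's bin-distance rule of thumb comes from the windows' sidelobe structure, which is easy to inspect numerically. A NumPy sketch (window length and padding are arbitrary choices): the Hamming window's worst sidelobe sits near -43 dB while the Hanning window's sits near -31 dB, but Hanning sidelobes fall off much faster away from the main lobe, which is why it wins for more distant interferers.

```python
import numpy as np

n = 64                # window length (arbitrary)
pad = 8192            # zero-padding so the window's shape is finely sampled

def peak_sidelobe_db(w):
    W = np.abs(np.fft.rfft(w, pad))
    W_db = 20 * np.log10(np.maximum(W, 1e-12) / W.max())
    # skip the main lobe: both windows' main lobes end ~2 bins from DC
    start = int(2.2 * pad / n)
    return W_db[start:].max()

print(peak_sidelobe_db(np.hamming(n)))   # roughly -43 dB
print(peak_sidelobe_db(np.hanning(n)))   # roughly -31 dB
```
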
```"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:<rc2dnTzW1p965_bdRVn-ug@centurytel.net>...
> "Mark Borgerding" <mark@borgerding.net> wrote in message
> news:7KJac.100982\$8G2.69531@fe3.columbus.rr.com...
> > Maria wrote:
> > > Hi,
> > >
> > > I am using Matlab to do ffts for some data collected in the lab of
> > > electrophysiology recordings.  When I plot the power spectrums, I
get
> > > huge peaks from 0-1 Hz.  Are these peaks representative of the data
or
> > > artifacts carried over/created from the fft?
> > >
> > > Any help would be appreciated.
> > >
> > > Thanks.
> >
> > Probably the latter.
> >
> > A common technique is to remove the DC component from the buffer before
> > the FFT.
> >
> > e.g.
> >
> > fx = fft( x - mean(x) );
> >
>
> If you don't remove the mean then the result around 0Hz isn't an artifact
> it's representative of the data.
>
> Somewhat related, if you don't taper the ends of the temporal data sequence
> then there will be a strong periodic component at the lowest frequency but
> not at 0Hz.
> Example: the temporal sample is highly positive at the beginning and highly
> negative at the end (after removing the mean).  The Discrete Fourier
> Transform / FFT treats the single sequence as if it's periodic.  Let's say
> the temporal epoch is 1 second and the sample rate is 512Hz.  So, there are
> 512 samples in that 1 second and the assumption is that it repeats.  The
> repetition is at 1Hz which is the fundamental frequency of the Fourier
> Series represented in the DFT.
> So, a large 1Hz component will emerge from the sharp discontinuity that
> repeats at 1Hz.
>
> Tapering the ends of the temporal sequence (windowing) will remove the
> sharpness of the discontinuity and, thus, remove the artificial 1Hz
> component that's introduced by it.
>
> Fred

Hi,

When I window the data (hamming window), I lose amplitude on the power
spectrum.  Can I do anything for this?  Also, is it better to take
short FFTs (i.e. 1s intervals for 100s and then average them?) as
opposed to taking one 100s segment?

Any feedback/help is much appreciated.

Thanks.
```
```"Maria" <marija616@hotmail.com> wrote in message
> [...]

Maria,

Yes, when you apply the window you are reducing the amplitudes in the time
domain - so there will be corresponding reduction in the frequency domain.
Assuming you want to do something about it, I guess you could scale the time
samples at the same time you window, so that the integral of the window
comes out to "N", like this:

A rectangular window is the one you don't need to apply at all, it's what
you get when to take a finite-length sample of data.  All of the values in
the window function are 1.0.  The integral of these, their sum, is N where N
is the number of samples taken over the interval.

So, if you use a Hamming window or any other, you can add up the window
weights (let's call this sum "W") and divide by N to see the scaling.
Usually W is less than N, so the scaling is less than 1.0.
If your objective is to hold the integral of the window to be N, then you
can multiply all the weights or coefficients of the window by N/W.  This
amounts to amplifying the time record by N/W and then windowing it - or
vice-versa.

You asked about short vs. long records already.  Most of the answers said:
"it depends".  You may need to ponder
those responses more carefully.
In the mean time, let's take a few cases:

Let's assume that you already know that taking 1 second chunks (just as an
example) will limit the resolution to 1Hz - and that this is acceptable to
you.  Saying this eliminates the issue of resolution entirely - so we can
set that aside to keep the discussion simpler.

Let's also assume that you know that taking a 100 second chunk will provide
0.01Hz resolution - but you don't care whether it does or doesn't, because
1Hz resolution is already good enough for you.

Now, there are a couple of other considerations:
- is the signal of interest stationary?  That is, does it vary or not vary
in its spectrum?  For example, an engine running at constant rpm is pretty
stationary - it doesn't vary all that much.
- is there much noise in the signal?

If the signal is stationary, then long samples make sense.  If it isn't,
then long samples may not make sense - nor would averaging spectra of short
samples over longer times make as much sense either.
This comes before the question of noise.
SO: you need to decide how long you would either sample or average based on
how stable the signal is.
When you decide this, then you have an important parameter to work with.

If there is noise added to the signal then:

- the long record will give you really fine spectral detail of signal plus
noise.  If the noise is white, then there will be a grassy floor in the
spectrum.  If the signal is reasonably strong relative to the noise, then it
will rise above that floor.
Most importantly, the long record will split white noise over more samples
and the signal to noise of the spectrum will improve (well, unless the
signal itself is white noise, and that's another matter altogether!)

- the short record / averaged will give you less spectral detail of signal
plus noise and the noise floor will be higher relative to the long record -
even with averaging because it starts out higher.  I'm assuming that you
have to average the magnitudes here - or take the sqrt of the sum of the
squares .... something like that.  What averaging will do is to throw out
the occasional spectral spike that's caused by noise - and that may be
important to you.

So here are the steps for you to take:

1) Decide if the signal spectrum changes and, if so, decide about how often
it changes.

2) Pick a temporal epoch that won't be too perturbed by the changes.  That
is, an epoch that is maybe 1/2 the time it takes the signal to change "very
much".
This can be your analysis window length.  By this approach, if the signal is
very stable, the epoch can be long.

3) Examine your desire for spectral resolution.  If it's coarser than the
reciprocal of the epoch length, then you're OK.  If not, then you may have
to increase the length of the epoch - which counteracts the decision you
made in step (2).

See, it's a trade between objectives and signal characteristics.....

4) Examine the signal to noise situation.  If the noise level is so high
that better resolution is necessary to pull the signal spectrum out of the
noise spectrum then you have to increase the length of the epoch.  Same
issue as step (2) but from a different motivation.

IF the resolution is already good enough then you might average spectra in
order to reduce the effect of occasional noise peaks.

Summary:

Signal spectrum is (below):

Very stable         <>  Moderately stable  <>  Quickly changing

Long epochs         <>  Moderately long    <>  Short epochs

Very narrow (high)  <>  Moderately narrow  <>  Broad (low)

Spectral resolution needed is (above):

Accordingly, your decisions about numbers will be easy if both the spectral
character and the resolution "match" to yield either long or short
epochs.

It remains easy if the signal is very stable and broad resolution is OK.

It's toughest when the signal isn't stable and there's a need for high
resolution.  That's because they just don't match... it's physics.

******************

Regarding averaging of spectra from a series of temporal epochs:

IF the spectral resolution can be met
AND
Signal to noise is: (below)

                               Low SNR (high noise)  <>  Med SNR  <>  High SNR (low noise)

Long epochs (e.g. one only)    Can't average      Can't average    Can't / don't need?
Medium epochs (e.g. a few)     Can / should?      Can / should?    Could
Short epochs (e.g. many)       Can / should?      Can / should?    Could

Recall that the spectral signal to noise ratio obtained with a long epoch is
better than with a shorter one.  However, there might still be the
occasional outlying spectral peak from the noise.  If this is a concern,
given adequate resolution, then averaging a number of spectra might be good
to do.
"Can I see the signal at all?" vs. "Can I stand an occasional
erroneous
peak?"
This gets right to the heart of the age old trade between false alert vs.
false rest.  If the sensitivity is set high, I will see everything and get
some bad positive readings.  If the sensitivity to set low, I will miss
The averaging is a compromise between high resolution with some errors and
low resolution with fewer errors.  It tends to reduce the errors IF its
resolution is acceptable otherwise.  That is as far as arm-waving will get.
Thereafter, you may need some math if that's what's important to you.

There are whole books - and certainly whole chapters - written on spectral
estimation.  So, after my simple arm-waving discussion that's where you
might look next.

Fred

```
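The averaging trade Fred tabulates can be seen in a small experiment. A NumPy sketch using pure white noise (parameters invented): averaging the power spectra of many short epochs, Welch-style, doesn't lower the level of the "grassy floor", but it makes the floor far less erratic, which is exactly what suppresses the "occasional outlying spectral peak".

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2000
x = rng.standard_normal(fs * 100)          # 100 s of white noise ("grass")

# One long FFT: very fine resolution, but an extremely erratic floor.
P_long = np.abs(np.fft.rfft(x)) ** 2

# Welch-style: average the power spectra of 100 one-second epochs.
P_avg = (np.abs(np.fft.rfft(x.reshape(100, fs), axis=1)) ** 2).mean(axis=0)

# The relative spread of the floor drops roughly like 1/sqrt(#averages).
spread_long = P_long[1:-1].std() / P_long[1:-1].mean()
spread_avg = P_avg[1:-1].std() / P_avg[1:-1].mean()
print(spread_long, spread_avg)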
```"Toby Newman" <google@asktoby.com> wrote in message
> > Maria wrote:
> >> [...]
>
> Perhaps they are DC offsets?

Toby,

At the first spectral sample it could be....  But not at any other spectral
samples unless there's a window that will spread the spectrum of the dc
component.

Fred

```
```> Maria wrote:
>> [...]

Perhaps they are DC offsets?

--
Toby
BSOD VST & ME
```
```Maria wrote:
> [...]

Based on various posts in this thread I suggest a literature search for

Dr. Edgar Gasteiger
Veterinary Physiology
Cornell University
circa late 1960's

I worked for him and Dr. Nangeroni as a student electronics tech.
The problem description brings back vague memories but too few useful
details.

I've initiated personal contacts. If anything comes up, I will post.
```
```Hey Rick,

Thanks for clarifying where she's coming from (processing data she didn't
collect herself).

r.lyons@_BOGUS_ieee.org (Rick Lyons) writes:
> [...]
>     "so all I know is that it
>      was recorded, filtered at 5kHz and
>      digitized at 10-20 kHz using a Digidata
>      1200 A/D board. The data has been
>      downsampled to 2000Hz."
>
> That sounds like lowpass analog filtering
> with a cutoff freq of 5 kHz.  I wonder how sharp
> are the filter's skirts.  Then the "digitized at
> 10-20 kHz" phrase was also kinda "scary" because the
> sample rate needs to be well-known and fixed
> for FFT results to have meaning in terms of
> spectral components' frequencies.

I agree - we don't know what her sampled data situation
REALLY is.

> I didn't
> know what her phrase "downsampled to 2000Hz" meant.
> Wonder if that meant some sort of frequency
> translation, or could it have meant, "decimated
> to a sample rate of 2000 Hz"?

My first reaction is just that it was decimated - I
don't know if she meant the sample rate came down
to 2000 Hz or the data bandwidth.

> Her problem sure sounds interesting and one that
> can be solved.  I'll bet this is a university
> research project.  If she's in North Carolina
> Randy, you could pay her a visit.  If she's in Northern
> California, I could stop by and look at her test
> hardware.  Sure sounds like an interesting project
> to me.

You could probably get 10 usenet posts ahead if you
could just talk to her interactively (i.e., on the phone).

She could be at UNC/Chapel Hill since they have a biomedical
engineering program and lots of medical stuff going on there
(not to mention Duke). Of course she could be in Burkina Faso, too.
--
%  Randy Yates                  % "Though you ride on the wheels of tomorrow,
%% Fuquay-Varina, NC            %  you still wander the fields of your
%%% 919-577-9882                %  sorrow."
%%%% <yates@ieee.org>           % '21st Century Man', *Time*, ELO
```
```On Mon, 05 Apr 2004 03:14:22 GMT, Randy Yates <yates@ieee.org> wrote:

>Rick, I hope you don't mind my jumping in here.
>
Hi, Randy,

no, your ideas are sure welcome.

>marija616@hotmail.com (Maria) writes:
>> [...]
>> However, my signal has a very low SNR (signal to noise ratio),
>
>Is that before or after the A/D conversion? If after, then you
>may not have the converter's input range matched properly to the
>input signal range. That is, you may be degrading the SNR
>unecessarily due to the quantization noise of the converter.
>
>Is there a programmable gain stage in analog front-end of your
>data acquisition system? If so, you might try increasing the
>gain as long as you don't digitally clip the input.
>
>Sorry if this is rehashing something already covered.

No, I don't think it was covered.
It sounds like Maria's signal jumps all over
the place in terms of its analog rms
voltage.  I hope she's driving the A/D
converter sufficiently hard.  She said:

"I'm analyzing the data, I did not perform
the experiments, ...

That's a little "spooky" to me because so many
steps must be performed properly to maximize the
"usefulness" of the signal before she gets it.
She also said:

"so all I know is that it
was recorded, filtered at 5kHz and
digitized at 10-20 kHz using a Digidata
1200 A/D board. The data has been
downsampled to 2000Hz."

That sounds like lowpass analog filtering
with a cutoff freq of 5 kHz.  I wonder how sharp
are the filter's skirts.  Then the "digitized at
10-20 kHz" phrase was also kinda "scary" because the
sample rate needs to be well-known and fixed
for FFT results to have meaning in terms of
spectral components' frequencies.  I didn't
know what her phrase "downsampled to 2000Hz" meant.
Wonder if that meant some sort of frequency
translation, or could it have meant, "decimated
to a sample rate of 2000 Hz"?

Her problem sure sounds interesting and one that
can be solved.  I'll bet this is a university
research project.  If she's in North Carolina
Randy, you could pay her a visit.  If she's in Northern
California, I could stop by and look at her test
hardware.  Sure sounds like an interesting project
to me.

[-Rick-]

```
```"Maria" <marija616@hotmail.com> wrote in message
> Hi,
>
> My recordings are coming from rat brain slices.  During recording, the
> aim is to maintain ('clamp') the cell membrane voltage at -60mV (typical
> cell membrane voltage).  However, due to spontaneous activity in the
> cell, this membrane voltage will fluctuate.  So, I guess when I say
> 'clamped', I mean the voltage hovers around that typical value
> (-60mV).  So, in order to adjust my signal as was suggested, the
> program that I'm using to record allows me to 'adjust' the entire
> trace or recording by subtracting the mean of the trace.  That's what
> I've done, so that now my recordings fluctuate about 0mV.  Is this
> right?

Yes, that would be the right thing to do if that's indeed what it's doing.
One way to know for sure is to look at the zero frequency result in the
FFT - it has to be zero if the record has zero mean.

I thought you were using Matlab?  So, "the program I'm using"????
I would rather trust Matlab to remove the mean before you FFT.
In fact, what could that hurt just to make sure there's no residual dc
component coming out of that other program?  I would do it - i.e. remove the
mean (again) using Matlab before doing the FFT with Matlab.

*****************next subject

If you want to look at data from 1 to 20Hz, then the amount of data you have
will surely yield good enough resolution to split this region up into 100
cells (5 seconds of data) or more (10-100 seconds of data).

How many data points do you need to get in the spectrum between 0 and 20Hz?

Now, there will be a transient at the first sample above zero in the
spectrum if the ends of the data values are very dissimilar.  So, if you
taper the ends of the time data this will be alleviated.
Most windowing is just a more drastic type of tapering of the ends.
With all the data you have, the energy in this transient is probably small
but a bit of tapering or even full windowing won't hurt anything.
I would definitely do this.  See below about selecting the length of time
records.  Very important!

*************************next subject

I've brought this up before but I'm not sure I got it across:

If you have 100 seconds of data and you analyze all 100 seconds at once
using an FFT, then you won't be able to observe any changes in the data that
occur during that 100 second period.  You can only observe the existence of
periodic components.
So, if there's a periodic component that lasts for 4 seconds and then
another, at a different frequency, that lasts for 7 seconds, these will be
obscured in a 100 second record / FFT unless they have high SNR.  In that
case it would be better to analyze 10 second epochs separately.
What is the nature of the variation in the signal of interest?
Are you only looking for periodic components that last for 100 seconds or
nearly so?
This is a critical element in the design of the analytical approach.

I hope this helps Maria!

Fred

```
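Fred's point about transient components being obscured by one long FFT is easy to demonstrate. A NumPy sketch (the frequencies and durations are invented to mirror his 4 s / 7 s example):

```python
import numpy as np

fs = 2000
n = fs * 20                     # a 20 s record
t = np.arange(n) / fs
x = np.zeros(n)
# a 5 Hz component for the first 4 s, then a 12 Hz one for the next 7 s
x[: 4 * fs] = np.sin(2 * np.pi * 5 * t[: 4 * fs])
x[4 * fs : 11 * fs] = np.sin(2 * np.pi * 12 * t[4 * fs : 11 * fs])

# FFT each 1 s epoch separately: with 1 Hz bins the dominant bin number
# equals the dominant frequency in Hz, and it tracks the change over time.
# One FFT of the whole record would show both peaks at once, with no hint
# of when each component was present.
epochs = x.reshape(20, fs)
mags = np.abs(np.fft.rfft(epochs, axis=1))
dominant = np.argmax(mags[:, 1:], axis=1) + 1
print(dominant[:11])            # 5 for the first four epochs, then 12
```
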
```Rick, I hope you don't mind my jumping in here.

marija616@hotmail.com (Maria) writes:
> [...]
> However, my signal has a very low SNR (signal to noise ratio),

Is that before or after the A/D conversion? If after, then you
may not have the converter's input range matched properly to the
input signal range. That is, you may be degrading the SNR
unecessarily due to the quantization noise of the converter.

Is there a programmable gain stage in analog front-end of your
data acquisition system? If so, you might try increasing the
gain as long as you don't digitally clip the input.

Sorry if this is rehashing something already covered.
--
%  Randy Yates                  % "Watching all the days go by...
%% Fuquay-Varina, NC            %  Who are you and who am I?"
%%% 919-577-9882                % 'Mission (A World Record)',
%%%% <yates@ieee.org>           % *A New World Record*, ELO
```
```Hi,

I'm recording the spontaneous activity in rat brain slices, in a
control setting, followed by perfusion with certain drugs.  I'm hoping
to see how the drugs affect the activity of the cell (i.e. cell
membrane voltage).  Without exciting the cell (i.e. cell at rest), the
dominant frequencies should lie around 0-20 Hz.  When I say highly
fluctuating, I guess I mean that the cell itself possess a lot of
intrinsic noise, so it's not very stable at all, the membrane voltage
is constantly falling and rising.

My goal is to determine the effect of the drugs on cellular activity
-> a decrease or increase, so I'm looking at the dominant frequencies
before and after the drugs are applied.  However, my signal has a very
low SNR (signal to noise ratio), so it's difficult to determine what
part is the signal.

I'm analyzing the data, I did not perform the experiments, so all I
know is that it was recorded, filtered at 5kHz and digitized at 10-20
kHz using a Digidata 1200 A/D board. The data has been downsampled to
2000Hz.

Maria

r.lyons@_BOGUS_ieee.org (Rick Lyons) wrote in message
news:<40703389.336331515@news.sf.sbcglobal.net>...
> On 3 Apr 2004 10:16:43 -0800, marija616@hotmail.com (Maria) wrote:
>
> >ok, so here's what's happening.
>
> UH oh!  here we go.
>
> >My data, is at a baseline of ~ -60mV,
> >which I've subtracted to get rid of the DC component.
>
> Was that subtraction before or after
> A-D conversion?
>
> >I'm looking at
> >spontaneous activity in the slices and my data is highly fluctuating.
>
> Please know that I don't know what "slices"
> means.  By "highly fluctuating" I'm guessing
> you mean the signal contains much high frequency
> energy.
>
> >Also my data may deviate from the baseline at most 10mV (usually 1-5).
>
> What number format are you using in the
> digital domain?  How many bits is your
> A-D converter?  Over what bit range do
> your digital time samples fluctuate?
> Can you tell us the "signal to quantization
> noise" ratio of your time samples?
>
> > So my amplitude is very small and if I were to window, I don't want
> >to lose the amplitude.
>
> Amplitude loss won't be much of a problem with
> windowing, but the broadening (widening) of narrow
> spectral components may be a problem.
>
> >Also, the data is very noisy.  So from what
> >I've read from all the great suggestions, I'm thinking of taking
> >larger time intervals (the recordings go from 30-100s) and windowing.
> >Any other suggestions?
>
> Yes.  Try different-length FFTs and try using
> no windowing and then try a hanning window.
> Compare your results.  What effect does the small/large FFT size have,
> what effect does windowing have?
>
> >Also, i'm interested in the 0 - 20 Hz range,
>
> Ah yes, unless I missed it, what is your sampling rate?
> Are you satisfying the Nyquist sampling
> criterion?
>
> >and even after getting
> >rid of the DC bias and windowing, I get huge spikes in my spectrum
> >from 0-1 Hz.
>
> My next questions are: "Are you sure you're reducing
> the DC component as much as possible?
> What's the average value of the sequences of which
> you're taking the FFT?"
>
> If you really are minimizing the DC component as much
> as possible, my guess is that the 0-1 Hz spikes are
> representative of your data, not an FFT artifact.
>
> >Is it just noise I can't get rid of; is there anything
> >else I can do?  I'm getting the 'grassy' white noise look, and not
> >a lot of signal coming through.
>
> Sounds like you're working with a signal
> of very low signal-to-noise ratio.
>
> Are you merely trying to detect the presence of
> low-level spectral components?  Is that your
> goal?
>
> Lots of questions here Maria,
>
> Regards,
> [-Rick-]
```
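[Editor's note: Rick's suggestion above — try different FFT lengths, with and without a Hann window, and compare — can be sketched as follows. This is a Python/NumPy illustration of the Matlab workflow discussed in the thread (`fft(x - mean(x))`, then `hanning`); the sample rate matches the 2000 Hz mentioned above, but the 7 Hz tone, noise level, and record length are invented for the demo.]

```python
import numpy as np

fs = 2000                        # Hz, per the downsampled data in the thread
t = np.arange(0, 4, 1 / fs)      # 4 s of synthetic "recording"
rng = np.random.default_rng(0)

# toy signal: -60 mV baseline, a weak 7 Hz component, plus broadband noise
x = -60 + 2.0 * np.sin(2 * np.pi * 7 * t) + 0.5 * rng.standard_normal(t.size)

x = x - np.mean(x)               # remove the DC component first, as Mark suggested

# compare no window (rectangular) against a Hann window
for window in (np.ones(x.size), np.hanning(x.size)):
    X = np.fft.rfft(x * window)
    psd = np.abs(X) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    peak = freqs[np.argmax(psd[1:]) + 1]   # skip the 0 Hz bin
    print(f"DC bin power {psd[0]:.3e}, strongest component at {peak:.2f} Hz")
```

With the mean removed, the rectangular-window DC bin is essentially zero and the peak sits at the tone; the Hann window trades a slightly wider peak for much lower leakage into neighboring bins.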
```Hi,

My recordings are coming from rat brain slices.  During recording, the
aim is to maintain, or 'clamp', the cell membrane voltage at -60mV (typical
cell membrane voltage).  However, due to spontaneous activity in the
cell, this membrane voltage will fluctuate.  So, I guess when I say
'baseline' I mean the clamped membrane voltage
(-60mV).  So, in order to adjust my signal as was suggested, the
program that I'm using to record allows me to 'adjust' the entire
trace or recording by subtracting the mean of the trace.  That's what
I've done, so that now my recordings fluctuate about 0mV.  Is this
right?

Maria

"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:<uMGdnawd37Ytpu3dRVn-vg@centurytel.net>...
> "Maria" <marija616@hotmail.com> wrote in message
> > ok, so here's what's happening.  My data, is at a baseline of ~ -60mV,
> > which I've subtracted to get rid of the DC component.  I'm looking at
> > spontaneous activity in the slices and my data is highly fluctuating.
> > Also my data may deviate from the baseline at most 10mV (usually 1-5).
> >  So my amplitude is very small and if I were to window, I don't want
> > to lose the amplitude. Also, the data is very noisy.  So from what
> > I've read from all the great suggestions, I'm thinking of taking
> > larger time intervals (the recordings go from 30-100s) and windowing.
> > Any other suggestions?
> >
> > Also, i'm interested in the 0 - 20 Hz range, and even after getting
> > rid of the DC bias and windowing, I get huge spikes in my spectrum
> > from 0-1 Hz.  Is it just noise I can't get rid of; is there anything
> > else I can do?  I'm getting the 'grassy' white noise look, and not
> > a lot of signal coming through.
>
> Maria,
>
> You say the data is at a baseline of -60mV, which you have subtracted to get
> rid of the dc component.......
>
> Could there be a misunderstanding here?
> I might call the "baseline" the more or less lowest level one sees in
> noisy data - in the time domain.  As on a chart recorder or oscilloscope.
> If that's what you mean, then that's *not* the dc component.
>
> To remove the dc component, you need to remove the mean of the whole signal.
> This means
> - compute the mean of the samples.  The sum of the samples divided by N, the
> number of samples.
> - subtract that mean value from every sample of the signal.
> Now you have zero mean and no dc component.
>
> So, can you confirm that this is what you've done?  Then we'll get onto the
> next things you might do to get better results.
>
> Fred
```
```"Maria" <marija616@hotmail.com> wrote in message
> ok, so here's what's happening.  My data, is at a baseline of ~ -60mV,
> which I've subtracted to get rid of the DC component.  I'm looking at
> spontaneous activity in the slices and my data is highly fluctuating.
> Also my data may deviate from the baseline at most 10mV (usually 1-5).
>  So my amplitude is very small and if I were to window, I don't want
> to lose the amplitude. Also, the data is very noisy.  So from what
> I've read from all the great suggestions, I'm thinking of taking
> larger time intervals (the recordings go from 30-100s) and windowing.
> Any other suggestions?
>
> Also, i'm interested in the 0 - 20 Hz range, and even after getting
> rid of the DC bias and windowing, I get huge spikes in my spectrum
> from 0-1 Hz.  Is it just noise I can't get rid of; is there anything
> else I can do?  I'm getting the 'grassy' white noise look, and not
> a lot of signal coming through.

Maria,

You say the data is at a baseline of -60mV, which you have subtracted to get
rid of the dc component.......

Could there be a misunderstanding here?
I might call the "baseline" the more or less lowest level one sees in
noisy data - in the time domain.  As on a chart recorder or oscilloscope.
If that's what you mean, then that's *not* the dc component.

To remove the dc component, you need to remove the mean of the whole signal.
This means
- compute the mean of the samples.  The sum of the samples divided by N, the
number of samples.
- subtract that mean value from every sample of the signal.
Now you have zero mean and no dc component.

So, can you confirm that this is what you've done?  Then we'll get onto the
next things you might do to get better results.

Fred

```
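[Editor's note: since the recordings run 30-100 s and the band of interest is only 0-20 Hz, one standard way to tame the "grassy" noise floor is to average the spectra of many overlapping, windowed segments (Welch's method) rather than take one long FFT — the averaging reduces the variance of the noise estimate while the weak low-frequency components remain. A Python/SciPy sketch; the 5 Hz "activity", noise level, and segment length are all invented for the demo.]

```python
import numpy as np
from scipy.signal import welch

fs = 2000                        # Hz, per the downsampled data in the thread
t = np.arange(0, 30, 1 / fs)     # 30 s synthetic recording
rng = np.random.default_rng(1)

# weak 5 Hz "spontaneous activity" buried in noise, riding on a -60 mV baseline
x = -60 + 0.5 * np.sin(2 * np.pi * 5 * t) + 2.0 * rng.standard_normal(t.size)

# 4 s Hann-windowed segments with 50% overlap: 0.25 Hz resolution, ~14 averages
f, pxx = welch(x - np.mean(x), fs=fs, window='hann',
               nperseg=4 * fs, noverlap=2 * fs)

band = f <= 20                   # keep only the 0-20 Hz band of interest
peak_hz = f[band][np.argmax(pxx[band][1:]) + 1]   # skip the 0 Hz bin
print(f"strongest component near {peak_hz:.2f} Hz")
```

Longer segments sharpen the frequency resolution but leave fewer segments to average; for a 30 s record, 4 s segments are a reasonable middle ground for a 0-20 Hz band.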