DSPRelated.com
Forums

Newbie, heart data analysis - detecting peaks?

Started by Datura January 11, 2004
Hello everyone,

Purely for fun, I am working on a project where my ultimate goal is to
calculate heart-rate variability (HRV) based on a heart-pulse graph (looks
like an ECG.)

For now, HRV for my project is going to be limited to finding the standard
deviation of the intervals between successive heartbeats, which correspond
to the peak-to-peak intervals.
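Concretely, the statistic I have in mind is just this (beat times invented for the example):

```python
from statistics import stdev

# Hypothetical R-peak times in seconds (made up for illustration),
# as they might come out of a peak detector.
peak_times_s = [0.80, 1.62, 2.41, 3.25, 4.02]

# Normal-to-Normal intervals: the gaps between successive peaks.
nn = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]

# SDNN: the standard deviation of those intervals (here, in seconds).
sdnn = stdev(nn)
print(sdnn)
```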

I have little or no knowledge of DSP techniques.  I'm a computer programmer,
and have had some calculus in college, so I should be able to learn enough
about DSP to accomplish my goal, but I need some outside help to point me in
the right direction.

I follow my projects wherever they lead me, and this one has brought me into
the world of DSP.  First time for me.  I'm finding it really interesting, if
not overwhelming!

So far, I've discovered there are techniques for calculating HRV based on
both time-domain and frequency-domain graphs of the heart data.

I'd like to work on techniques, just to broaden my knowledge. But here is
where I'm having trouble.

1) The time-domain techniques involve taking the Normal-to-Normal intervals
between peaks and performing statistical functions on them.  The problem I'm
having here, is how to correctly find the "peaks."   Each heart beat has one
large spike, I believe in the R-phase (heart beats can be described as QRS
complexes), but there are other much smaller spikes which may contain peaks,
but are not of the same magnitude.  Whichever technique I use must
compensate for those smaller peaks, and ignore them.

The other issue is that because of my sampling device, the scale of my data
may not always be the same, so any type of "threshold" used to determine if
a peak is of high enough magnitude must be adaptive in some way.

So, for this problem of accurately detecting heart-beat peaks: I could
probably cobble together some hokey algorithm that did a fairly good job
under most circumstances, but I'm interested in learning about more accurate
and/or sophisticated ways to do this. After all, it's supposed to be a
learning experience for me.  Hope someone can help.

2) Frequency-Domain analysis of heart-data
My intention here, for now, is to use someone else's FFT API in my own
software, and simply pass in the data to be transformed.  I'm dimly aware
that if I don't have evenly spaced samples, I'd better compensate before I
run an FFT on the data.  Otherwise, I'm looking at some kind of inaccuracy.

I actually have a lot of questions on this topic, but perhaps I should start
with a few very basic, hopefully easy-to-understand ones..

Let's say my sampled heart-data contains real numbers over time, every
25-30 ms in this case.  Let's say, when plotted, my heart-data graph has a Y
scale of -5 to +5.  Most beats have their major peak at about 2.8xx - 3.2xx
in the scale that the device samples with.

Also, I'm assuming I can somehow use the FFT to find the location of the
peaks in the data, or if I can't do that directly, to calculate how many
peaks of a certain magnitude, at least. (Obviously, not really sure what I
can do here.)  Remember, the ultimate goal is that I want to take the
standard deviation of the length of time between each heart beat.

When I manage to successfully transform the time-domain data into
frequency-domain, how do I determine at what frequency the peaks in the
time-domain are represented at in the new frequency-domain graph? Is it
based on the Y values in the time-domain graph?

Well, I have more ground to cover, but I think I'll wait on a response to
that, just to see if I'm anywhere close to an understanding of what is going
on.

Thanks everyone.
Datura

Datura:

You have encountered the normal problem with ECG signals: every electrode
has a different amplitude response, and that response varies over time. Your
best bet is to first "AGC" the ECG data prior to doing any time- or
frequency-domain analysis on it. In case you don't know, AGC means
"Automatic Gain Control", which means to follow the ECG signal over time and
apply a variable gain to it so that the R peaks reach about the same
amplitude on each beat. The AGC can be done digitally, prior to your other
processing.

Also, it goes without saying (OK, I'm saying it) that you should BPF the
electrode signal prior to any analysis. This is usually a 5-30 Hz filter, or
thereabouts. The more aggressive the better: you need to try to get rid of
everything outside the normal ECG band, as there is a lot of it, especially
when the patient is being operated on.
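If it helps, here's a crude sketch of the sort of thing I mean. This is not a proper Butterworth design, just a one-pole high-pass cascaded with a one-pole low-pass, and the cutoffs are the 5 and 30 Hz mentioned above; a real implementation would use a filter-design routine:

```python
import math

def one_pole_coeff(fc_hz, fs_hz):
    # Feedback coefficient for a one-pole smoother with cutoff fc_hz.
    return math.exp(-2.0 * math.pi * fc_hz / fs_hz)

def crude_bandpass(x, fs_hz, lo_hz=5.0, hi_hz=30.0):
    """One-pole high-pass at lo_hz followed by a one-pole low-pass at hi_hz.

    A rough stand-in for the 5-30 Hz ECG band-pass described above.
    """
    a_lo = one_pole_coeff(lo_hz, fs_hz)
    a_hi = one_pole_coeff(hi_hz, fs_hz)
    drift = 0.0      # tracks the sub-lo_hz content (baseline wander)
    smooth = 0.0     # low-pass state at hi_hz
    out = []
    for s in x:
        drift = a_lo * drift + (1.0 - a_lo) * s
        hp = s - drift                             # remove slow drift
        smooth = a_hi * smooth + (1.0 - a_hi) * hp # then low-pass
        out.append(smooth)
    return out

# Demo: a 15 Hz "beat" riding on a DC offset, sampled at 250 Hz.
sig = [0.5 + math.sin(2 * math.pi * 15.0 * n / 250.0) for n in range(500)]
y = crude_bandpass(sig, 250.0)
print(round(min(y[250:]), 2), round(max(y[250:]), 2))
```

After the filter settles, the offset is gone and the 15 Hz component comes through at roughly its original amplitude.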

As for the other processing, the approach I have used is, in the time
domain and after the AGC, to look for a very sharp rise in amplitude. This
works fairly well under normal circumstances, but is tricked by a "pacer",
which creates a huge electrical spike. However, my application was in
cardiac assist, where we have to identify each QRS complex instantaneously.
I think that if your goal is to get an average heart rate over time, you
would be better off just AGC'ing and then FFT'ing the ECG. The centroid of
the fundamental frequency (within limits--don't look near DC, since
electrodes and patient movement create ample very-low-frequency (less than
2 Hz) noise) will give you the average heart rate over the data that you
transformed.
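As a sketch of that frequency-domain route: pick the strongest spectral line in a plausible heart-rate band. The band limits here are my own illustrative guesses, and the one-bin-at-a-time DFT is only for clarity; a real implementation would use an FFT library:

```python
import cmath
import math

def dominant_freq_hz(x, fs_hz, f_lo=0.5, f_hi=3.0):
    """Strongest DFT line between f_lo and f_hi, as an average-rate estimate."""
    n = len(x)
    mean = sum(x) / n                     # remove DC so bin 0 can't win
    best_k, best_mag = 0, -1.0
    for k in range(1, n // 2):
        f = k * fs_hz / n
        if not (f_lo <= f <= f_hi):
            continue
        # Direct DFT of one bin: O(n) per bin, fine for a demo.
        acc = sum((x[t] - mean) * cmath.exp(-2j * math.pi * k * t / n)
                  for t in range(n))
        if abs(acc) > best_mag:
            best_mag, best_k = abs(acc), k
    return best_k * fs_hz / n

# 8 s of a 1.25 Hz sine sampled at 40 Hz, standing in for AGC'd ECG.
fs = 40.0
x = [math.sin(2 * math.pi * 1.25 * t / fs) for t in range(320)]
print(60.0 * dominant_freq_hz(x, fs))   # -> 75.0 (beats per minute)
```

Note this gives only the average rate over the transformed block, not the beat-to-beat variability.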

Hope this helps,
Jim Gort

"Datura" <nospam@stopspam.com> wrote in message
news:qHfMb.5269489$Id.844968@news.easynews.com...
> [original post quoted in full - snipped]
First, you're not really interested in frequency; you're interested in the
detail of time differences from one heartbeat to the next. "Instantaneous
frequency" perhaps, but not average frequency. If you look at frequency, the
time differences will likely be averaged out, unless you can look at the
variance of frequency. Doing that will require that you take a lot of data
and comprehend the difference between spreading due to the length of the
data and spreading due to variations in the data. So, since it sounds like
you have a good signal-to-noise ratio, I'd stay in the time domain.

It appears you have a scaling problem and have to process it out, rather
than receiving scale information directly.

It appears that there are regular high peaks that can be detected.

It seems that some amount of lost data is probably not a big deal, unless
the amount of data taken is tiny.

Let's deal with scaling first: I would be tempted to use an AGC (automatic
gain control) that's "tuned" to the data. In your case this sounds like very
fast attack (response to the largest peaks) and relatively fast decay (to
accommodate gain reductions at the source), so that this is what happens:
the gain will be set according to the maximum pulse. If the input scale
increases, then even higher amplitudes will occur immediately and the fast
attack will deal with that. In between new maxima, allow the gain to
increase rapidly (inversely with signal amplitude), trading the likelihood
that a lower intra-period pulse might be detected against holding the gain
low too long and missing a maximum after a downward shift in input gain.
Missing one or two might be acceptable?

Then amplitude-threshold at an appropriate level (80% of the last peak?) for
peak detection purposes. Then find the peak point based on when the rate of
change goes to zero or reverses, perhaps including some interpolation method
if the sample rate you're using isn't all that high.

Take the difference between detected peak times from pulse to pulse. If the
AGC decays rather fast and an occasional smaller intra-period pulse is
detected, the two much smaller differences thus generated may allow you to
throw one data point out, ending up with a single, larger difference.

In some pseudo-code terms:

  ; AGC pass
  peak = 0
  i = 1
  decay = 0.95                    ; a parameter to adjust
  for each sample, until all the data has been AGC'd
      get signal_now
      if signal_now > peak then
          peak = signal_now       ; the fastest AGC attack possible
      else
          peak = decay * peak
      end if
      gain = 1/peak               ; so peaks come out around 1.0
      signal_out(i) = gain * signal_now
      i = i + 1
  end for
  ;
  ; this assumes that all of the data is collected above first, rather than
  ; streaming - easy enough to change to a streaming implementation
  ;
  ; analyze signal_out for peak locations and time differences
  ;
  threshold = 0.8                 ; a parameter to adjust (dependent on the
                                  ; actual expression used above for gain)
  i = 1
  j = 1
  for all of the data
      if signal_out(i) > threshold        ; time to detect a peak
          first = signal_out(i)
          index = i
          i = i + 1
          while signal_out(i) > first     ; still climbing
              first = signal_out(i)
              index = i
              i = i + 1
          end while                       ; have found the peak
          peak(j) = first
          time(j) = index
          j = j + 1
      else                                ; not around a peak
          i = i + 1
      end if
  end for

and so forth.....
[take the differences between the times represented by the indices saved in
time()]
[throw out intra-period peaks found in error - possibly by inspecting the
raw input signal, which could also be stored]
[analyze the statistics of the time differences]

Fred
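For reference, the pseudo-code above translates to Python along these lines. The names and the demo data are mine; decay and threshold remain the knobs to tune, and the interpolation and intra-period cleanup steps are left out:

```python
def agc(signal, decay=0.95):
    """Fred's AGC idea: instant attack on a new maximum, geometric decay after."""
    peak = 1e-9                      # tiny floor so gain = 1/peak is always defined
    out = []
    for s in signal:
        if abs(s) > peak:
            peak = abs(s)            # fastest possible attack
        else:
            peak *= decay            # let the gain recover between beats
        out.append(s / peak)         # gain = 1/peak, so peaks land near 1.0
    return out

def detect_peaks(signal, threshold=0.8):
    """Index of the local maximum of each excursion above the threshold."""
    peaks = []
    i, n = 0, len(signal)
    while i < n:
        if signal[i] > threshold:
            best, best_i = signal[i], i
            i += 1
            while i < n and signal[i] > threshold:
                if signal[i] > best:
                    best, best_i = signal[i], i
                i += 1
            peaks.append(best_i)
        else:
            i += 1
    return peaks

# Demo: three spikes whose absolute amplitude drifts; only timing matters.
fs = 100.0                           # assumed sample rate, samples per second
x = [0.0] * 120
x[10], x[50], x[90] = 3.0, 2.5, 3.2
peaks = detect_peaks(agc(x))
print(peaks)                                              # -> [10, 50, 90]
print([(b - a) / fs for a, b in zip(peaks, peaks[1:])])   # -> [0.4, 0.4]
```

From there, the interval list feeds straight into the standard-deviation calculation.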
Great answers! I appreciate both of you taking your time to respond!!

Since I'm taking this one step at a time, I'm going to attempt to implement
a rudimentary AGC system based on the pseudo-code provided by Fred.  If
you're interested, I'll let you know how it goes.  I'm capturing data and
drawing graphs in real time, so what I intend to do is have two side-by-side
graphs: one with gain, one without.  I'm very excited to see how it comes
out!

Thanks very much!
Datura

Fred,
I tried the AGC algorithm on ECG data with decay factors in the 0.65-0.99
range, and the results seem to be wrong. I suspect the AGC algorithm may not
be applicable to this kind of data: a signal with a short high-amplitude
period followed by a long quiet period (with, of course, the comparatively
low-amplitude P and T waves in between the R waves). At an 8k sample rate,
assuming a pulse period of 0.75 secs, there are almost 6000 samples between
R waves, all of much lower amplitude than the first detected peak. If we
apply the AGC algorithm with a decay factor, the signal in between gets
amplified tremendously as time goes on; that is, the noise gets amplified
and destroys the signal itself. Adjusting the decay factor to keep a good
SNR pushes its value toward 1 (0.99999), and that value does not give much
amplification to the P and T waves. It amounts to detecting the peak value
and scaling the signal accordingly, with the scale factor changing whenever
a new peak is detected. A potential danger in that is that a noise spike of
reasonably large amplitude will reduce the gain significantly. I don't have
much background in this topic; I just tried the method you mentioned. I
assume some adaptive technique will be needed to handle this kind of
situation.
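The blow-up is easy to check numerically. The 80% retention target below is my own illustrative choice, not anything from the thread:

```python
import math

# The scenario above: 8 kHz sampling, ~0.75 s between R waves -> ~6000 samples.
decay = 0.95
samples_between_beats = 6000

# What is left of the tracked peak after one RR interval of per-sample decay:
residual = decay ** samples_between_beats
print(residual)   # effectively zero, so the gain 1/peak has exploded

# A per-sample decay that keeps ~80% of the tracked peak across a whole
# RR interval lands in the same neighbourhood as the 0.99999 found above:
gentle = math.exp(math.log(0.8) / samples_between_beats)
print(round(gentle, 6))
```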

Please refer the link for ECG intervals and waves.
http://medlib.med.utah.edu/kw/ecg/mml/ecg_533.html

rgds
ajith

"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message news:<jKOdnbiwOZkImp_dRVn-jA@centurytel.net>...
> [quoted text snipped]
to
> > that, just to see if I'm anywhere close to an understanding of what is > going > > on. > > > > Thanks everyone. > > Datura
Datura,

Take a look at Ajith's post.  I didn't know what your sample rate or period
was, so obviously the decay rate was fictitious.  Indeed, with a high
sample rate compared to the period, the decay factor will be much closer
to 1.0.  You could experiment with linear increases in gain as well.

A lowpass filter at the input should get rid of very narrow noise spikes
so the gain doesn't get driven down by such things.  Otherwise, I don't
see why the fast attack isn't the right thing to do.

The objective of the decay is to respond to scale changes that reduce
signal amplitude.  If the decay is too great, then interim maxima will be
detected - which is undesirable.  If the decay is too small, then signal
amplitude reductions due to input scale changes can result in it taking
more than one period for the gain to increase enough to detect the desired
maxima.  That's a tradeoff that you have to make.  I remain optimistic
that you can set the decay and the detection threshold to reach a happy
medium.

Fred
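The input lowpass Fred suggests can be as simple as a short moving average; here is a minimal sketch (the 5-sample window is an assumed value you would tune against your sample rate and QRS width):

```python
from collections import deque

def moving_average(x, n=5):
    """n-sample moving average to smooth narrow noise spikes before
    the AGC stage.  n is an assumed value; keep it short enough not
    to blunt the R-wave peak itself."""
    window = deque(maxlen=n)
    acc = 0.0
    out = []
    for v in x:
        if len(window) == n:
            acc -= window[0]          # drop the sample about to be evicted
        window.append(v)
        acc += v
        out.append(acc / len(window))
    return out
```

Run this over the raw samples first, then feed the smoothed sequence to the AGC so an isolated one-sample spike cannot drive the gain down.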
Ajith:

Usually, in ECG signals, the AGC cannot be implemented continuously since,
as you have seen, the ECG energy occupies such a small portion of the time
domain.  If run continuously, the AGC is dominated by the noise level
between beats, as you have seen.

One simple way to AGC ECG signals is to use the peaks of the R wave.  Of
course, you don't know it's an R-wave ahead of time, but the assumption is
that, once the data is properly filtered, the highest data point will be
the R-wave.  Then the AGC gain is adjusted to keep these peaks at a
constant amplitude.  In-band noise will trick this, but you can make it
more robust by using the knowledge that heart beats can only occur between
minimum and maximum intervals.  If you start detecting peaks outside this
range, you know that your SNR is not acceptable.  You can play games with
freezing the AGC gain when you think it is probably noise.

Now, one might question the validity of using an AGC that assumes so much
about the data.  In my application, we needed to detect R-waves on the
fly, and the above AGC did a reasonable job of keeping them at near the
same amplitude, enabling the time-domain algorithm for R-wave detection to
work OK.  However, if the OP has an application where he is running an
algorithm on a lengthy pre-recorded ECG, he may be better off with a more
holistic processing approach on the entire data set.  I have not found
much literature on such approaches, and I have also found that most ECG
processing routines have one thing in common--they have lots of "special
cases" for the various types of noise encountered on the ECG signals.

Jim
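A sketch of the peak-tracking AGC Jim describes, with the interval sanity check bolted on (the 0.3-2.0 s RR bounds, the gain-smoothing constant, and the crude local-maximum test are all my assumptions, not values from his post):

```python
def r_peak_agc(samples, fs, target=1.0, min_rr=0.3, max_rr=2.0, alpha=0.3):
    """Track R-wave peaks and adjust the gain so they sit near `target`.
    min_rr/max_rr are assumed physiologic RR-interval bounds in seconds;
    a "peak" arriving outside that range freezes the gain as probable
    noise, per Jim's suggestion."""
    gain = 1.0
    last_peak_t = None
    out = []
    for i, v in enumerate(samples):
        y = gain * v
        out.append(y)
        # crude local-maximum test against immediate neighbours
        is_local_max = (0 < i < len(samples) - 1
                        and samples[i] > samples[i - 1]
                        and samples[i] >= samples[i + 1])
        if is_local_max and y > 0.5 * target:
            t = i / fs
            rr = None if last_peak_t is None else t - last_peak_t
            if rr is None or min_rr <= rr <= max_rr:
                # plausible beat: nudge gain so this peak maps to target
                gain += alpha * (target / samples[i] - gain)
                last_peak_t = t
            # else: implausible interval -> treat as noise, freeze gain
    return out
```

With each accepted beat the gain relaxes toward target/peak, so slow scale drift is tracked while an out-of-interval spike leaves the gain untouched.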

"Ajith Kumar P C" <ajith_pc@yahoo.com> wrote in message
news:18eae751.0401122336.380e065@posting.google.com...
> Fred,
> I tried the AGC algorithm on ECG with decay factors in the 0.65-0.99
> range, and the results seem to be wrong.  I suspect whether the AGC
> algorithm is applicable to this kind of data - a signal with a short
> high-amplitude period followed by a long silent period (of course,
> comparatively low amplitude P and T waves are in between the R waves).
> Within the RR interval there are almost 6000 samples of signal (whose
> amplitude is much less than the first detected peak) at an 8k sample
> rate (assuming the pulse period is 0.75 secs).  If we apply the AGC
> algorithm with a decay factor, the signal in between gets amplified
> tremendously as time increases; that is, the noise gets amplified and
> destroys the signal itself.  Adjusting the decay factor to get good SNR
> with amplification ends up with a decay factor value near 1 (0.99999).
> This value does not give much amplification to the P and T waves.  It
> amounts to detecting the peak value and scaling the signal accordingly;
> once a new peak value is detected, the scale factor changes.  A
> potential danger of the above is that some noise of relatively good
> amplitude will reduce the gain value significantly.  I don't have much
> idea about this topic; I just tried the method you mentioned.  I assume
> some adaptive technique exists to handle this kind of situation.
>
> Please refer to the link for ECG intervals and waves.
> http://medlib.med.utah.edu/kw/ecg/mml/ecg_533.html
>
> rgds
> ajith
>
> "Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
> news:<jKOdnbiwOZkImp_dRVn-jA@centurytel.net>...
> > [quote of the original post snipped]
> >
> > First, you're not really interested in frequency, you're interested
> > in detail of time differences from one heartbeat to the next.  Well,
> > "instantaneous frequency" perhaps but not average frequency.  If you
> > look at frequency, the time differences will likely be averaged out -
> > unless you can look at the variance of frequency.  Doing that will
> > require that you take a lot of data and comprehend the difference
> > between spreading due to the length of the data and spreading due to
> > variations in the data.  So, since it sounds like you have good
> > signal-to-noise ratio, I'd stay in the time domain.
> >
> > It appears you have a scaling problem and have to process it out -
> > rather than receiving scale information directly.
> >
> > It appears that there are regular high peaks that can be detected.
> >
> > It seems that some amount of lost data is probably not a big deal -
> > unless the amount of data taken is tiny.
> >
> > Let's deal with scaling first:
> > I would be tempted to use an AGC (automatic gain control) that's
> > "tuned" to the data.  In your case this sounds like very fast attack
> > (response to largest peaks) and relatively fast decay (to accommodate
> > gain reductions from the source) so that this is what happens:
> > The gain will be set according to the maximum pulse.  If the input
> > scale increases gain then even higher amplitudes will occur
> > immediately and the fast attack will deal with that.  In between new
> > maxima, allow the gain to increase rapidly after that (inversely with
> > signal amplitude), trading the likelihood that a lower intra-period
> > pulse might be detected against holding the gain low too long and
> > missing a maximum after a downward shift in input gain.  Missing one
> > or two might be acceptable?
> >
> > Then amplitude threshold at an appropriate level (80% of the last
> > peak?) for peak detection purposes.
> > Then find the peak point based on when the rate of change goes to
> > zero or reverses - and perhaps including some interpolation method if
> > the sample rate you're using isn't all that high.
> >
> > Take the difference between detected peak times from pulse to pulse.
> >
> > If the AGC decays rather fast and an occasional smaller intra-period
> > pulse is detected, the 2 much smaller differences thus generated may
> > allow you to throw one data point out, ending up with a single,
> > larger difference.
> >
> > In some pseudo code terms:
> >
> >   peak = 0
> >   i = 1
> >   decay = .95             ; a parameter to adjust
> >   for now until AGCd data is collected
> >     get signal_now
> >     if signal_now > peak then
> >       peak = signal_now   ; the fastest AGC attack possible
> >     else
> >       peak = decay*peak
> >     end if
> >     gain = 1/peak         ; so peaks come out around 1.0 - the
> >                           ; inverse relationship
> >     signal_out(i) = gain*signal_now
> >     i = i + 1
> >   end for
> >   ;
> >   ; this assumes that all of the data is collected above first,
> >   ; rather than streaming
> >   ; easy enough to change to a streaming implementation
> >   ;
> >   ; analyze signal_out for peak locations and time differences
> >   ;
> >   threshold = 0.8         ; a parameter to adjust (dependent on the
> >                           ; actual expression used above for
> >                           ; gain=1/peak)
> >   i = 1
> >   j = 1
> >   for all of the data
> >     if signal_out(i) > threshold then  ; time to detect a peak
> >       first = signal_out(i)
> >       index = i
> >       i = i + 1
> >       if signal_out(i) > first then
> >         first = signal_out(i)
> >         index = i
> >       else                ; have found the peak
> >         peak(j) = first
> >         time(j) = index
> >         j = j + 1
> >       end if
> >     else                  ; not around a peak
> >     end if
> >   and so forth.....
> >
> > [take the differences between times represented by the indices saved
> > in time()]
> > [throw out intra-period peaks found in error - possibly by inspecting
> > signal_now() which could also be stored]
> > [analyze the statistics of the time differences]
> >
> > Fred
Hey Datura,

this problem is very similar to the CELP coding algorithm stage
named "Tone detection and tone period estimation".
This procedure is rather robust and is proven in every CELP coder.
The idea is to calculate the autocorrelation function for delays
which belong to the human pulse period range.
The autocorrelation function removes random noise as well.
Then the pulse period is estimated as the location of the
autocorrelation function peak.
Then it can be estimated more precisely by averaging.

Regards,
A.Ser.
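The autocorrelation search A.Ser. describes might look like this in outline (the 40-200 bpm bounds on the lag range are my assumption):

```python
def estimate_pulse_period(x, fs, min_bpm=40, max_bpm=200):
    """Return the pulse period in samples: the lag that maximizes the
    autocorrelation, searched only over lags in the human pulse period
    range.  min_bpm/max_bpm are assumed heart-rate bounds."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]                # remove DC before correlating
    lo = int(fs * 60.0 / max_bpm)            # shortest plausible period
    hi = min(int(fs * 60.0 / min_bpm), n - 1)
    best_lag, best_r = lo, float("-inf")
    for lag in range(lo, hi + 1):
        r = sum(d[i] * d[i + lag] for i in range(n - lag))
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag
```

Note Fred's caveat in the next post, though: this gives an average period over the analysis window, which is exactly what an HRV measurement must not smear away.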



"v" <yes@i.am> wrote in message news:400400BB.C1B095B3@i.am...
> [...]
> The idea is to calculate the autocorrelation function for delays
> which belong to the human pulse period range.
> Then it can be estimated more precisely by averaging.
I think this departs from the objective.  As I understand it, he wants to
look at the statistics of period length variation.  So, multiple period
averages are out of the question because they obscure the intended
information.  This is a very wideband measurement, and limiting the
bandwidth by averaging goes in the wrong direction I think.

I don't know what Datura's measurement parameters are, but let's assume
that he needs to know the length of each period to within 1%.  This means
that he has to sample at a high enough rate to:

- have at least 200 samples per period, or
- be able to accurately interpolate peak positions to at least 1/2% of a
period - since he's going to have to difference two measurements to find
the period length in each period.

Accordingly, the sample rate is dependent on measurement bandwidth
considerations and not signal bandwidth as we normally think of it.
There's probably a more elegant mathematical way to state this
requirement - I just don't know what that is.

Fred
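One common way to do the sub-sample peak interpolation Fred mentions is to fit a parabola through the peak sample and its two neighbours; a sketch (this particular method is my suggestion, not something specified in the thread):

```python
def parabolic_peak(y, i):
    """Given samples y and the index i of a local maximum, return a
    refined fractional peak position by fitting a parabola through
    y[i-1], y[i], y[i+1]."""
    a, b, c = y[i - 1], y[i], y[i + 1]
    denom = a - 2 * b + c
    if denom == 0:                 # flat top: no refinement possible
        return float(i)
    return i + 0.5 * (a - c) / denom
```

At a 30 Hz sample rate each sample is ~33 ms, so refining the peak position even to a quarter-sample tightens each RR measurement considerably.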
Hey, I've got the algorithm, or a slight variation of it, working, and it
seems to suit my purposes just fine.

To recap- it is, indeed, the RR intervals I want to measure.  The idea is
to take the standard deviation of 5 minutes worth of RR interval data
(SDNN) to compute the SDNN HRV index.  So, in order to do this- I am
interested in detecting the R peaks.  Also, I am attempting to do this in
real-time, detecting the R peaks instantaneously and using a sliding
window for the SDNN calculations.
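The sliding-window SDNN calculation described above could be organized along these lines (a minimal sketch; the class name and the 5-minute default are just one way to set it up):

```python
import statistics
from collections import deque

class SlidingSDNN:
    """Standard deviation of RR intervals over a sliding time window."""

    def __init__(self, window_s=300.0):      # 5-minute window
        self.window_s = window_s
        self.beats = deque()                 # R-peak times, in seconds

    def add_beat(self, t):
        self.beats.append(t)
        # drop beats that have fallen out of the window
        while self.beats and t - self.beats[0] > self.window_s:
            self.beats.popleft()

    def sdnn(self):
        if len(self.beats) < 3:              # need at least 2 intervals
            return None
        times = list(self.beats)
        rr = [b - a for a, b in zip(times, times[1:])]
        return statistics.stdev(rr)
```

Each detected R peak calls add_beat() with its timestamp, and sdnn() can be read out whenever the display updates.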

Also, I want to note, that this is purely for fun- just a learning exercise
for me.  There are no negative consequences to any of this not working.  In
fact, my sampling device, at only ~30Hz, is not even suitable for serious
scientific purposes.

I noticed very quickly that a decay factor of .95 was too low, so I quickly
adjusted it upwards.  Values between .98 and .9925 seem to work well for
me.  I was really amazed at how much difference even a minor change makes.

I noticed also that there are trade-offs, as mentioned in some of the other
posts, between the factor values and how well the algorithm compensates for
SNR or low scale.

The changes I've made include adding an attack factor, and changing the gain
equation to be: gain = attack/peak.  This seems to give me a consistent
upper limit of 1.

Also, it is not necessary for the AGCd signal to be a faithful replication
of the original.  All the AGCd signal needs to do is help me to detect the R
peak, and that it does!  It may not be a perfect system, but it seems to
work far better than anything else I've tried up to now, and with the added
bonus that it does compensate for some small amount of noise.

So, here's what I'm doing.  Tell me if this makes sense or if there is a
better way ..

With an attack = .992 and decay = .985 I get an AGCd signal that resembles
a long string of New Mexican desert plateaus.  The beauty of this is, as
soon as I get out of the first R phase, and my peak is set, the next
interim maxima shoot the AGCd signal up to 1, where it remains until the
sample right after the peak in the next R phase.  Because the decay factor
is fairly high, the AGCd signal then drops significantly.

My method for detecting R peaks this way, then, dispenses with a threshold
value, and works as such:

if last_signal_out equals 1 and current_signal_out < 1 then
    peak detected at last_signal_out
end if

At which point, I simply check my buffer in the original signal to determine
the actual R-peak value and the time it occurred.
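As a sketch, the whole scheme might look like the following. The cap at 1.0 and the hysteresis threshold for leaving the plateau are my assumptions (added so a flat baseline can't toggle the detector); the attack/decay values are the ones quoted above:

```python
def detect_r_peaks(samples, attack=0.992, decay=0.985, fall=0.9):
    """Datura's plateau scheme, roughly: fast-attack/decay AGC with
    gain = attack/peak and the output capped at 1.0 (assumed), then a
    beat is flagged whenever the output drops off the 1.0 plateau.
    `fall` is an assumed hysteresis threshold; samples are assumed
    positive, as with Datura's 1.5-4.0 device range."""
    peak = samples[0] if samples and samples[0] > 0 else 1e-9
    on_plateau = False
    plateau_start = 0
    r_indices = []
    for i, v in enumerate(samples):
        out = min(attack * v / peak, 1.0)   # gain = attack/peak, capped
        if v > peak:
            peak = v                        # instantaneous attack
        else:
            peak *= decay                   # slow decay between beats
        if not on_plateau and out >= 1.0:
            on_plateau, plateau_start = True, i
        elif on_plateau and out < fall:
            # fell off the plateau: the R peak is the largest raw
            # sample seen while the output was saturated
            seg = samples[plateau_start:i]
            r_indices.append(plateau_start + seg.index(max(seg)))
            on_plateau = False
    return r_indices
```

This mirrors the "check the buffer in the original signal" step: the plateau edge only tells you *when* to look, and the raw buffer supplies the actual R-peak sample and its time.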

This method seems to work, I've played with this for hours now, and it
seems reliable.  Of course, all the standard caveats about noise apply,
but my noise level is very low to start with.

My sampling device produces y-graph values between 1.5 and 4.0 ...so,
perhaps this only works within those ranges, I'm not sure.  Would this
technique be applicable elsewhere?

Here's a bit of data to illustrate my point... I'm sorry it's not
formatted better; ignore the second column.  The 3rd column is the
original sample data, followed by the AGCd value.

It just occurred to me, that maybe this only works because the changes in
scale are not that great?  Not sure... any opinion on this?

It also occurred to me, that this may work since my sampling frequency is so
low as well.  By the time 30ms pass, enough time has elapsed for the next
sample to have changed significantly.  Is this a factor in why this seems to
work? And if so, could alternate attack/decay factors compensate?

Thanks,
Datura

20:35:42.536 2.27  2.151 0.989162212715862
20:35:42.566 2.27  2.119 0.981810219484306
20:35:42.596 2.27  2.087 0.974290628540741
20:35:42.626 2.28  2.06 0.968953154791229
20:35:42.666 2.28  2.046 0.969640338804556
20:35:42.706 2.27  2.038 0.973147585623203
20:35:42.746 2.27  2.031 0.977133578581622
20:35:42.776 2.27  2.028 0.983063223989636
20:35:42.806 2.27  2.01 0.981700564996432
20:35:42.836 2.28  1.999 0.983705868354884
20:35:42.866 2.27  1.991 0.98717287303328
20:35:42.896 2.28  1.99 1
20:35:42.936 2.27  1.993 1
20:35:42.976 2.28  1.992 1
20:35:43.016 2.28  1.996 1
20:35:43.037 2.28  2 1
20:35:43.077 2.28  2.006 1
20:35:43.107 2.28  2.009 1
20:35:43.137 2.28  2.008 1
20:35:43.177 2.28  2.006 1
20:35:43.217 2.28  2.008 1
20:35:43.247 2.28  2.014 1
20:35:43.297 2.28  2.03 1
20:35:43.317 2.28  2.056 1
20:35:43.347 2.28  2.17 1
20:35:43.377 2.28  2.222 1
20:35:43.417 2.28  2.209 1
20:35:43.447 2.28  2.174 0.991592671610186
20:35:43.487 2.28  2.136 0.981622493707108
20:35:43.517 2.28  2.099 0.971908043609477
20:35:43.547 2.28  2.064 0.962923787390642
20:35:43.577 2.28  2.045 0.961269182245652
20:35:43.617 2.28  2.038 0.965217908601375
20:35:43.657 2.27  2.031 0.969171421762034
20:35:43.687 2.28  2.018 0.970244797138452
20:35:43.717 2.27  1.998 0.967888052705813
20:35:43.758 2.27  1.978 0.965440285752804
20:35:43.788 2.27  1.96 0.963883810110458
20:35:43.828 2.27  1.951 0.966708123952862
20:35:43.858 2.27  1.952 0.97451246106743
20:35:43.898 2.27  1.95 0.980870514040493
20:35:43.928 2.27  1.953 0.989803068615169
20:35:43.958 2.27  1.956 1
20:35:43.988 2.27  1.962 1
20:35:44.018 2.27  1.976 1
20:35:44.058 2.27  1.984 1
20:35:44.088 2.27  1.981 1
20:35:44.128 2.27  1.998 1
20:35:44.158 2.27  2.07 1
20:35:44.198 2.26  2.18 1
20:35:44.228 2.26  2.253 1
20:35:44.268 2.26  2.262 1
20:35:44.288 2.27  2.228 0.992412145022238
20:35:44.328 2.27  2.179 0.977920609238705
20:35:44.368 2.26  2.131 0.963605572443941
20:35:44.398 2.26  2.114 0.963141997712307
20:35:44.439 2.26  2.066 0.948386011106776
20:35:44.469 2.26  2.039 0.943064794597778
20:35:44.499 2.26  2.021 0.94180307672038
20:35:44.529 2.27  1.998 0.938120788290865
20:35:44.569 2.26  1.974 0.933855990038485
20:35:44.609 2.26  1.956 0.932333084004031
20:35:44.639 2.26  1.939 0.93121409028028
20:35:44.669 2.26  1.937 0.937283204681268
20:35:44.699 2.27  1.938 0.944853489801439
20:35:44.739 2.27  1.943 0.95444956403376
20:35:44.769 2.26  1.954 0.967106333072914
20:35:44.809 2.26  1.961 0.977905178890803
20:35:44.839 2.26  1.973 0.991324232072803
20:35:44.879 2.26  1.981 1
20:35:44.919 2.26  1.984 1
20:35:44.939 2.26  1.985 1
20:35:44.979 2.26  1.997 1
20:35:45.009 2.26  2.062 1
20:35:45.049 2.26  2.183 1
20:35:45.079 2.26  2.286 1
20:35:45.110 2.26  2.32 1
20:35:45.150 2.26  2.295 1
20:35:45.190 2.26  2.244 0.98516652672824
20:35:45.210 2.26  2.189 0.968282406375857
20:35:45.250 2.26  2.124 0.946630027233635
20:35:45.280 2.26  2.062 0.925942267645645
20:35:45.320 2.26  2.024 0.915746444460899









