I am wondering if there is any criterion relating the ability to resolve the smallest time-duration echo to the sampling rate of an ADC used to convert a detected analog sonar signal to a digital sequence.

The Nyquist sampling theorem states that you must sample a signal at greater than 2*f_h, where f_h is the maximum frequency of interest in the analog signal. A common mistake is to assume that you must sample exactly at 2*f_h.

However, does this also apply to the duration of discrete echoes? For example, if the duration of an echo is t_e, then do I need to sample the signal at a frequency greater than 2/t_e = 2*f_e to be able to resolve the echo?

Suppose that I situate two sonar transducers, transducer #1 and transducer #2, close together so that I can perform seismic velocity analysis on the reflected pulses from a sedimentary sea floor. The first transducer, transducer #1, is situated closer to the seismic source. If the transducers are situated a certain distance apart, then the reflected pulses will take a longer time to travel to transducer #2 due to the offset between the two transducers.

Should the sampling rate that I select take into consideration the distance between these two transducers?

Nicholas
Sampling rate required to resolve separate sonar echoes
Started by ●January 20, 2009
Reply by ● January 20, 2009
On Jan 20, 10:39 am, Nicholas Kinar <n.ki...@usask.ca> wrote:
> I am wondering if there is any criterion relating the ability to resolve
> the smallest time-duration echo to the sampling rate of an ADC used to
> convert a detected analog sonar signal to a digital sequence.
> [...]
> Should the sampling rate that I select take into consideration the
> distance between these two transducers?
>
> Nicholas

If the two pulses are sampled above Nyquist, you should be able to resolve the differential delay to a fraction of a sample by interpolation.

John
Reply by ● January 20, 2009
> If the two pulses are sampled above Nyquist, you should be able to
> resolve the differential delay to a fraction of a sample by
> interpolation.
>
> John

Thanks for your reply, John! Interpolation is a good idea when resolving particular events such as echoes.
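John's suggestion can be sketched in code. The sketch below (pure Python; the pulse shape, lengths, and function names are invented for illustration) estimates the differential delay between two band-limited pulses to a fraction of a sample by taking the peak of their cross-correlation and refining it with a three-point parabolic fit:

```python
import math

def sinc_pulse(n, delay, bw):
    """Band-limited test pulse: a sinc of bandwidth 'bw' (cycles/sample)
    centred at 'delay' samples.  Purely an illustrative stand-in for a
    received sonar echo."""
    out = []
    for i in range(n):
        t = (i - delay) * bw
        out.append(1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t))
    return out

def subsample_delay(x, y, max_lag):
    """Estimate the delay of y relative to x to sub-sample precision.

    Integer part: lag of the cross-correlation peak.
    Fractional part: vertex of a parabola fitted through the peak
    and its two neighbours."""
    n = len(x)

    def r(lag):  # cross-correlation at one integer lag
        return sum(x[i] * y[i + lag] for i in range(n) if 0 <= i + lag < n)

    corr = {lag: r(lag) for lag in range(-max_lag, max_lag + 1)}
    peak = max(corr, key=corr.get)
    c0 = corr.get(peak - 1, 0.0)
    c1 = corr[peak]
    c2 = corr.get(peak + 1, 0.0)
    denom = c0 - 2.0 * c1 + c2
    frac = 0.0 if denom == 0 else 0.5 * (c0 - c2) / denom
    return peak + frac

# Two copies of the same pulse, offset by 3.4 samples:
x = sinc_pulse(64, 30.0, 0.4)
y = sinc_pulse(64, 33.4, 0.4)
estimate = subsample_delay(x, y, 10)  # close to the true 3.4-sample offset
```

The parabolic fit is a cheap, common choice; for pulses sampled close to Nyquist, band-limited (sinc) interpolation of the correlation function gives less bias, which is another argument for the oversampling headroom discussed later in the thread.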
Reply by ● January 20, 2009
On 20 Jan, 16:39, Nicholas Kinar <n.ki...@usask.ca> wrote:
> I am wondering if there is any criterion relating the ability to resolve
> the smallest time-duration echo to the sampling rate of an ADC used to
> convert a detected analog sonar signal to a digital sequence.

Use a sampling rate well above the bandwidth of the sonar pulse. Depending on the frequency range of the sonar you might want to mix the analog signal down to baseband first, but shaving the sampling rate down towards Nyquist is plain stupid, if it can be at all avoided.

> The Nyquist sampling theorem states that you must sample a signal at
> greater than 2*f_h, where f_h is the maximum frequency of interest in
> the analog signal. A common mistake is to assume that you must sample
> exactly at 2*f_h.

Nope, it's the bandwidth. Use 5-10x oversampling if you want to use the data for anything useful.

> However, does this also apply to the duration of discrete echoes? For
> example, if the duration of an echo is t_e, then do I need to sample the
> signal at a frequency greater than 2/t_e = 2*f_e to be able to resolve
> the echo?

The temporal resolution of echoes is governed by the time-bandwidth product and is independent of the sampling rate, provided the sampling rate respects the Nyquist limit.

> Suppose that I situate two sonar transducers, transducer #1 and
> transducer #2, close together so that I can perform seismic velocity
> analysis on the reflected pulses from a sedimentary sea floor.

You can't do that very well (maybe not at all) with just one receiver. Seismic methods rely on either refracted wave analysis or normal move-out analysis, both of which require arrays of receivers.

> The first transducer, transducer #1, is situated closer to the seismic
> source. If the transducers are situated a certain distance apart, then
> the reflected pulses will take a longer time to travel to transducer #2
> due to the offset between the two transducers.
>
> Should the sampling rate that I select take into consideration the
> distance between these two transducers?

No. Your concern is to make sure that the sampling rate is suitable with respect to the signal. Again, determine the bandwidth of the signal and use 5-10x oversampling. If you fail this, there is no point going on with other analyses.

Rune
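Rune's advice can be turned into a back-of-envelope design calculation. The sketch below is illustrative only: the 40 kHz bandwidth and the nominal 1500 m/s sound speed are invented numbers, and it assumes the signal has been mixed to baseband; for direct sampling of a bandpass signal the rate must also clear twice the highest frequency present.

```python
SPEED_OF_SOUND = 1500.0  # m/s, nominal value for sea water

def design_sampling(bandwidth_hz, oversample=10):
    """Back-of-envelope sonar sampling budget.

    The temporal resolution of an echo is set by the pulse bandwidth
    (roughly 1/B), not by the sampling rate; the sampling rate is then
    chosen with generous headroom above the bandwidth (the 5-10x rule
    of thumb from the thread)."""
    time_res = 1.0 / bandwidth_hz                # resolvable echo separation, s
    range_res = SPEED_OF_SOUND * time_res / 2.0  # two-way travel, m
    fs = oversample * bandwidth_hz               # chosen sampling rate, Hz
    return time_res, range_res, fs

# A 40 kHz-bandwidth pulse resolves echoes ~25 microseconds apart
# (~1.9 cm in range), and would be sampled at 400 kHz with 10x headroom.
t_res, r_res, fs = design_sampling(40e3)
```

The point of the calculation is that raising `fs` beyond the Nyquist requirement does not change `time_res` at all; it only buys filtering and interpolation headroom.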
Reply by ● January 20, 2009
Rune Allnor wrote:
> Use a sampling rate well above the bandwidth of the sonar pulse.
> Depending on the frequency range of the sonar you might want
> to mix the analog signal down to baseband first, but shaving
> the sampling rate down towards Nyquist is plain stupid, if it
> can be at all avoided.
>
> Nope, it's the bandwidth. Use 5-10x oversampling if you want
> to use the data for anything useful.
>
> The temporal resolution of echoes is governed by the time-bandwidth
> product and is independent of the sampling rate, provided the
> sampling rate respects the Nyquist limit.

Thank you so much for all of this information, Rune. I realize now that the bandwidth of the signal is very important.

> You can't do that very well (maybe not at all) with just one
> receiver. Seismic methods rely on either refracted wave analysis
> or normal move-out analysis, both of which require arrays of
> receivers.

Not with one receiver, but perhaps with two receivers?

> No. Your concern is to make sure that the sampling rate is suitable
> with respect to the signal. Again, determine the bandwidth of the
> signal and use 5-10x oversampling. If you fail this, there is no point
> going on with other analyses.
>
> Rune

Understandably, it is the bandwidth that is the bottleneck in this situation. Why the 5-10x oversampling rule?

Nicholas
Reply by ● January 20, 2009
On 20 Jan, 17:36, Nicholas Kinar <n.ki...@usask.ca> wrote:
> Understandably, it is the bandwidth that is the bottleneck in this
> situation. Why the 5-10x oversampling rule?

Because it prevents users from shaving the sampling rate too close to Nyquist. Making hardware and doing experiments is both time-consuming and expensive, so selecting the sampling rate is a major design decision that, once implemented, cannot easily be revised.

You can always add a decimation step afterwards, if the amount of data becomes too large to handle with 10x oversampling. If you sample at 1.05x Nyquist you are stuck with no leeway.

Rune
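The "add a decimation step afterwards" idea can be sketched as follows. This is only an illustration: a length-m moving average stands in for the anti-alias low-pass filter, whereas a real design would use a properly designed FIR (e.g. windowed-sinc) before discarding samples.

```python
def decimate(x, m):
    """Reduce the sampling rate of sequence x by an integer factor m.

    Step 1: low-pass filter (here a crude length-m moving average) so
            that content above the new Nyquist limit is attenuated.
    Step 2: keep only every m-th filtered sample."""
    filtered = [sum(x[i - m + 1:i + 1]) / m for i in range(m - 1, len(x))]
    return filtered[::m]

# Example: a constant (DC) signal survives decimation unchanged.
out = decimate([1.0] * 20, 4)
```

The filter-then-discard order matters: dropping samples first would fold any energy above the new Nyquist frequency back into the band, which is exactly the aliasing the oversampling headroom was meant to avoid.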
Reply by ● January 20, 2009
> Because it prevents users from shaving the sampling rate too
> close to Nyquist. Making hardware and doing experiments is
> both time-consuming and expensive, so selecting the sampling
> rate is a major design decision that, once implemented, can
> not easily be revised.
>
> You can always add a decimation step afterwards, if the amount
> of data becomes too large to handle with 10x oversampling. If you
> sample at 1.05x Nyquist you are stuck with no leeway.
>
> Rune

I agree, and it is really good practice to do this. It is always better to have more data to work with than less.

Nicholas
Reply by ● January 20, 2009
On Tue, 20 Jan 2009 08:44:30 -0800, Rune Allnor wrote:
>> Understandably, it is the bandwidth that is the bottleneck in this
>> situation. Why the 5-10x oversampling rule?
>
> Because it prevents users from shaving the sampling rate too close to
> Nyquist. Making hardware and doing experiments is both time-consuming
> and expensive, so selecting the sampling rate is a major design decision
> that, once implemented, cannot easily be revised.
>
> You can always add a decimation step afterwards, if the amount of data
> becomes too large to handle with 10x oversampling. If you sample at 1.05x
> Nyquist you are stuck with no leeway.

In addition to what Rune said, you'll have to take into consideration that you have to filter your analog data before it is given to an A/D converter. This filter being analog, it is easier to build one with a slower roll-off. If you oversample, you can either work with the unnecessarily large bandwidth throughout the whole chain or decimate at some intermediate step.

Just my $0.02. Analog electrical engineers might want to disagree :-)

Martin
Reply by ● January 20, 2009
Rune Allnor wrote:
> You can always add a decimation step afterwards, if the amount
> of data becomes too large to handle with 10x oversampling. If you
> sample at 1.05x Nyquist you are stuck with no leeway.

Moreover, Kinar wrote in his opening post, "...greater than 2*f_h, where f_h is the maximum frequency of interest in the analog signal." Frequencies higher than f_h, whether interesting or not, will alias. The higher sample rate moves that alias out to where a digital LPF can suppress it.

Jerry
--
Engineering is the art of making what you want from things you can get.
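Jerry's point about out-of-band energy folding back can be checked with a small folding formula. The frequencies below are invented for illustration:

```python
def aliased_frequency(f, fs):
    """Apparent frequency of a real tone at f Hz after sampling at fs Hz.

    Sampling folds the spectrum about multiples of fs/2: any component
    above fs/2 lands back somewhere in the 0..fs/2 band, indistinguishable
    from a genuine in-band tone."""
    f_folded = f % fs
    return min(f_folded, fs - f_folded)

# Sampled at 100 kHz, a 70 kHz interferer appears at 30 kHz, exactly on
# top of a legitimate 30 kHz echo component, and no digital filter can
# separate them afterwards.
apparent = aliased_frequency(70e3, 100e3)
```

This is why the analog anti-alias filter has to act before the ADC, and why the 5-10x headroom helps: with generous oversampling, a gentle analog filter only needs to knock down energy far above the band of interest, and a sharp digital LPF can clean up the rest.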
Reply by ● January 20, 2009
mblume wrote:
> In addition to what Rune said, you'll have to take into consideration
> that you have to filter your analog data before it is given to an A/D
> converter. This filter being analog, it is easier to build one with a
> slower roll-off. If you oversample, you can either work with the
> unnecessarily large bandwidth throughout the whole chain or decimate
> at some intermediate step.
>
> Just my $0.02. Analog electrical engineers might want to disagree :-)
>
> Martin

I agree as well, Martin. It is easier to deal with filtering analog signals, but there is always some leeway in the digital domain. It is probably safer to do some "crude" analog filtering and then, after the circuit is complete, use DSP to handle the things that you could not anticipate in hardware.

Nicholas