Suppose I want to test whether echo is induced during transmission through a network. I send a predetermined message from the far end and, at the near end, compare the received copy of that message with the original. From there I try to detect whether any echo is present. My question: suppose that, due to packet drops, parts of the predetermined speech from the far end fail to arrive at the near end. Wouldn't this "fool" an echo canceller? By "fool" I mean the echo canceller might try to adapt to the missing parts of the speech, which would result in a wrong return loss calculation. I guess what I really want to ask is: does G.168 differentiate between echo and packet drops in the signal being measured?
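To make the concern concrete, here is a minimal sketch (plain Python, with a hypothetical tone and made-up gain/drop values; `erl_db` is my own helper, not a G.168 procedure) of how a naive echo-return-loss measurement gets biased when a dropped packet zeroes out part of the echo at the near end:

```python
import math

def erl_db(far_end, near_end):
    """Echo return loss: far-end power over echo power, in dB."""
    p_far = sum(x * x for x in far_end)
    p_echo = sum(y * y for y in near_end)
    return 10.0 * math.log10(p_far / p_echo)

# Hypothetical far-end reference (440 Hz tone at 8 kHz) and a -20 dB echo.
far = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(800)]
echo = [0.1 * x for x in far]
print(round(erl_db(far, echo), 1))  # 20.0 dB when nothing is lost

# Simulate one dropped 20 ms packet (160 samples at 8 kHz): the echo of
# that segment never arrives, so those samples are zero at the near end.
dropped = echo[:]
for n in range(200, 360):
    dropped[n] = 0.0

# Measured echo power is now too low, so the ERL estimate reads higher
# (i.e. "better") than the true return loss -- the drop skews the test.
print(round(erl_db(far, dropped), 1))
```

This is only meant to show the direction of the bias: a measurement that doesn't exclude the gap from both signals will overestimate return loss.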
Another question, with respect to return loss measurement: given varying degrees of network latency, does a "typical" echo canceller need to continuously realign the two signals in the time domain (say, in the setup above) in order to measure return loss correctly?
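On the alignment point, one common approach (sketched here in Python with invented numbers; `best_lag` is my own helper and nothing here is mandated by G.168) is to estimate the bulk delay by cross-correlation first, then compute return loss on the aligned signals:

```python
import math
import random

# Hypothetical far-end reference (white noise) and a near-end echo of it,
# delayed by 50 samples and attenuated by 20 dB (gain 0.1).
random.seed(0)
ref = [random.uniform(-1.0, 1.0) for _ in range(1000)]
delay, gain = 50, 0.1
near = [0.0] * delay + [gain * x for x in ref]

def best_lag(ref, sig, max_lag):
    """Delay estimate: the lag that maximizes cross-correlation."""
    best, best_score = 0, float("-inf")
    n = len(ref) - max_lag  # correlate over a window valid for every lag
    for lag in range(max_lag + 1):
        score = sum(ref[i] * sig[i + lag] for i in range(n))
        if score > best_score:
            best, best_score = lag, score
    return best

lag = best_lag(ref, near, 100)
print(lag)  # 50

# With the bulk delay removed, the power ratio gives the return loss.
aligned = near[lag:lag + len(ref)]
erl = 10.0 * math.log10(sum(x * x for x in ref) /
                        sum(y * y for y in aligned))
print(round(erl, 1))  # 20.0
```

If the network delay drifts (jitter buffer resizing, clock skew), the lag estimate has to be refreshed, which is essentially the "continuous realignment" the question asks about.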
Thanks in advance.