# Sliding window Viterbi

Started by john ●August 30, 2005

I am using a homemade Viterbi decoder to remove ISI. Originally, my decoder maintained path history (states vs. time) for the entire duration of the message (1e5 bits) and traced all the way back from the end to the start. Then I modified it to use a sliding window: I created a circular buffer equal to the desired traceback length (about 5 times the ISI duration) and kept the path history in that buffer, tracing back over the reduced length at each step of the input and producing one new bit. It works fine, BER-wise.

I am still maintaining a running cumulative sum of the surviving path error over the entire message. This grows without bound, so I need to make this a sliding sum. But the bookkeeping to do that is complicated. What I did instead is continuously compute the mean of all of the cumulative error sums and subtract that off at each step. Then it doesn't grow without bound. This gives good BER too.

Is this a crazy thing to do? I have a bad feeling that it is a dumb idea and I should go with the sliding sum instead, however tricky to implement. Comments welcome.

John
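The circular-buffer traceback John describes might look roughly like this (a minimal sketch with made-up names; the survivor array layout, state count, and bit-extraction rule are all my assumptions, not his code):

```python
import numpy as np

N_STATES = 4   # assumed trellis size, 2^(channel memory)
D = 20         # traceback depth, ~5x the ISI duration as John describes

# Circular buffer of survivor decisions: surv[t % D, s] holds the
# predecessor of state s at trellis step t.
surv = np.zeros((D, N_STATES), dtype=np.int64)

def release_oldest_bit(t, best_state):
    """Once the buffer is full, trace back D steps from the current
    best state and emit one decoded bit from the oldest stage."""
    s = best_state
    for k in range(D):
        s = surv[(t - k) % D, s]
    # With one input bit per transition, the bit can be read from the
    # state label reached at the oldest stage (assumed convention).
    return s & 1
```

Each new input symbol overwrites the oldest column of the buffer, so memory stays fixed at D stages regardless of message length.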

Reply by ●August 30, 2005

john <johns@xetron.com> wrote:

> I am still maintaining a running cumulative sum of the surviving path
> error over the entire message. This grows without bound, so I need to
> make this a sliding sum. But the bookkeeping to do that is complicated.
> What I did instead is continuously compute the mean of all of the
> cumulative error sums and subtract that off at each step. Then it
> doesn't grow without bound. This gives good BER too.
>
> Is this a crazy thing to do? I have a bad feeling that it is a dumb
> idea and I should go with the sliding sum instead, however tricky to
> implement.

This is called subtractive normalization and it's not crazy. (Most people would find the smallest path metric at a given trellis stage and subtract it from all of the metrics. You're subtracting the mean instead, which I suspect works equally well.) The other approach is called modulo normalization.

Steve
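The subtractive normalization Steve describes is tiny in code; a sketch (function name mine):

```python
import numpy as np

def normalize_metrics(metrics, use_mean=False):
    """Subtract a common offset from every path metric at a trellis
    stage. Only metric *differences* matter to the add-compare-select
    step, so subtracting the stage minimum (or the mean, as John does)
    changes nothing about which paths survive."""
    offset = metrics.mean() if use_mean else metrics.min()
    return metrics - offset
```

Either choice keeps the accumulated metrics bounded as long as the branch metrics are bounded, since the common growth term is removed at every stage.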

Reply by ●August 31, 2005

john wrote:

> I am using a homemade Viterbi decoder to remove ISI. Originally, my
> decoder maintained path history (states vs. time) for the entire
> duration of the message (1e5 bits) and it traced all the way back from
> the end to the start. Then I modified it to use a sliding window. I
> created a circular buffer equal to the desired traceback length (about
> 5 times the ISI duration) and kept the path history in that buffer,
> tracing back at each step of the input over the reduced length and
> producing one new bit. It works fine, BER-wise.
>
> I am still maintaining a running cumulative sum of the surviving path
> error over the entire message. This grows without bound, so I need to
> make this a sliding sum. But the bookkeeping to do that is complicated.
> What I did instead is continuously compute the mean of all of the
> cumulative error sums and subtract that off at each step. Then it
> doesn't grow without bound. This gives good BER too.

What has that to do with ISI? It seems to me to be a different issue.

...

Jerry
--
Engineering is the art of making what you want from things you can get.

Reply by ●August 31, 2005

> What has that to do with ISI? It seems to me to be a different issue.

I'm not 100% sure about this, but it appears that you can do Viterbi equalisation as well as Viterbi decoding. If a binary stream is input to an ISI channel, the channel can be treated as a convolutional coder, which means that a maximum likelihood estimate of the binary sequence is returned by a Viterbi algorithm matched to that "encoder". I am not sure how this is done exactly, but I do know that the ISI channel must be relatively short, since the number of states in the Viterbi decoder (and hence the memory requirements) grows exponentially with the "encoder" length. I believe it works well for multipath channels with a small number of paths. Maybe side information about the channel is also needed.

What I don't really understand is that a conventional convolutional encoder works with modulo-2 adders, whereas a passive ISI channel adds interference at each sample by regular addition. Maybe the Viterbi sequence estimator must be modified to account for this.

Slainte

Porterboy

I woke up this morning and I got myself a beer
The future's uncertain and the end is always near
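The modification Porterboy wonders about is just the branch metric: the trellis machinery carries over unchanged, but instead of a Hamming distance over modulo-2 outputs, each branch is scored by the squared Euclidean distance between the received sample and the real-valued channel output that transition would produce. A toy sketch for an assumed 2-tap channel h = [1.0, 0.5] with BPSK symbols (channel, names, and the assumed -1 starting symbol are mine, for illustration only):

```python
import numpy as np

# Hypothetical ISI channel: y[n] = h0*x[n] + h1*x[n-1], x[n] in {-1, +1}.
# Channel memory is 1 symbol, so the trellis has 2 states (the previous symbol).
h0, h1 = 1.0, 0.5
SYMS = (-1.0, 1.0)

def viterbi_equalize(y):
    pm = np.array([0.0, np.inf])   # assume the channel starts from x[-1] = -1
    surv = []                      # survivor decisions, one row per sample
    for r in y:
        new_pm = np.full(2, np.inf)
        prev = np.zeros(2, dtype=int)
        for s_new, x in enumerate(SYMS):        # new state = current symbol
            for s_old, xp in enumerate(SYMS):   # old state = previous symbol
                bm = (r - (h0 * x + h1 * xp)) ** 2   # Euclidean branch metric
                if pm[s_old] + bm < new_pm[s_new]:
                    new_pm[s_new] = pm[s_old] + bm
                    prev[s_new] = s_old
        pm = new_pm
        surv.append(prev)
    # trace back from the best final state
    s = int(np.argmin(pm))
    out = []
    for prev in reversed(surv):
        out.append(SYMS[s])
        s = prev[s]
    return np.array(out[::-1])
```

With a noiseless received sequence this recovers the transmitted symbols exactly; with noise it returns the maximum-likelihood sequence for this channel model. The exponential state growth Porterboy mentions is visible here: L channel taps would need 2^(L-1) states instead of 2.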

Reply by ●August 31, 2005

> What has that to do with ISI? It seems to me to be a different issue.
>
> ...
>
> Jerry
> --
> Engineering is the art of making what you want from things you can get

Jerry,

See if you can dig up a copy of "How I Learned to Love the Trellis" by Bernard Sklar. It appeared in IEEE Signal Processing Magazine a couple of years ago. If you can't find a copy, I'll send you one.

John

Reply by ●August 31, 2005

Jerry Avins <jya@ieee.org> wrote:

> john wrote:
>> I am using a homemade Viterbi decoder to remove ISI. Originally, my
>> [ snip path metric arithmetic discussion ]
>
> What has that to do with ISI? It seems to me to be a different issue.

I assume the Viterbi decoder in question is part of an MLSE (Forney) equalizer.

Steve

Reply by ●August 31, 2005

john wrote:

...

> See if you can dig up a copy of "How I Learned to Love the Trellis" by
> Bernard Sklar. It appeared in IEEE Signal Processing Magazine a couple
> of years ago. If you can't find a copy, I'll send you one.

The world fills me with awe. At 73, I am still amazed and thrilled by the wonder of it:

"T6: How I Learned to Love the Trellis: Using the Viterbi Algorithm for Equalization and Detection

Tutorial Instructor: Dr. Bernard Sklar, Director Advanced Systems, Communications Engineering Services

Tutorial Description: In 1967, Andrew Viterbi first presented his now famous algorithm for the decoding of convolutional codes. A few years later, what became known as the Viterbi decoding algorithm (VDA) was applied to the detection of data signals distorted by intersymbol interference (ISI). This half-day tutorial focuses on how the VDA can be used for signal detection and equalization, in a way that is quite different from the usual equalization approach of adjusting a received signal via filtering. The filtering approach attempts to shape or modify the received signal in order to "reverse" the distortion. However, with the VDA, the receiver can be described as "adjusting itself" so as to make good data estimates from the distorted waveforms. Basic tools are reviewed, such as finite state machines, likelihood functions, and tree and trellis diagrams. An application from the Global System for Mobile (GSM) Communications is demonstrated. Also covered in the tutorial is the use of the VDA with a super-trellis to simultaneously perform detection, equalization, and decoding. The main goal of this half-day tutorial is to provide intuitive insight as to how the VDA works, and why it is a useful tool for detecting and equalizing signals that can be modeled as outputs from a finite state machine."

Jerry
--
Engineering is the art of making what you want from things you can get.

Reply by ●August 31, 2005

On Wed, 31 Aug 2005 00:53:36 +0000 (UTC), spope33@speedymail.org (Steve Pope) wrote:

> john <johns@xetron.com> wrote:
>
>> I am still maintaining a running cumulative sum of the surviving path
>> error over the entire message. This grows without bound, so I need to
>> make this a sliding sum. But the bookkeeping to do that is complicated.
>> What I did instead is continuously compute the mean of all of the
>> cumulative error sums and subtract that off at each step. Then it
>> doesn't grow without bound. This gives good BER too.
>>
>> Is this a crazy thing to do? I have a bad feeling that it is a dumb
>> idea and I should go with the sliding sum instead, however tricky to
>> implement.
>
> This is called subtractive normalization and it's not crazy.
> (Most people would find the smallest path metric at a given
> trellis stage, and subtract it from all of the metrics.
> You're subtracting the mean instead, which I suspect works
> equally well.)

I would suspect that in a floating-point simulation there would be a lot of insensitivity to the quantity being subtracted, so subtracting the minimum metric or the mean shouldn't make much, if any, difference. As long as the arithmetic doesn't saturate or roll over due to overflows, things should be hunky-dory.

> The other approach is called modulo normalization.
>
> Steve

IIRC, this is what the old Qualcomm VLSI Viterbi decoders did. When an overflow was detected anywhere, everything got right shifted one bit.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
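The rescale-on-overflow trick Eric recalls can be sketched like this for fixed-point metrics (threshold, metric width, and names are assumptions of mine, not a description of the Qualcomm parts):

```python
import numpy as np

METRIC_BITS = 16
THRESH = 1 << (METRIC_BITS - 1)   # rescale before any metric reaches this

def accumulate(metrics, branch):
    """Add branch metrics to path metrics, then right-shift everything
    by one bit if any accumulator is near overflow. Shifting all the
    metrics by the same amount preserves their relative order (up to
    ties), which is all the add-compare-select step needs."""
    m = metrics + branch
    if m.max() >= THRESH:
        m >>= 1
    return m
```

Unlike subtractive normalization, the shift slightly distorts metric differences, but in practice the ordering of survivors is what matters and that is kept.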