
Adaptive Filter for Binary Signal?

Started by jerothb December 1, 2010
On Dec 1, 3:20 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> Knowledge is power. Power is 10c per kWh. My power is mere 10 MW,
mere? you're doing better than Wall Street lawyers! my power is only about a megawatt. at least when i'm turned on.
> > On Dec 1, 3:03 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> >> Filters are agnostic to the waveform.
i dunno about that.  i've always thought that matched filters were quite
particular to waveforms that look like their impulse response flipped around.

r b-j
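A minimal sketch of that matched-filter point in Python/NumPy, assuming a
rectangular pulse in white noise (the pulse shape, noise level, and lengths
are made-up illustrative values, not anything from this thread):

import numpy as np

rng = np.random.default_rng(0)

# Assumed example waveform: an 8-sample rectangular pulse (illustrative only).
pulse = np.ones(8)

# Received signal: the pulse buried in white Gaussian noise.
rx = np.concatenate([np.zeros(32), pulse, np.zeros(32)])
rx = rx + rng.normal(0.0, 0.5, rx.size)

# Matched filter: the impulse response is the expected waveform, time-reversed.
h = pulse[::-1]
out = np.convolve(rx, h)

# The correlation peak should land where the pulse ends in the received signal.
print("peak at", int(np.argmax(out)), "expected near", 32 + pulse.size - 1)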
On Dec 1, 6:16 pm, brent <buleg...@columbus.rr.com> wrote:
> On Dec 1, 3:33 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> > jerothb wrote:
> > > You are clearly a businessman of the highest order.
> > >
> > > On Dec 1, 3:27 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> > > > I see. You are interested in anyone doing your work for free. Fuck off.
On 12/1/2010 9:31 AM, jerothb wrote:
> Hello,
>
> I'm taking an adaptive filter course, and have been given a desired
> signal and a distorted signal.  The desired signal is non-return-to-zero
> purely random binary data, i.e. -1, 1, 1, -1, 1, -1 etc.  The distorted
> is this exact same signal, but filtered and with noise added.  We are to
> filter the distorted signal using adaptive filter methods that we
> learned, namely steepest descent, LMS, RLS, etc.
>
> My question is, is this AT ALL realistic?  Would you ever equalize a
> binary signal?  I thought equalizers and adaptive filters would run on
> a modulated waveform signal, not pure binary.  Unfortunately, I'm
> beginning to doubt my teacher's wisdom in giving us this project!
>
> For example, the BER on the distorted signal is about 0.48.  I used
> the optimal Wiener filter (not adaptive, I know), and the MSE went
> from about 4 down to 0.995.  But that is HORRIBLE for binary data!
> The BER went down to only like 0.478.  And of course stuff like LMS,
> steepest descent, etc., which also try to minimize MSE, are never going
> to be better than the Wiener filter, so this seems like a meaningless
> pursuit!
>
> My Wiener filter is good.  I tried it on some distorted audio, and the
> result was fantastic.  It took out almost all the distortion.  So
> that's not my problem.
>
> I'm trying to understand why these methods don't work on binary data.
> Could anyone explain this to me?  Is there something inherent in the
> mean-square-minimizing filters that takes advantage of the fact that
> consecutive samples are dependent on each other?  (Obviously, with the
> data I've been given, the previous sample has no bearing on the
> next.)  In class examples we always used AR-process-type signals,
> which have samples that are always correlated with the previous
> sample.  We never had a single example with abrupt binary data, as
> this is.
>
> To be clear, here is an actual excerpt of some part of the desired
> signal and the distorted:
>
>      1.0000   -1.0000    1.0000    1.0000   -1.0000   -1.0000
>     -1.0000    1.0000    1.0000    1.0000   -1.0000
>     -0.1417    1.0748   -0.0883   -1.1488    0.8724    0.0210
>      0.9441   -0.9969   -1.1470   -0.8130    1.0622
>
> basically, it's the same 1's and -1's, but scaled by the channel and
> noise.
>
> Would you EVER send data in this form over a noisy/fading channel such
> as this one??  Should adaptive filtering techniques work in this case?
As far as I'm concerned, before the days of Widrow et al. and adaptive
filters, Bob Lucky did adaptive equalizers for modems.  It's much like the
problem you describe - with fewer levels.  Check it out.

Fred
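For anyone who wants to poke at a setup like the one the OP describes, here
is a minimal LMS-equalizer sketch in Python/NumPy; the channel taps, noise
level, filter length, step size, and decision delay below are made-up
illustrative values, not the ones from the course data:

import numpy as np

rng = np.random.default_rng(1)

# NRZ binary data, equiprobable +/-1, passed through an assumed dispersive
# channel plus additive white noise (all parameters here are illustrative).
N = 20000
d = rng.choice([-1.0, 1.0], size=N)
channel = np.array([0.4, 1.0, 0.4])
x = np.convolve(d, channel)[:N] + 0.25 * rng.normal(size=N)

# LMS equalizer trained against the known, suitably delayed desired signal.
M, mu = 11, 0.01                                  # taps, step size
delay = (M - 1) // 2 + (channel.size - 1) // 2    # decision delay
w = np.zeros(M)
y = np.zeros(N)
for n in range(M, N):
    u = x[n - M + 1:n + 1][::-1]                  # newest sample first
    y[n] = w @ u
    e = d[n - delay] - y[n]                       # error against the desired signal
    w += mu * e * u

# Hard-slice and compare bit error rates over the second half (after convergence).
half = N // 2
ber_raw = np.mean(np.sign(x[half:]) != d[half - 1:N - 1])   # align to the channel's main tap
ber_eq = np.mean(np.sign(y[half:]) != d[half - delay:N - delay])
print(f"raw BER ~ {ber_raw:.3f}, equalized BER ~ {ber_eq:.3f}")

Training against the known, delayed desired signal is what makes this
supervised; in a real modem the same structure runs on a training preamble
and then switches to decision-directed operation.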
On Wed, 01 Dec 2010 19:01:02 +0000, glen herrmannsfeldt wrote:

> jerothb <jerothb@gmail.com> wrote:
>
>> I'm taking an adaptive filter course, and have been given a desired
>> signal and a distorted signal.  The desired signal is non-return-to-zero
>> purely random binary data, i.e. -1, 1, 1, -1, 1, -1 etc.  The distorted
>> is this exact same signal, but filtered and with noise added.  We are to
>> filter the distorted signal using adaptive filter methods that we
>> learned, namely steepest descent, LMS, RLS, etc.
>
>> My question is, is this AT ALL realistic?  Would you ever equalize a
>> binary signal?  I thought equalizers and adaptive filters would run on
>> a modulated waveform signal, not pure binary.  Unfortunately, I'm
>> beginning to doubt my teacher's wisdom in giving us this project!
>
> This sounds pretty much like the problem of data coming off tape or
> disk.  Though people learned a long time ago that NRZ wasn't the best
> choice, it is still a choice.
>
> -- glen
A more pertinent example might be the motherboard inside the computer you're
using.  PCI Express uses 2.5 or 5.0 Gb/s 8B10B-coded signals to connect the
various chipsets.  SATA uses 1.5, 3.0 or 6.0 Gb/s 8B10B-coded signals to
connect to the disk drives.

The bits produced by 8B10B coding are a loose approximation to
non-return-to-zero purely random binary data, just with constraints on
transition density, run length and DC balance.  The distinction may not
matter if this is an introductory course.

Cheers,
Allan
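A rough way to see the constraints Allan mentions: the small sketch below
measures the longest run and the DC balance of a bit stream.  The random
input is just a stand-in; real 8B10B code words come from lookup tables that
are not shown here.

import numpy as np

rng = np.random.default_rng(2)

def stream_stats(bits):
    """Longest run of identical bits and overall disparity (#ones - #zeros)."""
    bits = np.asarray(bits)
    changes = np.flatnonzero(np.concatenate(([True], np.diff(bits) != 0, [True])))
    longest_run = int(np.diff(changes).max())
    disparity = int(np.count_nonzero(bits == 1) - np.count_nonzero(bits == 0))
    return longest_run, disparity

# Uncoded random bits: run length and disparity are unbounded in principle.
print(stream_stats(rng.integers(0, 2, 10000)))
# An 8B10B-coded stream would keep the longest run of identical bits to 5 or
# fewer and the running disparity tightly bounded, so it only loosely
# resembles "purely random" NRZ data.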
On Dec 1, 2:09 pm, jerothb <jero...@gmail.com> wrote:
> Then why does it work so poorly?  I'm just trying to come up with a
> mathematical reason.  Yes, it lowers the MSE, as it should, but who
> cares?  The BER is still crap.
Above (cost function) you just answered (cost function) your own question
(cost function), but don't know that you have.  If I model a third-order
process with a linear model, the result will not be very good.  Also, if I
derive a model by minimizing the mean-squared error (i.e., the cost
function), that says nothing about its performance as far as the BER is
concerned (a DIFFERENT cost function).  The Wiener filter, and the vast
majority of adaptive filters, are derived by minimizing the MSE (MSE is the
cost function).  As you stated, the MSE (cost function) was reduced.  Voila,
the filter is doing exactly what it was designed to do (minimize the cost
function).

If you want to minimize the BER, you need to derive an algorithm based on
that cost function.

Maurice Givens
Yes!  You hit the nail on the head.  This was exactly what I was
getting at.  Our teacher told us to use MSE-minimizing algorithms on a
binary sequence of samples {1, -1} with equal probability.

Since he never mentioned BER (I just looked into that on my own
initiative), I'll just minimize the MSE and ignore the BER.  It's sort
of pointless, since this is binary data, but that seems to be what he
wants.
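
A toy numerical illustration of Maurice's cost-function point: the two
"estimates" below are fabricated purely to show that MSE and BER can rank
things differently; neither is the output of any real filter.

import numpy as np

rng = np.random.default_rng(3)
d = rng.choice([-1.0, 1.0], size=100000)   # transmitted +/-1 symbols

# Estimate A: badly scaled but always the right sign -> large MSE, zero BER.
y_a = 0.1 * d

# Estimate B: correct amplitude but 10% of the symbols flipped
# -> smaller MSE than A, yet far worse BER.
flips = rng.random(d.size) < 0.10
y_b = np.where(flips, -d, d)

for name, y in (("A", y_a), ("B", y_b)):
    mse = np.mean((d - y) ** 2)
    ber = np.mean(np.sign(y) != d)
    print(f"estimate {name}: MSE = {mse:.3f}, BER = {ber:.3f}")

Estimate A comes out with the larger MSE (about 0.81) but never flips a sign,
so its BER is zero; estimate B has the smaller MSE (about 0.4) but a 10% BER.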


On 12/01/2010 03:45 PM, Tim Wescott wrote:
 > [...]
> Oh my gawd!  I'm being taught addition and the problem says I have 33
> apples in one box, and 89 apples in another, and how many is that in
> total.  But I don't like apples!  And I only eat one orange a day anyway!
> So this'll never be useful!!!!!  (gosh, I sure wonder if I need to go buy
> more oranges -- too bad I'm not being given the tools to do that job).
Prezactly!  Well-put, Tim!
--
Randy Yates                      % "My Shangri-la has gone away, fading like
Digital Signal Labs              %  the Beatles on 'Hey Jude'"
mailto://yates@ieee.org          %
http://www.digitalsignallabs.com % 'Shangri-La', *A New World Record*, ELO