DSPRelated.com
Forums

How to downsample this signal?

Started by Piotr Wyderski May 13, 2017
Hello,

I have a 50 Hz mains voltage signal sampled at 12 bit/100 kHz. This high
sample rate is there for rapid detection of overvoltage conditions. However,
for further processing it is way too high, so I'd like to reduce the
sampling rate by a factor of ~100 (not a very critical value, just
"sensibly low").

The thing will be implemented on a PSoC 5LP, which has a dedicated
digital filter block capable of ~32e6 FMACs per second with a fixed
24-bit resolution. I'll use half of that for other purposes and would
like to retain a data accuracy of at least 12 bits. The dominant signal
is around 50 Hz and its quality should remain high, especially the
phase relations.

Given these constraints, what architecture would you suggest?
I was thinking about not using multiplication at all
(at least in the high-frequency stage) and implementing two cascaded
order-4, decimation-ratio-8 CICs, which fit in the 24-bit registers
and provide a combined decimation ratio of 64, which is fine. Then
clean up the output with a FIR. This combo worked years ago in
the FPGA world, where multiplication was a problem. But I think
I lose something important by not using the available MAC unit,
so could you please suggest a better approach?
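
For concreteness, a rough behavioural model of that idea in plain NumPy
(just to show the structure; the real implementation would be DFB assembly,
and the short 50 Hz test signal is only illustrative):

import numpy as np

def cic_decimate(x, R=8, N=4):
    """Order-N CIC decimator, decimation R, differential delay 1.
    N integrators at the input rate, decimate by R, N combs at the
    output rate. DC gain is R**N (8**4 = 4096 here); per-stage register
    growth is N*log2(R) = 12 bits on top of a 12-bit input, i.e. 24 bits."""
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):                    # cascaded integrators (running sums)
        y = np.cumsum(y)
    y = y[::R]                            # decimate by R
    for _ in range(N):                    # cascaded combs (first differences)
        y = np.diff(y, prepend=0)
    return y

fs = 100e3                                # 100 kHz input rate
t = np.arange(10000) / fs                 # 0.1 s of signal
x = np.round(2047 * np.sin(2 * np.pi * 50 * t)).astype(np.int64)   # ~12-bit 50 Hz sine

s1 = cic_decimate(x) // 4096              # rescale by the stage gain, back to ~12 bits
s2 = cic_decimate(s1) // 4096             # second stage: combined decimation 64
print(len(x), len(s1), len(s2))           # 10000 -> 1250 -> 157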

    Best regards, Piotr




On 13.05.2017 10:23, Piotr Wyderski wrote:
> [original post snipped]
Have you considered uniform-phase M-path recursive all-pass filters? A good reference is "Multirate Signal Processing for Communication Systems" by fredric harris. Hope that helps. Gene
Moving average? Or y = c*y + (1-c)*x, x being your sample, c being, say, 0.99.
Both of those would give weird frequency-dependent responses, though. A binomial filter would be better, no?
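
To put numbers on that, here is a quick SciPy check of both suggestions,
taking c = 0.99 and a length-100 moving average purely as stand-ins:

import numpy as np
from scipy.signal import freqz

fs = 100e3
c = 0.99
# one-pole smoother: y[n] = c*y[n-1] + (1-c)*x[n]
w, h_iir = freqz([1 - c], [1, -c], worN=4096, fs=fs)
# length-100 moving average (one candidate ahead of a decimate-by-100 step)
w, h_ma = freqz(np.ones(100) / 100, [1], worN=4096, fs=fs)

# 50 Hz is the wanted signal; 500 Hz is the new Nyquist and 1 kHz would
# fold onto DC after decimating by 100
for f in (50, 500, 1000):
    k = np.argmin(abs(w - f))
    print(f, 20 * np.log10(abs(h_iir[k])), 20 * np.log10(abs(h_ma[k])))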
On 5/13/2017 3:23 AM, Piotr Wyderski wrote:
> [original post snipped]
You would need to do the filter design to see how good a filter you can make with those constraints, but what is wrong with using the MAC for a FIR filter? As you say, CIC filters are used when multiplies are limited, and they are often followed by a FIR after decimation to "clean up" the final filter response. The CIC has nice deep nulls at the appropriate frequencies to help minimize aliasing into the band of interest.

But if you have the resources to implement FIR filters, why not? It all depends on how good a filter you can make. Using 160 multiplies per output sample would give you a pretty nice filter, I believe. Just take advantage of the various optimizations available in this case. You are familiar with polyphase decimating FIR filters, no?

--
Rick C
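
For instance, sizing a single-stage decimate-by-100 FIR with SciPy's
Kaiser-window helper suggests the MAC budget is comfortable (the band
edges, ripple and test signal below are placeholder assumptions, not a
recommendation):

import numpy as np
from scipy.signal import firwin, kaiserord, upfirdn

fs = 100e3                                  # input rate
M = 100                                     # decimation -> 1 kHz output rate

# keep everything below ~100 Hz flat, be well down by 450 Hz, ~72 dB stopband
ntaps, beta = kaiserord(ripple=72, width=(450 - 100) / (fs / 2))
ntaps |= 1                                  # odd length -> type-I linear phase
taps = firwin(ntaps, cutoff=275, window=("kaiser", beta), fs=fs)

# upfirdn uses a polyphase implementation: only every M-th output is computed
t = np.arange(int(fs)) / fs
x = np.sin(2 * np.pi * 50 * t)
y = upfirdn(taps, x, up=1, down=M)

# roughly 1.3 MMAC/s, far below the ~16 MMAC/s half-budget mentioned above
print(ntaps, "taps,", len(y), "output samples,", ntaps * fs / M / 1e6, "MMAC/s")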
Piotr Wyderski <peter.pan@neverland.mil> writes:

> [original post snipped]
Piotr,

CICs are not really very good decimation filters, in general. That is, they alias, and there is no way to "clean up" the signal afterwards.

What I don't understand is why, with the sledgehammer processor you have, you don't do a decent polyphase filter with a good FIR?

--
Randy Yates, DSP/Embedded Firmware Developer
Digital Signal Labs
http://www.digitalsignallabs.com
On 5/13/2017 12:58 PM, Randy Yates wrote:
> CICs are not really very good decimation filters, in general. That
> is, they alias, and there is no way to "clean up" the signal
> afterwards.
>
> What I don't understand is why, with the sledgehammer processor
> you have, you don't do a decent polyphase filter with a good
> FIR?
They can be used without aliasing into the passband if they are used only for the higher stages of decimation. They have nulls at the frequencies that alias to baseband. As long as that attenuation and width are sufficient, they work great.

--
Rick C
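
To put a number on that for the first stage proposed above (order 4, R = 8
from 100 kHz), assuming the band that actually matters is only a couple of
hundred Hz wide:

import numpy as np

fs, R, N = 100e3, 8, 4                      # first CIC stage of the proposal

def cic_mag(f):
    """Normalized order-N CIC magnitude at f (Hz); f must not be a multiple of fs."""
    f = np.asarray(f, dtype=float)
    return np.abs(np.sin(np.pi * f * R / fs) / np.sin(np.pi * f / fs)) ** N / R ** N

# bands that fold onto 0..250 Hz after decimating to 12.5 kHz; the worst
# leakage in each band is at its edges, next to the CIC null
band = 250.0
for k in range(1, R // 2 + 1):
    edges = np.array([k * fs / R - band, k * fs / R + band])
    worst = 20 * np.log10(cic_mag(edges).max())
    # comfortably below what 12-bit accuracy needs for this narrow band
    print(f"alias band around {k * fs / R / 1e3:.1f} kHz: worst {worst:.1f} dB")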
rickman <gnuarm@gmail.com> writes:

> They can be used without aliasing into the passband if they are used
> only for the higher stages of decimation. They have nulls at the
> frequencies that alias to baseband. As long as that attenuation and
> width are sufficient, they work great.
They can be, depending on the situation, as you say. Conversely, there appear to be situations in which the aliasing would be too significant, or the passband response would be too unflat because of the high order required to get the aliasing down. Of course you can compensate for that. One would have to consider all these things if one were short on computational resources.

It appears computational resources in Piotr's case are plentiful, and his application requires high accuracy (i.e., low aliasing and a flat passband), hence my suggestion.

--
Randy Yates, DSP/Embedded Firmware Developer
Digital Signal Labs
http://www.digitalsignallabs.com
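
The droop part can be quantified for the specific cascade proposed in the
original post (two order-4, R = 8 stages); this is only an illustrative check
of the normalized CIC response at 50 Hz:

import numpy as np

def cic_droop_db(f, fs, R=8, N=4):
    """Normalized order-N CIC magnitude at f (Hz) for input rate fs, in dB."""
    h = np.sin(np.pi * f * R / fs) / (R * np.sin(np.pi * f / fs))
    return 20 * np.log10(abs(h) ** N)

f = 50.0
stage1 = cic_droop_db(f, 100e3)           # first stage runs at 100 kHz
stage2 = cic_droop_db(f, 100e3 / 8)       # second stage runs at 12.5 kHz
print(stage1, stage2, stage1 + stage2)    # fractions of a dB at 50 Hz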
Thank you all for your input.

Randy Yates wrote:

> What I don't understand is why, with the sledgehammer processor
> you have, you don't do a decent polyphase filter with a good
> FIR?
It is a hobby project and I have had a 15-year break from any form of DSP, at least understood as *signal* processing. I believe I still understand the math behind it, but I know only a limited number of "tricks". Polyphase representation is totally new to me, and so are its applications to decimation/interpolation.

The FIR filter has many advantages, e.g. maintaining linear phase will be easy, but the "direct" approach implied a FIR of insanely high order, so I didn't pursue the idea and drifted towards IIR, with which I had some positive experience, especially with CICs used in my SDR project.

As far as I can see, the polyphase FIR approach should be the right choice for me. I am still not sure whether I am able to design a filter in this topology correctly, but well, it's what we call "learning". Thank you for bringing it to my attention.

BTW, the processor is not as sledgehammer as it may sound, especially due to the very limited RAM resources available to the DSP unit (2 banks of 128 24-bit words each, period). The main processor (an ARM) has everything needed to do the job, except for low-latency guarantees, which is enough to kick it out of the scene. Whatever I can do, I must do using the DFB block. Currently the unknown is the definition of "whatever", but I'm aiming at saturating the block.

Best regards, Piotr
On 5/14/2017 6:42 PM, Piotr Wyderski wrote:
> Polyphase representation is totally new to me, and so are its
> applications to decimation/interpolation.
A polyphase filter *is* a FIR filter. It is what you end up with when you factor the decimation into the filter and only calculate the outputs you need, rather than calculating every output and *then* doing the decimation. It's been a long time since I wrote one of these, so the details are rather fuzzy. Google is your friend, at least much more accurate than my hazy recollections.
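
A tiny numerical demonstration of that equivalence (arbitrary toy filter and
a small decimation factor, purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
M = 4                                  # small decimation factor, just for the demo
h = rng.standard_normal(16)            # any FIR whose length is a multiple of M
x = rng.standard_normal(1000)

# reference: filter at the full rate, then discard M-1 out of every M outputs
ref = np.convolve(h, x)[::M]

# polyphase: split h into M short subfilters and x into M interleaved streams,
# filter each pair at the LOW rate and add; identical result, 1/M the multiplies
Ny = len(x) // M
branches = []
for p in range(M):
    u_p = np.concatenate((np.zeros(p), x))[::M]   # the stream x[r*M - p]
    branches.append(np.convolve(h[p::M], u_p)[:Ny])
poly = np.sum(branches, axis=0)

print(np.allclose(ref[:Ny], poly))     # True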
> I am still not sure whether I am able to design a filter in this
> topology correctly, but well, it's what we call "learning". Thank you
> for bringing it to my attention.
As it was a learning experience for me.
> BTW, the processor is not as sledgehammer as it may sound, especially
> due to the very limited RAM resources available to the DSP unit (2 banks
> of 128 24-bit words each, period).
Are the 256 words for data or also program? That should be plenty of room for coefficients. The data can be processed on the fly.

--
Rick C
rickman wrote:

> Are the 256 words for data or also program?
Data only; there are dedicated code memories. The control part is totally crazy: there are two separate code memories working in parallel (they take both paths simultaneously in order to provide instructions with no delay in the case of a branch). But there is no such thing as a program counter; the code memories store instructions at consecutive addresses without any deeper structure. A run of instructions is called a block and is ended by the presence of a jump instruction. There is also a third control memory which stores the information about where the i-th block begins and to which j-th state the FSM should go in the case of a branch. In short, hardware basic blocks. The data path is a VLIW with exposed pipelining, which adds to the fun.
> That should be plenty of room for coefficients.
But doesn't the FIR require a lot of cells for the past data values?

Best regards, Piotr