
Some practicalities with implementing DSP and Digital Control

Started by lpm 4 years ago · 11 replies · latest reply 4 years ago · 221 views

Hi,

Besides some undergraduate study in signals and systems, communications, and controls (so I understand at least some fundamentals), I don't really have any background in DSP, but I have always been interested in it.  With some unexpected extra time away from work recently, I decided to dive into a hobby DSP project.  While I think I can mostly figure my way through how I might design the digital filters and controllers I want, I'm stumbling a bit in connecting that to a practical implementation strategy, and I'm looking for a bit of help and advice.

To get specific, my current plan is as follows.  In order to get consistent high-speed sampling, I plan to set up my ADC to free-run with DMA and run the filters and controllers on one batch of DMA samples at a time.  The question I'm wrestling with is how to meter out the controller updates in time.  Should I apply the controller output immediately upon computing it, or apply it at some fixed, consistent update rate?  Every setup I've considered trades latency against jitter.  If I put out updates as early as I can possibly compute them, the exact timing of each update will not be consistent (jitter).  If I meter the updates out so they are applied at a consistent rate, they are more likely to be further "out of date" by the time they are applied (latency).  How should I trade one against the other?
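To make the tradeoff concrete, here is roughly the loop structure I'm picturing (Python-flavored pseudocode; helper names like wait_for_dma_block() and write_dac() are made up, not a real driver API):

# Sketch only; wait_for_dma_block(), run_controller(), write_dac() and
# schedule_output() are hypothetical placeholders for whatever my hardware provides.
while True:
    batch = wait_for_dma_block()      # ADC free-runs, DMA hands me N samples
    u = run_controller(batch)         # digital filters + control law on the batch

    # Option A: apply immediately -> lowest latency, but the apply instant
    # moves around with compute time (jitter)
    write_dac(u)

    # Option B (instead of A): hand u to a fixed-rate output timer ->
    # constant, slightly larger latency, but no jitter
    # schedule_output(u)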

I hope this question makes sense.  I haven't been able to find a discussion about this anywhere, which makes me think I'm probably missing something simple.

Reply by Slartibartfast, June 25, 2020

The answer to your question is completely dependent on the sensitivity of your system to latency or jitter or dropouts in the control signal.   If it is insensitive to these things, then No Big Deal, but if it is sensitive to one or more of them, then your strategy of how to mitigate it depends on the level of sensitivity and which quality it is sensitive to.

If jitter is a big deal but latency is not, then buffering everything to a reasonable expected delay and using an output clock to meter out the samples should work.   If it is highly sensitive to latency then you may have to tolerate whatever jitter is generated by your DMA batch sampling process, or try to minimize that jitter in other ways (like doing something other than DMA).  If you have to keep the latency low and the jitter low and this results in dropouts by having to skip samples that you can no longer wait for, then you may have to sort out how best to fill in those missing samples, if necessary, by filtering or repetition or whatever.
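As a minimal sketch of the "buffer to a fixed delay, clock the output" idea (Python-flavored pseudocode; compute_control_law() and write_dac() are hypothetical, and the fixed latency is assumed to be chosen to cover the worst-case compute time):

from collections import deque

FIXED_LATENCY_TICKS = 4     # assumed: >= worst-case compute time, in output-clock ticks
pending = deque()           # (release_tick, value) pairs, oldest first
last_output = 0.0

def on_batch_done(batch_tick, batch):
    """batch_tick = output-clock tick at which the batch's last sample was taken."""
    value = compute_control_law(batch)                     # hypothetical user function
    pending.append((batch_tick + FIXED_LATENCY_TICKS, value))

def on_output_tick(tick_now):
    """Called at a fixed rate; applies each update at a constant delay after sampling."""
    global last_output
    while pending and pending[0][0] <= tick_now:
        _, last_output = pending.popleft()                 # take the newest value that is due
    write_dac(last_output)                                 # hypothetical output call; repeats
                                                           # the old value if nothing new is
                                                           # ready (the dropout case)

Because each update is released relative to when its samples were taken rather than when the computation finished, the output sees a constant latency and no compute-time jitter.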

So, just like selecting adult diapers, it depends.

(Sorry, old political joke.)


Reply by lpm, June 25, 2020

Hi, thanks for your reply.

It certainly makes sense that if the system is insensitive to either jitter or latency, I can simply optimize for the other. Even that much is obvious to me :). What you point out is that the more interesting question I should have asked (but didn't) is: how would I assess whether my system is sensitive to jitter or latency? Is there any guidance you can offer on that?

Reply by Caradoc, June 25, 2020

You can compute the delay margin to see how sensitive your system is to delay (or jitter).

https://www.mathworks.com/help/control/ref/allmargin.html
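For anyone without MATLAB, here is a rough SciPy sketch of the same computation; the open-loop transfer function L(s) = 10 / (s(s + 2)) is just an illustrative assumption, not the actual system in question:

import numpy as np
from scipy import signal

# Assumed open loop for illustration: L(s) = 10 / (s(s + 2))
L = signal.TransferFunction([10.0], [1.0, 2.0, 0.0])

w = np.logspace(-2, 3, 5000)
w, mag_db, phase_deg = signal.bode(L, w)

i = np.argmin(np.abs(mag_db))              # gain crossover: |L(jw)| closest to 0 dB
wc = w[i]
pm_deg = 180.0 + phase_deg[i]              # phase margin in degrees
delay_margin = np.deg2rad(pm_deg) / wc     # extra loop delay (s) the loop can absorb

print(f"wc = {wc:.2f} rad/s, PM = {pm_deg:.1f} deg, delay margin = {delay_margin*1e3:.0f} ms")

Roughly speaking, the sum of your worst-case latency and jitter excursions should stay well inside that delay margin.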

Reply by Caradoc, June 25, 2020

For control loop applications, if delay is no big deal, then jitter is no big deal either...

If delay is a big deal, then jitter is most likely a big deal.

Reply by jfrsystems, June 25, 2020

Your question is a valid one. It can't be answered without understanding the context of the control application and the statistics of the update process. If the plant you are trying to control exhibits low bandwidth, and the average update rate is high enough and its variance small enough, then applying updates as they are computed may work as well as the application needs. The plant effectively filters the randomness in the update stream, including the variation in loop gain it implies. However, I would not design on that basis without ensuring that the numbers support the approach.

Since you are obviously in the process of discovery, I will point you to the more usual way of dealing with this issue, which arises in nearly every control system. Establish the worst-case delay in computing the control error, and place the sampling epoch for the error sequence at that point. Include the delay in the control block diagram and do a Z-transform analysis of the system, including the plant dynamics. Look to the old textbook by Ragazzini for tutoring on Z-transforms. Consider using a dead-beat control strategy in designing the compensator. Consider simulating the system in MATLAB, Simulink or Octave, so you can experiment with parameter variation.
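As a purely illustrative example of folding the computation delay into the z-domain model (the plant G(s) = 1/(s + 1), the 1 kHz rate, the one-sample delay and the gain are all my own assumptions):

import numpy as np
from scipy import signal

fs = 1000.0                       # assumed control rate, Hz
Ts = 1.0 / fs

# ZOH-discretize the assumed plant G(s) = 1 / (s + 1)
num_d, den_d, _ = signal.cont2discrete(([1.0], [1.0, 1.0]), Ts, method='zoh')
num_d = np.squeeze(num_d)

# One full sample of computation delay: multiply G(z) by z^-1,
# i.e. convolve the denominator with [1, 0]
den_delayed = np.convolve(den_d, [1.0, 0.0])

# Closed loop with a proportional controller Kp under unity feedback:
# characteristic polynomial = den_delayed(z) + Kp * num_d(z)
Kp = 50.0
char = den_delayed.copy()
char[-num_d.size:] += Kp * num_d
poles = np.roots(char)
print("closed-loop poles:", poles, "stable:", bool(np.all(np.abs(poles) < 1.0)))

The same structure extends to a longer, fixed worst-case delay: more z^-1 factors, and the resulting pole locations tell you how much performance the delay costs.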

This isn't exactly answering your question but I hope it takes your worthy exploration into another layer of understanding on which you can build. There is a lot of control theory to help you, but I do suggest you lay foundations in Laplace and Z-transform theory, the former a tool of continuous time systems and the latter its extension to sampled data systems.  

Enjoy the process of increasing your knowledge.

Reply by lpm, June 25, 2020

Thanks for your reply, there is some good advice here.

Thinking about the plant's limited response speed as a lowpass on the applied control signal is a helpful perspective.  Coming at it from that angle, it seems to me that if I am able to compute controller updates quickly enough, applying them immediately will actually be advantageous.  If they are applied faster than the dynamics of the plant can 'catch up', then the overall performance becomes limited by the plant dynamics and the authority of the controller.  Conversely, if I were to instead meter the control samples out at a rate 'in band' of the plant's response, I would be introducing additional delay to the response unnecessarily.  Is my interpretation roughly accurate?

I like your suggestion for how to include the processing time in the overall analysis of the system.  I'm not sure I have the equipment available to make the measurements needed for such a formal model of my plant, at least initially, but the idea you present is something I will keep in mind.

I haven't used the Laplace domain in anger in years, and I have never actually used the Z-transform (although I have a rough understanding of it by way of analogy to the Laplace transform), so your suggestion that I should focus on those areas is well founded.  I'm less familiar with dead-beat compensators.  Is there an approachable reference for those that you can recommend?


Reply by jfrsystems, June 25, 2020

I think you would find John Rossiter's series of lectures on State-Space Design on YouTube interesting. Among other things he compares Optimal Control with Dead Beat and shows the trade-offs. Dead Beat is interesting in that it shows the minimum number of samples to settle, which is related to the order of the system, but the plant has to be precisely known and the resulting transfer function may not be completely stable. I mentioned it to introduce the force of analysis - it is a limiting case.
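If it helps to see the limiting case in code, here is a small numpy sketch of a dead-beat state-feedback gain via Ackermann's formula, for an assumed discretized double-integrator plant (my own illustrative numbers, not from the lectures):

import numpy as np

Ts = 0.01                                      # assumed sample period
A = np.array([[1.0, Ts], [0.0, 1.0]])          # discretized double integrator
B = np.array([[0.5 * Ts**2], [Ts]])
n = A.shape[0]

# Controllability matrix [B, AB, ..., A^(n-1) B]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

# Ackermann's formula with desired characteristic polynomial z^n (all poles at 0)
e_last = np.zeros((1, n))
e_last[0, -1] = 1.0
K = e_last @ np.linalg.inv(ctrb) @ np.linalg.matrix_power(A, n)

# A - B*K is nilpotent, so the state settles in at most n samples
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))

Note how large K gets as Ts shrinks, and how the finite-settling property hinges on knowing A and B precisely - the fragility mentioned above.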

State-Space, Z-transform and Laplace are essential background for control system design and worth the effort. Keep going, you are in an interesting domain.



Reply by Caradoc, June 25, 2020

For typical digital signal processing applications, we usually process a batch of samples (say 16) at a time. For control loops, that's a bad idea. If you have 16 samples, the first 15 represent past values, while the 16th represents the "present" value (or close to it). It doesn't make sense to compute the control law based on the past values when you have a more recent one.

So bottom line, 


1 sample -> Compute Control Law -> Update Output
1 sample -> Compute Control Law -> Update Output


If you want to estimate how sensitive your system is to delay, you can use a Padé approximant to model the delay and then compute the phase margin.
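A rough SciPy sketch of that, using the first-order Padé approximation e^(-sT) ≈ (1 - sT/2)/(1 + sT/2); the open loop L(s) = 10 / (s(s + 2)) and the 50 ms delay are made-up numbers for illustration:

import numpy as np
from scipy import signal

Td = 0.05                                                # assumed worst-case loop delay, s
plant_num, plant_den = [10.0], [1.0, 2.0, 0.0]           # illustrative L(s) = 10 / (s(s + 2))
pade_num, pade_den = [-Td / 2.0, 1.0], [Td / 2.0, 1.0]   # first-order Pade of e^(-s*Td)

def phase_margin(num, den):
    w = np.logspace(-2, 3, 20000)
    _, mag_db, phase_deg = signal.bode(signal.TransferFunction(num, den), w)
    i = np.argmin(np.abs(mag_db))                        # gain crossover (0 dB)
    return 180.0 + phase_deg[i]

pm_plain = phase_margin(plant_num, plant_den)
pm_delay = phase_margin(np.polymul(plant_num, pade_num),
                        np.polymul(plant_den, pade_den))
print(f"PM without delay: {pm_plain:.1f} deg, with {Td*1e3:.0f} ms delay: {pm_delay:.1f} deg")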

Reply by Joe_West, June 25, 2020

Not specifically mentioned is that if you are doing something that involves a derivative (time-rate-of-change), having jitter in your sampling interval can result in all sorts of unwanted behaviors. You will generally be better off if the sampling interval is constant. The literature on this goes back at least three decades.
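As a quick numerical illustration of that point (all numbers made up): apply the same finite difference to a clean sine sampled at jittered instants versus uniform instants, and the jitter shows up as broadband noise roughly proportional to jitter/Ts:

import numpy as np

rng = np.random.default_rng(0)
fs, f0, jitter_rms = 1000.0, 50.0, 20e-6   # assumed: 1 kHz rate, 50 Hz signal, 20 us RMS jitter
Ts = 1.0 / fs
t = np.arange(20000) * Ts
t_jit = t + rng.normal(0.0, jitter_rms, t.size)

d_uniform  = np.diff(np.sin(2 * np.pi * f0 * t)) / Ts      # finite difference, uniform sampling
d_jittered = np.diff(np.sin(2 * np.pi * f0 * t_jit)) / Ts  # same formula, jittered instants

print("extra RMS noise from jitter:", np.std(d_jittered - d_uniform))
print("peak of the true derivative:", 2 * np.pi * f0)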

Reply by lpm, June 25, 2020

Of course, but my question is focused on how to apply output samples from the controller, not when to take input ADC samples.

Reply by djmaguire, June 25, 2020

There is good advice above.  I will add that I had the same question as you when first implementing adaptive active noise and vibration systems.  

If the I/O was CODEC-based, then it was easy.  Everything was going to be fixed interval with one sample of delay.  As above, I would read inputs, calc the control outputs, and update the CODEC output values.  The output would get updated one sample later at the instant that the inputs were updated - guaranteed.  My obligation was simply to get my control outputs calculated in that time.

But what about ADC/DAC systems?  I could update as soon as my control outputs were ready.  That would be good, right?  No.

The problem is that the jitter of the varying input-to-output transport lag introduces noise that I then need to filter.  And if my system would at all "benefit" from the events where the control effort is updated early, then presumably there would be some penalty for the events that were not updated early.  If that were the case, then by definition my computational delay is too long.  Being sufficient only part of the time is not reasonable.

This question is not some abstract concept.  Most of the systems that I designed were sampled at higher rates than my control rate.  The downsampling and upsampling made my computational load different from sample-to-sample.  It would've ended up a mess if I had let it.

Please keep in mind also that - while the sampling rate is a function of bandwidth - the adaptation of the control filters within an adaptive control scheme is not.  If I am controlling noise in a prop airplane, my sampling rate is a function of the highest frequencies that I want to control.  The control filters need to be calculated/output at that rate.  ...but - given that the rate of change of those tonal components in that scenario is quite slow - I could spread my adaptation updates of those control filters over a number of samples.  That can be important in resource-limited ($) control systems.   
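For what it's worth, here is a toy sketch of that idea (an illustrative partial-update, LMS-style scheme of my own, not the actual code I used): the control filter output is computed every sample, but only one block of the adaptive weights is updated per sample.

import numpy as np

N_TAPS, N_SLICES, MU = 64, 4, 1e-3   # assumed sizes and step size
w = np.zeros(N_TAPS)                 # adaptive control filter weights
x_hist = np.zeros(N_TAPS)            # reference input history
slice_idx = 0

def control_sample(x_new, error):
    """One sample tick: full filtering work, 1/N_SLICES of the adaptation work."""
    global slice_idx, x_hist
    x_hist = np.roll(x_hist, 1)                 # shift in the newest reference sample
    x_hist[0] = x_new
    y = w @ x_hist                              # control output computed every sample

    block = N_TAPS // N_SLICES                  # adapt only this block of taps this sample
    lo = slice_idx * block
    w[lo:lo + block] += MU * error * x_hist[lo:lo + block]
    slice_idx = (slice_idx + 1) % N_SLICES
    return y

The output keeps up with the full sampling rate while the adaptation cost per sample drops by a factor of N_SLICES, which is fine when the disturbance statistics change slowly.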

So... my recommendation in general is that you set your rate according to your needed control bandwidth and update your output at that same rate with the minimum - but fixed - lag dependably deliverable by your control hardware/software.