DSPRelated.com
Forums

Re: Good Design techniques

Started by Andrew Nesterov October 16, 2008
> Subject: Good Design techniques
> Posted by: "christophe blouet" c...@hotmail.com
> Date: Wed Oct 15, 2008 12:42 pm ((PDT))
>
> Hi,
>
> I've been asked to write notes about how to do a good design for embedded
> software and DSP.
> Does anybody know a good book that already exists on that subject?
> I mean schedulers, double buffers, synchronisation problems, evaluating the
> processor load before development, state machines, etc. ... that sort of thing.
> A kind of guide book for designing real-time embedded software.

Hi Christophe,

From a practical perspective, it might be very difficult to come up with a
common definition of a "good" design. I used to talk to real-time s/w
designers who were ready to shoot anybody who talked about a scheduler;
they strongly believed that the main event loop would do everything related
to scheduling and, moreover, that the loop would "release" the CPU for more
important things instead of giving precious cycles to an OS "overhead".
In my opinion, this is one of the more extreme positions, but it nevertheless
does exist (e.g. in the industrial controllers field).

The most suitable link to begin searching for the basics and papers is
(I just like it) Wikipedia: http://en.wikipedia.org/
It has very good introductory articles on scheduling algorithms, basic
interprocess synchronization methods and also on the traps and pitfalls
like races, deadlocks and priority inversions. This is a good place
for a beginner to start.

The intermediate and advanced levels may involve looking up bibliographies
on CiteSeer (http://citeseer.ist.psu.edu/ and its mirrors) or regular paper
library indices and catalogues.

One way or another, there are thousands of papers and hundreds of textbooks
on the subject, including proceedings of dedicated conferences. Google
searches for key phrases like "real-time algorithms" or similar would
give millions of links to sift through.

However, there are two very simple concepts. The first one is double
buffering, which can be serviced by a single binary semaphore. It has
a natural generalization in the model of a circular buffer of pointers
to a group of arrays, i.e. not just two buffers, but N > 2 buffers.
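A minimal single-threaded sketch of that N-buffer scheme (the function names and the producer/consumer split are my own illustration; in a real system the `full` counter would be an RTOS counting semaphore, posted from the producer's ISR):

```c
#include <stddef.h>

#define NBUF   4        /* N > 2: generalizes the classic double buffer */
#define BUFLEN 256

/* Ring of pointers to N fixed buffers.  The producer (e.g. a DMA
 * completion ISR) fills buffers in order; the consumer (the processing
 * task) drains them in the same order.  'full' plays the role of the
 * counting semaphore. */
static short storage[NBUF][BUFLEN];
static short *ring[NBUF] = { storage[0], storage[1], storage[2], storage[3] };
static volatile int head = 0;   /* next buffer the producer fills  */
static volatile int tail = 0;   /* next buffer the consumer drains */
static volatile int full = 0;   /* buffers filled but not consumed */

/* Producer side: returns the buffer to fill, or NULL on overrun. */
short *buf_acquire_for_fill(void)
{
    if (full == NBUF)
        return NULL;            /* consumer has fallen behind */
    return ring[head];
}

void buf_mark_filled(void)
{
    head = (head + 1) % NBUF;
    full++;                     /* real code: post the semaphore here */
}

/* Consumer side: returns the oldest filled buffer, or NULL if none. */
short *buf_acquire_for_drain(void)
{
    if (full == 0)
        return NULL;            /* real code: pend on the semaphore here */
    return ring[tail];
}

void buf_mark_drained(void)
{
    tail = (tail + 1) % NBUF;
    full--;
}
```

With NBUF set to 2 and `full` limited to 0/1, this degenerates to classic double buffering guarded by a single binary semaphore.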

The second one is CPU load. You may always assume it is 1. The "central
limit theorem" of project management states that if it is less than 1, then
the project manager will pick a less resourceful CPU. The iterations continue
until the load is 1. End of proof :)

Rgds,

Andrew
Andrew N,

I have been designing/writing/etc. real-time embedded systems since 1978.
Because I work contract positions in software, I have seen lots and lots of
different approaches to the architecture of the software design.

I have used/debugged/designed many of those real-time systems.

While I certainly have no 'kill' reflex over the software architecture, I have
found that the least problematic, most consistently responsive systems use a
background loop doing very low priority items (like RAM test, CRC calculation,
etc.), with all real-time events driven by interrupts (timer-driven interrupts
or external event-driven interrupts).

The OSs that I have used (and there are several, including some I
designed/wrote) take a significant number of CPU cycles to perform a task or
thread switch.
Faster CPUs make the elapsed time shorter, but do not reduce the CPU cycle
count, especially as processors have gotten more complex and therefore have
greater numbers of registers to save/restore during a task swap.

In the early days, a task might not be invoked more often than every 100 to
1000 msec. Now, a task may be invoked every 2 msec, so CPU cycle count is
still very important.
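The architecture described above can be sketched roughly like this (all names are illustrative and the flag-based hand-off is my own rendering; on real hardware the two `*_isr` functions would be installed in the interrupt vector table):

```c
#include <stdbool.h>

/* Flags set by ISRs, polled by the background loop.  'volatile' is
 * essential: each flag is written in interrupt context and read in
 * the loop. */
static volatile bool sample_ready;   /* set by a timer/codec ISR   */
static volatile bool link_event;     /* set by an external-pin ISR */

/* Stand-ins for real interrupt handlers. */
void timer_isr(void)    { sample_ready = true; }
void external_isr(void) { link_event   = true; }

static int crc_progress;  /* background CRC work, one chunk per pass */

void background_loop_once(void)
{
    /* Hard real-time work happens in the ISRs themselves; the loop
     * body only does work that can tolerate arbitrary latency. */
    if (sample_ready) { sample_ready = false; /* process the block  */ }
    if (link_event)   { link_event   = false; /* handle the event   */ }

    crc_progress++;   /* very low priority item: RAM test, CRC, ... */
}
```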

R. Williams
Thanks all

I also use Wikipedia a lot; I don't really like to reinvent the wheel every day.
I have also had that debate about OSes, schedulers, loops...
My theory is that a scheduler is a loop to which you can add some sophistication: queues, task lists, etc. All of that, of course, takes time, but it depends on your needs.
There is a big gap when you move to a real-time OS, because tasks can be interrupted...
When I'm running a project I usually say: start with a loop and see what's missing; if you can prove you need more, OK, otherwise keep it as simple as it can be.
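That "scheduler is a loop plus a task list" idea can be sketched as follows, assuming a periodic timer tick drives the loop (all names here are my own illustration):

```c
#include <stddef.h>

/* A task is just a function pointer plus a period expressed in ticks. */
typedef struct {
    void   (*run)(void);
    unsigned period;     /* run every 'period' ticks */
    unsigned elapsed;
} task_t;

static unsigned fast_runs, slow_runs;
static void fast_task(void) { fast_runs++; }  /* e.g. sample processing */
static void slow_task(void) { slow_runs++; }  /* e.g. status reporting  */

static task_t tasks[] = {
    { fast_task, 1, 0 },
    { slow_task, 4, 0 },
};

/* One pass of the loop, called once per timer tick.  Queues, priorities
 * or a full RTOS can be added later without changing the task bodies. */
void scheduler_tick(void)
{
    size_t i;
    for (i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
        if (++tasks[i].elapsed >= tasks[i].period) {
            tasks[i].elapsed = 0;
            tasks[i].run();
        }
    }
}
```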

The thing that brought up this idea of writing down some design techniques is that, as time goes on, you meet fewer and fewer people with good experience, and more and more people who start by saying: "What, you don't have Linux on your DSP? A double buffer, what for? Can't you just do a malloc?" Or: "That works, but not in real time; sorry, I need 4 processors for your 60-tap filter. What? Cache? No, what is that?"
I doubt that universities now teach people how to optimise or how to control a DMA.
And my experience debugging software is that people try to do things their own way, but at the end of the day it doesn't work and they have to learn the hard way... I believe all this fuzzy logic converges on nearly one good solution that really works and stabilises all the software built on it.
So I'm surprised nobody has thought about writing down some good techniques for writing real-time embedded software...

I'll start writing things up for my company; if somebody's interested in reading it and giving views, I would appreciate your experience.

Ciao

Christophe

I don't know if I can enrich your research, but I would appreciate it if
I could read all this and get a little of your experience.
I am new to this DSP world and to designing, and all info would be very
important.

Ed Marques.

> From: Christophe Blouet
>
> When I'm running a project I usually say, start with a loop, and see what's
> missing, if you can prove you need more ok, otherwise keep it as simple as it
> can be.

> From: Richard Williams
>
> While I certainly have no 'kill' reflex over the software architecture, I have
> found the least problematic, most consistently responsive systems use a
> background loop doing very low priority items (like ram test, CRC calculation,
> etc) and all real-time events driven by interrupts (timer driven interrupts or
> external event driven interrupts).

That makes it 30:0 :), because I would say: whenever the budget allows it,
always use an RTOS and forget about the super loop :)

Once I found an essay on the net; unfortunately I cannot say anything about
its author, but I hope you guys will find it interesting:

http://embuild.org/merrill/DesignYourOs.html

I would add just two points. First, the frame push/pop during a task dispatch
is in fact almost 100% the same process as the one an ISR performs on
entry/exit. While some fast ISRs wouldn't need to save all registers, many
still would, especially if written in C.

Next, results show that a schedulable set of real-time tasks (I am referring
to the Liu and Layland paper of 1973 and other papers on the subject, by
Audsley, Burns, Lehoczky, Sha etc.) never gives 100% CPU utilization. It is
always less than 1 for a practical set of real-time tasks. Based on this, one
can regard that "horrible" OS overhead as a non-real-time task that runs in
the CPU cycles not utilized by the set of real-time tasks. And indeed,
many if not all real-time OSes are designed exactly this way.

By the way, this fact also shows why there is no need to try to load a
CPU to 100% - there is just no way for a set of real-time tasks to
utilize all of the CPU. Of course, if there is an "always ready" non-real-time
task in the system, then the CPU will be 100% utilized - e.g. the
dynamic halt (idle task) of an RTOS, I mean the "do nothing" infinite
loop.

Once we agree that an OS does not "steal" CPU cycles from real-time tasks,
its advantage - a good logical organization of the application and the
ability to reuse software - becomes obvious. These are my reasons to never
use super-loop scheduling (which is in fact a single-priority round-robin
algorithm).

Rgds,

Andrew
Andrew,

I know I did not say to use a super loop/round robin methodology.
I (ahem) have used such an architecture in the past, until I realized what a
poor choice it is for deterministic/responsive/low-latency real-time systems.

Your 'background/idle/always ready' task is what I use for those things that
only need to be done 'eventually', with all time-constrained processing being
initiated by timers and interrupts. However, I do not implement a context swap
between different tasks, as that takes time, is not all that responsive, and
is certainly neither deterministic nor low latency.

R. Williams
Hi Richard,

Sorry if I put it unclearly. Yes, I do understand your idea about the BG loop
and RT processing in ISRs. However, this method (generally speaking, without
any particular INTC in mind) could well miss or lose interrupt events and as
a result fail to meet real-time deadlines.
First of all, not all h/w systems provide sticky interrupt bits, so while
control is in a lengthy ISR, other interrupts might be lost. Another
situation can arise from delaying a relevant interrupt's processing, if an
ISR masks other interrupts during its execution.
The third problem is communication between ISRs - for example, two
tasks have received data that are to be combined and processed by a
third task.
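That third problem can be handled with per-source flags; a minimal sketch (the combining rule and all names are my own illustration):

```c
#include <stdbool.h>

/* Two interrupt sources each deliver half of a sample pair; a third,
 * non-interrupt task combines them.  Each half is guarded by its own
 * flag, so neither ISR ever blocks or waits on the other. */
static volatile bool left_ready, right_ready;
static volatile int  left_val,  right_val;

void left_isr(int v)  { left_val = v;  left_ready  = true; }
void right_isr(int v) { right_val = v; right_ready = true; }

/* Returns true and writes *out only when a complete pair is available. */
bool combine_task(int *out)
{
    if (!(left_ready && right_ready))
        return false;
    *out = left_val + right_val;   /* stand-in for the real processing */
    left_ready = right_ready = false;
    return true;
}
```

With an RTOS, the two flags would typically become posts to a queue or event group that the third task pends on, instead of being polled.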

Concerning context swap: as I mentioned earlier, even if the code
does not perform it explicitly, an ISR will do it anyway, perhaps not
for all the registers, but usually for many of them. So I really cannot see
a difference between such an implicit way of saving the context frame
and a task dispatcher.

Thanks to all for this discussion.

Regards,

Andrew

Andrew,

Criterion 1:
any good real-time system will not have 'long' interrupt processing times.

We are speaking of the TI C6000 series of CPUs, and all other CPUs that I
have used over the years have interrupt-pending indications; the interrupt
pending is triggered by a condition on the sensing input of the CPU (usually
a 'level' transition of the sensing input).

So, unless the 'real-time' system is written by someone who has no idea what
they are doing, no interrupts will be lost.

Criterion 2:
in any good real-time system, the interrupt will be processed and the
'interrupt pending' indication cleared before another interrupt trigger
occurs on the same sensing input of the CPU.

The above criteria are some of the characteristics of a good real-time
system. Any real-time system that fails them is trash (unless it is 'ok'
to miss interrupts) and needs to have its software architecture re-designed.

R. Williams

---------- Original Message -----------
From: Andrew Nesterov
To: Richard Williams
Cc: christophe blouet , c...
Sent: Mon, 20 Oct 2008 23:41:59 -0700
Subject: RE: [c6x] Re: Good Design techniques

> Hi Richard,
>
> Sorry if I did put it unclear. Yes I do understand your idea about BG loop
> and RT processing in ISRs. However this method (generally speaking, not
> having in mind any particular INTC) could well be missing or loosing
> interrupt events and as a result fail to meet real-time deadlines.
> First of all, not all h/w systems provide with sticky interrupt bits,
> thus while the control is in a lengthly ISR, other interrupts might
> be lost. Other situation might arise with delaying a relevant interrupt
> processing - if an ISR would mask other interrupts during its
> execution. The third problem is communication between ISRs - for
> example, two tasks have received data that are to be combined and
> processed by a third task.
>
> Concerning context swap - as I had mentioned earlier, even if the code
> does not perform it explicitly, an ISR would do it anyway, perhaps not
> all the registers, but usually many of them. So I really cannot see
> a difference between such an implicit way of saving the context frame
> and a task dispatcher.
>
> Thanks to all for this discussion.
>
> Regards,
>
> Andrew
>
> > Date: Fri, 17 Oct 2008 12:32:25 -0700
> > From: Richard Williams > >
> > Andrew,
> >
> > I know I did not say to use a super loop/round robin methodology.
> > I (ahm) have used such a architecture in the past. Until I realized what a poor
> > choice it is for deterministic/responsive/low latency real-time systems.
> >
> > Your 'background/idle/always ready' task is what I use for those things that
> > only need to be done 'eventually' and all time constrained processing being
> > initiated by timers and interrupts. However, I do not implement a context swap
> > between different tasks as that takes time, is not all that responsive and
> > certainly is not deterministic nor low latency.
> >
> > R. Williams
> >
> >
> > ---------- Original Message -----------
> > From: Andrew Nesterov
> > Sent: Fri, 17 Oct 2008 10:20:09 -0700
> > Subject: RE: [c6x] Re: Good Design techniques
> >
> >>> From: Christophe Blouet
> >>>
> >>> When I'm running a project I usually say, start with a loop, and see what's
> >>> missing, if you can prove you need more ok, otherwise keep it as simple as it
> >>> can be.
> >>
> >>> From: Richard Williams
> >>>
> >>> While I certainly have no 'kill' reflex over the software architecture, I have
> >>> found the least problematic, most consistently responsive systems use a
> >>> background loop doing very low priority items (like ram test, CRC calculation,
> >>> etc) and all real-time events driven by interrupts (timer driven interrupts or
> >>> external event driven interrupts).
> >>
> >> 30:0 :), because I would say, whenever the budget allows it, always
> >> use an RTOS and forget about the super loop :)
> >>
> >> Once I found an essay on the net; unfortunately I cannot say anything
> >> about its author, but I hope you guys will find it interesting:
> >>
> >> http://embuild.org/merrill/DesignYourOs.html
> >>
> >> I would add just two points: a frame push/pop during a task dispatch is
> >> in fact almost 100% the same process as the one an ISR would do on entry/exit.
> >> While some fast ISRs wouldn't need to save all registers, still many would
> >> do, especially if written in C.
> >>
> >> Next, results show that a schedulable set of real-time tasks (I am referring
> >> to the Liu and Layland paper of 1973 and other papers on the subject, by
> >> Audsley, Burns, Lehoczky, Sha etc.) never gives 100% CPU
> >> utilization. It is always less than 1 for a practical set of real-time
> >> tasks. Ok, based on this, one can consider that "horrible" OS overhead
> >> as a non-real-time task that runs in the CPU cycles that were not
> >> utilized by the set of real-time tasks. And indeed, many if not all
> >> real-time OSes are designed exactly this way.
> >>
> >> By the way, this fact also shows why there is no need to try to load a
> >> CPU to 100% - there is just no way for a set of real-time tasks to
> >> utilize all of the CPU. Of course, if there is an "always ready" non-real-
> >> time task in the system, then the CPU would be 100% utilized - e.g.
> >> the dynamic halt (idle task) of an RTOS - I mean the "do nothing" infinite
> >> loop.
> >>
> >> Once we agree that an OS does not "steal" CPU cycles from real-time
> >> tasks, its advantage in good logical organization of the application,
> >> the ability to reuse software, becomes obvious. These are my reasons
> >> to never use the super-loop scheduling (which is in fact a single
> >> priority round robin algorithm).
> >>
> >> Rgds,
> >>
> >> Andrew
> > ------- End of Original Message -------
> >
> >
------- End of Original Message -------
Richard and Andrew - I thought that I would chime in a bit :-)

Please keep in mind that we tend to view real time systems through our
own prism of experience. There are many types of real time systems
with very diverse requirements. If we work with video or audio or
radar data or 'whatever' it will tend to slant our view of what a real
time system 'has to be'.

I will submit that a real time system 'has to perform as designed
under all required conditions' - every time.
One key to this is to never miss an interrupt [unless the requirements
state otherwise, like - 'for any sequence of 100 wheel rotation
interrupts, the ABS system must process 99 of them correctly. Any 2
missed interrupts must be separated by at least 99 consecutive
correctly processed interrupts].
Whatever the requirements, correct interrupt processing can sometimes
be accomplished by what one person would deem 'long ISRs'. In another
environment, what looks like a 'nice short ISR' could be too long.
IMO, the answer is 'it depends'... on the system environment.

Many of our 'modern high tech systems' tend to use an RTOS [sometimes
this is a 'real requirement' and sometimes it is a defensive measure
against the "chrome hangers" who are constantly 'enhancing the system'
during development]. IMO there are some 'deeply embedded real time
systems' that must tweak their behavior based on external stimuli -
some of these work faster and more robustly [and are easier to
maintain] without using an RTOS.

mikedunn

On Tue, Oct 21, 2008 at 12:36 PM, Richard Williams
wrote:
> Andrew,
>
> criterion 1,
> any good real-time system will not have 'long' interrupt processing times.
>
> We are speaking of the TI C6000 series of CPUs; these (and ALL other CPUs
> that I have used over the years) have interrupt-pending indications, and the
> interrupt pending is triggered via a condition of the sensing input of the
> CPU (usually a 'level' transition of the sensing input).
>
> So, unless the 'real-time' system is written by someone who has no idea
> what they are doing, no interrupts will be lost.
>
> criterion 2,
> in any good real-time system, the interrupt will be processed and the
> 'interrupt pending' indication cleared before another interrupt trigger
> occurs on the same sensing input of the CPU.
>
> The above criteria are some of the characteristics of a good real-time
> system.
> Any real-time system that fails the above criteria is trash (unless it is
> 'ok'
> to miss interrupts) and needs to have the software architecture re-designed.
>
> R. Williams

--
www.dsprelated.com/blogs-1/nf/Mike_Dunn.php
Hi Mike,

Thanks for your comments. In fact, I did not try to argue the
importance of the basic things that had been pointed out in this
discussion, and first of all - missing an external event.

I was trying to emphasize the importance of scheduling, of separating
interrupt mode from application mode, and of the transition from a qualitative
approach (words) to a quantitative one (numbers). Hopefully the
references below will illustrate what I tried to say :)

* C.L. Liu and J.W. Layland. Scheduling algorithms for multiprogramming in a
hard real-time environment. Journal of the ACM, 20(1):46--61, January 1973.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.36.8216&rep=rep1&type=pdf

* N. C. Audsley, A. Burns, M. F. Richardson, A. J. Wellings, Real-Time
Scheduling: The Deadline-Monotonic Approach, Proc. IEEE Workshop on Real-Time
Operating Systems and Software, 1991
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.37.4438&rep=rep1&type=pdf

* N. Audsley, A. Burns, Real-Time System Scheduling (1990)
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.29.4929&rep=rep1&type=pdf

* N. Audsley, A. Burns, M. Richardson, K. Tindell, A. J. Wellings, Applying
New Scheduling Theory to Static Priority Pre-Emptive Scheduling, Software
Engineering Journal 1993
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.30.6436&rep=rep1&type=pdf

* J. A. Stankovic. Misconceptions about real-time computing: A serious
problem for next-generation systems. IEEE Computer, 21(10), October
1988
http://www.ece.cmu.edu/~ece749/docs/Misconceptions-Stankovic.pdf

* B. Sprunt, L. Sha, and J. P. Lehoczky, "Aperiodic Task Scheduling for Hard
Real-Time Systems, " Real-Time Systems: The International Journal of
Time-Critical Computing Systems, vol. 1, pp. 27--60, 1989

* B. Sprunt, Aperiodic task scheduling for real-time systems (1990), PhD
Thesis.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.34.6306&rep=rep1&type=pdf

These papers are the foundations; there are more up-to-date references on
CiteSeer. I just tried to mention the major authors in the field.

Rgds,
Andrew

> Date: Tue, 21 Oct 2008 15:35:22 -0500
> From: Michael Dunn