DSPRelated.com
Forums

Reg. Increase in Jitter when NDK library is used with TI DM642 DSP processor

Started by Kandy June 9, 2013
Hi All,

I have been using the TI DM642 processor along with a set of libraries that TI
has provided.

The following is the experiment that I did to understand the behavior of the
TI DSP and the NDK library.

Whenever signal "X" to the DSP rises from low to high, an HWI function is
invoked in my program, which in turn sets signal "Y" high for 5 microseconds
(this duration is not a concern here) and then sets it low. I have set up a
mechanism through which signal "X" is toggled high and low at a rate of 1 kHz
(the rate is not the concern here). When I tapped signals "X" and "Y" with an
oscilloscope, I measured a jitter of 250 nanoseconds between the rise of "X"
and the start of the rise of "Y".
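
In outline, the HWI body looks like the sketch below. The register address,
bit mask, and delay helper are illustrative placeholders, not the actual code
(the real version uses the board's register map or the CSL GPIO API):

    #include <stdint.h>

    #define GPIO_OUT_REG  (*(volatile uint32_t *)0x01B00014u)  /* placeholder address */
    #define SIGNAL_Y_BIT  (1u << 1)                            /* placeholder bit     */

    extern void delay_us(unsigned us);  /* assumed timer-based busy-wait */

    /* Invoked by the HWI dispatcher on the rising edge of signal "X". */
    void hwiSignalX(void)
    {
        GPIO_OUT_REG |=  SIGNAL_Y_BIT;   /* drive "Y" high            */
        delay_us(5u);                    /* hold for ~5 microseconds  */
        GPIO_OUT_REG &= ~SIGNAL_Y_BIT;   /* drive "Y" low again       */
    }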

I used the NDK library for TCP communication with a GUI that we have built. We
used functions like NC_SystemOpen and NC_NetStart to configure the TCP protocol
stack and run it separately in a task. In a separate thread we establish a
connection with the GUI and send some data regularly. When we configured the
NDK scheduler for high priority and interrupt mode, the jitter went up to 50
microseconds. When we configured it for low priority and polling mode, it came
down to approximately 15 microseconds.
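
A minimal sketch of the two scheduler configurations we compared, using the
NDK's NC_SystemOpen(); the configuration-handle setup and error handling are
elided, and the callback names are the conventional ones from the NDK examples:

    #include <netmain.h>   /* NDK: NC_SystemOpen, NC_NetStart, NC_SystemClose */

    void networkTask(void)
    {
        /* High priority + interrupt mode (measured ~50 us jitter):     */
        /*     NC_SystemOpen(NC_PRIORITY_HIGH, NC_OPMODE_INTERRUPT);    */

        /* Low priority + polling mode (measured ~15 us jitter):        */
        if (NC_SystemOpen(NC_PRIORITY_LOW, NC_OPMODE_POLLING) != 0)
            return;

        /* ... build hCfg with CfgNew()/CfgAddEntry(), then:            */
        /* NC_NetStart(hCfg, NetworkOpen, NetworkClose, NetworkIPAddr); */

        NC_SystemClose();
    }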

Why is there an increase in jitter when the NDK library is used? What is the
NDK library doing to cause it? Is there a way to stop it? Or is there something
wrong in my configuration? Has anybody encountered this and achieved better
results?

Any sort of idea or help is appreciated.

Thanks
Kandhasamy
Kandy.

If I understand your description correctly...

You have multiple interrupts running, each with its own priority.
You have multiple tasks running.
You have an OS running.

Each of these activities requires context switches which consume CPU cycles.

Polling rather than using an interrupt consumes lots of CPU cycles.

The timing of each event cannot be expected to be completely deterministic.
Therefore, I would expect jitter in the output signal.

_____________________________________
Kandy,

One thing to look at carefully is where the interrupt functions, the BIOS, the
stacks, etc. are located.
They should be placed in on-chip RAM rather than external RAM.
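
A hedged linker-command-file sketch of that placement; the memory names,
origins, and lengths below are illustrative, and the real values come from
your board's .cmd/.tcf files:

    MEMORY
    {
        IRAM  : origin = 0x00000000, length = 0x00040000  /* DM642 on-chip L2 RAM */
        SDRAM : origin = 0x80000000, length = 0x01000000  /* external RAM         */
    }

    SECTIONS
    {
        .hwi   > IRAM   /* interrupt service code */
        .stack > IRAM   /* system stack           */
        .bios  > IRAM   /* DSP/BIOS kernel code   */
        .text  > SDRAM  /* bulk application code  */
    }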

It is expected that running the NDK will create variable delays in the execution
of interrupt routines, as the actual interrupts are buffered, then passed to the
interrupt handler by the scheduler.

R. Williams

_____________________________________
Richard,

Thanks for your response.

I use only one interrupt, and I disable it when the HWI function starts
executing. At the end, I clear any interrupt requests queued in the meantime
and re-enable it.
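
A minimal sketch of that pattern using the C6000 CSL IRQ module (csl_irq.h);
the event id is an assumption standing in for whichever event signal "X" is
actually wired to:

    #include <csl_irq.h>

    #define X_EVENT  IRQ_EVT_EXTINT4   /* placeholder external-pin event */

    void hwiHandler(void)
    {
        IRQ_disable(X_EVENT);   /* block re-entry while we work        */

        /* ... pulse signal "Y" for ~5 us ... */

        IRQ_clear(X_EVENT);     /* drop any requests queued meanwhile  */
        IRQ_enable(X_EVENT);    /* re-arm for the next edge            */
    }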

There are two tasks created, with priorities 7 and 6: one to start the TCP
protocol stack and the other to run my application.

I am using a DM642 DSP running at 600 MHz. Without the NDK library, the
interrupt latency is around 250 ns. With the NDK library, in all the
combinations I tried (high priority and interrupt, low priority and interrupt,
low priority and polling), the interrupt latency ranges from 10 to 50
microseconds.

My application requires that we respond to interrupts within a latency of 3
microseconds. This is time-critical.

I have read the NDK user manual and reference guide, to some extent, with this
problem in mind. I would like to know all the possible ways to bring this
latency down.

Or are we saying that if we use the NDK library, we cannot implement
time-critical applications like this?

Thanks
Kandy

_____________________________________
Hello Kandy,

I am not sure if I can help. I have no NDK-specific knowledge, but maybe a few
useful thoughts.

Problem: DSP response to an HWI takes too long.

1. Are the interrupts 'swizzled' (remapped) by some of the software [MUXH & MUXL]?
2. Is the MDIO interrupt connected to INT04?
3. You may want to ensure that your HWI is connected to INT04 (see the sketch
after this list).
4. Does the NDK software relocate the interrupt vector table? It may not be
where you think if it has been relocated.
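
A hedged sketch of pinning down items 1-3: CSL's IRQ_map() programs the
interrupt selector (MUXH/MUXL) registers, and the event id below is a
placeholder for your pin's actual event:

    #include <csl_irq.h>

    void pinSignalXInterrupt(void)
    {
        /* Route the (assumed) external-pin event to CPU INT4, the
           highest-priority maskable interrupt, so nothing the NDK
           configures can displace it. */
        IRQ_map(IRQ_EVT_EXTINT4, 4);
        IRQ_clear(IRQ_EVT_EXTINT4);
        IRQ_enable(IRQ_EVT_EXTINT4);
    }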

mikedunn

_____________________________________
Kandy,

The linker .map file will contain all the placement information.
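
For example, an excerpt like this (the addresses are made up) would reveal a
time-critical section sitting in external RAM:

    SECTION ALLOCATION MAP
      .hwi     80001000  00000240   <- 0x8xxxxxxx is external SDRAM: move it on-chip
      .stack   00020000  00001000   <- low addresses are on-chip L2: good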

If you are using the T.I. BIOS, I think the BIOS configuration file provides
the ability to leave certain interrupt handlers unbuffered.

Since the source for the NDK is available for download from T.I., you could
download it and then configure it not to buffer certain interrupts.

Here is one place where the source is available:


R. Williams

_____________________________________
Kandy,

You might also want to ensure the following are in your code:

C code: #include
Configuration script: var Hwi = xdc.useModule('ti.sysbios.hal.Hwi');
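
A minimal sketch of installing a handler through that module; the interrupt
number, handler name, and header pairing are assumptions:

    #include <xdc/std.h>
    #include <ti/sysbios/hal/Hwi.h>

    Void myIsr(UArg arg);   /* assumed handler for signal "X" */

    Void installHwi(Void)
    {
        Hwi_Params params;
        Hwi_Params_init(&params);
        params.enableInt = TRUE;
        /* INT4 assumed; use whatever interrupt your pin event maps to. */
        Hwi_create(4, myIsr, &params, NULL);
    }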

R. Williams

_____________________________________
Thanks Richard. At present I don't know how to check that. Let me learn and
try to do it.

Thanks
Kandy
