Reply by Tom July 26, 2003

jk wrote:

> Dear Eric san,
>
> Your question appears to be very interesting. Maybe too funny for
> many who have gone through a Control Systems course (UG level) at any
> university.
>
> > Is this yet another SISO acronym of which I am not familiar? Or is it
> > one of
> >
> > Soft-Input-Soft-Output
> > Single-Input-Single-Output
>
> It is "Single Input and Single Output" (SISO).
>
> > Eric Jacobsen
> > Minister of Algorithms, Intel Corp.
> > My opinions may not be Intel's opinions.
> > http://www.ericjacobsen.org
>
> Hope the above helps you.
> Kind Regards
> jk
> http://epigon.co.in/
with MIMO, or just "multivariable", being used for the case of multiple inputs
and outputs. For some reason the DSP people do not delve into that terminology
even though they have MIMO problems. The other terminology often used is
'scalar' systems (SISO).

Tom
Reply by jk July 23, 2003
Dear Eric san,

Your question appears to be very interesting. Maybe too funny for
many who have gone through a Control Systems course (UG level) at any
university.

> Is this yet another SISO acronym of which I am not familiar? Or is it
> one of
>
> Soft-Input-Soft-Output
> Single-Input-Single-Output
It is "Single Input and Single Output" (SISO).
> Eric Jacobsen
> Minister of Algorithms, Intel Corp.
> My opinions may not be Intel's opinions.
> http://www.ericjacobsen.org
Hope the above helps you.

Kind Regards
jk
http://epigon.co.in/
Reply by Sung Jin Kim July 19, 2003
eric.jacobsen@ieee.org (Eric Jacobsen) wrote in message news:<3f184ba3.60047200@news.earthlink.net>...
> Is this yet another SISO acronym of which I am not familiar? Or is it
> one of
>
> Soft-Input-Soft-Output
> Single-Input-Single-Output
>
> ??
>
> Eric Jacobsen
> Minister of Algorithms, Intel Corp.
> My opinions may not be Intel's opinions.
> http://www.ericjacobsen.org
This seems to be 'Soft-Input-Soft-Output', since Tom previously said the following:
>> It is only because of *multivariable* problems ...
Regards,
---
James (txdiversity@hotmail.com)
- Private opinions: This is not the opinion of my affiliation.
Reply by Eric Jacobsen July 18, 2003
On Fri, 18 Jul 2003 16:37:25 +1200, Tom <somebody@nOpam.com> wrote:

>Whatever you may think of H infinity does not get around the basic problem that
>optimal does not mean best at all. If you mean H infinity is not best either then I
>agree with that too. I would say for SISO systems that you can identify easily that
>classical is the best by far. Optimal control (and filters) is the biggest misnomer
>ever invented.
>
>Tom
Is this yet another SISO acronym of which I am not familiar? Or is it
one of

Soft-Input-Soft-Output
Single-Input-Single-Output

??


Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
Reply by Peter J. Kootsookos July 18, 2003
Tom <somebody@nOpam.com> writes:

> Whatever you may think of H infinity does not get around the basic problem that
> optimal does not mean best at all. If you mean H infinity is not best either then I
> agree with that too. I would say for SISO systems that you can identify easily that
> classical is the best by far. Optimal control (and filters) is the biggest misnomer
> ever invented.
Not really, probably just the most misunderstood. :-)

Ciao,

Peter K.

--
Peter J. Kootsookos

"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney
Reply by Tom July 18, 2003

"Peter J. Kootsookos" wrote:

> Tom <somebody@nOpam.com> writes:
>
> > It is only because of multivariable problems that we have to delve into such
> > things in the first place. H infinity seems to be far better and much more like
> > the classical solution.
>
> Poppycock. It's just another way to pose the problem; it has some
> advantages over the LQG approach ("robustness") and it has some
> disadvantages (people who use the controllers don't trust it).
>
> It's like saying that the Remez algorithm produces FIR filters "much
> more like the intuitive solution" than, for example, least squares
> solutions --- it depends on whose intuition and what factors are
> considered.
>
> Ciao,
>
> Peter K.
>
> --
> Peter J. Kootsookos
>
> "Na, na na na na na na, na na na na"
> - 'Hey Jude', Lennon/McCartney
Whatever you may think of H infinity does not get around the basic problem that
optimal does not mean best at all. If you mean H infinity is not best either then I
agree with that too. I would say for SISO systems that you can identify easily that
classical is the best by far. Optimal control (and filters) is the biggest misnomer
ever invented.

Tom
Reply by Peter J. Kootsookos July 17, 2003
Tom <somebody@nOpam.com> writes:

> It is only because of multivariable problems that we have to delve into such
> things in the first place. H infinity seems to be far better and much more like
> the classical solution.
Poppycock. It's just another way to pose the problem; it has some
advantages over the LQG approach ("robustness") and it has some
disadvantages (people who use the controllers don't trust it).

It's like saying that the Remez algorithm produces FIR filters "much
more like the intuitive solution" than, for example, least squares
solutions --- it depends on whose intuition and what factors are
considered.

Ciao,

Peter K.

--
Peter J. Kootsookos

"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney
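To put the FIR analogy in concrete terms, here is a minimal sketch of the two
design criteria being contrasted, assuming SciPy's remez and firls routines;
the lowpass specification is an arbitrary example, not anything from this thread.

    # Equiripple (Remez / Parks-McClellan) vs. least-squares FIR design of
    # the same hypothetical lowpass filter: two different notions of "optimal".
    from scipy.signal import remez, firls

    numtaps = 73                      # odd length (required by firls)
    edges = [0.0, 0.20, 0.28, 0.50]   # band edges, normalized to fs = 1.0

    # Remez: minimizes the maximum (worst-case) ripple in each band
    h_remez = remez(numtaps, edges, desired=[1.0, 0.0], fs=1.0)

    # Least squares: minimizes the integrated squared error instead
    h_ls = firls(numtaps, edges, desired=[1.0, 1.0, 0.0, 0.0], fs=1.0)

    # Which result looks more "intuitive" depends on whether you care about
    # peak error or total error -- which is the point about optimality criteria.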
Reply by Tom July 17, 2003

"Peter J. Kootsookos" wrote:

> Tom <somebody@nOpam.com> writes:
>
> > You need one of those square root algorithms to update the error covariance
> > matrix - that should do the trick.
>
> Not from what Muzaffer has been telling me off-line. The square-root
> algorithm is usually used to take account of numerical
> instability... which does not (yet) appear to be the main cause of
> concern. Of course, it might in the future.
>
> > Forget the reference it has been so long - never knew people still
> > used Kalman filters.
>
> If it's optimal, how can you do better?
>
Optimal means many things to different people. For example, the stochastic
optimal control problem in L2 has a solution which is a Kalman filter +
'optimal' state feedback (i.e. the LQG problem). Why then do we need anything
else? Why the need for H infinity, for instance? Well, it is only optimal in
the L2 sense, effectively, and that does not necessarily mean 'best'. For
instance, the LQG solution does not include integral action, for starters.

Personally, for SISO systems classical controllers are much better in my book,
and you have more control over what you are doing. It is only because of
multivariable problems that we have to delve into such things in the first
place. H infinity seems to be far better and much more like the classical
solution.

Tom
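To make the structure Tom describes concrete, here is a minimal sketch of a
discrete LQG design, assuming NumPy and SciPy are available; the plant matrices
and weights are invented placeholders, not values from this thread. It
assembles exactly the two pieces he names, an LQR state-feedback gain plus a
steady-state Kalman filter, and has no integral action unless you augment the
plant yourself.

    # Minimal LQG sketch: Kalman filter + LQR state feedback (the L2/LQG
    # "optimal" controller referred to above). All numbers are hypothetical.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Hypothetical discretized double integrator: x[k+1] = A x[k] + B u[k], y = C x
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.005],
                  [0.1]])
    C = np.array([[1.0, 0.0]])
    Qx, Ru = np.diag([1.0, 0.1]), np.array([[0.01]])    # LQR state/input weights
    Qw, Rv = np.diag([1e-4, 1e-3]), np.array([[1e-2]])  # process/measurement noise

    # LQR gain: minimizes the quadratic (L2) cost, sum of x'Qx x + u'Ru u
    P = solve_discrete_are(A, B, Qx, Ru)
    K = np.linalg.solve(Ru + B.T @ P @ B, B.T @ P @ A)

    # Steady-state Kalman gain via the dual Riccati equation
    S = solve_discrete_are(A.T, C.T, Qw, Rv)
    L = S @ C.T @ np.linalg.inv(C @ S @ C.T + Rv)

    # The LQG controller is u = -K @ xhat, with xhat propagated by the Kalman
    # filter using gain L. Note there is no integrator state here, which is
    # the point above about LQG lacking integral action unless the plant is
    # augmented.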
Reply by Peter J. Kootsookos July 16, 2003
Tom <somebody@nOpam.com> writes:

> You need one of those square root algorithms to update the error covariance
> matrix - that should do the trick.
Not from what Muzaffer has been telling me off-line. The square-root algorithm is usually used to take account of numerical instability... which does not (yet) appear to be the main cause of concern. Of course, it might in the future.
> Forget the reference it has been so long - never knew people still
> used Kalman filters.
If it's optimal, how can you do better?

Sure, EKFs are probably out-performed by particle filters or, depending on the
problem, hidden Markov models, but KFs are optimal for linear problems: you
can't do better.

Ciao,

Peter K.

--
Peter J. Kootsookos

"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney
Reply by Tom July 16, 2003

Peter Kootsookos wrote:

> "Muzaffer Kal" <kal@dspia.com> wrote in message > news:noq4hvsi92n6h9umfvcdslcudibrlkh3av@4ax.com... > > > I have implemented a discrete kalman filter which works well with the > > amount of data I have but the gain and the covariance estimate values > > seem to be increasing constantly and if I supply more data, I think > > I'll get a overflows in any precision of floating point I can use. The > > filter has to run indefinitely. Any pointers on how to stabilize > > discrete (extended) kalman filters ? > > Hi Muzaffer, > > Clearly your covariance estimate should improve over time and eventually > stabilize (depending on your initial covariance estimates). > > With the EKF (extended Kalman filter), the "best" variance to select for > process and measurement noise is usually higher than the "true" process and > measurement variances. The intuition for this is that the EKF linearises > about the current state, so there are second and higher order terms which > are unaccounted for (effectively showing up as extra noise). > > This means that, rather than being fastidious and setting the EKF > process/measurement covariances to the known signal model covariances, > you're better off thinking of the EKF covariances as "knobs" that you can > vary to obtain better performance of the EKF. > > Whatever. > > If you need more help than that, I'd suggest posting an example of the > problem (probably on some web-site if it's a binary or picture) with more > detail so we can have a look. > > Ciao, > > Peter K. > > -- > Peter J. Kootsookos > > "Na, na na na na na na, na na na na" > - 'Hey Jude', Lennon/McCartney
You need one of those square root algorithms to update the error covariance
matrix - that should do the trick. Forget the reference, it has been so long -
never knew people still used Kalman filters. There is also a UDU^T algorithm I
seem to remember.

Tom
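A minimal sketch of the kind of numerically safer covariance update being
pointed at here, assuming NumPy: it uses the Joseph form rather than a full
square-root or Bierman/Thornton UDU^T factorization, and the matrix names are
generic placeholders rather than anything from Muzaffer's filter.

    # One predict/update cycle of a linear Kalman filter with the Joseph-form
    # covariance update, which stays symmetric positive semidefinite under
    # round-off, unlike the textbook P = (I - K H) P form.
    import numpy as np

    def kf_step(x, P, z, F, H, Q, R):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q

        # Update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)

        # Joseph form: A P A' + K R K' with A = I - K H
        A = np.eye(P.shape[0]) - K @ H
        P = A @ P @ A.T + K @ R @ K.T
        P = 0.5 * (P + P.T)   # enforce exact symmetry against numerical drift
        return x, P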