
L-Infinity Norm of Derivative Objective

We can add a smoothness objective by including the $L_\infty$ norm of the derivative in the objective function:

$\displaystyle \mathrm{minimize}\quad \delta +\eta \left\Vert \Delta h\right\Vert _{\infty }.$ (4.79)

The $L_\infty$ norm penalizes only the maximum derivative magnitude, so a larger $\eta$ places more weight on smoothness relative to the side-lobe level $\delta$.
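As a quick numerical illustration (a sketch in NumPy; the Hann window here is an arbitrary example, not from the text), $\left\Vert \Delta h\right\Vert _{\infty }$ is simply the largest jump between adjacent window samples:

```python
import numpy as np

# ||Delta h||_inf is the largest absolute first difference of the window.
h = np.hanning(64)             # arbitrary example window
dh = np.diff(h)                # Delta h, length L-1
print(np.max(np.abs(dh)))      # the L-infinity norm of the derivative
```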

This can be formulated as an LP by introducing one additional optimization variable $\sigma$ that bounds all of the derivatives:

$\displaystyle -\sigma \leq \Delta h_{i}\leq \sigma \qquad i=1,\ldots ,L-1.$ (4.80)

In matrix form,

$\displaystyle \left[\begin{array}{r}-\mathbf{D}\\ \mathbf{D}\end{array}\right]h-\sigma \mathbf{1} \;\le\; \mathbf{0},$

where $\mathbf{D}$ is the first-difference matrix (so that $\mathbf{D}h=\Delta h$) and $\mathbf{1}$ denotes a column vector of ones.
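As a sanity check, here is a small sketch (NumPy; the size and window are arbitrary choices) of the difference matrix $\mathbf{D}$ and the stacked inequality:

```python
import numpy as np

# D is the (L-1) x L first-difference matrix: (D @ h)[i] = h[i+1] - h[i].
L = 6
D = np.diff(np.eye(L), axis=0)

h = np.hanning(L)                  # arbitrary example window
sigma = np.max(np.abs(D @ h))      # smallest sigma feasible for this h

# [-D; D] h - sigma * 1 <= 0  is equivalent to  -sigma <= (D h)[i] <= sigma.
lhs = np.vstack([-D, D]) @ h - sigma
assert np.all(lhs <= 1e-12)
```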

The objective function then becomes

$\displaystyle \mathrm{minimize}\quad \delta +\eta \sigma .$ (4.81)
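Putting the pieces together, below is a minimal sketch of the complete LP in Python/NumPy/SciPy. It assumes the side-lobe-minimization setup of the preceding sections: a symmetric window with unit DC gain, whose zero-phase transform $W(\omega_k)=\sum_n h_n\cos(\omega_k n)$ is sampled on a stop-band frequency grid and bounded by $\delta$. The values of `L`, `eta`, the stop-band edge, and the grid density are illustrative choices, not values from the text.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the LP: minimize delta + eta*sigma over x = [h, delta, sigma].
L, eta = 31, 1.0
n = np.arange(L) - (L - 1) / 2        # centered time indices
wsb = 8 * np.pi / L                   # example stop-band edge (assumption)
wk = np.linspace(wsb, np.pi, 200)     # stop-band frequency grid
C = np.cos(np.outer(wk, n))           # W(wk) = C @ h for a symmetric window
K = len(wk)

D = np.diff(np.eye(L), axis=0)        # first-difference matrix (D h = diff(h))

c = np.concatenate([np.zeros(L), [1.0, eta]])        # cost: delta + eta*sigma

A_ub = np.block([
    [ C, -np.ones((K, 1)),     np.zeros((K, 1))],      #  W(wk) - delta <= 0
    [-C, -np.ones((K, 1)),     np.zeros((K, 1))],      # -W(wk) - delta <= 0
    [ D, np.zeros((L - 1, 1)), -np.ones((L - 1, 1))],  #  D h - sigma  <= 0
    [-D, np.zeros((L - 1, 1)), -np.ones((L - 1, 1))],  # -D h - sigma  <= 0
])
b_ub = np.zeros(A_ub.shape[0])

# Equalities: unit DC gain (sum h = 1) and symmetry h[i] = h[L-1-i].
sym = (np.eye(L) - np.fliplr(np.eye(L)))[: L // 2]
A_eq = np.vstack([
    np.concatenate([np.ones(L), [0.0, 0.0]]),
    np.hstack([sym, np.zeros((L // 2, 2))]),
])
b_eq = np.zeros(1 + L // 2)
b_eq[0] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * L + [(0, None), (0, None)])
h, delta, sigma = res.x[:L], res.x[L], res.x[L + 1]
```

With `eta = 0` this reduces to the pure Chebyshev (minimum side-lobe) design; increasing `eta` trades side-lobe level for a smoother window, as seen in the figures below.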

The result of adding the Chebyshev norm of diff(h) to the objective function to be minimized ($\eta=1$) is shown in Fig. 3.39. The result of increasing $\eta$ to 20 is shown in Fig. 3.40.

Figure: Chebyshev norm of diff(h) added to the objective function to be minimized ($\eta=1$)
\includegraphics[width=\twidth,height=6.5in]{eps/print_linf_chebwin_1}

Figure: Twenty times the norm of diff(h) added to the objective function to be minimized ($\eta=20$)
\includegraphics[width=\twidth,height=6.5in]{eps/print_linf_chebwin_2}

