
Noise covariance properties in Dual-Tree Wavelet Decompositions
Dual-tree wavelet decompositions have recently gained much popularity, mainly due to their ability to provide an accurate directional analysis of images combined with reduced redundancy. When a random process is decomposed – which occurs in particular when additive noise corrupts the signal to be analyzed – it is useful to characterize the statistical properties of the dual-tree wavelet coefficients of this process. As dual-tree decompositions constitute overcomplete frame expansions, correlation structures are introduced among the coefficients, even when white noise is analyzed. In this paper, we show that it is possible to provide an accurate description of the covariance properties of the dual-tree coefficients of a wide-sense stationary process. Expressions for the (cross-)covariance sequences of the coefficients are derived in the one- and two-dimensional cases. Asymptotic results are also provided, making it possible to predict the behaviour of the second-order moments for large lag values or at coarse resolutions. In addition, the cross-correlations between the primal and dual wavelets, which play a primary role in our theoretical analysis, are calculated for a number of classical wavelet families. Simulation results are finally provided to validate these findings.
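
For readers who want to see such correlation structure empirically, the following Python sketch estimates the cross-covariance between the subband coefficients of two parallel decimated filter banks driven by the same white noise; the two short filters stand in for an actual primal/dual Hilbert pair and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical primal/dual analysis high-pass filters (placeholders for the
# Hilbert-pair filters of an actual dual-tree decomposition).
g_primal = np.array([1.0, -1.0]) / np.sqrt(2)
g_dual   = np.array([0.5, 0.5, -0.5, -0.5]) / np.sqrt(2)

x = rng.standard_normal(2**16)                     # white noise input

# One level of decimated analysis in each tree: filter, then keep every other sample.
c_primal = np.convolve(x, g_primal, mode="valid")[::2]
c_dual   = np.convolve(x, g_dual,   mode="valid")[::2]

n = min(len(c_primal), len(c_dual))
c_primal, c_dual = c_primal[:n], c_dual[:n]

def cross_cov(a, b, max_lag=8):
    """Empirical cross-covariance E[a[k+m] b[k]] for lags m = 0..max_lag."""
    a = a - a.mean()
    b = b - b.mean()
    return np.array([np.mean(a[m:] * b[:len(b) - m]) for m in range(max_lag + 1)])

print("primal/dual cross-covariance:", np.round(cross_cov(c_primal, c_dual), 4))
print("primal auto-covariance:      ", np.round(cross_cov(c_primal, c_primal), 4))
```

Even though the input is white, the two coefficient sequences are computed from overlapping windows of the same noise, so their cross-covariance is generally nonzero at small lags.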

A Nonlinear Stein Based Estimator for Multichannel Image Denoising
The use of multicomponent images has become widespread with the improvement of multisensor systems offering increased spatial and spectral resolutions. However, the observed images are often corrupted by additive Gaussian noise. In this paper, we are interested in multichannel image denoising based on a multiscale representation of the images. A multivariate statistical approach is adopted to take into account both the spatial and the inter-component correlations between the different wavelet subbands. More precisely, we propose a new parametric nonlinear estimator which generalizes many reported denoising methods. The derivation of the optimal parameters is achieved by applying Stein’s principle in the multivariate case. Experiments performed on multispectral remote sensing images clearly indicate that our method outperforms conventional wavelet denoising techniques.
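
As a much simpler stand-in for the kind of multivariate shrinkage discussed here (and not the paper's actual estimator), the sketch below applies a positive-part James-Stein rule to each vector of co-located coefficients taken across channels, assuming white Gaussian noise of known variance.

```python
import numpy as np

def james_stein_shrink(coeffs, noise_var):
    """
    Shrink each column of `coeffs` (shape: channels x positions), treating the
    vector of co-located coefficients across channels as one observation of a
    multivariate Gaussian with covariance noise_var * I.
    """
    B, _ = coeffs.shape                       # B channels (B >= 3 for James-Stein)
    energy = np.sum(coeffs**2, axis=0)        # squared norm per position
    gain = np.maximum(0.0, 1.0 - (B - 2) * noise_var / np.maximum(energy, 1e-12))
    return gain * coeffs                      # positive-part James-Stein rule

# Toy example: 4 channels of sparse "clean" detail coefficients plus white noise.
rng = np.random.default_rng(1)
clean = np.zeros((4, 512))
clean[:, ::32] = rng.normal(0, 5, size=(4, 16))
noisy = clean + rng.normal(0, 1, size=clean.shape)

denoised = james_stein_shrink(noisy, noise_var=1.0)
print("MSE noisy   :", np.mean((noisy - clean) ** 2))
print("MSE denoised:", np.mean((denoised - clean) ** 2))
```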

Code Acquisition using Smart Antennas with Adaptive Filtering Scheme for DS-CDMA Systems
The pseudo-noise (PN) code synchronizer is an essential element of a direct-sequence code division multiple access (DS-CDMA) system, because data transmission is possible only after the receiver accurately synchronizes the locally generated PN code with the incoming PN code. Code synchronization is processed in two steps, acquisition and tracking, to estimate the delay offset between the two codes. Recently, an adaptive LMS filtering scheme has been proposed for performing both code acquisition and tracking with an identical structure, where the LMS algorithm is used to adjust the FIR filter taps so as to search for the delay offset adaptively. A decision device is employed in the adaptive LMS filtering scheme to form a decision variable indicating code synchronization; it therefore plays an important role in the mean acquisition time (MAT) performance. In this thesis, only code acquisition is considered, and a new decision device, referred to as the weight vector square norm (WVSN) test method, is devised in association with the adaptive LMS filtering scheme for code acquisition in DS-CDMA systems. The system probabilities of the proposed scheme are derived for evaluating the MAT. Numerical analyses and simulation results verify that the performance of the proposed scheme, in terms of detection probability and MAT, is superior to that of the conventional scheme with the mean-squared error (MSE) test method, especially when the signal-to-interference-plus-noise ratio (SINR) is relatively low. Furthermore, an efficient joint-adaptation code acquisition scheme, i.e., a smart antenna coupled with the proposed adaptive LMS filtering scheme and the WVSN test method, is devised for application at a base station, where all antenna elements are employed during PN code acquisition. This new scheme performs PN code acquisition and the adaptation of the smart antenna weight coefficients jointly and adaptively. Numerical analyses and simulation results demonstrate that the performance of the proposed scheme with five antenna elements, in terms of the output SINR, the detection probability and the MAT, can be improved by around 7 dB compared to the single-antenna case.
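
A minimal baseband sketch of the adaptive-filtering idea, without the smart antenna and with purely hypothetical parameters, is given below: an FIR filter driven by the local PN code is adapted by LMS toward the received signal, the largest tap indicates the delay offset, and the squared norm of the weight vector plays the role of the WVSN-style decision statistic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters (for illustration only).
L = 63                    # PN code length / number of FIR taps
true_offset = 17          # unknown delay between local and incoming code
snr_db = -5               # chips heavily buried in noise
mu = 0.01                 # LMS step size

pn = rng.choice([-1.0, 1.0], size=L)                 # stand-in for a PN sequence
n_chips = 200 * L
local = np.tile(pn, n_chips // L + 2)

noise_std = 10 ** (-snr_db / 20)
received = np.roll(local, true_offset)[:n_chips] \
           + noise_std * rng.standard_normal(n_chips)

w = np.zeros(L)
for n in range(L, n_chips):
    u = local[n - L + 1:n + 1][::-1]                 # regressor: last L local chips
    e = received[n] - w @ u                          # error w.r.t. incoming signal
    w += mu * e * u                                  # LMS update

wvsn = np.sum(w ** 2)                                # weight-vector square norm statistic
print("estimated offset:", int(np.argmax(np.abs(w))), "(true:", true_offset, ")")
print("WVSN decision statistic:", round(wvsn, 3))
```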

Fixed-Point Arithmetic: An Introduction
This document presents definitions of signed and unsigned fixed-point binary number representations and develops basic rules and guidelines for the manipulation of these number representations using the common arithmetic and logical operations found in fixed-point DSPs and hardware components.
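
As a small illustration of the conventions such a document covers (not an excerpt from it), the Python sketch below quantizes real values to the signed Q15 format and multiplies two Q15 numbers with the usual rounding, right shift and saturation.

```python
# Minimal Q15 (1 sign bit, 15 fractional bits) helpers.

Q = 15
SCALE = 1 << Q          # 32768
INT16_MIN, INT16_MAX = -(1 << 15), (1 << 15) - 1

def sat16(v):
    """Saturate an integer to the signed 16-bit range."""
    return max(INT16_MIN, min(INT16_MAX, v))

def to_q15(x):
    """Quantize a real number in [-1, 1) to Q15 (round to nearest)."""
    return sat16(int(round(x * SCALE)))

def from_q15(q):
    """Interpret a Q15 integer as a real number."""
    return q / SCALE

def q15_mul(a, b):
    """Q15 * Q15 -> Q30 product (fits a 32-bit accumulator), renormalized to Q15."""
    prod = a * b
    return sat16((prod + (1 << (Q - 1))) >> Q)      # round, shift, saturate

a, b = to_q15(0.75), to_q15(-0.5)
print(from_q15(q15_mul(a, b)))                      # approximately -0.375
```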

Energy Profiling of DSP Applications, A Case Study of an Intelligent ECG Monitor
Proper balance of power and performance for optimum system organization requires precise profiling of the power consumption of different hardware subsystems as well as software functions. Moreover, the power consumption of mobile systems is even more important, since the battery accounts for a large portion of the overall size and weight of the system. Average power consumption is only a crude estimate of power requirements and battery life; a much better estimate can be made using dynamic power consumption. Dynamic power consumption is a function of the execution profile of the given application running on a specific hardware platform. In this paper we introduce a new environment for energy profiling of DSP applications. The environment consists of a JTAG emulator, a high-resolution HP 3583A multimeter and a workstation that controls the devices and stores the traces. We use Texas Instruments’ Real Time Data Exchange (RTDX™) mechanism to generate an execution profile and custom procedures for energy profile data acquisition over the GPIB interface. We developed custom procedures to correlate and analyze both energy and execution profiles. The environment allows us to improve system power consumption through changes in software organization and to measure real battery life for a given hardware, software and battery configuration. As a case study, we present the analysis of a real-time portable ECG monitor implemented using a Texas Instruments TMS320C5410-100 processor board and a Del Mar PWA ECG Amplifier.
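
The bookkeeping behind such a profile is straightforward; the hypothetical sketch below (supply voltage, sample rate and profile data are all made up) integrates a sampled current trace over the time interval attributed to each function to obtain per-function energy.

```python
import numpy as np

V_SUPPLY = 3.3          # volts (assumed constant supply)
FS = 10_000.0           # current-trace sample rate in Hz (hypothetical)

# Hypothetical current trace (amperes) and execution profile:
# each entry is (function_name, start_time_s, end_time_s).
t = np.arange(0, 1.0, 1.0 / FS)
current = 0.05 + 0.02 * (t % 0.1 < 0.03)     # idle at 50 mA, bursts to 70 mA
profile = [("fir_filter", 0.00, 0.30),
           ("qrs_detect", 0.30, 0.55),
           ("display",    0.55, 1.00)]

def energy_joules(start, end):
    """Approximate the integral of V * I over [start, end) from the sampled trace."""
    mask = (t >= start) & (t < end)
    return V_SUPPLY * np.sum(current[mask]) / FS

for name, start, end in profile:
    print(f"{name:12s} {1e3 * energy_joules(start, end):7.2f} mJ")
```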

A New Approach to Linear Filtering and Prediction Problems
In 1960, R.E. Kalman published his famous paper describing a recursive solution to the discrete-data linear filtering problem. Since that time, due in large part to advances in digital computing, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation.
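
For reference, a minimal scalar predict/update cycle of the filter, with assumed noise variances and a constant quantity being estimated, looks as follows.

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar random-walk model: x[k] = x[k-1] + w,   z[k] = x[k] + v
Q = 1e-4        # process noise variance (assumed)
R = 0.1**2      # measurement noise variance (assumed)

true_x = -0.377                               # constant quantity being estimated
z = true_x + 0.1 * rng.standard_normal(50)    # noisy measurements

x_hat, P = 0.0, 1.0                           # initial estimate and its variance
for zk in z:
    # Predict.
    x_pred, P_pred = x_hat, P + Q
    # Update.
    K = P_pred / (P_pred + R)                 # Kalman gain
    x_hat = x_pred + K * (zk - x_pred)
    P = (1.0 - K) * P_pred

print(f"estimate {x_hat:.4f}  (true {true_x}),  posterior variance {P:.2e}")
```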

A DSP Implementation of OFDM Acoustic Modem
The success of multicarrier modulation in the form of OFDM in radio channels illuminates a path one could take towards high-rate underwater acoustic communications, and recently there have been intensive investigations of underwater OFDM. In this paper, we implement the acoustic OFDM transmitter and receiver design of [4, 5] on a TMS320C6713 DSP board. We analyze the workload and identify the most time-consuming operations. Based on the workload analysis, we tune the algorithms and optimize the code to substantially reduce the synchronization time to 0.2 seconds and the processing time of one OFDM block to 1.7 seconds on a DSP processor running at 225 MHz. This experimentation provides guidelines for our future work to reduce the per-block processing time to less than the block duration of 0.23 seconds for real-time operation.
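
To make the per-block workload concrete, the sketch below uses generic parameters (not the exact design of [4, 5]) to show the core OFDM operations on one block: QPSK mapping, IFFT, cyclic-prefix insertion, and the matching FFT demodulation over an idealized noise-only channel.

```python
import numpy as np

rng = np.random.default_rng(4)

K = 1024                 # subcarriers per OFDM block (illustrative)
CP = 128                 # cyclic-prefix length in samples (illustrative)

# Random QPSK symbols on all subcarriers.
bits = rng.integers(0, 2, size=(K, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: IFFT to time domain, prepend cyclic prefix.
time_block = np.fft.ifft(symbols) * np.sqrt(K)
tx = np.concatenate([time_block[-CP:], time_block])

# Idealized channel: additive noise only (no multipath, no Doppler).
rx = tx + 0.05 * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))

# Receiver: strip cyclic prefix, FFT back to subcarrier symbols, hard decisions.
rx_symbols = np.fft.fft(rx[CP:]) / np.sqrt(K)
bit_est = np.stack([rx_symbols.real > 0, rx_symbols.imag > 0], axis=1).astype(int)

print("bit errors:", int(np.sum(bit_est != bits)), "out of", bits.size)
```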

Teaching MODEM Concepts and Design Procedure with MATLAB Simulations
MATLAB simulation is used as the primary tool to illustrate concepts, to validate MODEM designs, and to verify operation of the subsystems employed in DSP-based transmitters and receivers presented in a pair of classes on MODEM Design and Digital Receiver Design. The whole gamut of subsystems found in conventional and experimental modem designs is simulated and assembled to form a full end-to-end simulation of an operating MODEM. This paper describes the philosophy used to guide class involvement and to assess the experience and the learning value to student participants.

Cascaded Integrator-Comb (CIC) Filter Introduction
In the classic paper, "An Economical Class of Digital Filters for Decimation and Interpolation", Hogenauer introduced an important class of digital filters called "Cascaded Integrator-Comb", or "CIC" for short (also sometimes called "Hogenauer filters"). Here, Matthew Donadio provides a more gentle introduction to the subject of CIC filters, geared specifically to the needs of practicing DSP designers.
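
As a quick illustration of the structure Hogenauer describes (N integrators at the input rate, a rate change by R, then N combs at the output rate), here is a small floating-point sketch of a CIC decimator; the parameters are arbitrary and no register-growth analysis is attempted.

```python
import numpy as np

def cic_decimate(x, R=8, N=3, M=1):
    """N-stage CIC decimator: N integrators, decimate by R, N combs (delay M)."""
    y = np.asarray(x, dtype=float)
    for _ in range(N):                       # integrator section at the input rate
        y = np.cumsum(y)
    y = y[R - 1::R]                          # rate change: keep every R-th sample
    for _ in range(N):                       # comb section at the output rate
        delayed = np.concatenate([np.zeros(M), y[:-M]])
        y = y - delayed
    return y / (R * M) ** N                  # normalize by the DC gain (R*M)^N

# A constant input of 1.0 should come out (after the transient) as 1.0.
out = cic_decimate(np.ones(256), R=8, N=3, M=1)
print(out[:6])                               # ramps up, then settles at 1.0
```

The normalization makes the equivalence with a cascade of N moving-average filters of length R*M easy to check numerically.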

An FPGA Implementation of Hierarchical Motion Estimation for Embedded Object Tracking
This paper presents the hardware implementation of an algorithm developed to provide automatic motion detection and object tracking functionality embedded within intelligent CCTV systems. The implementation is targeted at an Altera Stratix FPGA making full use of the dedicated DSP resource. The Altera Nios embedded processor provides a platform for the tracking control loop and generic Pan Tilt Zoom camera interface. This paper details the explicit functional stages of the algorithm that lend themselves to an optimised pipelined hardware implementation. This implementation provides maximum data throughput, providing real-time operation of the described algorithm, and enables a moving camera to track a moving object in real time.

Hidden Markov Model based recognition of musical pattern in South Indian Classical Music
Automatic recognition of musical patterns plays a crucial part in musicological and ethnomusicological research and can become an indispensable tool for the search and comparison of music extracts within a large multimedia database. This paper presents an efficient method for recognizing isolated musical patterns in a monophonic environment using Hidden Markov Models. Each pattern to be recognized is converted into a sequence of frequency jumps by means of a fundamental frequency tracking algorithm followed by a quantizer. The resulting sequence of frequency jumps is presented to the input of the recognizer, which uses a Hidden Markov Model. The main characteristic of the Hidden Markov Model is that it utilizes the stochastic information from the musical frames to recognize the pattern. The methodology is tested in the context of South Indian Classical Music, which exhibits certain characteristics that make the classification task harder when compared with the Western musical tradition. A recognition rate of 100% has been obtained for the six typical musical patterns used in practice. The South Indian classical instrument, the flute, is used for the whole experiment.
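
The sketch below illustrates the front end and the scoring step in a generic way: a pitch track is quantized into frequency-jump symbols, and the scaled forward algorithm evaluates the likelihood of the symbol sequence under a small HMM whose parameters are entirely made up for illustration.

```python
import numpy as np

def quantize_jumps(f0_track, step_cents=100.0):
    """Convert a fundamental-frequency track (Hz) into quantized jump symbols."""
    cents = 1200.0 * np.log2(np.asarray(f0_track))
    jumps = np.round(np.diff(cents) / step_cents).astype(int)   # semitone steps
    return np.clip(jumps, -2, 2) + 2          # symbols 0..4 for jumps -2..+2

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a symbol sequence under an HMM (scaled forward algorithm)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_p

# Toy 2-state model (parameters are purely illustrative).
pi = np.array([0.6, 0.4])
A = np.array([[0.8, 0.2],
              [0.3, 0.7]])
B = np.array([[0.10, 0.20, 0.40, 0.20, 0.10],    # state 0 favours small jumps
              [0.30, 0.15, 0.10, 0.15, 0.30]])   # state 1 favours large jumps

f0 = [261.6, 293.7, 329.6, 293.7, 261.6, 261.6]  # a short pitch track in Hz
obs = quantize_jumps(f0)
print("symbols:", obs, " log-likelihood:", round(forward_log_likelihood(obs, pi, A, B), 3))
```

In a full recognizer one such model is trained per pattern and the pattern whose model gives the highest likelihood is selected.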

Algorithms, Architectures, and Applications for Compressive Video Sensing
The design of conventional sensors is based primarily on the Shannon-Nyquist sampling theorem, which states that a signal of bandwidth W Hz is fully determined by its discrete-time samples provided the sampling rate exceeds 2W samples per second. For discrete-time signals, the Shannon-Nyquist theorem has a very simple interpretation: the number of data samples must be at least as large as the dimensionality of the signal being sampled and recovered. This important result enables signal processing in the discrete-time domain without any loss of information. However, in an increasing number of applications, the Shannon-Nyquist sampling theorem dictates an unnecessary and often prohibitively high sampling rate. (See Box 1 for a derivation of the Nyquist rate of a time-varying scene.) As a motivating example, the high resolution of the image sensor hardware in modern cameras reflects the large amount of data sensed to capture an image. A 10-megapixel camera, in effect, takes 10 million measurements of the scene. Yet, almost immediately after acquisition, redundancies in the image are exploited to compress the acquired data significantly, often at compression ratios of 100:1 for visualization and even higher for detection and classification tasks. This example suggests immense wastage in the overall design of conventional cameras.
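
A tiny numerical illustration of the compressive alternative (random Gaussian measurements of a synthetic sparse signal, recovered with a plain iterative soft-thresholding loop; nothing here is specific to the video architectures discussed in the paper) shows how far below the ambient dimension the number of measurements can fall.

```python
import numpy as np

rng = np.random.default_rng(5)

N, M, S = 512, 128, 10            # ambient dimension, measurements, sparsity
x = np.zeros(N)
x[rng.choice(N, S, replace=False)] = rng.normal(0, 1, S)    # S-sparse signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)              # random measurement matrix
y = Phi @ x                                                  # M << N measurements

# Iterative soft-thresholding (ISTA) for min 0.5*||y - Phi x||^2 + lam*||x||_1.
lam, step = 0.01, 1.0 / np.linalg.norm(Phi, 2) ** 2
x_hat = np.zeros(N)
for _ in range(500):
    grad = Phi.T @ (Phi @ x_hat - y)
    z = x_hat - step * grad
    x_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print(f"{M} measurements of a {N}-dimensional signal, "
      f"relative error {np.linalg.norm(x_hat - x) / np.linalg.norm(x):.3f}")
```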

Closing the gap: CPU and FPGA Trends in sustainable floating-point BLAS performance
Field programmable gate arrays (FPGAs) have long been an attractive alternative to microprocessors for computing tasks — as long as floating-point arithmetic is not required. Fueled by the advance of Moore’s Law, FPGAs are rapidly reaching sufficient densities to enhance peak floating-point performance as well. The question, however, is how much of this peak performance can be sustained. This paper examines three of the basic linear algebra subroutine (BLAS) functions: vector dot product, matrix-vector multiply, and matrix multiply. A comparison of microprocessors, FPGAs, and Reconfigurable Computing platforms is performed for each operation. The analysis highlights the amount of memory bandwidth and internal storage needed to sustain peak performance with FPGAs. This analysis considers the historical context of the last six years and is extrapolated for the next six years.
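
The bandwidth argument can be summarized with a few lines of arithmetic: the sketch below compares the flops-per-byte ratios of the three operations for double-precision square operands, counting only compulsory operand traffic, which is what determines whether peak floating-point rates can be sustained.

```python
def blas_intensity(n, bytes_per_word=8):
    """Flops per byte of operand traffic for dot, matrix-vector, and matrix multiply."""
    ops = {
        # (floating-point operations, words read + written)
        "dot (x.y)":    (2 * n,      2 * n + 1),
        "matvec (A x)": (2 * n * n,  n * n + 2 * n),
        "matmul (A B)": (2 * n ** 3, 3 * n * n),
    }
    return {k: flops / (words * bytes_per_word) for k, (flops, words) in ops.items()}

for name, intensity in blas_intensity(1000).items():
    print(f"{name:14s} {intensity:8.2f} flops/byte")
```

The dot product and matrix-vector multiply are bandwidth-bound at well under one flop per byte, while matrix multiply reuses operands enough that internal storage, not external bandwidth, becomes the limiting factor.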

Auditory Component Analysis Using Perceptual Pattern Recognition to Identify and Extract Independent Components From an Auditory Scene
The cocktail party effect, our ability to separate a sound source from a multitude of other sources, has been researched in detail over the past few decades, and many investigators have tried to model this on computers. Two of the major research areas currently being evaluated for the so-called sound source separation problem are Auditory Scene Analysis (Bregman 1990) and a class of statistical analysis techniques known as Independent Component Analysis (Hyvärinen 2001). This paper presents a methodology for combining these two techniques. It suggests a framework that first separates sounds by analyzing the incoming audio for patterns and synthesizing or filtering them accordingly, measures features of the resulting tracks, and finally separates sounds statistically by matching feature sets and making the output streams statistically independent. Artificial and acoustical mixes of sounds are used to evaluate the signal-to-noise ratio where the signal is the desired source and the noise is comprised of all other sources. The proposed system is found to successfully separate audio streams. The amount of separation is inversely proportional to the amount of reverberation present.
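
As a small stand-in for the statistical stage only, the sketch below mixes two synthetic sources instantaneously and unmixes them with FastICA from scikit-learn; the perceptual pattern-recognition front end described above is not modelled.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(6)

# Two synthetic sources: a tone and a sawtooth-like signal.
t = np.linspace(0, 1, 8000)
s1 = np.sin(2 * np.pi * 440 * t)
s2 = 2 * (t * 7 % 1) - 1
S = np.c_[s1, s2]                                 # shape (samples, sources)

# Instantaneous (non-reverberant) mixing into two observation channels.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = S @ A.T + 0.01 * rng.standard_normal(S.shape)

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                      # estimated sources (up to scale/order)

# Correlate estimates with the true sources to check the separation.
corr = np.corrcoef(np.hstack([S, S_hat]).T)[:2, 2:]
print(np.round(np.abs(corr), 3))
```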

HIERARCHICAL MOTION ESTIMATION FOR EMBEDDED OBJECT TRACKING
This paper presents an algorithm developed to provide automatic motion detection and object tracking embedded within intelligent CCTV systems. The algorithm development focuses on techniques which provide an efficient embedded systems implementation with the ability to target both FPGA and DSP devices. During algorithm development constraints on hardware implementation have been fully considered resulting in an algorithm which, when targeted at current FPGA devices, will take full advantage of the DSP resource commonly provided in such devices. The hierarchical structure of the proposed algorithm provides the system with a multi-level motion estimation process allowing low resolution estimation for motion detection and further higher resolution stages for motion estimation. An initial MATLAB prototype has demonstrated this algorithm capable of object motion estimation while compensating for camera motion, allowing a moving object to be tracked by a moving camera.
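
A compact sketch of the hierarchical idea, using two pyramid levels, sum-of-absolute-differences matching and purely illustrative parameters, is given below: a wide coarse search at half resolution seeds a small refinement search at full resolution.

```python
import numpy as np

def downsample2(img):
    """Halve resolution by 2x2 averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def best_match(ref_block, target, top, left, radius):
    """Exhaustive SAD search for ref_block inside target, centred on (top, left)."""
    bh, bw = ref_block.shape
    best_sad, best_dv = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= target.shape[0] - bh and 0 <= x <= target.shape[1] - bw:
                sad = np.abs(target[y:y + bh, x:x + bw] - ref_block).sum()
                if sad < best_sad:
                    best_sad, best_dv = sad, (dy, dx)
    return best_dv

def hierarchical_mv(prev, curr, top, left, block=16):
    """Motion vector for one block: coarse search at half resolution, then refinement."""
    prev_lo, curr_lo = downsample2(prev), downsample2(curr)
    ref_lo = prev_lo[top // 2:top // 2 + block // 2, left // 2:left // 2 + block // 2]
    dy, dx = best_match(ref_lo, curr_lo, top // 2, left // 2, radius=8)
    ref = prev[top:top + block, left:left + block]
    dy2, dx2 = best_match(ref, curr, top + 2 * dy, left + 2 * dx, radius=2)
    return 2 * dy + dy2, 2 * dx + dx2

# Toy test: a smooth synthetic frame shifted by a known displacement.
rng = np.random.default_rng(7)
yy, xx = np.mgrid[0:128, 0:128]
prev = np.sin(0.1 * xx) * np.cos(0.07 * yy) + 0.1 * rng.random((128, 128))
curr = np.roll(prev, shift=(5, -7), axis=(0, 1))
print("estimated motion:", hierarchical_mv(prev, curr, top=48, left=64))  # expect (5, -7)
```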
