
Closing the Gap: CPU and FPGA Trends in Sustainable Floating-Point BLAS Performance
Field programmable gate arrays (FPGAs) have long been an attractive alternative to microprocessors for computing tasks — as long as floating-point arithmetic is not required. Fueled by the advance of Moore’s Law, FPGAs are rapidly reaching sufficient densities to enhance peak floating-point performance as well. The question, however, is how much of this peak performance can be sustained. This paper examines three of the basic linear algebra subroutine (BLAS) functions: vector dot product, matrix-vector multiply, and matrix multiply. A comparison of microprocessors, FPGAs, and Reconfigurable Computing platforms is performed for each operation. The analysis highlights the amount of memory bandwidth and internal storage needed to sustain peak performance with FPGAs. This analysis considers the historical context of the last six years and is extrapolated for the next six years.

BLAS Comparison on FPGA, CPU and GPU
High Performance Computing (HPC) or scientific codes are being executed across a wide variety of computing platforms from embedded processors to massively parallel GPUs. We present a comparison of the Basic Linear Algebra Subroutines (BLAS) using double-precision floating point on an FPGA, CPU and GPU. On the CPU and GPU, we utilize standard libraries on state-of-the-art devices. On the FPGA, we have developed parameterized modular implementations for the dot product and for gaxpy (matrix-vector multiplication). In order to obtain optimal performance for any aspect ratio of the matrices, we have designed a high-throughput accumulator to perform an efficient reduction of floating point values. To support scalability to large datasets, we target the BEE3 FPGA platform. We use performance and energy efficiency as metrics to compare the different platforms. Results show that FPGAs offer comparable performance as well as 2.7 to 293 times better energy efficiency for the test cases that we implemented on all three platforms.
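
As a functional point of reference for the two operations compared above, the following NumPy sketch defines the dot product and gaxpy in software; it only illustrates what the FPGA modules compute, not how the hardware (or its accumulator) is implemented.

```python
import numpy as np

def dot(x, y):
    """Dot product (BLAS level 1): sum of elementwise products."""
    return float(np.sum(x * y))

def gaxpy(A, x, y):
    """Gaxpy (BLAS level 2 matrix-vector multiply-accumulate): returns A @ x + y."""
    return A @ x + y

# Tiny functional check against NumPy's own BLAS-backed routines.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
y = rng.standard_normal(4)
assert np.isclose(dot(x, x), np.dot(x, x))
assert np.allclose(gaxpy(A, x, y), np.dot(A, x) + y)
```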

Biosignal processing challenges in emotion recognition for adaptive learning
User-centered, computer-based learning is an emerging field of interdisciplinary research. Research in diverse areas such as psychology, computer science, neuroscience and signal processing is making contributions to take this field to the next level. Learning systems built using contributions from these fields could be used in actual training and education instead of only as laboratory proofs of concept. One of the important advances in this research is the detection and assessment of the cognitive and emotional state of the learner using such systems. This capability moves development beyond the use of traditional user performance metrics to include system intelligence measures that are based on current theories in neuroscience. These advances are of paramount importance to the success and widespread use of learning systems that are automated and intelligent. Emotion is considered an important aspect of how learning occurs, and yet estimating it and making adaptive adjustments are not part of most learning systems. In this research we focus on one specific aspect of constructing an adaptive and intelligent learning system: estimating the emotion of the learner as he or she uses the automated training system. The challenge starts with defining emotion and its utility in human life. The next challenge is to measure the co-varying factors of the emotions in a non-invasive way, and to find consistent features from these measures that are valid across a wide population. In this research we use four non-invasive physiological sensors and establish a methodology for utilizing the data from these sensors with different signal processing tools. A validated set of visual stimuli used worldwide in the research of emotion and attention, called the International Affective Picture System (IAPS), is used. A dataset is collected from the sensors in an experiment designed to elicit emotions with these validated visual stimuli. We describe a novel wavelet method to calculate a hemispheric asymmetry metric using electroencephalography data. This method is tested against the typically used power spectral density method. We show an overall improvement in accuracy in classifying specific emotions using the novel method. We also show distinctions between different discrete emotions from autonomic nervous system activity using electrocardiography, electrodermal activity and pupil diameter changes. Findings from the different features of these sensors are used to give guidelines for using each of the individual sensors in the adaptive learning environment.

Gauss-Newton Based Learning for Fully Recurrent Neural Networks
The thesis discusses a novel off-line and on-line learning approach for Fully Recurrent Neural Networks (FRNNs). The most popular algorithm for training FRNNs, the Real Time Recurrent Learning (RTRL) algorithm, employs the gradient descent technique for finding the optimum weight vectors in the recurrent neural network. Within the framework of the research presented, a new off-line and on-line variation of RTRL is presented that is based on the Gauss-Newton method. The method itself is an approximate Newton's method tailored to the specific optimization problem (non-linear least squares), and it aims to speed up the process of FRNN training. The new approach stands as a robust and effective compromise between the original gradient-based RTRL (low computational complexity, slow convergence) and Newton-based variants of RTRL (high computational complexity, fast convergence). By gathering information over time in order to form Gauss-Newton search vectors, the new learning algorithm, GN-RTRL, is capable of converging faster to a better quality solution than the original algorithm. Experimental results reflect these qualities of GN-RTRL, as well as the fact that GN-RTRL may in practice have lower computational cost than the original RTRL.
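
To illustrate the trade-off described above, the sketch below contrasts a generic gradient-descent update with a damped Gauss-Newton update for a nonlinear least-squares cost. The residual Jacobian J is assumed to be given; in GN-RTRL it would be assembled from the RTRL sensitivities, which this sketch does not reproduce.

```python
import numpy as np

def gradient_step(w, J, e, lr=0.01):
    """Gradient-descent update for the cost 0.5*||e||^2 (the RTRL-style step):
    w <- w - lr * J^T e, where e are the residuals and J = d e / d w."""
    return w - lr * (J.T @ e)

def gauss_newton_step(w, J, e, damping=1e-6):
    """Gauss-Newton update: solve (J^T J) dw = J^T e and take w <- w - dw.
    A small damping term keeps the normal equations well conditioned."""
    H = J.T @ J + damping * np.eye(J.shape[1])   # approximate Hessian
    dw = np.linalg.solve(H, J.T @ e)
    return w - dw
```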

Wavelet Denoising for TDR Dynamic Range Improvement
A technique is presented for removing large amounts of noise from time-domain reflectometry (TDR) waveforms in order to increase the dynamic range of the waveforms and of TDR-based S-parameter measurements.

Bilinear Transformation Made Easy
A formula is derived and demonstrated that is capable of directly generating digital filter coefficients from an analog filter prototype using the bilinear transformation. This formula obviates the need for any algebraic manipulation of the analog prototype filter and is ideal for use in embedded systems that must take in any general analog filter specification and dynamically generate digital filter coefficients directly usable in difference equations.
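
As a point of reference for the operation such a formula performs, the snippet below applies the standard bilinear transformation to an assumed first-order analog lowpass prototype using SciPy; the paper's own closed-form coefficient formula is not reproduced here.

```python
import numpy as np
from scipy.signal import bilinear

# Assumed analog prototype: first-order lowpass H(s) = wc / (s + wc).
fs = 8000.0                      # sample rate of the target digital filter (Hz)
wc = 2 * np.pi * 1000.0          # analog cutoff frequency (rad/s)
b_analog = [wc]                  # numerator of H(s)
a_analog = [1.0, wc]             # denominator of H(s)

# Bilinear transform s -> 2*fs*(1 - z^-1)/(1 + z^-1) yields coefficients that
# can be used directly in the difference equation of the digital filter.
b_digital, a_digital = bilinear(b_analog, a_analog, fs=fs)
print(b_digital, a_digital)
```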

FUZZY LOGIC BASED CONVOLUTIONAL DECODER FOR USE IN MOBILE TELEPHONE SYSTEMS
Efficient convolutional coding and decoding algorithms are crucial to the successful operation of wireless communication systems, achieving high quality of service by reducing the overall bit error rate. A widely applied and well-evaluated scheme for error correction is the Viterbi algorithm [7]. Although the Viterbi algorithm has very good error-correcting characteristics, the required computational effort remains high. In this paper a novel approach is discussed that introduces a convolutional decoder design based on fuzzy logic. A simplified version of this fuzzy-based decoder is examined with respect to bit error rate (BER) performance. It can be shown that the proposed fuzzy-based convolutional decoder considerably reduces the computational effort with only minor BER performance degradation when compared to the classical Viterbi approach.

Method to Calculate the Inverse of a Complex Matrix using Real Matrix Inversion
This paper describes a simple method to calculate the inverse of a complex matrix. The key element of the method is to use a matrix inversion routine that is available and optimised for real numbers. Some current libraries used for digital signal processing only provide highly optimised methods for calculating the inverse of a real matrix, whereas no solution for complex matrices is available, as in [1]. The presented algorithm is very easy to implement, while still being much more efficient than, for example, the method presented in [2]. [1] Visual DSP++ 4.0 C/C++ Compiler and Library Manual for TigerSHARC Processors; Analog Devices; 2005. [2] W. Press, S. A. Teukolsky, W. T. Vetterling, B. R. Flannery; Numerical Recipes in C++: The Art of Scientific Computing, Second Edition; p. 52: "Complex Systems of Equations"; Cambridge University Press; 2002.
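
One well-known way to realize this idea is the real block-matrix embedding sketched below, which obtains the complex inverse from a single real inversion of a matrix of twice the size. It is offered only as an illustration of the principle and is not necessarily the (more efficient) algorithm presented in the paper.

```python
import numpy as np

def complex_inverse_via_real(A):
    """Invert a complex matrix A = R + jC using only a real matrix inversion.

    Multiplication by A maps (xr, xi) to (R xr - C xi, C xr + R xi), so the
    real block matrix M = [[R, -C], [C, R]] represents A. Inverting M and
    reading back its blocks yields the real and imaginary parts of A^{-1}.
    """
    R, C = A.real, A.imag
    n = A.shape[0]
    M = np.block([[R, -C], [C, R]])
    Minv = np.linalg.inv(M)              # real-valued inversion only
    return Minv[:n, :n] + 1j * Minv[n:, :n]

A = np.array([[1 + 2j, 3 - 1j], [0.5j, 2 + 1j]])
assert np.allclose(complex_inverse_via_real(A) @ A, np.eye(2))
```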

Fully Programmable LDPC Decoder Hardware Architectures
In recent years, the amount of digital data which is stored and transmitted for private and public usage has increased considerably. To allow safe transmission and storage of data despite error-prone transmission media, error-correcting codes are used. A large variety of codes has been developed, and in the past decade low-density parity-check (LDPC) codes, which have excellent error correction performance, became more and more popular. Today, low-density parity-check codes have been adopted for several standards, and efficient decoder hardware architectures are known for the chosen structured codes. However, the existing decoder designs lack flexibility, as only a few structured codes can be decoded with one decoder chip. In consequence, different codes require a redesign of the decoder, and few solutions exist for decoding codes which are not quasi-cyclic or which are unstructured. In this thesis, three different approaches are presented for the implementation of fully programmable LDPC decoders which can decode arbitrary LDPC codes. As a design study, the first programmable decoder, which uses a heuristic mapping algorithm, is realized on a field-programmable gate array (FPGA), and error correction curves are measured to verify the correct functionality. The main contribution of this thesis lies in the development of the second and third architectures and an appropriate mapping algorithm. The proposed fully programmable decoder architectures use one-phase message passing and layered decoding and can decode arbitrary LDPC codes using an optimum mapping and scheduling algorithm. The presented programmable architectures are in fact generalized decoder architectures from which the known decoder architectures for structured LDPC codes can be derived.

Design of a Scalable Polyphony-MIDI Synthesizer for a Low Cost DSP
In this thesis, the design of a music synthesizer implementing the Scalable Polyphony-MIDI soundset on a low cost DSP system is presented. First, the SP-MIDI standard and the target DSP platform are presented, followed by a review of commonly used synthesis techniques and their applicability to systems with limited computational and memory resources. Next, various oscillator and filter algorithms used in digital subtractive synthesis are reviewed in detail. Special attention is given to the aliasing problem caused by discontinuities in classical waveforms, such as sawtooth and pulse waves, and existing methods for bandlimited waveform synthesis are presented. This is followed by a review of established structures for computationally efficient time-varying filters. A novel digital structure is presented that decouples the cutoff and resonance controls. The new structure is based on the analog Korg MS-20 lowpass filter and is computationally very efficient and well suited for implementation on low-bitdepth architectures. Finally, implementation issues are discussed with emphasis on the Differentiated Parabolic Wave oscillator and MS-20 filter structures and the effects of limited computational capability and low bitdepth. This is followed by designs for several example instruments.
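
For context, the sketch below implements the standard differentiated parabolic waveform (DPW) sawtooth oscillator in floating point: square a trivial sawtooth, take the first difference, and rescale. It illustrates only the bandlimiting idea; the thesis's low-bitdepth, DSP-optimized implementation is not reproduced, and the frequency and sample rate below are arbitrary.

```python
import numpy as np

def dpw_sawtooth(f0, fs, n_samples):
    """Second-order DPW sawtooth: the parabolic wave's spectrum falls off
    faster than the trivial sawtooth's, so the discrete differentiator
    recovers a sawtooth whose aliased images are strongly attenuated."""
    phase = (f0 / fs * np.arange(n_samples)) % 1.0   # phase accumulator in [0, 1)
    saw = 2.0 * phase - 1.0                          # trivial sawtooth in [-1, 1]
    parab = saw * saw                                # parabolic wave
    diff = np.diff(parab, prepend=parab[0])          # first difference
    return diff * fs / (4.0 * f0)                    # restore unit amplitude

y = dpw_sawtooth(f0=440.0, fs=48000.0, n_samples=1024)
```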

Active control of automobile cabin noise with conventional and advanced speakers
Recently much research has focused on the control of enclosed sound fields, particularly in automobiles. Both Active Noise Control (ANC) and Active Structural Acoustic Control (ASAC) techniques are being applied to problems stemming from power train noise and road noise (noise due to the interaction of the tires with the surface of the road). Due to the low frequency characteristics of these noise problems, large acoustic sources are required to obtain efficient control of the sound field. This creates demand in the automobile industry for compact lightweight sources. This work is concerned with the application of active control to power train noise, as well as road noise, in the interior cabin of a sport utility vehicle using advanced, compact, lightweight piezoelectric acoustic sources. First, a test structure approximately the same size as the automobile was built to study the principles of active noise control in a cavity. A finite element model of the cavity was created in order to optimize the positions of the error sensors and the control sources. Experimental work was performed with the optimized actuator and sensor locations in order to validate the model and draw conclusions regarding the conditions needed to obtain global control of the sound field. Second, a broad-band feedforward filtered-X LMS algorithm was used to control power train noise. Preliminary power train noise tests were conducted using arrangements of four microphones and up to four commercially available speakers for control. Attenuation of seven decibels (dB) at the error sensors was measured in the 40-500 Hz frequency band. The dimensions of the zone of quiet generated by the control were measured, and show that noise reductions were obtained over a large volume surrounding the error sensors. Next, advanced speakers were implemented for active control of power train noise. The results obtained with different arrangements of these speakers were very similar to those obtained with the commercially available speakers. These advanced speakers use piezoelectric devices to induce the displacement of a speaker membrane, which radiates sound. Their lighter weight and compact dimensions are a significant advantage over conventional speakers for application in automobiles. Third, preliminary results were obtained for active control of road noise. The controller used an optimized set of four reference signals to control the noise at one error sensor using one control source. Two sets of tests were conducted. The first set of tests was performed on a dynamometer, which simulates the effects of the road on the tires. The second set of tests was performed on a rough road. A reduction of two to four decibels in the sound pressure level at the error sensor was obtained between 100 and 200 Hz.
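
As background on the adaptation algorithm mentioned above, the following is a minimal single-channel filtered-X LMS sketch with an assumed FIR secondary-path model. The controller used in this work was broad-band, multi-reference and multi-channel and ran on dedicated hardware, so this is an illustration of the principle only.

```python
import numpy as np

def fxlms(reference, disturbance, sec_path, sec_path_model, n_taps=32, mu=1e-3):
    """Single-channel filtered-X LMS.

    reference      : x[n], signal correlated with the noise to be cancelled
    disturbance    : d[n], noise observed at the error microphone without control
    sec_path       : FIR impulse response from control speaker to error mic
    sec_path_model : FIR estimate of sec_path used to filter the reference
    """
    w = np.zeros(n_taps)                      # adaptive control filter
    x_buf = np.zeros(n_taps)                  # recent reference samples
    xf_buf = np.zeros(n_taps)                 # recent *filtered* reference samples
    y_buf = np.zeros(len(sec_path))           # recent control-filter outputs
    xf_state = np.zeros(len(sec_path_model))  # state for filtering the reference
    errors = np.zeros(len(reference))

    for n in range(len(reference)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = reference[n]
        y = w @ x_buf                                    # anti-noise sample
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e = disturbance[n] + sec_path @ y_buf            # residual at error mic
        xf_state = np.roll(xf_state, 1); xf_state[0] = reference[n]
        xf = sec_path_model @ xf_state                   # filtered reference
        xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf
        w = w - mu * e * xf_buf                          # LMS weight update
        errors[n] = e
    return w, errors
```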

Teaching MODEM Concepts and Design Procedure with MATLAB Simulations
MATLAB simulation is used as the primary tool to illustrate concepts, to validate MODEM designs, and to verify operation of the subsystems employed in DSP based transmitters and receivers presented in a pair of classes on MODEM Design and Digital Receiver Design. The whole gamut of subsystems found in conventional and experimental modem designs is simulated and assembled to form a full end-to-end simulation of an operating MODEM. This paper describes the philosophy used to guide class involvement and to assess the experience and the learning value to student participants.

Filter a Rectangular Pulse with no Ringing
To filter a rectangular pulse without any ringing, there is only one requirement on the filter coefficients: they must all be positive. However, if we want the leading and trailing edges of the pulse to be symmetrical, then the coefficients must also be symmetrical. What we are describing is basically a window function.
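
A small numerical illustration of the claim, assuming a Hann window as the all-positive symmetric filter and a truncated-sinc lowpass as the counterexample with negative taps:

```python
import numpy as np

# Rectangular pulse, 30 samples wide.
pulse = np.concatenate([np.zeros(20), np.ones(30), np.zeros(20)])

# (a) All-positive, symmetric coefficients (a Hann window): no overshoot.
window = np.hanning(17)[1:-1]          # 15 strictly positive, symmetric taps
window /= window.sum()                 # unity DC gain
smooth = np.convolve(pulse, window)    # edges rise and fall monotonically

# (b) A truncated-sinc lowpass has negative taps and therefore rings.
sinc_taps = np.sinc(np.linspace(-3, 3, 15))
sinc_taps /= sinc_taps.sum()
ringing = np.convolve(pulse, sinc_taps)

print(smooth.max(), ringing.max())     # smooth tops out at 1.0; ringing overshoots it
```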

Specifying the Maximum Amplifier Noise When Driving an ADC
I recently learned an interesting rule of thumb regarding the use of an amplifier to drive the input of an analog-to-digital converter (ADC). The rule of thumb describes how to specify the maximum allowable noise power of the amplifier.

Introduction to Compressed Sensing
Chapter 1 of the book: "Compressed Sensing: Theory and Applications".

Hilbert Transform and Applications
Section 1: reviews the mathematical definition of the Hilbert transform and various ways to calculate it.
Sections 2 and 3: review applications of the Hilbert transform in two major areas: signal processing and system identification.
Section 4: concludes with remarks on the historical development of the Hilbert transform.
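
As a concrete example of one common way to calculate it, the snippet below uses the FFT-based analytic-signal routine in SciPy to obtain the Hilbert transform and, from it, the instantaneous envelope, phase, and frequency of a test chirp; the test signal and parameters are illustrative only.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 30 * t**2))     # real test signal (chirp)

z = hilbert(x)                  # analytic signal x + j*H{x}, computed via the FFT
envelope = np.abs(z)            # instantaneous amplitude
phase = np.unwrap(np.angle(z))  # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency (Hz)
```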

Voice Activity Detection. Fundamentals and Speech Recognition System Robustness
An important drawback affecting most speech processing systems is environmental noise and its harmful effect on system performance. Examples of such systems are the new wireless communications voice services or digital hearing aid devices. In speech recognition, there are still technical barriers inhibiting such systems from meeting the demands of modern applications. Numerous noise reduction techniques have been developed to mitigate the effect of the noise on system performance, and they often require an estimate of the noise statistics obtained by means of a precise voice activity detector (VAD). Speech/non-speech detection is an unsolved problem in speech processing and affects numerous applications including robust speech recognition, discontinuous transmission, real-time speech transmission on the Internet, and combined noise reduction and echo cancellation schemes in the context of telephony. The speech/non-speech classification task is not as trivial as it appears, and most VAD algorithms fail when the level of background noise increases. During the last decade, numerous researchers have developed different strategies for detecting speech in a noisy signal and have evaluated the influence of VAD effectiveness on the performance of speech processing systems. Most of the approaches have focused on the development of robust algorithms, with special attention paid to the derivation and study of noise-robust features and decision rules. The different VAD methods include those based on energy thresholds, pitch detection, spectrum analysis, zero-crossing rate, periodicity measures, higher order statistics in the LPC residual domain, or combinations of different features. This chapter presents a comprehensive treatment of the main challenges in voice activity detection, a complete review of the different solutions that have been reported in the state of the art, and the evaluation frameworks that are normally used. The application of VADs to speech coding, speech enhancement and robust speech recognition systems is shown and discussed. Three different VAD methods are described and compared to standardized and recently reported strategies by assessing the speech/non-speech discrimination accuracy and the robustness of speech recognition systems.
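
Of the families of methods listed above, the energy-threshold detector is the simplest; a minimal sketch is shown below purely for orientation. The detectors described and compared in the chapter are considerably more sophisticated, and the frame length and threshold here are arbitrary assumptions.

```python
import numpy as np

def energy_vad(x, fs, frame_ms=20, threshold_db=-40.0):
    """Minimal energy-threshold VAD: flag a frame as speech when its
    short-time energy exceeds a fixed threshold (signal assumed scaled to +-1).
    Practical detectors instead adapt the threshold to the estimated noise floor."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(x) // frame_len
    frames = x[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy_db = 10 * np.log10(np.mean(frames**2, axis=1) + 1e-12)
    return energy_db > threshold_db        # boolean speech/non-speech decision per frame
```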

STUDY OF DIGITAL MODULATION TECHNIQUES
Modulation is the process of facilitating the transfer of information over a medium. Typically the objective of a digital communication system is to transport digital data between two or more nodes. In radio communications this is usually achieved by adjusting a physical characteristic of a sinusoidal carrier: its frequency, phase, amplitude, or a combination thereof. This is performed in real systems with a modulator at the transmitting end to impose the physical change on the carrier and a demodulator at the receiving end to detect the resultant modulation on reception. Hence, modulation can be objectively defined as the process of converting information so that it can be successfully sent through a medium. This thesis deals with the current digital modulation techniques used in industry. The thesis also examines the qualitative and quantitative criteria used in selecting one modulation technique over another. All the experiments and related data were obtained using MATLAB and Simulink.
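
As a toy illustration of the "adjust a characteristic of the carrier" idea, the snippet below maps bit pairs onto four carrier phases (QPSK); the carrier frequency, symbol rate, and natural-binary mapping are arbitrary choices for the example, not parameters from the thesis.

```python
import numpy as np

fs, fc, sym_rate = 8000, 1000, 250           # sample rate, carrier, symbol rate (Hz)
bits = np.array([0, 1, 1, 1, 0, 0, 1, 0])
symbols = bits.reshape(-1, 2)                # two bits per QPSK symbol
# Natural-binary (not Gray) mapping of each bit pair to one of four phases.
phases = np.pi / 4 + np.pi / 2 * (2 * symbols[:, 0] + symbols[:, 1])

sps = fs // sym_rate                         # samples per symbol
t = np.arange(len(phases) * sps) / fs
phase_per_sample = np.repeat(phases, sps)    # hold each symbol's phase
qpsk_signal = np.cos(2 * np.pi * fc * t + phase_per_sample)
```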

Real Time Implementation of Multi-Level Perfect Signal Reconstruction Filter Bank
The Discrete Wavelet Transform (DWT) is an efficient tool for signal and image processing applications and has been utilized for perfect signal reconstruction. In this paper, twenty-seven combinations of three different wavelet filter types, three different filter reconstruction levels and three different kinds of signal for a multi-level perfect reconstruction filter bank were implemented in MATLAB/Simulink. All the filters for the different wavelet types were designed using the Filter Design and Analysis (FDA) tool and the Wavelet Toolbox. The signal-to-noise ratio (SNR) was calculated for each combination. The combination with the best SNR was then implemented on a TMS320C6713 DSP kit. Real-time testing of perfect reconstruction on the DSP kit was then carried out by two different methods. Experimental results agree with theory and simulations.
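
A minimal software analogue of one such test case, assuming PyWavelets db4 filters and a synthetic two-tone signal (the paper's Simulink models, filter choices, and TMS320C6713 implementation are not reproduced): decompose with a three-level DWT, reconstruct, and compute the reconstruction SNR.

```python
import numpy as np
import pywt

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)  # test signal

coeffs = pywt.wavedec(x, wavelet='db4', level=3)        # analysis filter bank
x_rec = pywt.waverec(coeffs, wavelet='db4')[:len(x)]    # synthesis filter bank

snr_db = 10 * np.log10(np.sum(x**2) / (np.sum((x - x_rec)**2) + 1e-300))
print(f"reconstruction SNR = {snr_db:.1f} dB")          # near-perfect: very high SNR
```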