Cortex-M for Beginners

ARM

An overview of the Arm Cortex-M processor family and a comparison of its processors


How to reduce the bill of material costs with digital signal processing

ARM

The pressure to decrease the bill of materials (BOM) cost of embedded products is driven by the demand for high-volume, low-cost sensor systems. As IoT devices become more sophisticated, developers must use digital signal processing to handle more features within the product, such as device provisioning. In this paper, we will examine how digital signal processing (DSP) can be used to reduce a product's cost.


Stereophonic Amplitude-Panning: A Derivation of the "Tangent Law"

Rick Lyons

This article presents a derivation of the "Tangent Law."
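
For orientation, the form of the tangent law usually quoted in the amplitude-panning literature (stated here for reference; see the article for the derivation and its exact notation) is

    \frac{\tan\theta}{\tan\theta_0} \;=\; \frac{g_L - g_R}{g_L + g_R}

where \theta_0 is the half-angle between the two loudspeakers, \theta is the perceived source azimuth, and g_L, g_R are the left and right channel gains, commonly normalized so that g_L^2 + g_R^2 = 1.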


A Brief Introduction To Romberg Integration

Rick Lyons

This article briefly describes a remarkable integration algorithm, called "Romberg integration." The algorithm is used in the field of numerical analysis but it's not so well-known in the world of DSP.
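
As a compact sketch of the algorithm (a generic textbook formulation in Python, not code from the article): compute trapezoidal estimates with successively halved step sizes, then refine them with Richardson extrapolation.

    import numpy as np

    def romberg(f, a, b, max_levels=6):
        # Romberg table: R[k, j] holds the j-th extrapolation at level k.
        R = np.zeros((max_levels, max_levels))
        h = b - a
        R[0, 0] = 0.5 * h * (f(a) + f(b))          # one-interval trapezoid
        for k in range(1, max_levels):
            h *= 0.5
            # Refine the trapezoid estimate by adding only the new midpoints.
            new_pts = a + h * np.arange(1, 2**k, 2)
            R[k, 0] = 0.5 * R[k - 1, 0] + h * np.sum(f(new_pts))
            # Richardson extrapolation across the row.
            for j in range(1, k + 1):
                R[k, j] = R[k, j - 1] + (R[k, j - 1] - R[k - 1, j - 1]) / (4**j - 1)
        return R[max_levels - 1, max_levels - 1]

    # Example: integrate sin(x) over [0, pi]; the exact answer is 2.
    print(romberg(np.sin, 0.0, np.pi))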


An IIR 'DC Removal' Filter

Rick Lyons
2 comments

It seems to me that DC removal filters (also called "DC blocking filters") have been of some moderate interest recently on the dsprelated.com Forum web page. With that in mind, I thought I'd post a little information, from Chapter 13 of my "Understanding DSP" book, regarding infinite impulse response (IIR) DC removal filters.
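
For readers who haven't met one, a widely used first-order IIR DC blocker (shown here as a generic sketch; assume it is representative of, though not necessarily identical to, the Chapter 13 filters) is y[n] = x[n] - x[n-1] + alpha*y[n-1], with alpha slightly less than 1.

    import numpy as np

    def dc_block(x, alpha=0.995):
        # y[n] = x[n] - x[n-1] + alpha*y[n-1]; alpha near 1 narrows the notch at 0 Hz.
        y = np.zeros(len(x), dtype=float)
        x_prev = 0.0
        y_prev = 0.0
        for n, xn in enumerate(x):
            y[n] = xn - x_prev + alpha * y_prev
            x_prev, y_prev = xn, y[n]
        return y

    # Example: a sinusoid riding on a DC offset of 2.0.
    n = np.arange(1000)
    x = 2.0 + np.cos(2 * np.pi * 0.05 * n)
    y = dc_block(x)
    print(np.mean(y[500:]))   # mean approaches 0 once the filter settles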


Two Easy Ways To Test Multistage CIC Decimation Filters

Rick Lyons
1 comment

This article presents two very easy ways to test the performance of multistage cascaded integrator-comb (CIC) decimation filters. Anyone implementing CIC filters should take note of the following proposed CIC filter test methods.
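
The article's two test methods aren't summarized above; as one simple sanity check of my own (not necessarily one of the article's proposed methods), the DC gain of an N-stage CIC decimator with rate-change factor R and differential delay M is (RM)^N, which is easy to verify by driving a reference model with a constant input.

    import numpy as np

    def cic_decimate(x, R=8, N=3, M=1):
        # N integrator stages at the input rate, decimate by R,
        # then N comb stages with differential delay M at the output rate.
        v = np.asarray(x, dtype=np.int64)
        for _ in range(N):
            v = np.cumsum(v)                      # integrator: running sum
        v = v[::R]                                # decimation
        for _ in range(N):
            v = v - np.concatenate((np.zeros(M, dtype=np.int64), v[:-M]))  # comb
        return v

    # DC-gain check: a constant input of 1 should settle at (R*M)**N.
    R, N, M = 8, 3, 1
    y = cic_decimate(np.ones(50 * R, dtype=np.int64), R, N, M)
    print(y[-1], (R * M) ** N)   # both 512 for R=8, N=3, M=1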


FFT Interpolation Based on FFT Samples: A Detective Story With a Surprise Ending

Rick Lyons
1 comment

This blog presents several interesting things I recently learned regarding the estimation of a spectral value located at a frequency lying between previously computed FFT spectral samples. My curiosity about this FFT interpolation process was triggered by reading a spectrum analysis paper written by three astronomers.
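
The blog's findings aren't reproduced here, but as a minimal numerical illustration of the question itself (my own sketch), the spectral value at a frequency halfway between two FFT bins can be computed exactly from the N time samples, and it matches the corresponding bin of a zero-padded FFT.

    import numpy as np

    N = 64
    n = np.arange(N)
    x = np.cos(2 * np.pi * 0.1234 * n)          # arbitrary test signal

    m = 10.5                                     # frequency halfway between bins 10 and 11
    dtft_val = np.sum(x * np.exp(-2j * np.pi * m * n / N))   # DTFT evaluated directly

    X_zp = np.fft.fft(x, 2 * N)                  # 2x zero-padding halves the bin spacing
    print(dtft_val, X_zp[21])                    # bin 21 of the 2N-point FFT = "bin 10.5" of the N-point FFT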


An Efficient Linear Interpolation Scheme

Rick Lyons
3 comments

This article presents a computationally efficient linear interpolation trick that requires at most one multiply per output sample.
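
The article's specific trick isn't described above; for context (a sketch of my own), ordinary linear interpolation can already be arranged so that each output sample costs a single multiply by using the incremental form y = x0 + mu*(x1 - x0).

    import numpy as np

    def linear_interp_upsample(x, L):
        # Upsample by integer factor L using y = x0 + mu*(x1 - x0):
        # one subtraction per input segment, one multiply per output sample.
        x = np.asarray(x, dtype=float)
        y = np.empty((len(x) - 1) * L, dtype=float)
        for n in range(len(x) - 1):
            dx = x[n + 1] - x[n]                 # reused for all L outputs of this segment
            for i in range(L):
                mu = i / L                       # fractional position, 0 <= mu < 1
                y[n * L + i] = x[n] + mu * dx    # one multiply per output sample
        return y

    print(linear_interp_upsample([0.0, 1.0, 0.0], 4))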


Sinusoidal Frequency Estimation Based on Time-Domain Samples

Rick Lyons
6 comments

The topic of estimating a noise-free real or complex sinusoid's frequency, based on fast Fourier transform (FFT) samples, has been presented in recent blogs here on dsprelated.com. For completeness, it's worth knowing that simple frequency estimation algorithms exist that do not require FFTs to be performed. Below I present three frequency estimation algorithms that use time-domain samples, and illustrate a very important principle regarding so-called "exact" mathematically derived DSP algorithms.
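
The blog's three algorithms aren't listed above; as a sketch of the kind of estimator involved (my own example, based on a well-known identity and not necessarily one of the blog's three), a noise-free real sinusoid x[n] = A*cos(omega*n + phi) satisfies x[n-1] + x[n+1] = 2*cos(omega)*x[n], so omega follows from three consecutive samples.

    import numpy as np

    fs = 8000.0
    f0 = 1234.5
    n = np.arange(100)
    x = np.cos(2 * np.pi * f0 / fs * n + 0.7)

    # Pick an index where x[k] is far from zero to avoid dividing by a tiny value.
    k = np.argmax(np.abs(x[1:-1])) + 1
    omega = np.arccos((x[k - 1] + x[k + 1]) / (2.0 * x[k]))
    print(omega * fs / (2 * np.pi))   # approximately 1234.5 Hz in the noise-free case

Note that with noisy samples the arccos argument can fall outside [-1, 1], a hint at why such "exact" derivations need care in practice.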


Algorithms, Architectures, and Applications for Compressive Video Sensing

Richard G. Baraniuk

The design of conventional sensors is based primarily on the Shannon-Nyquist sampling theorem, which states that a signal of bandwidth W Hz is fully determined by its discrete-time samples provided the sampling rate exceeds 2W samples per second. For discrete-time signals, the Shannon-Nyquist theorem has a very simple interpretation: the number of data samples must be at least as large as the dimensionality of the signal being sampled and recovered. This important result enables signal processing in the discrete-time domain without any loss of information. However, in an increasing number of applications, the Shannon-Nyquist sampling theorem dictates an unnecessary and often prohibitively high sampling rate. (See Box 1 for a derivation of the Nyquist rate of a time-varying scene.) As a motivating example, the high resolution of the image sensor hardware in modern cameras reflects the large amount of data sensed to capture an image. A 10-megapixel camera, in effect, takes 10 million measurements of the scene. Yet, almost immediately after acquisition, redundancies in the image are exploited to compress the acquired data significantly, often at compression ratios of 100:1 for visualization and even higher for detection and classification tasks. This example suggests immense wastage in the overall design of conventional cameras.
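
For reference (not part of the original abstract), the reconstruction behind the Shannon-Nyquist statement: a signal x(t) bandlimited to W Hz is recovered exactly from its samples x(nT) whenever the sampling rate 1/T exceeds 2W,

    x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\operatorname{sinc}\!\left(\frac{t - nT}{T}\right), \qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.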