Signed serial-/parallel multiplication
Struggling with costly wide adders for signed multiplication on FPGAs? Markus Nentwig unpacks a neat bit-level trick that turns two's-complement signed-by-signed multiplication into a serial-parallel routine using only a one-bit-wider adder. Learn how flipping sign bits and a small, controlled cancellation of constant terms let you avoid full sign extension, and get parametrized Verilog RTL plus synthesis notes to try it yourself.
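A behavioral sketch of the underlying arithmetic (my own Python model, not Markus's RTL): in two's complement the multiplier's sign bit carries negative weight, so a serial shift-add multiplier only needs to subtract that one partial product.

```python
# Hypothetical model of serial-parallel signed multiplication: process the
# multiplier LSB-first, adding the shifted multiplicand for each 1 bit --
# except the sign bit, whose two's-complement weight is -2^(nbits-1).
def signed_serial_mult(a, b, nbits=8):
    """Multiply two signed nbits-wide integers bit-serially."""
    assert -(1 << nbits - 1) <= a < (1 << nbits - 1)
    assert -(1 << nbits - 1) <= b < (1 << nbits - 1)
    b_u = b & ((1 << nbits) - 1)        # two's-complement bit pattern of b
    acc = 0
    for i in range(nbits):
        if (b_u >> i) & 1:
            # the sign bit is subtracted; all other bits are added
            acc += -(a << i) if i == nbits - 1 else (a << i)
    return acc

# exhaustive check over all 8-bit signed pairs
assert all(signed_serial_mult(a, b) == a * b
           for a in range(-128, 128) for b in range(-128, 128))
```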
A Remarkable Bit of DFT Trivia
Rick Lyons highlights a surprising equality: the DFT's worst-case scalloping loss equals 2/π, the same as the probability that a tossed toothpick crosses a floorboard seam in Buffon's needle problem when the toothpick's length equals the floorboard width. The post sketches the DFT bin-intersection derivation and connects the math to the classic probability puzzle, offering a playful insight that sharpens intuition about bin responses.
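A quick numeric check of both halves of the claim (my own sketch, using the standard rectangular-window half-bin response):

```python
import numpy as np

# The rectangular-window half-bin gain is 1/(N*sin(pi/(2N))), which tends
# to 2/pi as N grows.
for N in (8, 64, 1024):
    print(N, 1.0 / (N * np.sin(np.pi / (2 * N))))
print("2/pi =", 2 / np.pi)

# Buffon's needle, length equal to the board width: Monte Carlo estimate.
rng = np.random.default_rng(0)
n = 1_000_000
y = rng.uniform(0, 0.5, n)            # needle-center distance to nearest seam
theta = rng.uniform(0, np.pi / 2, n)  # needle angle
print("Monte Carlo:", (y <= 0.5 * np.sin(theta)).mean())  # ~0.6366
```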
Understanding and Preventing Overflow (I Had Too Much to Add Last Night)
Integer overflow is stealthier than you think, and in embedded systems it can break control loops or corrupt data. Jason Sachs walks through the usual culprits, including addition, subtraction, multiplication, shifting, and Q15 fixed-point traps, plus C-specific pitfalls such as undefined signed overflow and INT_MIN edge cases. He then lays out practical defenses: prefer fixed-width types, widen and saturate intermediate results, allow wraparound where it is well-defined, and reason about modular congruence for compound arithmetic.
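A minimal sketch of two of those defenses, widening and saturating, modeled in plain Python (Q15 taken here as a 16-bit signed integer with 15 fractional bits):

```python
def sat_add_q15(a, b):
    """Add two Q15 values (ints in [-32768, 32767]) with saturation."""
    s = a + b                           # widen: Python ints never wrap
    return max(-32768, min(32767, s))

def wrap_add_q15(a, b):
    """What a bare int16 addition does: wrap modulo 2^16."""
    s = (a + b) & 0xFFFF
    return s - 0x10000 if s >= 0x8000 else s

print(wrap_add_q15(30000, 10000))   # -25536: silent wraparound
print(sat_add_q15(30000, 10000))    #  32767: clipped, sign preserved
```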
Finding the Best Optimum
Optimization is seductive but often misleading, especially when mathematical models don't match messy reality. Tim Wescott shares stories from circuits and communications to show how chasing the theoretical global optimum can waste time and money. He recommends framing 'best' in practical terms, validating models, and optimizing for cost and impact so products ship on time and actually work in the real world.
Computing Translated Frequencies in Digitizing and Downsampling Analog Bandpass Signals
Textbooks rarely give ready formulas for tracking where individual spectral lines land after bandpass sampling or decimation. Rick Lyons provides three concise equations, with Matlab code, that compute translated frequencies for analog bandpass sampling, real digital downsampling, and complex downsampling. Practical examples show how to place the sampled image at fs/4 and how to translate a complex bandpass to baseband for efficient demodulation.
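For flavor, here is the standard fold-down computation for real bandpass sampling (a sketch in the same spirit; the article itself gives three dedicated equations and Matlab code):

```python
import numpy as np

def translated_freq(fc, fs):
    """Where a spectral line at analog frequency fc lands after real
    sampling at fs, returned folded into [0, fs/2]."""
    r = np.mod(fc, fs)
    return r if r <= fs / 2 else fs - r   # upper half folds back down

print(translated_freq(25e6, 10e6))    # 5.0 MHz
print(translated_freq(17.5e6, 10e6))  # 2.5 MHz: lands at fs/4
```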
Goertzel Algorithm for a Non-integer Frequency Index
Rick Lyons demonstrates how to run the Goertzel algorithm with a non-integer frequency index k, letting you target DTFT frequencies that do not align with DFT bin centers. He interprets Rajmic and Sysel's generalization, provides a simple implementation, and presents a real-valued reformulation that reduces the final multiplies for real inputs. Example Matlab code is included to reproduce and adapt the technique.
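A minimal sketch of the generalization as I read it (the Sysel-Rajmic phase correction applied to the standard Goertzel state), checked against a direct DTFT sum:

```python
import numpy as np

def goertzel_general(x, k):
    """Goertzel for a possibly non-integer frequency index k."""
    N = len(x)
    w = 2 * np.pi * k / N
    c = 2 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for xn in x:                       # standard Goertzel recursion
        s = xn + c * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # non-integer k needs the extra phase correction exp(-jw(N-1))
    return (s_prev - np.exp(-1j * w) * s_prev2) * np.exp(-1j * w * (N - 1))

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
k = 7.3                                # between DFT bin centers
direct = np.sum(x * np.exp(-2j * np.pi * k * np.arange(64) / 64))
assert np.allclose(goertzel_general(x, k), direct)
```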
Is It True That j is Equal to the Square Root of -1?
A viral YouTube video claimed that saying j equals the square root of negative one is wrong. Rick Lyons shows the apparent paradox comes from misusing square-root identities with negative arguments, not from the usual definition of j. He argues it is safer to define j by j^2 = -1 and illustrates how careless root operations produce contradictions in two appendices.
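The fallacy in compact form (my own summary of the argument):

```latex
% The identity sqrt(a)sqrt(b) = sqrt(ab) holds only for a, b >= 0,
% so the marked step below is invalid:
\[
-1 = j \cdot j = \sqrt{-1}\,\sqrt{-1}
   \;\overset{\times}{=}\; \sqrt{(-1)(-1)} = \sqrt{1} = 1 .
\]
\[
\text{The safer definition: } j^2 = -1 .
\]
```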
Signal Processing Contest in Python (PREVIEW): The Worst Encoder in the World
Jason Sachs previews a hands-on Python contest to find the best velocity estimator for a noisy, low-cost quadrature encoder. The post explains the Estimator API, submission constraints, and a 5-second, 10 kHz evaluation harness that uses a simulated "Lucky Wheel" encoder with realistic manufacturing timing errors. Jason also includes a simple baseline estimator and discusses the practical tradeoff between noise reduction and phase lag in velocity estimation.
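A baseline in the same spirit (the class and method names here are illustrative, not the contest's actual Estimator API): a first-difference estimate smoothed by a one-pole filter, which trades noise for phase lag.

```python
class SimpleVelocityEstimator:
    """Hypothetical baseline: differentiate position, then low-pass."""
    def __init__(self, dt=1e-4, alpha=0.05):  # 10 kHz update rate
        self.dt, self.alpha = dt, alpha
        self.prev_pos, self.vel = 0.0, 0.0

    def update(self, pos):
        raw = (pos - self.prev_pos) / self.dt  # noisy first difference
        self.prev_pos = pos
        # smaller alpha -> less noise but more phase lag
        self.vel += self.alpha * (raw - self.vel)
        return self.vel

est = SimpleVelocityEstimator()
v = [est.update(p) for p in (0.0, 0.001, 0.002)]  # ramp -> approaches 10.0
```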
A Table of Digital Frequency Notation
Rick Lyons compiles a compact, practical table that untangles the many algebraic frequency notations used in DSP. The reference lines up continuous and discrete sinusoid forms, shows the frequency variable names and units, and lists valid ranges and conversions like Ω = 2πf and normalized forms with fs. A printable PDF of the table is available for easy desk reference.
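The table's core conversions in executable form (variable names are mine):

```python
import numpy as np

f = 1000.0    # cyclic frequency, Hz
fs = 8000.0   # sample rate, Hz
omega = 2 * np.pi * f          # radian frequency, rad/s
f_norm = f / fs                # normalized frequency, cycles/sample (+/- 0.5)
w_norm = 2 * np.pi * f / fs    # normalized radian frequency, rad/sample (+/- pi)
print(omega, f_norm, w_norm)
```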
Shared-multiplier polyphase FIR filter
One multiplier and a dual-port RAM can implement an arbitrary m/n polyphase FIR resampler on an FPGA, Markus Nentwig demonstrates. The post focuses on practical implementation details, including a parametrized Verilog design, pipelined MAC control, and a Matlab testbench for verification. It shows how bank indexing and pipeline delay compensation let you multiplex many coefficient banks efficiently for resource-constrained FPGA designs.
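A behavioral model of what the shared MAC computes, bank indexing included (my own sketch, not the Verilog), checked against brute-force zero-stuff, filter, and decimate:

```python
import numpy as np
from scipy.signal import firwin

def polyphase_resample(x, m, n, h):
    """Rational m/n resampler: one prototype FIR h (designed at m*fs),
    indexed as m coefficient banks, one MAC chain per output sample."""
    nout = (len(x) * m) // n
    y = np.zeros(nout)
    for j in range(nout):
        phase = (j * n) % m            # which coefficient bank
        base = (j * n) // m            # newest input sample index
        k = 0
        while phase + m * k < len(h):  # walk one bank through the history
            i = base - k
            if 0 <= i < len(x):
                y[j] += h[phase + m * k] * x[i]
            k += 1
    return y * m                       # restore gain lost to zero-stuffing

def reference(x, m, n, h):
    u = np.zeros(len(x) * m)
    u[::m] = x                         # zero-stuff by m
    v = np.convolve(u, h)[:len(u)]     # filter at the high rate
    return v[::n] * m                  # keep every n-th sample

m, n = 3, 2
h = firwin(48, 1.0 / max(m, n))
x = np.sin(2 * np.pi * 0.05 * np.arange(200))
y = polyphase_resample(x, m, n, h)
assert np.allclose(y, reference(x, m, n, h)[:len(y)])
```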
Matlab Code to Synthesize Multiplierless FIR Filters
Learn how to build multiplierless FIR lowpass filters in Matlab using Canonic Signed-Digit coefficients. The post explains converting Parks-McClellan floating-point taps to scaled integers, then to exact CSD digits, and includes two m-files that search over coefficient scalings to minimize the total number of signed digits while preserving the filter response. Practical notes cover external gain compensation, the 2/3 full-scale CSD limit, and sensitivity to passband and stopband edges.
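The CSD conversion step on its own, sketched in Python rather than the post's Matlab (the standard non-adjacent-form recoding):

```python
def to_csd(x):
    """CSD (non-adjacent form) digits of integer x, LSB first: digits in
    {-1, 0, +1} with no two adjacent nonzeros, so each nonzero digit
    costs one adder or subtractor in hardware."""
    digits = []
    while x != 0:
        if x % 2:
            d = 2 - (x % 4)   # +1 or -1, chosen to keep nonzeros non-adjacent
            x -= d
        else:
            d = 0
        digits.append(d)
        x //= 2
    return digits

d = to_csd(23)                # 23 = 10111 in binary: four ones
print(d)                      # [-1, 0, 0, -1, 0, 1]: 32 - 8 - 1, three digits
assert sum(di * 2**i for i, di in enumerate(d)) == 23
```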
TI DSP Predictions
Jeff Brower lays out two bold predictions for Texas Instruments that could reshape the DSP developer ecosystem. He argues TI will offer a supported real-time Linux on their C6x DSPs now that legal obstacles have eased, and that TI may acquire an FPGA company to own the board space around its chips. Read to weigh the technical and strategic impact.
Accelerating Matlab DSP Code on the GPU
Seth Benton spent a few days testing Jacket to accelerate MATLAB on NVIDIA GPUs, and found it surprisingly easy to speed up DSP code. He ran 2D FFT and interp2 benchmarks on a MacBook Air with a GeForce 9400M, seeing impressive speedups for large images while hitting GPU memory and precision limits at high sizes. The post shares practical tips on casting to GPU types, minimizing CPU-GPU transfers, and when GPU acceleration is most useful.
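Jacket is no longer available, but the same usage pattern carries over to CuPy (a swapped-in library, not what the post used; requires an NVIDIA GPU): cast once, keep the work on the GPU, copy back once.

```python
import numpy as np
import cupy as cp

img = np.random.rand(2048, 2048).astype(np.float32)  # single precision
g = cp.asarray(img)            # one host-to-device transfer
spec = cp.fft.fft2(g)          # 2D FFT stays on the GPU
mag = cp.abs(spec)
result = cp.asnumpy(mag)       # one device-to-host transfer at the end
```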
Radio Frequency Distortion Part II: A power spectrum model
Markus Nentwig presents a power-spectrum model that predicts RF nonlinear distortion from spectral power values instead of time-domain signals. The model computes distortion as repeated convolutions with a frequency-reversed replica and uses an FFT/IFFT trick with real-valued arithmetic for very high efficiency, making it suitable for system-level simulations and interference-aware radios. It is accurate for OFDM-like, Gaussian-amplitude signals when spectral binning is sufficiently fine; narrowband cases require denser bins.
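The core operation in simplified form, as I read it (a sketch, not the post's optimized real-arithmetic formulation): third-order distortion power is proportional to the signal power spectrum convolved twice with itself, once frequency-reversed, and zero-padded FFTs turn those convolutions into pointwise products.

```python
import numpy as np

P = np.zeros(256)
P[100:120] = 1.0                      # an OFDM-like flat power spectrum

n = 3 * len(P)                        # pad so circular = linear convolution
F = np.fft.rfft(P, n)
Fr = np.fft.rfft(P[::-1], n)          # frequency-reversed replica
im3 = np.fft.irfft(F * F * Fr, n)     # P conv P conv reverse(P)
```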
Differentiating and integrating discrete signals
Think DSP's new chapter digs into discrete differentiation and integration, using first differences, convolution, and FFTs to compare time and frequency domain views. The author reproduces diff via convolution, then explores cumsum as its inverse and runs into two puzzling mismatches: noisy FFT amplitude ratios for nonperiodic data, and a time-domain convolution that does not reproduce cumsum for a sawtooth despite matching frequency responses. The post includes IPython notebooks and invites troubleshooting.
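The chapter's starting point is easy to reproduce (a minimal sketch):

```python
import numpy as np

# A first difference is convolution with [1, -1], and cumsum undoes it
# (up to the initial value).
x = np.cumsum(np.random.randn(100))        # random-walk test signal
d1 = np.diff(x)
d2 = np.convolve(x, [1, -1], mode='full')[1:len(x)]  # same thing via conv
assert np.allclose(d1, d2)
recovered = np.cumsum(d1) + x[0]           # integrate back: restores x[1:]
assert np.allclose(recovered, x[1:])
```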
Make Hardware Great Again
US weakness in 5G and the coming AI race stems from a deeper problem: hardware decline and a lack of CPU innovation. Jeff Brower argues that the software-only narrative has hollowed out semiconductor leadership, leaving only a few chipmakers and blocking vital R&D. He calls for targeted government action, funding for neural-net chips, and an industrial "Hardhattan Project", a Manhattan Project-style push to rebuild CPU and hardware capabilities.
Exact Frequency Formula for a Pure Real Tone in a DFT
Cedron Dawg derives an exact closed-form formula to recover the frequency of a pure real sinusoid from three DFT bins, challenging the usual teaching that it is impossible. The derivation solves for cos(α) in a bilinear form and gives a computationally efficient implementation (Eq. 19), with practical notes on implicit Hann-like weighting and choosing the peak bin for robustness.
Through the tube...
Markus Nentwig explores whether RF power amplifier modeling tricks work for audio tube preamps by modeling a 12AX7 preamp in Matlab. He records input and output simultaneously as a two-channel reference, fits a simple Wiener-type model, and compares the modeled output to the real tube sound. The model explains over 99 percent of output power and leaves only small residual distortion to investigate further.
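A toy version of the fitting step (a stand-in, not Markus's actual model): with the linear part taken as already equalized, fit a static polynomial nonlinearity to aligned input/output samples and measure how much output power it explains.

```python
import numpy as np

x = np.random.randn(10000) * 0.3
y = np.tanh(x) + 0.01 * np.random.randn(10000)  # "tube" stand-in + noise
coeffs = np.polyfit(x, y, deg=5)                # static nonlinearity fit
residual = y - np.polyval(coeffs, x)
print("explained power: %.4f" % (1 - np.var(residual) / np.var(y)))
```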
Benford's law solved with DSP
Steve Smith shows that standard DSP tools give a clean, intuitive explanation of Benford's law by treating leading-digit counts as signals on the number line and using convolution and Fourier analysis. He publishes the full derivation as an online chapter after traditional journals showed little interest. The result highlights how time- and spatial-domain DSP techniques can be applied to numeric distributions.
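A quick empirical check of the law itself (my own Monte Carlo, not Steve's derivation): quantities formed by many multiplicative steps have nearly log-uniform mantissas, giving P(leading digit = d) = log10(1 + 1/d).

```python
import numpy as np

rng = np.random.default_rng(0)
samples = np.prod(rng.uniform(0.5, 2.0, size=(100000, 30)), axis=1)
leading = (samples / 10.0 ** np.floor(np.log10(samples))).astype(int)
for d in range(1, 10):
    print(d, (leading == d).mean(), np.log10(1 + 1 / d))
```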
How Not to Reduce DFT Leakage
Rick Lyons debunks a proposed 'data-flipping' fix for DFT spectral leakage, demonstrating with MATLAB that it can produce higher sidelobes and a troubling mainlobe dip for some input frequencies. He explains that windowing's goal is to reduce amplitude discontinuities in a periodic extension, not merely to force end samples to zero, and concludes the method is frequency-dependent and not recommended.
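The windowing point in a few lines (my own illustration): for a worst-case half-bin tone, a Hann window tames the leakage skirt that the rectangular window (no window) leaves.

```python
import numpy as np

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 10.5 * n / N)   # tone halfway between bins 10 and 11
rect = np.abs(np.fft.rfft(x))
hann = np.abs(np.fft.rfft(x * np.hanning(N)))
print("rect leakage at bin 20:", rect[20] / rect.max())  # percent-level skirt
print("hann leakage at bin 20:", hann[20] / hann.max())  # orders smaller
```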
Modelling a Noisy Communication Signal in MATLAB for the Analog to Digital Conversion Process
Practical signal modeling treats receiver noise as a fixed power source, not something tied to the transmitted waveform. Parth demonstrates why using MATLAB's awgn(sig,SNR,'measured') can misrepresent an analog front end and provides a short function that scales your signal so the added AWGN produces the desired receiver noise variance. This prepares realistic inputs for upcoming ADC simulations.
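A sketch of the fix as described (function and variable names are mine): hold the receiver noise variance fixed and scale the signal to hit the target SNR, rather than letting awgn scale the noise to the signal.

```python
import numpy as np

def scale_signal_for_snr(sig, snr_db, noise_var=1.0):
    """Scale sig so adding noise of variance noise_var yields snr_db."""
    sig_power = np.mean(np.abs(sig) ** 2)
    target_power = noise_var * 10.0 ** (snr_db / 10.0)
    return sig * np.sqrt(target_power / sig_power)

sig = np.sin(2 * np.pi * 0.01 * np.arange(1000))
rx = scale_signal_for_snr(sig, snr_db=20.0) + np.random.randn(1000)  # unit-variance AWGN
```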
A Direct Digital Synthesizer with Arbitrary Modulus
Need exact sampled tones on a coarse grid without a huge sine table? This post shows how to build a Direct Digital Synthesizer with an arbitrary modulus so the output frequency is exactly k·fs/L, using a look-up table as small as 20 entries for the 10 MHz/0.5 MHz-step example. It also explains fixed-point LUT rounding, accumulator bit sizing, and how to produce quadrature outputs when L is a multiple of 4.
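A behavioral model of the arbitrary-modulus accumulator (my own sketch, using the post's 10 MHz, 20-entry example):

```python
import numpy as np

# Phase accumulator that wraps modulo L (not a power of two), indexing an
# L-entry sine LUT. Each step k gives exactly k*fs/L.
fs, L, k = 10e6, 20, 3                      # output = 3 * 0.5 MHz = 1.5 MHz
lut = np.sin(2 * np.pi * np.arange(L) / L)  # 20-entry table (quantize for HW)
phase = (k * np.arange(1000)) % L           # accumulator: phase += k, mod L
y = lut[phase]
# quadrature output when L is a multiple of 4: offset the index by L/4
y_q = lut[(phase + L // 4) % L]
```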
The Zeroing Sine Family of Window Functions
A previously unrecognized family of DFT window functions is introduced, built from products of shifted sines that deliberately zero out tail samples and control nonzero support. Cedron Dawg presents recursive and semi-root constructions, runnable code, and numerical examples, and shows that the odd-N member L=(N-1)/2 numerically matches a discrete Hermite-Gaussian DFT eigenvector. The post highlights practical properties, an even-N fix, and applications to spectrograms and tone decomposition.
Coefficients of Cascaded Discrete-Time Systems
Multiplying discrete-time transfer functions is just polynomial multiplication, and polynomial multiplication is convolution. Neil Robertson shows that the numerator and denominator coefficients of cascaded systems come from convolving the individual coefficient vectors, then demonstrates the idea with MATLAB code and a cascade of two 2nd-order IIR filters that yields a 4th-order response. The approach makes computing time and frequency responses straightforward.
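The whole idea fits in a few lines (a minimal sketch):

```python
import numpy as np

# Cascade two biquads by convolving coefficient vectors, then check that
# the frequency responses multiply.
b1, a1 = [1, 0.5, 0.25], [1, -0.6, 0.3]
b2, a2 = [1, -0.2, 0.1], [1, 0.4, 0.2]
b, a = np.convolve(b1, b2), np.convolve(a1, a2)   # 4th-order cascade

w = np.linspace(0, np.pi, 512)
zi = np.exp(-1j * w)                              # z^-1 on the unit circle
H  = np.polyval(b[::-1], zi) / np.polyval(a[::-1], zi)
H1 = np.polyval(b1[::-1], zi) / np.polyval(a1[::-1], zi)
H2 = np.polyval(b2[::-1], zi) / np.polyval(a2[::-1], zi)
assert np.allclose(H, H1 * H2)
```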
"Neat" Rectangular to Polar Conversion Algorithm
Rick Lyons revisits a clever slide-rule era trick for estimating the magnitude of a complex number without computing a square root. He highlights a neat identity, prompted by a Jerry Avins post, that converts the sqrt problem into forward and inverse trigonometric operations plus ratios. The post invites readers to derive Eq. (2) and see why a seemingly complex idea is actually simple and practical.
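One identity in the spirit of the trick (this may not be exactly the post's Eq. (2), which readers are invited to derive): sqrt(R² + I²) = |R| / cos(arctan(I/R)), so a ratio, an arctangent, and a cosine replace the square root, exactly the operations a slide rule handles well.

```python
import numpy as np

R, I = 3.0, 4.0
mag = abs(R) / np.cos(np.arctan(I / R))   # no square root computed
assert np.isclose(mag, np.hypot(R, I))    # 5.0
```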
A Two Bin Solution
Cedron Dawg shows how a real sinusoid's frequency, amplitude and phase can be recovered from only two adjacent DFT bins. The article derives exact two-bin formulas, gives a clear Gambas reference implementation, and demonstrates that accurate parameters can be obtained with very few samples when the tone lies between the bins. It also explains when the method breaks down and how the real-valued unfurling improves robustness.
Autocorrelation and the case of the missing fundamental
A short hands-on exploration shows why we perceive the fundamental pitch even when it's absent from the spectrum. Using saxophone recordings, high-pass filtering, and autocorrelation plots, the post demonstrates that the highest ACF peak often predicts perceived pitch rather than the strongest spectral line. The experiments also show that removing high harmonics eliminates the effect, and that autocorrelation is a useful but incomplete model of pitch perception.
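A synthetic version of the experiment (my own sketch, sine harmonics instead of saxophone): harmonics 2 through 10 of 200 Hz with no fundamental, yet the autocorrelation still peaks at the 200 Hz lag.

```python
import numpy as np

fs = 8000
t = np.arange(2000) / fs                     # quarter second
x = sum(np.sin(2 * np.pi * 200 * h * t) for h in range(2, 11))
acf = np.correlate(x, x, mode='full')[len(x) - 1:]   # positive lags
lag = np.argmax(acf[20:]) + 20               # skip the zero-lag peak
print("pitch estimate: %.1f Hz" % (fs / lag))  # ~200 Hz, the absent fundamental
```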
A Wide-Notch Comb Filter
Traditional comb filters make very narrow stopband notches, which limits their ability to suppress broader interfering tones. Rick Lyons presents a linear-phase comb filter that produces wider stopband notches than the conventional design while preserving linear-phase behavior. The post also reviews the traditional cascaded recursive running-sum architecture, its co-located dual poles and zeros on the z-plane, and the placement of nulls at integer multiples of fs/D.
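The conventional comb's null placement is easy to verify (a minimal sketch of the y[n] = x[n] - x[n-D] comb, not Rick's wide-notch design):

```python
import numpy as np

D = 8
w = np.linspace(0, np.pi, 1001)        # grid chosen to land on the nulls
H = 1 - np.exp(-1j * w * D)            # frequency response of 1 - z^-D
print(w[np.abs(H) < 1e-9] / (2 * np.pi))  # 0, 0.125, 0.25, ... = k/D, i.e. k*fs/D
```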
Phase and Amplitude Calculation for a Pure Complex Tone in a DFT
Cedron Dawg derives compact, exact formulas to recover the phase and amplitude of a single complex tone from a DFT bin when the tone frequency is known. The paper turns the complex bin value into closed-form expressions using a sine-fraction amplitude correction and a simple phase shift, and includes working code plus a numeric example for direct implementation.
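A reconstruction of the idea in my own notation (not Cedron's exact formulas): with the tone frequency known, the DFT bin equals the complex amplitude times a computable geometric-series factor, so one complex division recovers A and phi.

```python
import numpy as np

N, k = 64, 10
f, A, phi = 10.4, 1.7, 0.8                       # true tone (f in bins)
n = np.arange(N)
x = A * np.exp(1j * (2 * np.pi * f * n / N + phi))
Z = np.sum(x * np.exp(-2j * np.pi * k * n / N))  # DFT bin k

d = f - k
# geometric-series factor: the "sine fraction" plus a simple phase shift
S = np.exp(1j * np.pi * d * (N - 1) / N) * np.sin(np.pi * d) / np.sin(np.pi * d / N)
est = Z / S                                      # = A * exp(j*phi)
assert np.isclose(abs(est), A) and np.isclose(np.angle(est), phi)
```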
How the Cooley-Tukey FFT Algorithm Works | Part 3 - The Inner Butterfly
At the heart of the Cooley-Tukey FFT algorithm lies a butterfly, a simple yet powerful image that captures the recursive nature of how the FFT works. In this article we discover the butterfly’s role in transforming complex signals into their frequency components with efficiency and elegance. Starting with the 2-point DFT, we reveal how the FFT reuses repeated calculations to save time and resources. Using a divide-and-conquer approach, the algorithm breaks signals into smaller groups, processes them through interleaving butterfly diagrams, and reassembles the results step by step.
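The butterfly in code (a minimal recursive sketch):

```python
import numpy as np

def fft_recursive(x):
    """Radix-2 Cooley-Tukey FFT: split into even/odd halves, then combine
    each pair with one twiddle-factor multiply -- the butterfly."""
    N = len(x)                  # N must be a power of two
    if N == 1:
        return x
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(N // 2) / N) * odd  # twiddles
    return np.concatenate([even + tw, even - tw])           # butterflies

x = np.random.randn(16) + 1j * np.random.randn(16)
assert np.allclose(fft_recursive(x), np.fft.fft(x))
```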