Digital Envelope Detection: The Good, the Bad, and the Ugly

Rick Lyons April 3, 2016 (16 comments)

Recently I've been thinking about the process of envelope detection. Tutorial information on this topic is readily available but that information is spread out over a number of DSP textbooks and many Internet web sites. The purpose of this blog is to summarize various digital envelope detection methods in one place.

Here I focus on envelope detection as it is applied to an amplitude-fluctuating sinusoidal signal where the positive-amplitude fluctuations (the sinusoid's envelope)...
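As a taste of one common approach (not necessarily the method the post settles on), here is a minimal Python sketch using the analytic-signal technique via scipy.signal.hilbert; the sample rate, carrier frequency, and modulation rate below are made up purely for illustration.

    import numpy as np
    from scipy.signal import hilbert

    fs = 8000                                    # sample rate (Hz), arbitrary
    t = np.arange(0, 1.0, 1/fs)
    true_env = 1 + 0.5*np.sin(2*np.pi*3*t)       # slow amplitude fluctuation
    x = true_env * np.cos(2*np.pi*200*t)         # amplitude-fluctuating sinusoid

    # the magnitude of the analytic signal tracks the positive-amplitude envelope
    est_env = np.abs(hilbert(x))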


Autocorrelation and the case of the missing fundamental

Allen Downey January 21, 2016 (10 comments)

[UPDATED January 25, 2016: One of the examples was broken; also, the IPython notebook links now point to nbviewer, where you can hear the examples.]

For sounds with simple harmonic structure, the pitch we perceive is usually the fundamental frequency, even if it is not dominant.  For example, here's the spectrum of a half-second recording of a saxophone.

The first three peaks are at 464, 928, and 1392 Hz.  The pitch we perceive is the fundamental, 464 Hz, which is close to...
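To make the "missing fundamental" idea concrete, here is a small hedged sketch (not the notebook's code): a synthetic tone containing only the 928 Hz and 1392 Hz harmonics, whose autocorrelation still peaks at the 464 Hz period.

    import numpy as np

    fs = 44100
    f0 = 464
    t = np.arange(0, 0.5, 1/fs)
    # only the 2nd and 3rd harmonics -- the 464 Hz fundamental itself is absent
    x = np.sin(2*np.pi*2*f0*t) + 0.7*np.sin(2*np.pi*3*f0*t)

    # autocorrelation, positive lags only
    corr = np.correlate(x, x, mode='full')[len(x)-1:]

    # ignore very short lags (pitches above 1 kHz), then take the strongest peak
    min_lag = int(fs/1000)
    lag = min_lag + np.argmax(corr[min_lag:])
    print(fs/lag)                                # roughly 464 Hz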


Generating pink noise

Allen Downey January 20, 2016 (1 comment)

In one of his most famous columns for Scientific American, Martin Gardner wrote about pink noise and its relation to fractal music.  The article was based on a 1978 paper by Voss and Clarke, which presents, among other things, a simple algorithm for generating pink noise, also known as 1/f noise.

The fundamental idea of the algorithm is to add up several sequences of uniform random numbers that get updated at different rates. The first source gets updated at...
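A direct (unoptimized) rendering of that idea in Python might look like the sketch below; the number of sources and the halving of update rates are assumptions in the spirit of the Voss algorithm, not the exact code from the post.

    import numpy as np

    def pink_noise(n_samples, n_sources=16, rng=None):
        """Sum uniform random sources, each refreshed half as often as the last."""
        rng = np.random.default_rng() if rng is None else rng
        values = rng.uniform(-1, 1, n_sources)   # current value of each source
        out = np.empty(n_samples)
        for i in range(n_samples):
            for k in range(n_sources):
                if i % (2**k) == 0:              # source k updates every 2**k samples
                    values[k] = rng.uniform(-1, 1)
            out[i] = values.sum()
        return out / n_sources

    x = pink_noise(2**14)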


Amplitude modulation and the sampling theorem

Allen Downey December 18, 2015 (6 comments)

I am working on the 11th and probably final chapter of Think DSP, which follows material my colleague Siddhartan Govindasamy developed for a class at Olin College.  He introduces amplitude modulation as a clever way to sneak up on the Nyquist–Shannon sampling theorem.

Most of the code for the chapter is done: you can check it out in this IPython notebook.  I haven't written the text yet, but I'll outline it here, and paste in the key figures.

Convolution...
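The chapter text isn't written yet, but the core amplitude-modulation trick is easy to sketch; the 10 kHz carrier and 440 Hz test tone below are arbitrary stand-ins, not values taken from the chapter.

    import numpy as np

    fs = 44100
    t = np.arange(0, 1.0, 1/fs)
    baseband = np.cos(2*np.pi*440*t)        # stand-in for an audio signal
    carrier = np.cos(2*np.pi*10000*t)       # 10 kHz carrier

    modulated = baseband * carrier          # spectrum moves to 10000 +/- 440 Hz
    recovered = modulated * carrier         # back to baseband, plus a 20 kHz image
    # a low-pass filter would then remove the image and recover the original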


60 numbers

Mahadevan Srinivasan November 30, 2015 (2 comments)

This blog title is inspired by the Peabody award-winning Radiolab episode 60 words. Radiolab is well known for its insightful stories on science and its amazing sound design. Today's blog is about decoding Radiolab's theme music (actually, just a small "Mmm Newewe" part of it, hereafter called the Radiolab sound). I have been taking this online course on Audio Signal Processing where we are taught how to analyze sounds...


Multilayer Perceptrons and Event Classification with data from CODEC using Scilab and Weka

David E Norwood November 25, 2015

For my first blog, I thought I would introduce the reader to Scilab [1] and Weka [2].  In order to illustrate how they work, I will put together a script in Scilab that samples audio using the microphone and CODEC on your PC and saves the waveform as a CSV file.  Then we can take the CSV file and open it in Weka.  Once in Weka, we have many paths to consider in order to classify it.  I use the term classify loosely since there are many things you can do with data sets...
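The post itself works in Scilab; purely to illustrate the same capture-to-CSV step, here is a hedged Python equivalent using the third-party sounddevice package (assumed installed, with a working input device). The file name and duration are invented for the example.

    import numpy as np
    import sounddevice as sd               # third-party; needs a working input device

    fs = 44100                             # sample rate (Hz)
    seconds = 3
    x = sd.rec(int(seconds*fs), samplerate=fs, channels=1)
    sd.wait()                              # block until the recording finishes

    # one amplitude value per row, ready to open in Weka
    np.savetxt("waveform.csv", x, delimiter=",", header="amplitude", comments="")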


Python scipy.signal IIR Filtering: An Example

Christopher Felton May 19, 2013
Introduction

In the previous posts I reviewed how to use the Python scipy.signal package to design digital infinite impulse response (IIR) filters, specifically using the iirdesign function (IIR design I and IIR design II).  In this post I am going to conclude the IIR filter design review with an example.
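As a flavor of what such an example involves (a generic sketch, not the filter actually designed in the post), iirdesign takes passband and stopband edges plus ripple and attenuation specs and returns the filter coefficients; the sample rate and band edges below are assumptions.

    import numpy as np
    from scipy import signal

    fs = 48000                              # assumed sample rate (Hz)
    wp, ws = 1000/(fs/2), 2000/(fs/2)       # pass below 1 kHz, stop above 2 kHz
    b, a = signal.iirdesign(wp, ws, gpass=1, gstop=60, ftype='ellip')

    # apply it to a two-tone test signal
    t = np.arange(0, 0.1, 1/fs)
    x = np.sin(2*np.pi*500*t) + np.sin(2*np.pi*5000*t)
    y = signal.lfilter(b, a, x)             # the 5 kHz component is strongly attenuated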

Previous posts:


Beat Notes: An Interesting Observation

Rick Lyons March 13, 2013 (7 comments)

Some weeks ago a friend of mine, a longtime radio engineer as well as a piano player, called and asked me,

"When I travel in a DC-9 aircraft, and I sit back near the engines, I hear this fairly loud unpleasant whump whump whump whump sound. The frequency of that sound is, maybe, two cycles per second. I think that sound is a beat frequency because the DC-9's engines are turning at a slightly different number of revolutions per second. My question is, what sort of mechanism in the airplane...


ICASSP 2011 conference lectures online (for free)

Sami Aldalahmeh July 5, 2011

For the first time, the oral presentations of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP) were recorded and posted online for free. This conference is the best in signal processing, and it's diverse as well.

It has a bit of speech processing, communication signal processing, and some interesting topics like bio-inspired signal processing, where Prof. Sayed modeled the behaviour of a group of predators attacking a herd of prey using distributed least mean...


Fitting Filters to Measured Amplitude Response Data Using invfreqz in Matlab

Julius Orion Smith III October 11, 2010 (2 comments)

This blog post has been moved to the code snippet section and can now be found HERE.  Please update your bookmark.  Thanks!


Adaptive Beamforming is like Squeezing a Water Balloon

Christopher Hogstrom January 9, 2021 (4 comments)

Adaptive beamforming was first developed in the 1960s for radar and sonar applications. The main idea is that signals can be captured using multiple sensors and the sensor outputs can be combined to enhance the signals propagating from specific directions and attenuate (null out) signals from other directions. It has grown immensely in recent years as processors have become faster and cheaper. Today, adaptive beamforming applications include smart speakers (like the Amazon Echo),...
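Adaptive methods are more than a short sketch can cover, but the basic "combine the sensors to favor one direction" step can be illustrated with a conventional (non-adaptive) delay-and-sum beamformer; the function below is a generic frequency-domain sketch with made-up names, assuming a linear array and plane-wave arrivals.

    import numpy as np

    def delay_and_sum(x, mic_pos, theta, fs, c=343.0):
        """Steer a linear array toward angle theta (radians) and sum the mics.

        x: (n_mics, n_samples) sensor signals; mic_pos: positions in meters."""
        n_mics, n = x.shape
        freqs = np.fft.rfftfreq(n, 1/fs)
        X = np.fft.rfft(x, axis=1)
        delays = mic_pos*np.sin(theta)/c                  # plane-wave arrival delays
        phases = np.exp(-2j*np.pi*freqs[None, :]*delays[:, None])
        return np.fft.irfft((X*phases).mean(axis=0), n=n)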


The Phase Vocoder Transform

Christian Yost February 12, 2019
1 Introduction

I would like to look at the phase vocoder in a fairly "abstract" way today. The purpose is to discuss a method for measuring the quality of various phase vocoder algorithms, building off a measure proposed in [2]. There will be a bit of time spent in the domain of continuous mathematics, thus defining a phase vocoder function or map rather than an algorithm. We will be using geometric visualizations when possible while pointing out certain group theory...


Exploring Human Hearing Range

Stephen Morris October 31, 2020 (2 comments)
Human Hearing Range

In this post, I'll look at an interesting aspect of Audacity – using it to explore the threshold of human hearing. In my book Digital Signal Processing: A Gentle Introduction with Audio Examples, I go into this topic and I include a side note on the amazing hearing range of our canine companions.

Creating a Test Audio File

Audacity allows for the generation of a variety of test signals. If you click the Generate->Tone menu, it looks something like...
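If you would rather script the test file than use the menu, a few lines of Python (with assumed tone settings, written to a WAV that Audacity can import) do the same job; this is an illustrative alternative, not part of the post's workflow.

    import numpy as np
    from scipy.io import wavfile

    fs = 44100
    freq, amp, seconds = 1000.0, 0.8, 5.0          # assumed tone settings
    t = np.arange(0, seconds, 1/fs)
    tone = (amp*np.sin(2*np.pi*freq*t)).astype(np.float32)
    wavfile.write("test_tone.wav", fs, tone)       # import into Audacity to inspect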


Through the tube...

Markus Nentwig September 15, 2007 (3 comments)

Hello all,

something completely different...

there was some recent discussion on the forum about modeling guitar amplifiers. I have been wondering for quite a while whether the methods that I use to model radio frequency power amplifiers might also work for audio applications.

It's been a rainy day, so I found the time and energy for some experiments. Just for fun.

The device-under-test is a preamplifier with a single 12AX7 tube:

My good ol' Kurzweil (not in the picture) serves as "signal...


Components in Audio recognition - Part 1

Prabindh Sundareson November 20, 2007 (6 comments)

Audio recognition is defined as the task of recognizing a particular piece of audio (which could be music, a ring tone, or speech) from a given set of audio tracks.

The Human Auditory System (HAS) is unique in that the tasks of "familiarisation" with unknown tracks and finding "similar" tracks come naturally to us. Tunes from the not-so-recent past can still haunt the human brain many years later when triggered by a similar tune. The way the brain stores and...


Digging into an Audio Signal and the DSP Process Pipeline

Stephen Morris March 9, 2020 (6 comments)
In this post, I'll look at the benefits of using multiple perspectives when handling signals.

A Pre-existing Audio File

Let's say we have an audio file of interest. Let's load it into Audacity and zoom in a little (using View → Zoom → Zoom In, multiple times). The figure illustrates the audio signal: just a basic single-tone signal.

By continuing to zoom into the signal, we eventually get to the point of seeing individual samples as illustrated below. Notice that I've marked one...

